CN101673343B - System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame - Google Patents

System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame

Info

Publication number
CN101673343B
Authority
CN
China
Prior art keywords
dsp
fpga
ram
signal
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200910197183XA
Other languages
Chinese (zh)
Other versions
CN101673343A (en)
Inventor
杨辉
陆小锋
张颖
金臻
袁承宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kai Yi Electronic Technology Co., Ltd.
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN200910197183XA priority Critical patent/CN101673343B/en
Publication of CN101673343A publication Critical patent/CN101673343A/en
Application granted granted Critical
Publication of CN101673343B publication Critical patent/CN101673343B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical



Abstract

The invention relates to a system and method for increasing real-time signal pattern recognition processing speed in a DSP+FPGA framework. The system is a real-time signal pattern recognition core built from four hardware chips: a DSP, an FPGA, an SDRAM and a FLASH. The DSP serves as the main processing chip, the FPGA as the co-processing chip, the SDRAM as main memory providing memory support while the DSP runs, and the FLASH as auxiliary storage. The DSP uses multithreading to coordinate the whole signal processing flow, implementing four threads: a main thread, a signal acquisition thread, a signal processing thread and a result processing thread. The method improves the parallelism of the system's data processing, increases the system's signal processing speed, and provides a DSP+FPGA-based solution for embedded real-time high-speed signal pattern recognition systems.

Description

System and method for improving real-time signal pattern recognition processing speed in a DSP+FPGA framework
Technical field
The present invention relates to a solution that uses the external memory interface (EMIF) of a DSP to carry signal transfers between the DSP and an FPGA, and that lets the FPGA take over pattern classification from the DSP, thereby improving the data processing speed of the whole pattern recognition system. It belongs to the field of electronic information.
Background art
As embedded technology becomes more widely applied, and inevitably more intelligent, the demand for embedded pattern recognition technology also grows. Applications that need pattern recognition often collect a large amount of information, which must be condensed and refined within a short time to obtain an accurate, concise description of the target. The bottleneck of embedded pattern recognition lies precisely in the difficulty of guaranteeing the speed of signal pattern recognition processing; for applications with large input volumes and strict real-time requirements on the results (such as video signals or network data streams), traditional techniques often cannot meet the required processing speed.
In current embedded systems, the DSP+FPGA architecture is typically characterized by the FPGA, with its flexible programmability, handling the external interfaces and timing control, while the main signal processing computation is done by the DSP, making full use of the DSP's arithmetic capability. Although this arrangement exploits the FPGA's advantage in generating clock and timing signals, its advantage in parallel computation goes unused; simple timing control occupies only a small fraction of the FPGA's resources and leaves most of them idle. The present scheme addresses exactly this point: a neural network classifier, which is particularly well suited to parallel computation, is placed in the FPGA so that the FPGA can take over the classification stage of pattern recognition from the DSP. This increases the parallelism of the whole signal processing flow, raises the processing speed of the system, and satisfies applications that require real-time pattern recognition.
The DSP in this scheme is from TI's TMS320C6000 series. The EMIF interface of this series supports seamless connection of various external devices, including SRAM, SDRAM, ROM, FIFO and externally shared devices. The external memory space is divided into four independent storage spaces (CE spaces), selected by four external chip-enable (CE) lines and controlled by the corresponding CE space control registers.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to provide a system and method for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework, which can increase the processing speed of this framework by 30 to 50%.
The method exploits the parallel computing capability of the FPGA: the neural network classifier that would otherwise run on the DSP is placed in the FPGA, sharing the DSP's load. The DSP and the FPGA communicate over the EMIF bus in EDMA mode, so the communication does not consume CPU time slices, and the DSP uses multithreading so that, while classification is carried out in the FPGA, the DSP can continue with other work, achieving parallel processing of the DSP and the FPGA.
To achieve the above objective, the conception of the present invention is as follows:
An embedded real-time pattern recognition system based on the DSP+FPGA framework takes the DSP as the main processing chip and the FPGA as the co-processing chip. Its memory consists of SDRAM and FLASH: the SDRAM serves as main memory and provides memory support while the DSP runs, and the FLASH serves as auxiliary storage, using its ability to retain data across power loss to store the DSP boot information, the program data, and the weight data of the neural network in the FPGA. The DSP, FPGA, SDRAM and FLASH are all connected to the EMIF bus of the DSP, which makes it convenient for them to exchange data with one another. Besides this core, the system also includes peripheral modules such as signal acquisition, automatic control, output display and human-machine interaction, but these are unrelated to the core content of the invention and are not described in detail.
Among the four external memory spaces (CE spaces) of the DSP, CE0 is configured as a synchronous space and assigned to the main memory device SDRAM, while CE1 and CE2 are configured as asynchronous spaces and assigned to the FLASH and to the on-chip RAM of the FPGA, respectively. Besides the usual connections to the SDRAM and FLASH, the address lines, data lines and control lines of the DSP's EMIF interface must also be connected to pins of the FPGA.
For a signal to be recognized, the overall processing flow is as follows (a simplified C sketch of this flow, as seen from the DSP, is given after the list):
1. The DSP obtains the signal to be recognized through the signal acquisition thread.
2. The DSP pre-processes the collected signal and obtains several targets of interest in the signal; these targets are the subjects on which pattern recognition must be performed.
3. The DSP performs feature extraction on each target of interest, packs the extracted features, and sends them to the FPGA over the EMIF bus in enhanced direct memory access (EDMA) mode for classification.
4. The FPGA receives the feature packet from the DSP in its on-chip RAM, feeds the features into the neural network classifier module inside the FPGA, writes the classification result from this module back into the on-chip RAM for temporary storage, and sends the result to the DSP over the EMIF bus when the DSP needs it.
5. The DSP polls an address of the FPGA's RAM; this address records the number of classification results not yet retrieved by the DSP. If the count is greater than 0, the DSP reads one classification result from the FPGA and performs the corresponding subsequent processing and output control on it.
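For illustration only, steps 1 to 5 can be sketched from the DSP side as a single sequential routine (in the actual scheme the steps are split across the threads described below). The packet layout, the helper functions and the addresses mapped onto the FPGA RAM are assumptions rather than values from the patent, and the plain memcpy stands in for the EDMA transfer that the patent triggers over the EMIF bus.

```c
#include <stdint.h>
#include <string.h>

/* Assumed layout of one feature packet sent to the FPGA classifier.          */
typedef struct {
    uint32_t target_id;       /* which target of interest this packet holds   */
    uint32_t feature_count;   /* number of valid entries in feature[]         */
    int16_t  feature[64];     /* extracted feature vector                      */
} feature_packet_t;

/* Illustrative addresses inside the CE space mapped onto the FPGA RAM.       */
#define FPGA_FEATURE_BUF   ((volatile void *)0xA0000000u)     /* feature area  */
#define FPGA_RESULT_COUNT  ((volatile uint32_t *)0xA0001000u) /* pending count */
#define FPGA_RESULT_BUF    ((volatile uint32_t *)0xA0001004u) /* one result    */

/* Application-specific hooks assumed to exist elsewhere in the program.      */
extern void acquire_signal(void);                              /* step 1       */
extern int  preprocess_and_detect(void);                       /* step 2       */
extern void extract_features(int target, feature_packet_t *out); /* step 3     */
extern void postprocess_result(uint32_t result);               /* step 5       */

/* One pass over steps 1-5 for a single acquired signal.                       */
void recognize_one_signal(void)
{
    feature_packet_t pkt;
    int targets, t;

    acquire_signal();                          /* 1. acquisition               */
    targets = preprocess_and_detect();         /* 2. pre-processing, detection */

    for (t = 0; t < targets; t++) {
        extract_features(t, &pkt);             /* 3. feature extraction        */
        /* 3./4. hand the packet to the FPGA; the real system would trigger
           an EDMA transfer onto the EMIF bus instead of this CPU copy.        */
        memcpy((void *)FPGA_FEATURE_BUF, &pkt, sizeof pkt);
    }

    /* 5. poll the counter of results not yet fetched and drain it.            */
    while (*FPGA_RESULT_COUNT > 0) {
        uint32_t result = *FPGA_RESULT_BUF;
        (*FPGA_RESULT_COUNT)--;                /* tell the FPGA it was taken   */
        postprocess_result(result);
    }
}
```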
In accordance with the above inventive conception, the present invention adopts the following technical solution:
A system for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework, characterized in that the system structure is a real-time signal pattern recognition core built from four chips: a DSP, an FPGA, an SDRAM and a FLASH. The DSP serves as the main processing chip, the FPGA as the co-processing chip, the SDRAM as main memory providing memory support while the DSP runs, and the FLASH as auxiliary storage. The DSP, FPGA, SDRAM and FLASH are all connected to the EMIF bus of the DSP, which makes it convenient for them to exchange data with one another.
The EMIF interface of the DSP has several CE spaces, CE0 to CE3. One of them connects the DSP's main memory device SDRAM, another CE space connects the auxiliary memory device FLASH, and a third CE space connects an external memory device emulated by the on-chip RAM of said FPGA. Besides the usual connections to said SDRAM and said FLASH, the data lines, address lines and read/write control lines of the EMIF interface of said DSP, together with the chip-select line of the corresponding CE space, must all also be connected to pins of said FPGA.
A method for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework uses the above system for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework to perform signal processing, and is characterized in that the overall signal processing flow is:
1. Signal acquisition is done by the DSP;
2. Signal pre-processing and feature extraction are done by the DSP;
3. Neural network classification is done by the FPGA;
4. Processing of the classification results is done by the DSP.
To support the above flow, the DSP uses multithreading to implement four threads: a main thread, a signal acquisition thread, a signal processing thread and a result processing thread.
The main thread is the top-level manager of the other three threads; its flow (sketched in code after this list) is:
1. Complete the DSP initialization;
2. Start the other three threads;
3. Enter the waiting state.
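As an illustration only, the main thread can be sketched as below. POSIX threads are used as a stand-in for the DSP's actual tasking mechanism (a DM642 system would typically use DSP/BIOS tasks), and dsp_init together with the three worker entry points are placeholder names, not functions from the patent.

```c
#include <pthread.h>
#include <stddef.h>

/* Worker thread entry points, sketched together with the lists that follow. */
extern void *acquisition_thread(void *arg);
extern void *processing_thread(void *arg);
extern void *result_thread(void *arg);

extern void dsp_init(void);   /* board/peripheral initialization placeholder */

int main(void)
{
    pthread_t acq, proc, res;

    dsp_init();                                              /* 1. initialize */

    pthread_create(&acq,  NULL, acquisition_thread, NULL);   /* 2. start the  */
    pthread_create(&proc, NULL, processing_thread,  NULL);   /*    three      */
    pthread_create(&res,  NULL, result_thread,      NULL);   /*    workers    */

    pthread_join(acq,  NULL);                                /* 3. wait; the  */
    pthread_join(proc, NULL);                                /*    workers    */
    pthread_join(res,  NULL);                                /*    never exit */
    return 0;
}
```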
The signal acquisition thread performs the acquisition of the input signal; its flow (sketched in code after this list) is:
1. Initialize the acquisition device;
2. Open the acquisition port;
3. Wait for signal input; if there is input, go to step 4, otherwise keep waiting;
4. Put the acquired signal into a queue on the main memory device SDRAM, the input signal queue, and then go back to step 3.
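A corresponding sketch of the signal acquisition thread under the same assumptions; init_acquisition_device, open_acquisition_port, poll_new_signal and signal_queue_put are placeholder names for board- and application-specific routines, not functions from the patent.

```c
#include <stdbool.h>
#include <stddef.h>

/* One acquired signal; the real layout depends on the front-end hardware.   */
typedef struct input_signal input_signal_t;

extern void init_acquisition_device(void);         /* step 1 placeholder     */
extern void open_acquisition_port(void);           /* step 2 placeholder     */
extern bool poll_new_signal(input_signal_t **sig); /* step 3 placeholder     */
extern void signal_queue_put(input_signal_t *sig); /* FIFO on SDRAM, step 4  */

void *acquisition_thread(void *arg)
{
    (void)arg;
    init_acquisition_device();       /* 1. initialize the acquisition device */
    open_acquisition_port();         /* 2. open the acquisition port         */

    for (;;) {
        input_signal_t *sig;
        if (poll_new_signal(&sig))   /* 3. wait for signal input             */
            signal_queue_put(sig);   /* 4. append it to the input queue      */
    }
    return NULL;                     /* not reached                          */
}
```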
The signal processing thread performs the pre-processing and feature extraction of the signal; its flow (sketched in code after this list) is:
1. Check whether the input signal queue is empty; if empty, keep checking, otherwise go to step 2;
2. Read one group of input signals from the input signal queue;
3. Pre-process the input signal;
4. Detect the targets of interest in the input signal; these targets are the subjects on which pattern recognition must be performed;
5. Check the number of targets of interest not yet processed; if the number is greater than 0, go to step 6, otherwise go back to step 1;
6. Perform feature extraction on one unprocessed target of interest;
7. Generate a feature packet from the features extracted in step 6;
8. Trigger the enhanced direct memory access (EDMA) between the DSP and the FPGA, transfer the feature packet to the FPGA over the EMIF bus, and then go back to step 5.
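A sketch of the signal processing thread under the same assumptions; the helper names and the 64-entry feature layout are illustrative, and edma_send_to_fpga stands for the EDMA transfer that the patent triggers over the EMIF bus.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct input_signal input_signal_t;

typedef struct {
    uint32_t target_id;
    uint32_t feature_count;
    int16_t  feature[64];
} feature_packet_t;      /* same illustrative layout as in the earlier sketch */

extern input_signal_t *signal_queue_get(void);   /* NULL if the queue is empty */
extern void preprocess(input_signal_t *sig);
extern int  detect_targets(input_signal_t *sig); /* number of targets found    */
extern void extract_target_features(input_signal_t *sig, int target,
                                    feature_packet_t *out);
extern void edma_send_to_fpga(const feature_packet_t *pkt); /* EDMA over EMIF  */

void *processing_thread(void *arg)
{
    (void)arg;
    for (;;) {
        input_signal_t *sig = signal_queue_get();  /* 1./2. read from queue    */
        if (sig == NULL)
            continue;                              /* 1. queue empty: retry    */

        preprocess(sig);                           /* 3. pre-processing        */
        int targets = detect_targets(sig);         /* 4. targets of interest   */

        for (int t = 0; t < targets; t++) {        /* 5. loop over targets     */
            feature_packet_t pkt;
            extract_target_features(sig, t, &pkt); /* 6. feature extraction    */
            edma_send_to_fpga(&pkt);               /* 7./8. pack and send      */
        }
    }
    return NULL;
}
```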
The result processing thread performs the processing of the classification results; its flow (sketched in code after this list) is:
1. Read the value of a RAM register on the FPGA; this register records the number of classification results not yet processed;
2. Check whether the value read in step 1 is greater than 0; if so, go to step 3, otherwise go back to step 1;
3. Change the value of the RAM register on the FPGA in step 1, decreasing it by 1;
4. Read one neural network classification result from the FPGA;
5. Perform result processing on the classification result;
6. Carry out human-machine interaction and decision control, and then go back to step 1.
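A sketch of the result processing thread under the same assumptions; the two mapped addresses stand for the FPGA RAM register that holds the pending-result count and for the result word itself, and are illustrative rather than taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>

/* DSP-side view of the FPGA on-chip RAM; addresses are illustrative only.    */
#define FPGA_RESULT_COUNT  ((volatile uint32_t *)0xA0001000u)
#define FPGA_RESULT_BUF    ((volatile uint32_t *)0xA0001004u)

extern void handle_classification(uint32_t result);  /* step 5 placeholder    */
extern void interact_and_control(void);              /* step 6 placeholder    */

void *result_thread(void *arg)
{
    (void)arg;
    for (;;) {
        uint32_t pending = *FPGA_RESULT_COUNT;  /* 1. read the RAM register   */
        if (pending == 0)
            continue;                           /* 2. nothing pending: retry  */

        *FPGA_RESULT_COUNT = pending - 1;       /* 3. decrease the count by 1 */
        uint32_t result = *FPGA_RESULT_BUF;     /* 4. read one result         */

        handle_classification(result);          /* 5. result processing       */
        interact_and_control();                 /* 6. interaction / control   */
    }
    return NULL;
}
```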
In addition, throughout the pattern recognition flow the FPGA takes over the work of the neural network classifier from the DSP; its workflow (a behavioural sketch follows this list) is:
1. At system start-up, said FPGA reads the weight data of the neural network from the FLASH over the EMIF bus; this work is done by the weight initialization module in the FPGA;
2. When said DSP triggers an EDMA transfer of feature packet data to the FPGA, the FPGA receives the data with its RAM module and RAM control module: the RAM module receives the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM write address for the RAM module;
3. After the feature packet data sent by the DSP have been received, the neural network classifier module in the FPGA reads the feature packet data from the RAM module, performs the neural network classification, and writes the result back into the RAM module; in this process the neural network classifier module uses the weights held by the weight initialization module, while the RAM control module coordinates and controls the read/write state of the RAM and provides the RAM read/write addresses;
4. When the DSP needs to read a classification result from the FPGA, the FPGA sends the data with its RAM module and RAM control module: the RAM module places the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM read address for the RAM module.
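For illustration of the data path only, the classification performed in steps 2-4 can be modelled behaviourally in C as below; the real classifier is a hardware module inside the FPGA, and the network size, activation function and weight layout here are assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <math.h>

#define N_IN      64     /* features per packet (assumption)                  */
#define N_HIDDEN  16     /* hidden neurons (assumption)                       */
#define N_OUT      4     /* number of classes (assumption)                    */

/* Weights loaded from FLASH at start-up by the weight initialization module. */
static float w_ih[N_HIDDEN][N_IN];
static float w_ho[N_OUT][N_HIDDEN];

extern void load_weights_from_flash(float w1[N_HIDDEN][N_IN],
                                    float w2[N_OUT][N_HIDDEN]);  /* step 1    */

/* Steps 2-4: take one feature vector out of the RAM module, classify it,
   and return the winning class index so it can be written back to the RAM.   */
int classify_one_packet(const int16_t feature[N_IN])
{
    float hidden[N_HIDDEN], out[N_OUT];
    int i, j, best = 0;

    for (i = 0; i < N_HIDDEN; i++) {              /* input -> hidden layer    */
        float acc = 0.0f;
        for (j = 0; j < N_IN; j++)
            acc += w_ih[i][j] * (float)feature[j];
        hidden[i] = 1.0f / (1.0f + expf(-acc));   /* sigmoid activation       */
    }
    for (i = 0; i < N_OUT; i++) {                 /* hidden -> output layer   */
        float acc = 0.0f;
        for (j = 0; j < N_HIDDEN; j++)
            acc += w_ho[i][j] * hidden[j];
        out[i] = acc;
        if (out[i] > out[best])
            best = i;
    }
    return best;                                  /* result written to RAM    */
}
```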
Compared with the existing related technology, the present invention has the following advantages:
1. Existing DSP+FPGA schemes use the FPGA only for input/output control, timing control, signal switching and similar tasks, and do not make full use of the FPGA's advantage in parallel computation. The present invention remedies this problem by implementing in the FPGA a neural network classifier that is better suited to parallel computation, making the FPGA a true co-processing chip.
2. The DSP uses multithreading and can therefore cooperate well with the FPGA: while the FPGA performs classification, the DSP does not have to wait for the result but can do other work. Because the FPGA takes over the classifier's work from the DSP, the DSP's signal processing cycle is shortened, and the parallel signal processing of the DSP and the FPGA greatly increases the speed of pattern recognition.
Description of drawings
Fig. 1 is a schematic diagram of the system structure.
Fig. 2 is a schematic diagram of the signal processing flow.
Fig. 3 is a flowchart of the DSP main thread.
Fig. 4 is a flowchart of the DSP signal acquisition thread.
Fig. 5 is a flowchart of the DSP signal processing thread.
Fig. 6 is a flowchart of the DSP result processing thread.
Fig. 7 is a block diagram of the internal modules of the FPGA.
Embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, this system for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework is a real-time signal pattern recognition core built from four hardware chips: a DSP, an FPGA, an SDRAM and a FLASH. The DSP serves as the main processing chip, the FPGA as the co-processing chip, the SDRAM as main memory providing memory support while the DSP runs, and the FLASH as auxiliary storage. The DSP, FPGA, SDRAM and FLASH are all connected to the EMIF bus of the DSP, which makes it convenient for them to exchange data with one another.
In the system structure of Fig. 1, the selected DSP chip is TI's TMS320DM642, the FPGA chip is Altera's EP2C20, the SDRAM consists of four MT48LC16M16A2FG devices for a total capacity of 128 MB with a 64-bit data bus, and the FLASH is an AM29LV033C-WD (4 MB).
CE0 of the DSP connects to the CS pins of the four SDRAM devices, CE1 connects to the CS pin of the FLASH, and CE2 goes to the FPGA. In addition, the data lines, address lines and control lines of the EMIF connect to the corresponding data, address and control lines of the SDRAM and FLASH, and at the same time they are also routed to the FPGA.
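A sketch of how DSP firmware might view this mapping is shown below. The base addresses are the conventional EMIFA CE-space addresses of the C64x family and are given as an assumption for illustration; the access helpers are placeholders, not code from the patent.

```c
#include <stdint.h>

/* Conventional EMIFA CE-space base addresses on the C64x family (assumption). */
#define CE0_SDRAM_BASE   0x80000000u   /* CE0: 4 x MT48LC16M16A2, 128 MB SDRAM */
#define CE1_FLASH_BASE   0x90000000u   /* CE1: AM29LV033C, 4 MB FLASH          */
#define CE2_FPGA_BASE    0xA0000000u   /* CE2: EP2C20 on-chip RAM, async space */

/* The DSP reaches the FPGA RAM exactly like any external memory, e.g.:        */
static inline uint32_t fpga_read32(uint32_t offset)
{
    return *(volatile uint32_t *)(CE2_FPGA_BASE + offset);
}

static inline void fpga_write32(uint32_t offset, uint32_t value)
{
    *(volatile uint32_t *)(CE2_FPGA_BASE + offset) = value;
}
```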
Fig. 2 shows the overall signal processing flow. Through signal acquisition, pre-processing and feature extraction, the DSP obtains the feature packet on which pattern classification is to be performed, and sends this feature packet to the FPGA over the EMIF bus in EDMA mode. The FPGA performs neural network classification on the features and temporarily stores the result. When the DSP needs a classification result for subsequent processing, it reads it from the FPGA, completes the corresponding subsequent processing and outputs it. In this flow, signal acquisition is done by the DSP's signal acquisition thread, pre-processing and feature extraction by the DSP's signal processing thread, neural network classification by the FPGA's internal modules, and processing of the classification results by the DSP's result processing thread.
Fig. 3 is the flowchart of the DSP's main thread: after completing the DSP initialization, the main thread starts the other threads and then itself enters the waiting state.
Fig. 4 is the flowchart of the DSP's signal acquisition thread. After initializing the acquisition device and opening the acquisition port, the thread waits for signal input; whenever a new signal arrives, it appends the data content of that signal to the input signal queue. This queue is a section of global memory on the SDRAM, implemented in software as a data structure with FIFO behaviour.
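A minimal sketch of such a FIFO queue is given below, matching the signal_queue_put/signal_queue_get placeholders used in the thread sketches above; the depth, the pointer-based slots and the mutex are assumptions made for illustration rather than details from the patent.

```c
#include <stddef.h>
#include <pthread.h>

#define QUEUE_DEPTH 8                  /* capacity; an assumption              */

typedef struct input_signal input_signal_t;

/* Ring buffer kept in a global (SDRAM-resident) memory section; the mutex
   stands in for whatever locking the real firmware uses between threads.      */
typedef struct {
    input_signal_t *slot[QUEUE_DEPTH];
    int             head;              /* next slot to read                    */
    int             tail;              /* next slot to write                   */
    int             count;
    pthread_mutex_t lock;
} signal_queue_t;

static signal_queue_t g_queue = { .lock = PTHREAD_MUTEX_INITIALIZER };

void signal_queue_put(input_signal_t *sig)      /* used by acquisition thread  */
{
    pthread_mutex_lock(&g_queue.lock);
    if (g_queue.count < QUEUE_DEPTH) {          /* drop if full (assumption)   */
        g_queue.slot[g_queue.tail] = sig;
        g_queue.tail = (g_queue.tail + 1) % QUEUE_DEPTH;
        g_queue.count++;
    }
    pthread_mutex_unlock(&g_queue.lock);
}

input_signal_t *signal_queue_get(void)          /* used by processing thread   */
{
    input_signal_t *sig = NULL;
    pthread_mutex_lock(&g_queue.lock);
    if (g_queue.count > 0) {
        sig = g_queue.slot[g_queue.head];
        g_queue.head = (g_queue.head + 1) % QUEUE_DEPTH;
        g_queue.count--;
    }
    pthread_mutex_unlock(&g_queue.lock);
    return sig;
}
```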
Fig. 5 is the flowchart of the DSP's signal processing thread. If the input signal queue is not empty, the thread reads one input signal from it, pre-processes the signal and detects the targets of interest in it; if the signal contains targets of interest, the thread performs feature extraction on each of them, packs the extracted features and sends them to the FPGA in EDMA mode.
Fig. 6 is the flowchart of the DSP's result processing thread. The thread reads the value of a RAM register on the FPGA to determine whether the FPGA holds classification results that have not yet been processed; if so, it reads one classification result from the FPGA, performs result processing on it, and finally carries out output and decision control.
Fig. 7 is the block diagram of the internal modules of the FPGA. The modules relevant to the present invention are the RAM module, the RAM control module, the weight initialization module and the neural network classifier module. In the figure, solid lines between modules are data connections and dashed lines are control connections. The modules are connected as follows:
1. The RAM module connects externally, through FPGA pins, to the 64 data lines of the DSP's EMIF interface. The read/write control lines and address lines of the RAM connect to the RAM control module. Besides connecting to FPGA pins, the data lines of the RAM also connect to the neural network classifier module, providing that module's input data and receiving its output data.
2. The RAM control module connects externally, through FPGA pins, to the address lines, read/write control lines and CE space chip-select line (CE2) of the DSP's EMIF interface. The RAM control module also connects to the status signals of the neural network classifier module and to that module's RAM read/write request signals. From the read/write states and RAM demands of the DSP and of the neural network classifier module, the RAM control module generates the RAM's read/write control signals and address signals, and at the same time generates the control signals for the neural network classifier module.
3. The neural network classifier module connects to the data lines of the RAM module and reports its status and its RAM read/write requests to the RAM control module through internal signal lines; at the same time, the neural network classifier module receives control signals from the RAM control module. During the neural network classification process, the classifier module reads the weights required for its computation from the weight initialization module.
4. The weight initialization module connects externally, through FPGA pins, to the FLASH-related data lines, address lines and control lines of the DSP's EMIF interface. At system start-up the weight initialization module reads the weights of the neural network from the FLASH, and when the neural network classifier module performs its computation the weight initialization module supplies the weights. Note that some of the external FPGA pins of the weight initialization module are physically the same as those of the RAM module and RAM control module. This causes no conflict, because the external pins of the weight initialization module are active only at system start-up and are not used afterwards, so in time they never conflict with the external FPGA pins of the RAM module and RAM control module.
This method for improving real-time signal pattern recognition processing speed in the DSP+FPGA framework uses the above system to perform signal processing, and is characterized in that the overall signal processing flow is:
1. Signal acquisition is done by the DSP;
2. Signal pre-processing and feature extraction are done by the DSP;
3. Neural network classification is done by the FPGA;
4. Processing of the classification results is done by the DSP.
To support the above flow, the DSP uses multithreading to implement four threads: a main thread, a signal acquisition thread, a signal processing thread and a result processing thread.
The main thread is the top-level manager of the other three threads; its flow is:
1. Complete the DSP initialization;
2. Start the other three threads;
3. Enter the waiting state.
The signal acquisition thread performs the acquisition of the input signal; its flow is:
1. Initialize the acquisition device;
2. Open the acquisition port;
3. Wait for signal input; if there is input, go to step 4, otherwise keep waiting;
4. Put the acquired signal into a queue on the main memory device SDRAM, the input signal queue, and then go back to step 3.
The signal processing thread performs the pre-processing and feature extraction of the signal; its flow is:
1. Check whether the input signal queue is empty; if empty, keep checking, otherwise go to step 2;
2. Read one group of input signals from the input signal queue;
3. Pre-process the input signal;
4. Detect the targets of interest in the input signal; these targets are the subjects on which pattern recognition must be performed;
5. Check the number of targets of interest not yet processed; if the number is greater than 0, go to step 6, otherwise go back to step 1;
6. Perform feature extraction on one unprocessed target of interest;
7. Generate a feature packet from the features extracted in step 6;
8. Trigger the enhanced direct memory access (EDMA) between the DSP and the FPGA, transfer the feature packet to the FPGA over the EMIF bus, and then go back to step 5.
The result processing thread performs the processing of the classification results; its flow is:
1. Read the value of a RAM register on the FPGA; this register records the number of classification results not yet processed;
2. Check whether the value read in step 1 is greater than 0; if so, go to step 3, otherwise go back to step 1;
3. Change the value of the RAM register on the FPGA in step 1, decreasing it by 1;
4. Read one neural network classification result from the FPGA;
5. Perform result processing on the classification result;
6. Carry out human-machine interaction and decision control, and then go back to step 1.
In addition, throughout the pattern recognition flow the FPGA takes over the work of the neural network classifier from the DSP; its workflow is:
1. At system start-up, said FPGA reads the weight data of the neural network from the FLASH over the EMIF bus; this work is done by the weight initialization module in the FPGA;
2. When said DSP triggers an EDMA transfer of feature packet data to the FPGA, the FPGA receives the data with its RAM module and RAM control module: the RAM module receives the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM write address for the RAM module;
3. After the feature packet data sent by the DSP have been received, the neural network classifier module in the FPGA reads the feature packet data from the RAM module, performs the neural network classification, and writes the result back into the RAM module; in this process the neural network classifier module uses the weights held by the weight initialization module, while the RAM control module coordinates and controls the read/write state of the RAM and provides the RAM read/write addresses;
4. When the DSP needs to read a classification result from the FPGA, the FPGA sends the data with its RAM module and RAM control module: the RAM module places the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM read address for the RAM module.
Of course, the above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make improvements and refinements without departing from the principle of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (4)

1. A method for improving real-time signal pattern recognition processing speed in a DSP+FPGA framework, the system structure adopted by the method being: a real-time signal pattern recognition core built from four chips, a DSP, an FPGA, an SDRAM and a FLASH, wherein the DSP serves as the main processing chip, the FPGA as the co-processing chip, the SDRAM as main memory providing memory support while the DSP runs, and the FLASH as auxiliary storage; the DSP, FPGA, SDRAM and FLASH are all connected to the EMIF bus of the DSP so that they can conveniently exchange data with one another; the method being characterized in that the overall signal processing flow is:
(1) signal acquisition is done by the DSP;
(2) signal pre-processing and feature extraction are done by the DSP;
(3) neural network classification is done by the FPGA;
(4) processing of the classification results is done by the DSP;
Said signal pre-processing and feature extraction are done in said DSP by a signal processing thread, whose flow is:
a. check whether the input signal queue is empty; if empty, keep checking; otherwise go to step b;
b. read one group of input signals from the input signal queue;
c. pre-process the input signal;
d. detect the targets of interest in the input signal; these targets are the subjects on which pattern recognition must be performed;
e. check the number of targets of interest not yet processed; if the number is greater than 0, go to step f; otherwise go back to step a;
f. perform feature extraction on one unprocessed target of interest;
g. generate a feature packet from the features extracted in step f;
h. trigger the enhanced direct memory access (EDMA) between the DSP and the FPGA, transfer the feature packet to the FPGA over the EMIF bus, and then go back to step e;
Said neural network classification is done by said FPGA, whose flow is:
a. at system start-up, said FPGA reads the weight data of the neural network from the FLASH over the EMIF bus; this work is done by the weight initialization module in the FPGA;
b. when said DSP triggers an EDMA transfer of feature packet data to the FPGA, the FPGA receives the data with its RAM module and RAM control module: the RAM module receives the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM write address for the RAM module;
c. after the feature packet data sent by the DSP have been received, the neural network classifier module in the FPGA reads the feature packet data from the RAM module, performs the neural network classification and writes the result back into the RAM module; in this process the neural network classifier module uses the weights held by the weight initialization module, while the RAM control module coordinates and controls the read/write state of the RAM and provides the RAM read/write addresses;
d. when the DSP needs to read a classification result from the FPGA, the FPGA sends the data with its RAM module and RAM control module: the RAM module places the data on the EMIF data lines, while the RAM control module receives the signals on the EMIF address and control lines and generates the RAM read address for the RAM module.
2. The method for improving real-time signal pattern recognition processing speed in a DSP+FPGA framework according to claim 1, characterized in that, to coordinate the whole signal processing flow, said DSP adopts multithreading and implements four threads in total, namely a main thread, a signal acquisition thread, a signal processing thread and a result processing thread, wherein the main thread is the top-level manager of the other three threads, and the flow of the main thread is:
(a) complete the DSP initialization;
(b) start the other three threads;
(c) enter the waiting state.
3. The method for improving real-time signal pattern recognition processing speed in a DSP+FPGA framework according to claim 1, characterized in that the signal acquisition of flow (1) is done in said DSP by a signal acquisition thread, whose flow is:
1) initialize the acquisition device;
2) open the acquisition port;
3) wait for signal input; if there is input, go to step 4); otherwise keep waiting;
4) put the acquired signal into a queue on the main memory device SDRAM, the input signal queue, and then go back to step 3).
4. The method for improving real-time signal pattern recognition processing speed in a DSP+FPGA framework according to claim 1, characterized in that the processing of the classification results of flow (4) is done in said DSP by a result processing thread, whose flow is:
a) read the value of a RAM register on the FPGA; this register records the number of classification results not yet processed;
b) check whether the value read in step a) is greater than 0; if so, go to step c); otherwise go back to step a);
c) change the value of the RAM register on the FPGA in step a), decreasing it by 1;
d) read one neural network classification result from the FPGA;
e) perform result processing on the classification result;
f) carry out human-machine interaction and decision control, and then go back to step a).
CN200910197183XA 2009-10-15 2009-10-15 System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame Expired - Fee Related CN101673343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910197183XA CN101673343B (en) 2009-10-15 2009-10-15 System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910197183XA CN101673343B (en) 2009-10-15 2009-10-15 System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame

Publications (2)

Publication Number Publication Date
CN101673343A CN101673343A (en) 2010-03-17
CN101673343B true CN101673343B (en) 2012-11-07

Family

ID=42020564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910197183XA Expired - Fee Related CN101673343B (en) 2009-10-15 2009-10-15 System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame

Country Status (1)

Country Link
CN (1) CN101673343B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831090B (en) * 2012-05-07 2015-06-10 中国科学院空间科学与应用研究中心 Address line for space-borne DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) communication interfaces and optimization method for address line
CN103226328B (en) * 2013-04-21 2015-06-24 中国矿业大学(北京) Synchronous control method of multithreading data acquisition system in acquisition times control mode
CN103984235B (en) * 2014-05-27 2016-05-11 湖南大学 Space manipulator Control System Software framework and construction method based on C/S structure
CN104216324B (en) * 2014-09-09 2017-02-08 中国电子科技集团公司第三十八研究所 Related methods of synthetic aperture radar task management controller
CN105743668A (en) * 2014-12-09 2016-07-06 中兴通讯股份有限公司 Method and device for achieving function of package transmitting and receiving
US10140572B2 (en) * 2015-06-25 2018-11-27 Microsoft Technology Licensing, Llc Memory bandwidth management for deep learning applications
CN107958285A (en) * 2017-11-21 2018-04-24 深圳普思英察科技有限公司 The mapping method and device of the neutral net of embedded system
CN109001688B (en) * 2018-05-28 2022-08-02 中国电子科技集团公司第二十九研究所 Intermediate data storage method and device based on radar signal parallel processing
CN111929615A (en) * 2020-09-27 2020-11-13 天津飞旋科技有限公司 Performance detection device and method of magnetic suspension control line and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101145201A (en) * 2007-10-08 2008-03-19 北京科技大学 Quick target identification and positioning system and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101145201A (en) * 2007-10-08 2008-03-19 北京科技大学 Quick target identification and positioning system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
符鸿亮. Hardware design and implementation of an automatic fingerprint recognition system based on DSP and FPGA. China Master's Theses Full-text Database, 2007, 1138-1182. *

Also Published As

Publication number Publication date
CN101673343A (en) 2010-03-17

Similar Documents

Publication Publication Date Title
CN101673343B (en) System and method for increasing signal real-time mode recognizing processing speed in DSP+FPGA frame
CN104407997B (en) With instruction dynamic dispatching function and NOT-AND flash single channel isochronous controller
CN101594299B (en) Method for queue buffer management in linked list-based switched network
CN105426160A (en) Instruction classified multi-emitting method based on SPRAC V8 instruction set
CN110619595A (en) Graph calculation optimization method based on interconnection of multiple FPGA accelerators
CN102750257B (en) On-chip multi-core shared storage controller based on access information scheduling
CN103221918A (en) Context switch method and apparatus
CN109144702A (en) One kind being used for row-column parallel calculation coarse-grained reconfigurable array multiple-objection optimization automatic mapping dispatching method
CN106569727A (en) Shared parallel data reading-writing apparatus of multi memories among multi controllers, and reading-writing method of the same
CN114239859B (en) Power consumption data prediction method and device based on transfer learning and storage medium
CN108984283A (en) A kind of adaptive dynamic pipeline parallel method
CN110175152A (en) A kind of log inquiring method, transfer server cluster and log query system
CN104778025A (en) First in first out storer circuit structure based on random access memory (RAM)
CN104050193A (en) Message generating method and data processing system for realizing method
CN102279729A (en) Method, buffer and processor for dynamic reconfigurable array to schedule configuration information
CN104156316B (en) A kind of method and system of Hadoop clusters batch processing job
CN103970714A (en) Apparatus and method for sharing function logic and reconfigurable processor thereof
CN106649067B (en) A kind of performance and energy consumption prediction technique and device
CN102195361B (en) Method for acquiring and processing data of intelligent distribution terminal of multi-core single chip
CN104598917B (en) A kind of support vector machine classifier IP kernel
CN110648356A (en) Multi-target tracking processing optimization method based on visual digital signal processing
Xin et al. Real-time algorithm for SIFT based on distributed shared memory architecture with homogeneous multi-core DSP
CN107609576A (en) Merge the template matches Parallel Implementation method and device of large form figure
CN1900731B (en) Logic module detecting system and method
CN105183628A (en) Log collecting device, recording system and method for embedded system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI KAICONG ELECTRONIC TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHANGHAI UNIVERSITY

Effective date: 20150506

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200444 BAOSHAN, SHANGHAI TO: 201914 CHONGMING, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20150506

Address after: 201914 A1-305 room, No. 58 Fumin Road, Chongming County, Shanghai, Shanghai

Patentee after: SHANGHAI KAICONG ELECTRONIC TECHNOLOGY CO., LTD.

Address before: 200444 Baoshan District Road, Shanghai, No. 99

Patentee before: Shanghai University

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160803

Address after: 202150 A1-848 room, No. 58 Fumin Road, Chongming County, Shanghai, Shanghai

Patentee after: Shanghai Kai Yi Electronic Technology Co., Ltd.

Address before: 201914 A1-305 room, No. 58 Fumin Road, Chongming County, Shanghai, Shanghai

Patentee before: SHANGHAI KAICONG ELECTRONIC TECHNOLOGY CO., LTD.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121107

Termination date: 20181015

CF01 Termination of patent right due to non-payment of annual fee