CN102542525B - Information processing equipment and information processing method

Publication number
CN102542525B
Authority
CN
China
Prior art keywords
data
buffer unit
storage unit
sends
unit
Prior art date
Legal status
Active
Application number
CN201010600263.8A
Other languages
Chinese (zh)
Other versions
CN102542525A (en)
Inventor
王佐
于辰涛
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201010600263.8A
Publication of CN102542525A
Application granted
Publication of CN102542525B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention relates to information processing equipment and an information processing method. The information processing equipment comprises a storage unit configured to store data; a first buffer unit connected with the storage unit and configured to acquire first data from the storage unit and store it temporarily; a calculation unit connected with the first buffer unit and configured to acquire the first data from the first buffer unit, process it and generate second data; a second buffer unit connected with the calculation unit and the storage unit and configured to acquire the second data from the calculation unit, store it temporarily and send it to the storage unit; and a control unit configured to perform control such that a first time period, in which the first buffer unit acquires the next first data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the calculation unit, and/or such that a second time period, in which the calculation unit acquires the next first data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.

Description

Information processing device and information processing method
Technical field
The present invention relates to an information processing device and an information processing method, and more specifically to an information processing device and an information processing method that control the data read/write process during image processing.
Background art
In recent years, as users work with terminal devices such as computers or portable devices, graphics and image processing has come to account for an ever larger share of data processing. Existing operating systems (e.g., Android, WP7) therefore usually provide standardized image processing functions or libraries (e.g., the skia library of the Android operating system) for processing image data. In this case, an application only needs to call these unified image processing functions or libraries to draw images or graphics.
However, these standardized image processing functions or libraries are not optimized for the specific structure of the hardware. Therefore, when they are used to render images or graphics, image processing is usually inefficient, hardware resources are wasted, and images or graphics often cannot be displayed smoothly to the user.
Summary of the invention
In order to overcome the above technical problems in the prior art, according to one aspect of the present invention an information processing device is provided. The information processing device comprises: a storage unit configured to store data; a first buffer unit connected with the storage unit and configured to acquire first data from the storage unit and store it temporarily; a computing unit connected with the first buffer unit and configured to acquire the first data from the first buffer unit, process it, and generate second data; a second buffer unit connected with the computing unit and the storage unit and configured to acquire the second data from the computing unit, store it temporarily, and send it to the storage unit; and a control unit configured to perform control such that a first time period, in which the first buffer unit acquires the next first data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the computing unit, and/or such that a second time period, in which the computing unit acquires the next first data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.
In addition, according to another aspect of the present invention, an information processing method is provided. The information processing method comprises: a first buffer unit acquiring first data from a storage unit and storing it temporarily; a computing unit acquiring the first data from the first buffer unit, processing it, and generating second data; and a second buffer unit acquiring the second data from the computing unit, storing it temporarily, and sending it to the storage unit; wherein a first time period, in which the first buffer unit acquires the next first data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the computing unit, and/or a second time period, in which the computing unit acquires the next first data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.
In addition, according to yet another aspect of the present invention, an information processing method is provided, comprising: a first buffer unit acquiring first data from a storage unit and storing it temporarily; a computing unit acquiring the first data from the first buffer unit, processing it, and generating second data; a second buffer unit acquiring the second data from the computing unit, storing it temporarily, and sending it to the storage unit; the first buffer unit acquiring third data from the storage unit and storing it temporarily; the computing unit acquiring the third data from the first buffer unit, processing it, and generating fourth data; and the second buffer unit acquiring the fourth data from the computing unit, storing it temporarily, and sending it to the storage unit; wherein a first time period, in which the first buffer unit acquires the third data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the computing unit, and/or a second time period, in which the computing unit acquires the third data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.
With the above configuration, the time period in which the first buffer unit acquires the next first data from the storage unit overlaps the time period in which the second buffer unit acquires the second data from the computing unit, and the time period in which the computing unit acquires the next first data from the first buffer unit overlaps the time period in which the second buffer unit sends the second data to the storage unit, so that data input and output can proceed concurrently and the time occupied by data input and output is reduced. Therefore, when image processing such as image rendering is performed, the total image data I/O time is reduced and image processing efficiency is improved, so that the speed of image rendering is increased without any change to the hardware.
Brief description of the drawings
Fig. 1 is a block diagram illustrating an information processing device 1 according to an exemplary embodiment of the present invention;
Fig. 2 is a schematic diagram of the processing performed by an information processing device of the prior art; and
Fig. 3 is a flowchart illustrating an information processing method according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Note that, in the drawings, components having substantially the same or similar structures and functions are given the same reference numerals, and repeated descriptions of them are omitted.
Fig. 1 is a block diagram illustrating an information processing device 1 according to an exemplary embodiment of the present invention. As shown in Fig. 1, the information processing device 1, such as a handheld device (e.g., a mobile phone) or a portable device (e.g., a tablet computer), comprises a memory 11, a first cache 12, a second cache 13 and a processor 14. As shown in the figure, the first cache 12 is connected with the memory 11, the processor 14 is connected with the first cache 12 and the second cache 13, and the second cache 13 is also connected with the memory 11.
According to one embodiment of the present invention, the memory 11 may be implemented by main memory (e.g., SDRAM or DDR memory) and is used to store data such as program data or application data.
The first cache 12 may be implemented by a high-speed cache and is used to acquire and temporarily store, from the memory 11, data that the processor 14 may use, that is, input data, or to acquire data from the processor 14 in order to send it to the memory 11. For example, in serial mode the first cache 12 may serve as a read/write cache, whereas in parallel mode the first cache 12 serves as a read cache (the serial and parallel modes of the information processing device 1 are described below).
The processor 14 may be implemented by any central processing unit (e.g., a Qualcomm 8250 or 8650 processor) or microprocessor, and is used to process data from the first cache 12 and to control the components of the information processing device 1 based on a preset program. As shown in Fig. 1, the processor 14 may further include a computing module 141 and a control module 142. For example, according to one embodiment of the present invention, both the computing module 141 and the control module 142 may be implemented by any central processing unit or microprocessor operating on a preset program. Here, the computing module 141 is connected with the first cache 12; it can acquire input data from the first cache 12, perform predetermined processing on the input data (e.g., image processing), and produce output data. The control module 142 can control the data I/O processes of the memory 11, the first cache 12, the computing module 141 and the second cache 13 so as to reduce the delay caused by data I/O. Therefore, when the computing module 141 performs data processing such as image processing, the speed of that processing can be improved.
The second cache 13 may also be implemented by a high-speed cache. According to an embodiment of the present invention, in parallel mode the second cache 13 serves as a write cache that accelerates data output (storage). In this case, the second cache 13 can acquire and temporarily store the output data produced by the computing module 141 (in image processing, the output image data) and send this output data to the memory 11. Note that the first cache 12 and the second cache 13 may also be realized within a single high-speed cache, by dividing that cache into two parts that play the roles of the first cache 12 and the second cache 13 respectively. Note also that the operating frequency of the first cache 12 and the second cache 13, implemented as high-speed caches, is higher than the frequency of the memory 11, so the read/write speed of the first cache 12 and the second cache 13 is greater than that of the memory 11. In addition, the first cache 12, the second cache 13 and the processor 14 may be integrated in one chip to form a central processing component, with memory such as SDRAM or DDR memory arranged outside this central processing component.
According to an embodiment of the present invention, the control module 142 controls the memory 11, the first cache 12, the computing module 141 and the second cache 13 such that the time period in which the first cache 12 acquires, from the memory 11, the input data required for the next processing at least partly overlaps the time period in which the second cache 13 acquires, from the computing module 141, the output data produced by the current processing, and such that the time period in which the computing module 141 acquires, from the first cache 12, the input data required for the next processing at least partly overlaps the time period in which the second cache 13 sends the output data to the memory 11.
Next, the processing performed by the information processing device 1 according to the embodiment of the present invention will be described.
Before describing that processing, the processing performed by a prior-art information processing device when image processing is carried out using standardized image processing functions or libraries (e.g., the skia library) will be briefly described. Fig. 2 is a schematic diagram of the processing performed by such a prior-art information processing device.
Here, when the computing module needs to process data such as image data, and the data required for the processing is in the memory, then in step (1) the data is first sent from the memory to the cache. In step (2), the computing module acquires the data required for the processing, i.e., the input data, from the cache. After the computing module has processed the input data and produced output data, in step (3) the computing module sends (stores) the produced output data to the cache. Then, in step (4), the cache sends the produced output data to the memory. Because, in the prior art, the cache must perform the operations of acquiring data from the memory, sending data to the computing module, acquiring data from the computing module and sending data to the memory, and cannot perform any other operation while it is occupied, the input and output (read/write) of data can only be realized by executing steps (1) to (4) serially.
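For illustration only, the serial prior-art flow above can be pictured as the following C++ sketch, in which each block of data goes through steps (1) to (4) strictly one after another. The type and function names (Block, load_from_memory, run_compute, store_to_memory) and the simulated transfer delays are assumptions introduced for this example; they are not identifiers from the patent or from any real library.

#include <chrono>
#include <cstdint>
#include <thread>
#include <vector>

using Block = std::vector<std::uint8_t>;

// Simulated memory-to-cache transfer, corresponding to step (1).
static Block load_from_memory(std::size_t) {
    std::this_thread::sleep_for(std::chrono::milliseconds(4));
    return Block(1024, 0);
}

// Steps (2) and (3): the computing module reads the input from the cache,
// processes it and hands the output back; modelled here as a pure function.
static Block run_compute(const Block& in) {
    Block out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = static_cast<std::uint8_t>(in[i] + 1);
    return out;
}

// Simulated cache-to-memory transfer, corresponding to step (4).
static void store_to_memory(std::size_t, const Block&) {
    std::this_thread::sleep_for(std::chrono::milliseconds(4));
}

static void process_serially(std::size_t n_blocks) {
    for (std::size_t i = 0; i < n_blocks; ++i) {
        Block in  = load_from_memory(i);   // step (1) must finish completely,
        Block out = run_compute(in);       // then steps (2)/(3) run,
        store_to_memory(i, out);           // then step (4) runs; nothing overlaps.
    }
}

int main() { process_serially(8); }

In this baseline, every iteration pays the full cost of both slow memory transfers in addition to the computation, which is the inefficiency the parallel mode described below removes.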
The data flow of the data processing performed by the information processing device 1 according to the embodiment of the present invention will now be described. According to one embodiment of the present invention, the control module 142 may, for example, based on the per-unit-time data I/O volume, or upon detecting that a function requiring heavy data I/O has been called (e.g., an image drawing function of the skia library), control the information processing device 1 to enter parallel mode; otherwise, the information processing device 1 remains in serial mode.
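A minimal sketch of this mode decision, assuming a hypothetical controller interface, might look as follows; the names Controller, io_bytes_per_second, heavy_io_call_pending and IO_THRESHOLD are illustrative assumptions and not part of the patent or of any real API.

#include <cstdint>

enum class Mode { Serial, Parallel };

struct Controller {
    std::uint64_t io_bytes_per_second = 0;  // measured per-unit-time data I/O volume
    bool heavy_io_call_pending = false;     // e.g., an image-drawing call was detected

    static constexpr std::uint64_t IO_THRESHOLD = 64ull * 1024 * 1024;  // arbitrary example value

    // Enter parallel mode when the I/O volume is high or a function known to
    // need heavy data I/O has been called; otherwise stay in serial mode.
    Mode choose_mode() const {
        return (io_bytes_per_second > IO_THRESHOLD || heavy_io_call_pending)
                   ? Mode::Parallel
                   : Mode::Serial;
    }
};

int main() {
    Controller c;
    c.heavy_io_call_pending = true;          // e.g., an image drawing function was just called
    return c.choose_mode() == Mode::Parallel ? 0 : 1;
}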
In serial mode, the control module 142 does not enable the second cache 13 (which serves as a write cache) and only enables the first cache 12, which is used as a read/write cache. Since the serial mode of the information processing device 1 is consistent with the prior-art processing described above, a detailed description of the serial mode is omitted here.
The data flow of the parallel data processing performed by the information processing device 1 according to the embodiment of the present invention is described below.
Note that, in parallel mode, if the input data required for the data processing by the computing module 141 is not present in the memory 11 (i.e., the memory 11 misses), the input data must be loaded into the memory 11 from another storage device of the information processing device 1 (e.g., a hard disk). Since this loading process is the same as the corresponding prior-art processing, its description is omitted here, and in the present embodiment only the case in which the input data required by the computing module 141 already exists in the memory 11 is described.
Unlike the prior-art data processing flow, according to the embodiment of the present invention the second cache 13 is enabled as a write cache in parallel mode to accelerate data I/O. The processing flow of the information processing device 1 in parallel mode is described below.
For example, after the computing module 141 has produced output data based on the input data, the control module 142 issues a prefetch command to the memory 11. After receiving the prefetch command issued by the control module 142, the memory 11 sends the input data required for the next processing by the computing module 141 to the first cache 12 (serving as the read cache).
While the memory 11 is sending the input data required for the next processing to the first cache 12, the control module 142 issues a store command to the computing module 141. After receiving this store command, the computing module 141 sends (stores) the output data produced by the current processing into the second cache 13 (serving as the write cache).
Here, because the operating frequency of the second cache 13 is higher than that of the memory 11, that is, the speed at which the computing module 141 writes output data into the second cache 13 is greater than the speed at which the memory 11 sends the input data for the next processing to the first cache 12, it follows that, for the same amount of data, the time period required for the computing module 141 to write the output data into the second cache 13 is shorter than the time period required for the memory 11 to send the input data for the next processing to the first cache 12. In this case, at any point in time while the memory 11 is sending the input data for the next processing to the first cache 12, the control module 142 can issue the store command to the computing module 141 so that the output data produced by the current processing is sent (stored) into the second cache 13, thereby ensuring that these two time periods overlap at least partly.
Then, once the first cache 12 has acquired the input data required for the next processing from the memory 11, the memory 11 is released, that is, data can now be written to the memory 11. Therefore, at this point the second cache 13 can send (store) the output data produced by the current processing of the computing module 141 to the memory 11.
In this case, the control module 142 can issue a store command to the second cache 13. After the second cache 13 receives the store command, the second cache 13 sends the output data produced by the current processing to the memory 11.
While the second cache 13 is sending the output data produced by the current processing to the memory 11, the control module 142 issues a read command to the first cache 12. After receiving this read command, the first cache 12 sends the input data required for the next data processing to the computing module 141.
Here, because the operating frequency of the first cache 12 is higher than that of the memory 11, that is, the speed at which the first cache 12 sends the input data for the next data processing to the computing module 141 is greater than the speed at which the second cache 13 sends the output data produced by the current processing to the memory 11, it follows that, for the same amount of data, the time period required for the first cache 12 to send the input data for the next data processing to the computing module 141 is shorter than the time period required for the second cache 13 to send the output data produced by the current processing to the memory 11. In this case, at any point in time while the second cache 13 is sending the output data of the current processing to the memory 11, the control module 142 can issue the read command to the first cache 12 so that the input data required for the next data processing is sent to the computing module 141, thereby ensuring that these two time periods overlap at least partly.
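To make the nesting of these intervals concrete, the two timing relations above can be written as a short worked inequality. The symbols t_1 to t_4 (the durations of steps (1) to (4) of the prior-art flow) and t_c (the computation time) are introduced here for illustration only and do not appear in the patent. Because the caches run faster than the memory, t_3 < t_1 (storing the output into the write cache fits inside the prefetch window) and t_2 < t_4 (feeding the next input to the computing module fits inside the write-back window), so that per processed block:

T_parallel ≈ t_c + max(t_1, t_3) + max(t_2, t_4) = t_c + t_1 + t_4 < t_c + t_1 + t_2 + t_3 + t_4 = T_serial,

that is, roughly t_2 + t_3 is saved on every block compared with the serial flow.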
With the above configuration, in parallel mode at least part of the reading and storing of data is performed concurrently, in contrast to the prior art, in which the reading and storing (I/O) of data is performed serially. For example, the information processing device 1 according to the embodiment of the present invention can overlap step (1) and step (3) described above for the prior art, and can overlap step (2) and step (4), thereby shortening the time spent reading and storing (I/O) data. For example, when the information processing device 1 performs image processing, the shortened data I/O time improves the speed and efficiency of image processing, so that the image frame rate can be increased without changing the hardware configuration of the information processing device 1 and smoother image display can be provided to the user.
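The following self-contained C++ sketch illustrates the same overlapping scheme in software, with std::async standing in for the hardware-level concurrency that the control module 142 obtains by issuing the prefetch, store and read commands. It is an analogy of the scheme under the assumptions noted in the comments, not the patented implementation itself; all identifiers and the simulated delays are invented for this example.

#include <chrono>
#include <cstdint>
#include <future>
#include <thread>
#include <vector>

using Block = std::vector<std::uint8_t>;

// Step (1): memory -> read cache (slow, memory-bound).
static Block memory_to_read_cache(std::size_t) {
    std::this_thread::sleep_for(std::chrono::milliseconds(4));
    return Block(1024, 0);
}
// Step (2): read cache -> computing module (fast, cache-bound).
static Block read_cache_to_compute(Block b) {
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    return b;
}
// Processing in the computing module.
static Block compute(const Block& in) {
    Block out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = static_cast<std::uint8_t>(in[i] + 1);
    return out;
}
// Step (3): computing module -> write cache (fast, cache-bound).
static Block compute_to_write_cache(Block b) {
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    return b;
}
// Step (4): write cache -> memory (slow, memory-bound).
static void write_cache_to_memory(std::size_t, const Block&) {
    std::this_thread::sleep_for(std::chrono::milliseconds(4));
}

static void process_pipelined(std::size_t n_blocks) {
    if (n_blocks == 0) return;
    Block in = read_cache_to_compute(memory_to_read_cache(0));   // block 0 enters serially

    for (std::size_t i = 0; i < n_blocks; ++i) {
        Block out = compute(in);                                 // output data produced

        std::future<Block> prefetch;                             // prefetch command: step (1) for block i+1
        if (i + 1 < n_blocks)
            prefetch = std::async(std::launch::async, memory_to_read_cache, i + 1);

        Block staged = compute_to_write_cache(out);              // store command: step (3), overlaps step (1)

        if (prefetch.valid()) {
            Block next = prefetch.get();                         // read cache filled; memory is released
            auto write_back = std::async(std::launch::async,     // store command to write cache: step (4)
                                         write_cache_to_memory, i, staged);
            in = read_cache_to_compute(std::move(next));         // read command: step (2), overlaps step (4)
            write_back.get();
        } else {
            write_cache_to_memory(i, staged);                    // last block: nothing left to overlap with
        }
    }
}

int main() { process_pipelined(8); }

Under the simulated delays above, each iteration costs roughly the two slow memory transfers plus the computation, rather than the sum of all four transfers, mirroring the saving described for the parallel mode.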
The operation performed by the information processing device 1 according to the embodiment of the present invention has been described above; however, the invention is not limited to this. For example, according to another embodiment of the present invention, only step (1) and step (3) may be overlapped, or only step (2) and step (4) may be overlapped, and this likewise shortens the time spent reading and storing (I/O) data.
Furthermore, the invention is not limited to this: the control module 142 need not switch between serial and parallel modes based on the per-unit-time data I/O volume or on functions requiring heavy data I/O. In other words, the control module 142 may omit the above judgment, and the information processing device 1 may always operate in the above parallel mode.
Next, an information processing method according to an embodiment of the present invention is described with reference to Fig. 3. Here, the control module 142 of the information processing device 1 may switch between serial and parallel modes based on the data I/O volume or on functions requiring heavy data I/O. Since the serial mode of the information processing device 1 is similar to the corresponding prior-art processing, the serial mode is not described in detail.
Fig. 3 is a flowchart illustrating the information processing method (parallel mode) according to an embodiment of the present invention.
As shown in Fig. 3, at step S301, the computing module 141 produces output data based on the input data from the first cache 12.
At step S302, the control module 142 issues a prefetch command to the memory 11.
At step S303, after receiving the prefetch command, the memory 11 sends the input data required for the next processing by the computing module 141 to the first cache 12.
While the memory 11 is sending the input data required for the next processing to the first cache 12, at step S304 the control module 142 issues a store command to the computing module 141.
At step S305, after receiving the store command, the computing module 141 sends (stores) the output data produced by the current processing into the second cache 13.
Then, after the first cache 12 has acquired the input data required for the next processing from the memory 11, at step S306 the control module 142 issues a store command to the second cache 13.
At step S307, after the second cache 13 receives the store command, the second cache 13 sends the output data produced by the current processing to the memory 11.
While the second cache 13 is sending the output data produced by the current processing to the memory 11, at step S308 the control module 142 issues a read command to the first cache 12.
At step S309, after receiving the read command, the first cache 12 sends the input data required for the next data processing to the computing module 141.
Then, at step S310, it is judged whether the processing has ended. If the processing has not yet ended, the flow returns to step S301.
The information processing method shown in Fig. 3 has been described above in a sequential manner; however, the invention is not limited to this. As long as the desired result can be obtained, the above processing may be performed in an order different from that described above (e.g., with some steps exchanged). In addition, some of the steps may also be performed in parallel.
Moreover, the invention is not limited to this: according to another embodiment of the present invention, the information processing method shown in Fig. 3 may omit the parallel processing of steps S302 to S305 or of steps S306 to S309, that is, the processing of steps S302 to S305 or of steps S306 to S309 may be performed in serial mode.
In addition, in the case where the memory 11 stores first data and third data (which are logically consecutive and need to be loaded in sequence), the method shown in Fig. 3 may also be adapted as follows: the first cache 12 acquires the first data from the memory 11 and stores it temporarily; the computing module 141 acquires the first data from the first cache 12, processes it, and generates second data; the second cache 13 acquires the second data from the computing module 141, stores it temporarily, and sends it to the memory 11; the first cache 12 acquires the third data from the memory 11 and stores it temporarily; the computing module 141 acquires the third data from the first cache 12, processes it, and generates fourth data; and the second cache 13 acquires the fourth data from the computing module 141, stores it temporarily, and sends it to the memory 11. During the above processing, the control module 142 performs control such that the first time period, in which the first cache 12 acquires the third data from the memory 11, at least partly overlaps the third time period, in which the second cache 13 acquires the second data from the computing module 141, and/or such that the second time period, in which the computing module 141 acquires the third data from the first cache 12, at least partly overlaps the fourth time period, in which the second cache 13 sends the second data to the memory 11.
A number of embodiments of the present invention have been described above. Note that the embodiments of the present invention can be implemented entirely in hardware, entirely in software, or in a combination of hardware and software. For example, in some embodiments, embodiments of the present invention may be implemented by installing software in a computer system, including (but not limited to) firmware, embedded software, microcode, and the like. In addition, the present invention may take the form of a computer program product, stored on a computer-readable medium, that can be used by a computer or any instruction execution system to carry out the processing method according to the embodiment of the present invention. Examples of computer-readable media include semiconductor or solid-state memory, magnetic tape, removable computer diskettes, random access memory (RAM), read-only memory (ROM), hard disks and optical disks. For example, according to one embodiment of the present invention, when the information processing device 1 uses the skia library provided by the Android operating system to perform image processing, the order in which the skia library reads/stores data can be modified to realize the information processing method shown in Fig. 3, and the processor 14 of the information processing device 1 (e.g., a central processing unit) can realize the function of the control module 142 based on the modified program. The invention is not limited to this, however; the present invention can also be applied to image processing functions or libraries provided in other operating systems or graphics programs to accelerate image display. Furthermore, since the information processing device and information processing method according to the embodiments of the present invention can shorten the time spent reading and storing (I/O) data, the present invention can also be applied to other data processing that involves reading and storing data, in order to improve the efficiency of that data processing.
As described above, the embodiments of the present invention have been described in detail, but the invention is not limited to them. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made depending on design requirements and other factors, and that they fall within the scope of the appended claims and their equivalents.

Claims (7)

1. An information processing device, comprising:
a storage unit configured to store data;
a first buffer unit connected with the storage unit and configured to acquire first data from the storage unit and store it temporarily;
a computing unit connected with the first buffer unit and configured to acquire the first data from the first buffer unit, process it, and generate second data;
a second buffer unit connected with the computing unit and the storage unit and configured to acquire the second data from the computing unit, store it temporarily, and send it to the storage unit, wherein the first buffer unit is different from the second buffer unit; and
a control unit configured to perform control such that a first time period, in which the first buffer unit acquires the next first data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the computing unit, and/or such that a second time period, in which the computing unit acquires the next first data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.
2. The information processing device as claimed in claim 1, wherein
the operating frequencies of the first buffer unit and the second buffer unit are greater than the operating frequency of the storage unit.
3. The information processing device as claimed in claim 1, wherein
after the computing unit generates the second data, the control unit sends a first instruction to the storage unit to control the storage unit to send the next first data to the first buffer unit; and
while the storage unit is sending the next first data to the first buffer unit, the control unit sends a second instruction to the computing unit to control the computing unit to send the second data to the second buffer unit.
4. The information processing device as claimed in claim 3, wherein
after the first buffer unit acquires the next first data, the control unit sends a third instruction to the second buffer unit to control the second buffer unit to send the second data to the storage unit; and
while the second buffer unit is sending the second data to the storage unit, the control unit sends a fourth instruction to the first buffer unit to control the first buffer unit to send the next first data to the computing unit.
5. An information processing method, comprising:
a first buffer unit acquiring first data from a storage unit and storing it temporarily;
a computing unit acquiring the first data from the first buffer unit, processing it, and generating second data; and
a second buffer unit acquiring the second data from the computing unit, storing it temporarily, and sending it to the storage unit;
wherein a first time period, in which the first buffer unit acquires the next first data from the storage unit, at least partly overlaps a third time period, in which the second buffer unit acquires the second data from the computing unit, and/or a second time period, in which the computing unit acquires the next first data from the first buffer unit, at least partly overlaps a fourth time period, in which the second buffer unit sends the second data to the storage unit.
6. The information processing method as claimed in claim 5, wherein
after the computing unit processes the first data and generates the second data, a control unit sends a first instruction to the storage unit to control the storage unit to send the next first data to the first buffer unit; and
while the storage unit is sending the next first data to the first buffer unit, the control unit sends a second instruction to the computing unit to control the computing unit to send the second data to the second buffer unit.
7. The information processing method as claimed in claim 6, wherein
after the first buffer unit acquires the next first data, the control unit sends a third instruction to the second buffer unit to control the second buffer unit to send the second data to the storage unit; and
while the second buffer unit is sending the second data to the storage unit, the control unit sends a fourth instruction to the first buffer unit to control the first buffer unit to send the next first data to the computing unit.
CN201010600263.8A 2010-12-13 2010-12-13 Information processing equipment and information processing method Active CN102542525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010600263.8A CN102542525B (en) 2010-12-13 2010-12-13 Information processing equipment and information processing method

Publications (2)

Publication Number Publication Date
CN102542525A CN102542525A (en) 2012-07-04
CN102542525B (en) 2014-02-12

Family

ID=46349351

Country Status (1)

Country Link
CN (1) CN102542525B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107065900B (en) * 2017-01-17 2020-04-28 清华大学 Unmanned aerial vehicle flight control parameter updating system
JP7052518B2 (en) * 2018-04-17 2022-04-12 カシオ計算機株式会社 Programs, information processing methods and information terminals
CN110188067B (en) * 2019-07-15 2023-04-25 北京一流科技有限公司 Coprocessor and data processing acceleration method thereof
CN113495669B (en) * 2020-03-19 2023-07-18 华为技术有限公司 Decompression device, accelerator and method for decompression device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100341014C (en) * 2000-10-19 2007-10-03 英特拉克蒂克控股公司 Scaleable interconnect structure for parallel computing and parallel memory access
JP3840966B2 (en) * 2001-12-12 2006-11-01 ソニー株式会社 Image processing apparatus and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1094525A (en) * 1993-04-19 1994-11-02 电子科技大学 A kind of high-capacity and high-speed data acquisition caching method and equipment
CN101120325A (en) * 2005-02-15 2008-02-06 皇家飞利浦电子股份有限公司 Enhancing performance of a memory unit of a data processing device by separating reading and fetching functionalities

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP特表2004-531783A 2004.10.14

Also Published As

Publication number Publication date
CN102542525A (en) 2012-07-04

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant