CN109039804A - A kind of file reading and electronic equipment - Google Patents
A kind of file reading and electronic equipment
- Publication number: CN109039804A
- Application number: CN201810763356.9A
- Authority: CN (China)
- Prior art keywords: page, thread, read, file, multipage
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0852—Delays
- H04L43/087—Jitter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
Landscapes
- Engineering & Computer Science (AREA)
- Environmental & Geological Engineering (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure provides a file reading method and an electronic device. A paging technique allows any record of a large file to be loaded into memory, so loading no longer fails because of the machine's own resource limits. A sliding-window mechanism lets each page load start immediately from the line following the file's current line number, reducing the time spent locating a line in the file. During loading, each thread uses a thread-local second-level cache: when the page is switched, a fast thread first reads from its own second-level cache instead of waiting for the global buffer to load, avoiding a steep drop in per-thread stress-test traffic at page switches. Pages are also loaded asynchronously, removing unnecessary waits during loading, so the stress-test traffic (QPS) stays stable both while pages load and when the page is switched.
Description
Technical field
The present invention relates to the field of performance stress testing, and in particular to a file reading method and an electronic device.
Background
In network technology, performance stress tests are often driven by sequential parameterization: while the request set is constructed and load is applied, values from a parameter source are dynamically substituted for designated values. When file records are read, each stress-test thread reads the parameterization file in order. Sequential stress testing is highly systematic and can replace random file reads; sequential parameterization also suits special scenarios such as data generation, and differences in the test data and in how the data are applied let the same test process serve different real-world scenarios. Traditional sequential parameterization targets small files and reads the whole file into memory at once. In real performance testing, however, the file size is unforeseeable. Reading everything into memory at once is simple and reliable, but it forces the test platform to limit file size; moreover, while a page is being loaded or switched during the stress test, threads must wait, so the traffic (QPS) is unstable during page loads and switches.
Summary of the invention
In view of the above problems, the invention discloses a file reading method and an electronic device, so that a stress-test process that reads a file sequentially meets the following demands: the file size is unrestricted, the traffic (QPS) stays stable during page loading and switching, the reading speed is fast, and memory consumption is low.
The present invention provides a file reading method, comprising:
S1: dividing the file into multiple pages;
S2: reading the multiple pages sequentially using at least one thread, wherein when a thread has finished reading page n-1 from a global buffer, it reads the content of page n from a second-level cache while the content of page n is being loaded into the global buffer, n being an integer greater than 1;
S3: after the content of page n has been loaded into the global buffer, the thread reads the remaining content of page n from the global buffer, the remaining content being the content of page n not already read from the second-level cache.
Optionally, before step S1 the method further comprises:
S0: judging the size of the file; if the file's line count is less than or equal to a threshold, reading the data directly; if the file's line count is greater than the threshold, executing step S1.
Optionally, in step S2, reading the multiple pages sequentially using at least one thread comprises:
reading each page of the multiple pages row by row using at least one thread, wherein a sliding window is used to locate the rows of each page.
Optionally, in step S2, reading the multiple pages sequentially using at least one thread comprises:
reading the multiple pages asynchronously using multiple threads.
Optionally, in step S2, reading the content of page n from the second-level cache comprises:
each thread holding its own second-level cache.
The present invention also provides an electronic device, comprising:
a processor; and
a memory storing a computer-executable program which, when executed by the processor, causes the processor to execute:
S1: dividing the file into multiple pages;
S2: reading the multiple pages sequentially using at least one thread, wherein when a thread has finished reading page n-1 from a global buffer, it reads the content of page n from a second-level cache while the content of page n is being loaded into the global buffer, n being an integer greater than 1;
S3: after the content of page n has been loaded into the global buffer, the thread reads the remaining content of page n from the global buffer, the remaining content being the content of page n not already read from the second-level cache.
Optionally, before executing the paging technique, the processor also executes:
S0: judging the size of the file; if the file's line count is less than or equal to a threshold, reading the data directly; if the file's line count is greater than the threshold, executing step S1.
Optionally, when the processor reads the multiple pages sequentially, it also executes:
reading each page of the multiple pages row by row using at least one thread, wherein a sliding window is used to locate the rows of each page.
Optionally, when the processor reads the multiple pages sequentially, it also executes:
reading the multiple pages asynchronously using multiple threads.
Optionally, the memory is arranged such that:
each thread holds its own second-level cache.
Another aspect of the present disclosure provides a computer-readable medium storing computer-executable instructions which, when executed, implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the method described above.
The present invention mainly solves the limitation on large files when a stress test reads files sequentially, and the problem of traffic (QPS) jitter during the stress test. With the present invention, file size is no longer a bottleneck, QPS jitter during the stress test is smaller, and the method applies well to scenarios on a stress-test platform that require sequential reading of data.
Brief description of the drawings
For a fuller understanding of the disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically shows a flow chart of reading a parameterization file sequentially according to the disclosure.
Fig. 2 schematically shows the operating steps of reading a parameterization file sequentially according to the disclosure.
Fig. 3 schematically shows a flow chart of the second-level cache according to an embodiment of the disclosure.
Fig. 4 schematically shows a block diagram of an electronic device according to the disclosure.
Specific embodiments
Hereinafter, embodiments of the disclosure will be described with reference to the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the disclosure. In the following detailed description, many specific details are set forth to provide a comprehensive understanding of the embodiments of the disclosure; it will be evident, however, that one or more embodiments may be carried out without these specific details. In addition, in the following description, descriptions of well-known structures and technologies are omitted to avoid unnecessarily obscuring the concepts of the disclosure.
The terms used herein are only for describing specific embodiments and are not intended to limit the disclosure. The terms "include", "comprise", and the like indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art. Terms used herein should be interpreted consistently with the context of this specification, and not in an idealized or overly rigid way.
Some block diagrams and/or flow charts are shown in the drawings. It should be understood that some blocks of the block diagrams and/or flow charts, or combinations thereof, can be realized by computer program instructions. These computer program instructions can be supplied to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that the instructions, when executed by the processor, create means for realizing the functions/operations illustrated in these block diagrams and/or flow charts.
Embodiments of the disclosure provide a method and an electronic device for reading a parameterization file sequentially. A paging technique allows any record of a large file to be loaded into memory, so reading large files is no longer limited. A sliding-window mechanism lets each page load start from the line following the file's current line number, reducing the time spent locating a line and accelerating the sequential reading of the file. During loading, each thread uses a thread-local second-level cache: when the page is switched, a fast thread first reads from its own second-level cache, which slows the fast thread down just enough for the slow threads to catch up. This avoids the defect where every page switch by a fast thread forces a slow thread to restart its read from the beginning of the file and fall ever further behind, so the stress-test traffic (QPS) stays stable while a page loads. In addition, the invention loads pages ahead of time asynchronously, so page loads involve no waiting, further reducing the large QPS swings caused by file loading during the stress test.
Fig. 1 schematically shows the flow of a sequentially parameterized performance stress test. As shown in Fig. 1, the file to be loaded is first classified:
If the file is small, with a line count less than or equal to 100,000 rows, the whole file is read into memory and then read directly in order; each read increments the record number by 1, until the file is exhausted.
If the file is large, with a record line count greater than 100,000 rows, then while the current record number is less than the page line count, data are read directly from the global buffer. When the current record number reaches or exceeds the page line count, asynchronous loading is used. Each thread holds a second-level cache of 100 records: when a record cannot be read from the large process-shared page, the thread quickly loads 100 records from the file rather than 100,000 rows, so a stress-test thread that reads the file slowly does not need to load a page and can gradually catch up with the threads that read pages faster, keeping QPS jitter small during the stress test. A fast thread first reads the second-level cache in its own memory: if the cache contains the designated record, the record is read directly from the cache; if not, the thread synchronously loads a second-level cache page into its thread memory, updates the data in its thread memory, reads the record from the second-level cache, and updates the cache's BufferedReader and index. After a read, the record number is incremented by 1; when the record number exceeds the file's record line count, the record number is reset and reading ends.
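The small-file/large-file dispatch and the wrap-around record counter described above can be sketched as follows. This is an illustrative reduction, not the patented implementation; the class and method names and the helper structure are assumptions, while the 100,000-row threshold comes from the description.

```java
// Hypothetical sketch of the Fig. 1 dispatch: files at or under the
// 100,000-row threshold are read whole; larger files go through paging.
public class ReadDispatch {
    static final int THRESHOLD_LINES = 100_000; // threshold from the description

    /** Returns true when the file should be paged rather than fully loaded. */
    public static boolean needsPaging(int lineCount) {
        return lineCount > THRESHOLD_LINES;
    }

    /** Advances the record number, resetting to 0 past the last record. */
    public static int nextRecordIndex(int current, int totalRecords) {
        int next = current + 1;
        return next >= totalRecords ? 0 : next;
    }
}
```

A small file is handled entirely in memory, so only the counter logic matters; a large file additionally goes through the paging path described in the following steps.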
Fig. 2 shows the operating steps of the sequentially parameterized performance stress test. As shown in Fig. 2, the steps specifically include:
S1: dividing the file into multiple pages.
First the size of the file is judged, distinguished by line count: a file with at most 100,000 rows of records is a small file. The distinguishing method reads one page of data into memory at a time; if the end of file is reached within that page, the file is marked as small.
For a small file, the load method reads all records into process memory on the first call, so each subsequent read is fast. Each thread reads its corresponding record according to its own recordIndex and wraps around when the end of file is reached. The program executes as follows:
private volatile ThreadLocal<Map<String,Object>> localDataThreadHolder;
To guarantee that per-thread statistics stay independent, each thread records its current read position in a ThreadLocal; in the map, key = Accumulator and value = Integer.
Integer iAccumulator = fileRecordAssist.getThredAccumulator() == null ? 0 : fileRecordAssist.getThredAccumulator();
fileRecordAssist.setThredAccumulator(iAccumulator + 1);
When the end of the file is reached, the counter is reset to 0.
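A self-contained sketch of this per-thread counter, under the assumption that the map key is the string "Accumulator" as stated above; the class name and static helpers are illustrative:

```java
// Each stress-test thread keeps its own read position in a ThreadLocal map,
// so counters never contend or interfere across threads.
import java.util.HashMap;
import java.util.Map;

public class ThreadAccumulator {
    private static final ThreadLocal<Map<String, Object>> localDataThreadHolder =
            ThreadLocal.withInitial(HashMap::new);

    /** Returns this thread's current record index, defaulting to 0. */
    public static int get() {
        Integer v = (Integer) localDataThreadHolder.get().get("Accumulator");
        return v == null ? 0 : v;
    }

    /** Advances this thread's record index, wrapping to 0 at end of file. */
    public static int advance(int totalRecords) {
        int next = get() + 1;
        if (next >= totalRecords) next = 0; // end of file: reset to first record
        localDataThreadHolder.get().put("Accumulator", next);
        return next;
    }
}
```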
A file whose line count is greater than 100,000 rows of records is a large file, and large files are paged. The paging scheme is as follows:
private volatile Map<String, Object> mData = null;
The map caches three kinds of data for every page:
key = dataContainer, value = List: the data cached for the current page;
key = dataBeginIndex, value = Integer: the line number, within the file, of the current page's first row, used for cyclic page loading;
key = bufferReader, value = BufferedReader: a file pointer storing the position (fseek) of the line following the current page.
Together, these three items completely abstract one page of data.
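The three-field page abstraction listed above can be sketched as follows. The map keys follow the text; the builder and lookup helpers are illustrative assumptions:

```java
// A page is a Map<String,Object> holding the cached records, the file line
// number of the page's first row, and the reader positioned after the page.
import java.io.BufferedReader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PageData {
    /** Builds the Map<String,Object> page described in the text. */
    public static Map<String, Object> makePage(List<String> records,
                                               int beginIndex,
                                               BufferedReader reader) {
        Map<String, Object> page = new HashMap<>();
        page.put("dataContainer", records);     // records cached for this page
        page.put("dataBeginIndex", beginIndex); // line number of the first row
        page.put("bufferReader", reader);       // pointer to the next line
        return page;
    }

    /** True when a record index falls inside the given page. */
    public static boolean contains(Map<String, Object> page, int recordIndex) {
        int begin = (Integer) page.get("dataBeginIndex");
        int size = ((List<?>) page.get("dataContainer")).size();
        return recordIndex >= begin && recordIndex < begin + size;
    }
}
```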
Through this paging technique, the reading of large files is unrestricted, and the stress-test data better match the real process.
S2: reading the multiple pages sequentially using at least one thread, wherein when a thread has finished reading page n-1 from the global buffer, it reads the content of page n from the second-level cache while the content of page n is being loaded into the global buffer, n being an integer greater than 1.
Pages are loaded using a sliding window. The sliding window lets a page load proceed immediately from the file's next line, reducing the time spent locating a line number. It mainly prevents the test platform from reopening a BufferedReader for every page it loads from a large file: the BufferedReader itself already stores the position of the line following the current page. The sliding window therefore reduces read time and eliminates unnecessary waiting, making the reading process faster and more efficient.
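The sliding-window idea above can be sketched as a reader that is kept open and advanced page by page, so loading page n never reopens the file or rescans from line 1. This is an illustrative sketch (the class name is an assumption); it reads from an in-memory string for brevity, but the same loader works over a file reader:

```java
// One BufferedReader persists across pages; each nextPage() call continues
// from the line following the previous page, as the sliding window requires.
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SlidingWindowLoader {
    private final BufferedReader reader; // persists across pages

    public SlidingWindowLoader(BufferedReader reader) {
        this.reader = reader;
    }

    /** Reads the next page of up to pageSize lines from the current position. */
    public List<String> nextPage(int pageSize) {
        List<String> page = new ArrayList<>();
        try {
            String line;
            while (page.size() < pageSize && (line = reader.readLine()) != null) {
                page.add(line);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return page;
    }
}
```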
S3: after the content of page n has been loaded into the global buffer, the thread reads the remaining content of page n from the global buffer, the remaining content being the content of page n not already read from the second-level cache.
A global buffer is first established; its data are open and shared to all threads. It is created as follows:
private volatile Map<String, Object> mData = null;
All stress-test threads of the process share mData, in which:
key = dataContainer, value = List<String>: caches 100,000 rows of records;
key = dataBeginIndex, value = Integer: marks the starting index of the current page of the large file;
key = bufferReader, value = BufferedReader: the file pointer. These data form the data object abstracted by the sliding window.
A second-level cache is then established. For a paged file, all stress-test threads in the process share the global buffer for each page, and the threads read the file at different speeds. Without the second-level cache, even with thread grouping configured, thread QPS would certainly drop during the stress test because of file loading, failing the QPS stability requirement. The second-level cache addresses this: each thread holds a small cache of 100 records, and when data cannot be read from the large process-shared page, the thread quickly loads 100 records from the file rather than 100,000 rows. A stress-test thread that reads the file slowly does not need to load a page and can gradually catch up with the threads that read pages faster. With the second-level cache technique there is no need for thread grouping, fast threads do not wait for slow ones, and no thread incurs unnecessary stalls, so QPS jitter during the stress test is smaller. The second-level cache is established as follows:
private volatile ThreadLocal<Map<String,Object>> localDataThreadHolder;
localDataThreadHolder is a thread variable (ThreadLocal) storing the following key data:
key = localeBufferedReader, value = BufferedReader;
key = localeContainer, value = List<String>: the container for the records of the thread's second-level cache;
key = localeBeginIndex, value = Integer: the starting page index of the thread's second-level cache, guaranteeing the sliding-window operation. These form the data object abstracted by the second-level cache.
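The thread-local second-level cache described above can be sketched as follows. Each instance would be held per thread (e.g. via the ThreadLocal shown earlier); the class name, the record-source abstraction, and the capacity parameter are illustrative assumptions, while the 100-record size comes from the text:

```java
// A small per-thread buffer: reads are served from it when possible; on a
// miss, the thread refills only its own buffer (capacity records) instead of
// loading a whole 100,000-row page.
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class SecondLevelCache {
    private final int capacity;                 // 100 in the description
    private final IntFunction<String> source;   // reads one record by index
    private List<String> buffer = new ArrayList<>();
    private int beginIndex = -1;                // file index of buffer.get(0)

    public SecondLevelCache(int capacity, IntFunction<String> source) {
        this.capacity = capacity;
        this.source = source;
    }

    /** Returns the record, refilling this thread's buffer on a miss. */
    public String read(int recordIndex) {
        boolean hit = beginIndex >= 0
                && recordIndex >= beginIndex
                && recordIndex < beginIndex + buffer.size();
        if (!hit) { // miss: load only `capacity` records, not the whole page
            buffer = new ArrayList<>();
            beginIndex = recordIndex;
            for (int i = 0; i < capacity; i++) {
                buffer.add(source.apply(recordIndex + i));
            }
        }
        return buffer.get(recordIndex - beginIndex);
    }
}
```

A refill touches only `capacity` records, which is why a slow thread never has to wait for, or trigger, a full page load.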
Fig. 3 schematically shows the flow of the second-level cache. As can be seen from Fig. 3, the process is as follows:
Assume that at moment 1 the stress-test process has several threads, and threads 1, 2, ..., n all read data of page n-1 from the globally shared memory of the current process. Thread 1 reads fastest and has reached the end of page n-1; thread 2 is slightly slower, with one batch of data still unread; thread n is slowest. At this moment all the threads still read data from the global buffer.
Next, thread 1 finds that there are no unread data left in page n-1 and asynchronously triggers the loading of the next page. To prevent the page switch from being triggered repeatedly when the other threads also finish, a double-check is used. After thread 1 actively triggers the switch, loading the 100,000 rows takes some time, yet thread 1 must still return the data it needs; this is where the core of sequential parameterization for large files, the second-level cache, comes in. Loading the second-level cache synchronously raises a further problem: the sliding-window technique must again be used so that, however large the file, a second-level cache load does not start from the file's first row each time. While page n is loading, thread 1 reads from its local cache on every access. Meanwhile the records of page n-1 have not yet been exhausted by thread 2 and thread n, which still read from the global buffer. The fast thread 1 thus slows down a little, and thread 2 and thread n slowly catch up with it.
At moment 3, page n has finished loading and can be swapped in, and the page-loading flag is cleared so that subsequent page loads can proceed normally. Thread 1 subsequently reads from the global buffer again; thread 2 can at this moment also read from page n; but the slowest thread n finds that page n-1 is gone, so thread n likewise loads its own local cache, finishes reading the data of page n-1 from that local cache, and then continues reading data from the global buffer.
The second-level cache thus neatly solves the QPS drop caused by page switching when the stress-test platform runs a sequentially parameterized large-file test, keeping the threads in step on one horizontal line as far as possible.
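The double-checked, asynchronous page switch from the Fig. 3 walkthrough can be sketched as follows. This is an illustrative reduction under several assumptions: the flag is modeled with an AtomicBoolean, the "load" is a counter increment standing in for loading 100,000 rows, and the async task is joined immediately only so the sketch is deterministic (the text describes the load continuing while threads read their local caches):

```java
// First thread to exhaust page n-1 triggers the async load of page n; the
// flag is re-checked atomically so concurrent finishers do not load twice.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class PageSwitcher {
    private final AtomicBoolean loading = new AtomicBoolean(false);
    private final AtomicInteger loadsStarted = new AtomicInteger();
    private volatile int currentPage = 0;

    /** Called by a thread that has exhausted the current page. */
    public void requestNextPage() {
        if (loading.get()) return;                 // first check, no lock
        if (!loading.compareAndSet(false, true)) { // second check, atomic
            return;                                // another thread won the race
        }
        loadsStarted.incrementAndGet();
        CompletableFuture.runAsync(() -> {
            currentPage++;      // stand-in for loading 100,000 rows of page n
            loading.set(false); // clear the "page loading" flag
        }).join(); // joined here only to keep the sketch deterministic
    }

    public int loads() { return loadsStarted.get(); }
    public int page()  { return currentPage; }
}
```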
In conclusion, the invention uses a paging technique to segment a large file into multiple pages that are read page by page during the stress test, so file size is no longer a limitation in sequential stress testing. A sliding window, a second-level cache and asynchronous loading are used while loading and switching pages, reducing positioning work and waiting during the stress test, so reads are fast, memory occupation is small, and traffic (QPS) jitter is small.
Fig. 4 schematically shows a block diagram of a computer system adapted to carry out the disclosed method according to an embodiment of the disclosure. The computer system shown in Fig. 4 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the disclosure.
As shown in Fig. 4, the electronic device 400 includes a processor 410 and a computer-readable storage medium 420. The electronic device 400 can execute the method according to the embodiments of the disclosure.
Specifically, the processor 410 may include, for example, a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)). The processor 410 may also include onboard memory for caching. The processor 410 may be a single processing unit or multiple processing units for executing the different actions of the method flow according to the embodiments of the disclosure.
The computer-readable storage medium 420 may be, for example, any medium that can contain, store, communicate, propagate or transmit instructions. For example, the readable storage medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as compact discs (CD-ROM); memories, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 420 may contain a computer program 421, which may include code/computer-executable instructions that, when executed by the processor 410, cause the processor 410 to perform the method according to the embodiments of the disclosure or any variant thereof.
The computer program 421 may be configured to have computer program code including, for example, computer program modules. For example, in an exemplary embodiment, the code in the computer program 421 may include one or more program modules, for example module 421A, module 421B, and so on. It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules, or combinations of program modules, according to the actual situation. When these combinations of program modules are executed by the processor 410, the processor 410 performs the method according to the embodiments of the disclosure or any variant thereof.
According to embodiments of the disclosure, the computer-readable storage medium may be, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. In the disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wired, optical cable, radio-frequency signals, etc., or any suitable combination of the above.
The flow charts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the disclosure. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or part of the code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should further be noted that each box in a block diagram or flow chart, and combinations of boxes, can be realized by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Those skilled in the art will understand that the features recorded in the embodiments and/or claims of the disclosure may be combined in multiple ways, even if such combinations are not expressly recorded in the disclosure. In particular, without departing from the spirit or teaching of the disclosure, the features recorded in the embodiments and/or claims may be combined in multiple ways; all such combinations fall within the scope of the disclosure.
Although the disclosure has been shown and described with reference to certain exemplary embodiments, those skilled in the art should understand that various changes in form and detail may be made without departing from the spirit and scope of the disclosure as defined by the following claims and their equivalents. The scope of the disclosure should therefore not be limited to the above embodiments, but should be determined not only by the appended claims but also by their equivalents.
Claims (10)
1. A file reading method, comprising:
S1: dividing the file into multiple pages;
S2: reading the multiple pages sequentially using at least one thread, wherein when the thread has finished reading page n-1 from a global buffer, it reads the content of page n from a second-level cache while the content of page n is loaded into the global buffer, n being an integer greater than 1;
S3: after the content of page n has been loaded into the global buffer, the thread reads the remaining content of page n from the global buffer, wherein the remaining content refers to the content of page n not read from the second-level cache.
2. The file reading method according to claim 1, wherein before S1 the method further comprises:
S0: judging the size of the file; if the file's line count is less than or equal to a threshold, directly reading the data; if the file's line count is greater than the threshold, executing step S1.
3. The file reading method according to claim 1, wherein in step S2, reading the multiple pages sequentially using at least one thread comprises:
reading each page of the multiple pages row by row using at least one thread, wherein a sliding window is used to locate the rows of each page.
4. The file reading method according to claim 1, wherein in step S2, reading the multiple pages sequentially using at least one thread comprises:
reading the multiple pages asynchronously using multiple threads.
5. The file reading method according to claim 1, wherein in step S2, reading the content of page n from the second-level cache comprises:
each thread comprising its own second-level cache.
6. a kind of electronic equipment, comprising:
Processor;
Memory is stored with computer executable program, and the program by the processor when being executed, so that the processor
It executes:
This document is divided into multipage by S1;
S2 reads the multipage using at least one thread order, wherein the thread has read page (n-1)th from global buffer
When, the content of nth page is read from L2 cache, meanwhile, the content of nth page is loaded in the global buffer, n is greater than 1
Integer;
S3, after the content of nth page is loaded in the global buffer, the thread reads the n from the global buffer
Remaining content in page, wherein the remaining content refers to the content not read from L2 cache in nth page.
7. electronic equipment according to claim 6, wherein the processor also executes before executing paging technique:
S0 judges the size of the file, if this document line number is less than or equal to threshold value, data is directly read, if this document
When line number is greater than threshold value, S1 is thened follow the steps.
8. electronic equipment according to claim 6 also executes wherein processor sequence reads multipage:
The every page of the multipage is read by row using at least one thread, wherein the row using sliding window to every page carries out
Positioning.
9. The electronic equipment according to claim 8, wherein when reading the multiple pages sequentially, the processor further executes:
reading the multiple pages asynchronously using multiple threads.
10. The electronic equipment according to claim 6, wherein the memory comprises:
an L2 cache for each thread.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810763356.9A CN109039804B (en) | 2018-07-12 | 2018-07-12 | File reading method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109039804A true CN109039804A (en) | 2018-12-18 |
CN109039804B CN109039804B (en) | 2020-08-25 |
Family
ID=64640938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810763356.9A Active CN109039804B (en) | 2018-07-12 | 2018-07-12 | File reading method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109039804B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109783454A (en) * | 2019-01-23 | 2019-05-21 | 成都易海通科技有限公司 | A kind of super large text file comparison method |
CN110990079A (en) * | 2019-12-02 | 2020-04-10 | 北京大学 | Method and device for loading remote csv file |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09198296A (en) * | 1996-01-16 | 1997-07-31 | Matsushita Graphic Commun Syst Inc | Image information processing system |
CN101059800A (en) * | 2006-04-21 | 2007-10-24 | 上海晨兴电子科技有限公司 | Method and apparatus for displaying electronic book on mobile phone |
CN101452465A (en) * | 2007-12-05 | 2009-06-10 | 高德软件有限公司 | Mass file data storing and reading method |
CN102073743A (en) * | 2011-02-01 | 2011-05-25 | 苏州同元软控信息技术有限公司 | Large-capacity simulation result file storage and access method |
CN103677554A (en) * | 2012-09-17 | 2014-03-26 | 腾讯科技(深圳)有限公司 | Method and device for sliding screen smoothly |
US20140101310A1 (en) * | 2012-10-04 | 2014-04-10 | Box, Inc. | Seamless access, editing, and creation of files in a web interface or mobile interface to a collaborative cloud platform |
CN104735099A (en) * | 2013-12-18 | 2015-06-24 | 北京神州泰岳软件股份有限公司 | Far-end file reading method and system |
CN106936623A (en) * | 2015-12-31 | 2017-07-07 | 五八同城信息技术有限公司 | The management method of distributed cache system and cache cluster |
Non-Patent Citations (2)
Title |
---|
QIANBIN XIA, WEIJUN XIAO: "High-Performance and Endurable Cache Management for Flash-Based Read Caching", IEEE Transactions on Parallel and Distributed Systems * |
HE Wei: "Improvement of multipage-based soft-TLB reload exception handling on Loongson 2F", Technology and Method * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9864774B2 (en) | Granular buffering of metadata changes for journaling file systems | |
CN109508246A (en) | Log recording method, system and computer readable storage medium | |
EP3267716A1 (en) | Edge processing for data transmission | |
CN111309732B (en) | Data processing method, device, medium and computing equipment | |
CN108965355A (en) | Method, apparatus and computer readable storage medium for data transmission | |
US11122002B2 (en) | Storing messages of a message queue | |
CN107172208A (en) | The dispositions method and its system of server | |
CN109039804A (en) | A kind of file reading and electronic equipment | |
CN111416825A (en) | Inter-thread lock-free log management method and system, terminal and storage medium | |
CN104951482B (en) | A kind of method and device of the image file of operation Sparse formats | |
CN112380148A (en) | Data transmission method and data transmission device | |
CN108462652A (en) | A kind of message processing method, device and the network equipment | |
CN108647278B (en) | File management method and system | |
CN112231327B (en) | Flight information updating method, device, server and storage medium | |
CN113127438B (en) | Method, apparatus, server and medium for storing data | |
CN111913807A (en) | Event processing method, system and device based on multiple storage areas | |
CN117114623A (en) | Intelligent management method and system for monitoring equipment in park | |
CN110162423A (en) | Resource inspection method and resource check device | |
CN102929562A (en) | Extensible reordering method based on identification marks | |
US20160366225A1 (en) | Shuffle embedded distributed storage system supporting virtual merge and method thereof | |
CN103559000B (en) | A kind of massive image data moving method towards quality and system | |
CN107656702A (en) | Accelerate the method and its system and electronic equipment of disk read-write | |
CN112035522B (en) | Database data acquisition method and device | |
CN107948336A (en) | Automobile data recorder data cloud storage system | |
CN114556283B (en) | Method and device for data writing, consistency checking and reading |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||