CN109634877B - Method, device, equipment and storage medium for realizing stream operation - Google Patents

Info

Publication number: CN109634877B
Application number: CN201811495092.XA
Authority: CN (China)
Prior art keywords: page, file, pages, unit, stream
Legal status: Active
Other versions: CN109634877A (Chinese)
Inventor: 区润强
Assignee: Guangzhou Baiguoyuan Information Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G06F 12/0882 Page mode
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method, apparatus, device, and storage medium for implementing stream operations. The method comprises: acquiring operation information of at least two stream operations, the operation information comprising an operation type and target file information; and, when the target file information of the at least two stream operations is the same, executing the corresponding operation for each stream in a file buffer corresponding to the target file, based on each stream operation's operation type. This method reduces the memory-space waste and the frequent disk-file reads that occur when multiple streams operate on the same file, shortens the time needed to carry out the stream operations, greatly reduces the operational complexity when multiple stream operations act on the same file, and makes the stream-operation process controllable.

Description

Method, device, equipment and storage medium for realizing stream operation
Technical Field
The present invention relates to the field of computer application technologies, and in particular, to a method, an apparatus, a device, and a storage medium for implementing stream operations.
Background
In upper-layer applications built on object-oriented programming languages, the most common stream operations are those of the standard file input stream and output stream, which read data content from, or write data content to, a target file. To implement input- and output-stream operations, current practice opens up a contiguous memory buffer for each stream (input stream or output stream), so that the upper-layer application reads or writes the disk file through that memory buffer.
In this approach, different streams correspond to different memory buffers, which causes the following problems: 1) the same data content may be cached in the memory buffers of several different streams, wasting memory space; 2) if an output stream a writes a portion of data content (say the byte interval [1024, 2048]) into its own memory buffer, and another input stream b attempts to read that portion, b can only wait for a to synchronize its buffer content to the file, and then read the data from the file into b's own buffer, incurring unnecessary delay; 3) the contents of a memory buffer are emptied after a read or write operation, so when an input stream frequently reads data from different locations, the data must be frequently re-read from the disk file.
For upper-layer applications on Linux, a file can be mapped directly into memory through the system's file-mapping interface, after which the application can manipulate the file content as if it were memory. This approach avoids the memory waste, unnecessary delays, and frequent disk reads described above, but it cannot control the reading or writing of data content at fine granularity; moreover, it is limited to upper-layer applications on the Linux operating system and cannot be applied broadly across operating systems.
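As an illustrative sketch of this prior-art file-mapping approach (not the method of the present invention), Python's `mmap` module exposes the same idea on systems that support it; the temporary file and its contents here are assumptions for demonstration:

```python
import mmap
import tempfile

# Create a small file to map (illustrative content only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello world")
    path = f.name

# Map the file into memory; the application then manipulates the
# file's content in a memory-manipulation manner.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:5] = b"HELLO"  # a write to the mapping edits the file content
        snapshot = bytes(mm)

print(snapshot)  # b'HELLO world'
```

As the paragraph above notes, the mapping exposes the whole file as one region, with no per-stream, fine-grained control over which portions are cached or written back.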
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for implementing stream operations, so as to solve the problem that existing methods cannot implement stream operations effectively.
In a first aspect, an embodiment of the present invention provides a method for implementing a streaming operation, including:
acquiring operation information of at least two stream operations, wherein the operation information comprises: operation type and target file information;
and when the target file information of the at least two stream operations is the same, executing corresponding operations in a file buffer area corresponding to the target file based on the operation types of the stream operations respectively.
Further, based on the operation type of each stream operation, executing a corresponding operation in a file buffer corresponding to the target file, including:
if there is an input stream operation whose operation type is file input, executing a data writing operation in the file buffer corresponding to the target file; if there is an output stream operation whose operation type is file output, executing a data reading operation in the file buffer corresponding to the target file; the file buffer performs data read and write operations in units of pages.
Further, the executing the data writing operation in the file buffer corresponding to the target file includes:
obtaining the write data size of the input stream operation and the page size of a unit page in the file buffer, wherein the write data size is contained in the operation information of the input stream operation; determining, according to the write data size and the page size, the number of write pages required by the input stream operation; applying, according to the number of write pages and the remaining amount of free pages in the file buffer, for occupied pages of that number for the input stream operation; and writing the data to be written into the applied-for occupied pages to obtain write pages bearing a write mark, wherein each write page contains the file address to which its written data corresponds in the target file.
Further, after executing the data writing operation in the file buffer corresponding to the target file, the method further includes:
when a data write-back condition is met, writing the data content of each write page bearing a write mark in the file buffer back to the target file based on the corresponding file address, and clearing the write marks of those write pages.
Further, according to the number of written pages and the remaining amount of free pages in the file buffer, applying for the input stream operation to occupy pages with the number of written pages includes:
if the number of write pages is less than or equal to the remaining number of free pages, applying for the occupied pages directly from the remaining free pages for the input stream operation; otherwise, allocating the remaining free pages to the input stream operation as occupied pages, and applying for the remaining occupied pages for the input stream operation from the unit pages that have release priorities.
Further, the applying for the remaining occupied pages for the input stream operation from the unit pages with release priority includes:
taking the difference between the number of write pages and the remaining amount of free pages as the number of pages still required by the input stream operation; sorting the corresponding unit pages from high to low release priority and selecting that number of candidate unit pages in order; and releasing the data content currently contained in each candidate unit page, applying for each released candidate unit page as a remaining occupied page for the input stream operation.
Further, the executing the data reading operation in the file buffer corresponding to the target file includes:
acquiring the read data size of the output stream operation, the initial file read address, and the page size of a unit page in the file buffer; determining, according to the read data size and the page size, the number of read pages required by the output stream operation; determining, according to the initial file read address and the number of read pages, the file read address in the target file corresponding to each page to be read; and locating, according to each file read address, a target read page serving as the corresponding occupied page in the file buffer, and reading the data content in each target read page.
Further, locating the target read page from the file buffer as the corresponding occupied page includes:
traversing, for each file read address, the unit pages in the file buffer that contain a file address; if a target file address matching the file read address exists, taking the target read page containing that target file address as the occupied page corresponding to the file read address; otherwise, applying for a new free page from the file buffer, caching the target data content loaded from the target file into the new free page to form a target read page serving as the occupied page, and taking the file read address as the file address of that target read page, wherein the target data content is determined based on the file read address.
Further, applying for a new free page from the file buffer area includes:
determining whether the total number of unit pages contained in the file buffer has reached a set total; if not, directly applying for a new free page; if so, selecting the unit page with the highest release priority, releasing the data content it currently contains, and taking the released unit page as the new free page.
Further, the release priority of the unit pages in the file buffer is determined by:
determining the stream operations currently existing in the file buffer, and determining the page occupancy information of each stream operation's application; acquiring the pointing information of each stream operation's pointer; determining, for each unit page in the file buffer, whether the unit page is an occupied page of any stream operation; if so, setting the release priority of the unit page to a set first priority value when, based on the pointing information of the stream operation it belongs to, the unit page is determined to be a not-yet-operated page, and determining the release priority of the unit page based on a priority determination rule when the unit page is determined to be an already-operated page; if not, determining the release priority of the unit page based on the priority determination rule.
Further, the determining the release priority of the unit page based on the priority determining rule includes:
when it is determined that the unit page bears no write mark, determining whether a preceding stream operation exists before the unit page; if so, determining the distance value from the unit page to the last occupied page contained in that preceding stream operation, and taking the distance value as the release priority of the unit page; if not, taking a set second priority value as the release priority of the unit page.
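The release-priority rules in the two claims above can be sketched as follows. The concrete priority values, field names, and data structures are assumptions for illustration; the patent only calls them "set" first and second priority values, and this is one plausible reading of the rule, not the patent's implementation:

```python
# Assumed values: a lower number means the page is released later.
FIRST_PRIORITY = 0    # not-yet-operated occupied pages are kept longest
SECOND_PRIORITY = 1   # assumed value when no preceding stream operation exists

def release_priority(page_index, pages, stream_pointers):
    """pages: list of {"write_flag": bool, "owner": stream id or None}.
    stream_pointers: stream id -> index of the stream's last operated page."""
    page = pages[page_index]
    owner = page["owner"]
    if owner is not None and page_index > stream_pointers[owner]:
        # Occupied by a stream but not yet operated on: first priority value.
        return FIRST_PRIORITY
    if page["write_flag"]:
        # Pages bearing a write mark do not take part in priority determination.
        return None
    if owner is not None:
        # Distance from this page back to the stream's last occupied page.
        return stream_pointers[owner] - page_index
    # No preceding stream operation: second priority value.
    return SECOND_PRIORITY

pages = [
    {"write_flag": False, "owner": "a"},   # operated, 2 pages behind the pointer
    {"write_flag": True,  "owner": "a"},   # write-marked: excluded
    {"write_flag": False, "owner": None},  # no owning stream
    {"write_flag": False, "owner": "a"},   # ahead of the pointer: not yet operated
]
print([release_priority(i, pages, {"a": 2}) for i in range(4)])
# [2, None, 1, 0]
```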
In a second aspect, an embodiment of the present invention provides an apparatus for implementing stream operations, including:
a flow obtaining module, configured to obtain operation information of at least two flow operations, where the operation information includes: operation type and target file information;
a stream execution module, configured to, when the target file information of the at least two stream operations is the same, execute the corresponding operation in the file buffer corresponding to the target file based on each stream operation's operation type.
In a third aspect, an embodiment of the present invention provides a computer apparatus, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for implementing stream operations provided by the embodiments of the first aspect of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for implementing a streaming operation provided by an embodiment of the first aspect of the present invention.
The method, apparatus, device, and storage medium for implementing stream operations provided by the embodiments of the present invention first acquire operation information of at least two stream operations, the operation information comprising an operation type and target file information; then, when the target file information of the at least two stream operations is the same, the corresponding operation is executed for each stream in the file buffer corresponding to the target file, based on each stream operation's operation type. Compared with existing implementations, the embodiments of the present invention reduce the memory-space waste and the frequent disk-file reads that occur when multiple streams operate on the same file, shorten the time consumed by the stream operations, greatly reduce the operational complexity when multiple stream operations act on the same file, and make the stream-operation implementation process controllable.
Drawings
FIG. 1 is a schematic flow chart of a method for implementing a streaming operation according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an implementation flow of an input stream operation in an embodiment of the invention;
FIG. 3 is a schematic diagram of an implementation flow of output stream operation in an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an implementation of determining a target read page corresponding to an output stream operation according to an embodiment of the present invention;
FIG. 5 is a flowchart of an implementation of determining release priority of a unit page in a file buffer according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an example of determining unit page release priority in a file buffer in accordance with an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a flow operation implementation device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, not all, of the structures or components related to the present invention are shown in the drawings.
Fig. 1 is a schematic flow chart of a method for implementing stream operations according to an embodiment of the present invention. The method is applicable to responding to stream operations invoked by an upper-layer application, and may be performed by a stream-operation implementation apparatus, which may be implemented in software and/or hardware and is generally integrated on a computer device.
In this embodiment, the computer device may specifically be an electronic terminal providing an installation environment for upper-layer applications, preferably an electronic device such as a PC, a mobile phone, a tablet computer, or a notebook computer.
It should be noted that stream operations arise in many application scenarios. As an example, consider an upper-layer application with audio and video playback: the application offers many playable resources, a user selects one to watch, and the application presents the content in a download-while-playing mode. Implementing this mode amounts to the upper-layer application invoking two stream operations: an input stream operation that downloads the playing resource stored on the server and writes it into a local file, and an output stream operation that reads the downloaded resource from the local file for playback. The implementation method provided by this embodiment can carry out both stream operations, letting the upper-layer application download and play at the same time.
As shown in fig. 1, the implementation method of a stream operation provided in the embodiment of the present invention specifically includes the following operations:
s101, acquiring operation information of at least two stream operations, wherein the operation information comprises: operation type and object file information.
In this embodiment, the at least two stream operations may be stream operations applied for by an upper-layer application. When the upper-layer application applies for a stream operation, it simultaneously transmits the operation information required to carry the operation out. The operation information may specifically include the operation type of the stream operation and the target file information of the target file the operation acts on: the operation type may be an input stream operation for file input or an output stream operation for file output, and the target file information may be the file name of the target file, its storage path, or the like. This step acquires the operation information corresponding to each stream operation.
S102, when the information of the target files of at least two stream operations is the same, executing corresponding operations in the file buffer corresponding to the target files based on the operation types of the stream operations respectively.
It should be noted that, in practice, the multiple stream operations applied for by an upper-layer application may act on the same target file or on different target files. When multiple stream operations act on the same target file, existing implementations suffer from the problems and defects described in the Background section; this embodiment therefore provides a concrete implementation for multiple streams acting on the same target file, so as to solve those problems.
Specifically, in this embodiment, one file buffer is set up for the target file, and every stream operation that acts on the target file interacts with the target file's data through that buffer. For example, suppose two stream operations of different types (an input stream operation and an output stream operation) currently need to exchange data with the target file: both first access the file buffer set up for the target file in memory, and then carry out their respective operations through it (the input stream writing its data content, the output stream reading file content).
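The central design of the paragraph above, one shared buffer per target file rather than one buffer per stream, can be sketched as follows; the class, method, and file names are illustrative assumptions, not the patent's API:

```python
class FileBuffer:
    """One shared page cache per target file (sketch)."""
    _buffers = {}  # target file path -> FileBuffer instance

    def __init__(self, path):
        self.path = path
        self.pages = {}  # page index -> cached page content

    @classmethod
    def for_file(cls, path):
        # Streams whose target file information is the same share one buffer.
        if path not in cls._buffers:
            cls._buffers[path] = cls(path)
        return cls._buffers[path]

buf_in = FileBuffer.for_file("/tmp/movie.dat")   # input stream's view
buf_out = FileBuffer.for_file("/tmp/movie.dat")  # output stream's view
print(buf_in is buf_out)  # True: both streams operate on one shared buffer
```

Because both streams see the same pages, data written by the input stream is immediately visible to the output stream, avoiding the duplicate caching and synchronization wait described in the Background section.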
The method for implementing stream operations provided by this embodiment first acquires the operation information of at least two stream operations of an upper-layer application, the operation information comprising an operation type and target file information; when the target file information of the at least two stream operations is the same, each stream operation carries out its respective operation, according to its operation type, in the file buffer corresponding to the target file. Compared with existing implementations, the method provided by this embodiment reduces the memory-space waste and the frequent disk-file reads that occur when multiple streams operate on the same file, shortens the time consumed by the stream operations, greatly reduces the operational complexity when multiple stream operations act on the same file, and makes the stream-operation implementation process controllable.
In an optional embodiment of the present invention, executing the corresponding operation in the file buffer corresponding to the target file based on each stream operation's operation type is further refined as follows: if there is an input stream operation whose operation type is file input, a data writing operation is executed in the file buffer corresponding to the target file; if there is an output stream operation whose operation type is file output, a data reading operation is executed in the file buffer corresponding to the target file; the file buffer performs data read and write operations in units of pages.
In general, the common operation types of the stream operations applied for by upper-layer applications are input stream operations for writing file content and output stream operations for reading file content. In this embodiment, the operation type of each of the at least two stream operations may accordingly be an input stream operation or an output stream operation. Specifically, when the operation type of a stream operation is file input, the write operation of the input stream is realized by executing data writing in the file buffer corresponding to the target file; similarly, when the operation type is file output, the read operation of the output stream is realized by executing data reading in that file buffer.
In this embodiment, a file buffer is created in memory in advance for a file stored on disk, that is, whenever the file's content needs to be read or written. The file buffer caches the file's content in units of pages, which is equivalent to dividing the file into a number of fixed-size data pages and loading those pages into the buffer.
It should be noted that, in this embodiment, the file buffer created in memory for the target file initially contains no data; thereafter, the content of the target file may be loaded into it in units of pages as needed, or external data may be written into it in units of pages as needed, and the written external data is written back to the target file once the write-back condition is met.
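Caching in units of pages amounts to simple page arithmetic; a minimal sketch follows, where the 4 KB page size is an assumption (the patent fixes no particular value):

```python
PAGE_SIZE = 4096  # assumed unit-page size; the patent leaves it unspecified

def locate(file_offset, page_size=PAGE_SIZE):
    """Map a byte offset in the target file to (page index, offset in page)."""
    return file_offset // page_size, file_offset % page_size

print(locate(0))      # (0, 0): the start of the first data page
print(locate(10000))  # (2, 1808): byte 10000 lies 1808 bytes into page 2
```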
Further, FIG. 2 is a schematic flow chart of an implementation of the input stream operation in an embodiment of the present invention; referring to FIG. 2, the implementation includes the following steps:
s201, obtaining the write data size of the input stream operation and the page size of a unit page in a file buffer, wherein the write data size is contained in operation information of the input stream operation.
In this embodiment, the input stream operation may be triggered by an upper-layer application and is specifically used to write external data into the target file on the application's behalf. Therefore, when the upper-layer application applies for a data write and thereby triggers the input stream operation, the operation information corresponding to that input stream operation contains the write data size of the data to be written. This step acquires the write data size of the input stream operation, and also acquires the page size of the unit page on which the target file's in-memory file buffer relies when reading and writing in units of pages.
S202, determining the number of writing pages required by input stream operation writing according to the size of the writing data and the page size.
In this embodiment, knowing the write data size and the page size of the file buffer's unit page, the number of write pages required for the input stream operation to write its data can be determined. For example, if the write data size of the external data to be written is 100 KB and the page size of a unit page is 4 KB, the number of write pages is determined to be 25.
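The write-page count of S202 is a ceiling division; a sketch reproducing the 100 KB / 4 KB example above:

```python
def pages_needed(write_size, page_size):
    # Ceiling division: a partially filled trailing page still occupies
    # a full unit page in the file buffer.
    return -(-write_size // page_size)

print(pages_needed(100 * 1024, 4 * 1024))      # 25, as in the example above
print(pages_needed(100 * 1024 + 1, 4 * 1024))  # 26: one extra byte needs one more page
```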
S203, applying, according to the number of write pages and the remaining amount of free pages in the file buffer, for occupied pages of that number for the input stream operation.
Note that the size of the buffer space can be set when the file buffer is created; therefore, once the page size of a unit page is determined, the total number of pages into which the file buffer can be divided follows from the buffer space size. The remaining amount of free pages can be understood as the difference between this total number of pages and the number of data pages, a data page being a unit page that has had data content loaded or written into it.
In this embodiment, free pages equal in number to the write pages are applied for in the file buffer as the occupied pages of the input stream operation, ensuring that all of the external data can be written. The specific way the occupied pages are allocated is determined by comparing the remaining amount of free pages with the number of write pages: when the remaining amount of free pages is greater than or equal to the number of write pages, the free pages in the file buffer are sufficient, and the required number can be allocated directly from the remaining free pages as occupied pages; when it is smaller than the number of write pages, the free pages are insufficient, so in addition to applying for the remaining free pages as occupied pages, unit pages whose content can be released are applied for as occupied pages from among the unit pages that already contain data.
Optionally, in this embodiment, applying for the occupied pages for the input stream operation according to the number of write pages and the remaining amount of free pages in the file buffer may be further refined as follows:
if the number of write pages is less than or equal to the remaining number of free pages, applying for the occupied pages directly from the remaining free pages for the input stream operation; otherwise, allocating the remaining free pages to the input stream operation as occupied pages, and applying for the remaining occupied pages for the input stream operation from the unit pages that have release priorities.
In this embodiment, when occupied pages are applied for while the number of write pages exceeds the remaining amount of free pages, the remaining free pages in the file buffer are first applied for as occupied pages for the input stream operation; then the unit pages in the file buffer that have release priorities are traversed, and the unit pages that can serve as the remaining occupied pages are chosen by the level of their release priority. The release priority identifies how important a unit page's data content is to subsequent operations: the higher a unit page's release priority, the lower the subsequent usefulness of the data content it contains.
Based on the above refinement, applying for the remaining occupied pages for the input stream operation from the unit pages that have release priorities includes:
taking the difference between the number of write pages and the remaining amount of free pages as the number of pages still required by the input stream operation; sorting the corresponding unit pages from high to low release priority and selecting that number of candidate unit pages in order; and releasing the data content currently contained in each candidate unit page, applying for each released candidate unit page as a remaining occupied page for the input stream operation.
In this embodiment, the data content of a unit page with a high release priority can be regarded as having low subsequent usefulness; accordingly, the corresponding unit pages are sorted from high to low release priority, the required remaining number of unit pages are selected in order as candidate unit pages, and the data content currently contained in each candidate unit page is released.
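The allocation of S203 can be sketched as follows: take free pages first and, on a shortfall, evict from the highest-release-priority unit pages. The list/tuple structures and page identifiers are illustrative assumptions, not the patent's data layout:

```python
def allocate_pages(n_needed, free_pages, prioritized_pages):
    """free_pages: list of free page ids.
    prioritized_pages: list of (release_priority, page_id) for unit pages
    that currently hold data and have a release priority."""
    allocated = free_pages[:n_needed]
    shortfall = n_needed - len(allocated)
    if shortfall > 0:
        # Sort candidates from high to low release priority and take the
        # required remainder as candidate unit pages.
        victims = sorted(prioritized_pages, key=lambda p: p[0], reverse=True)
        for _priority, page_id in victims[:shortfall]:
            # (A real implementation would release the page's cached
            # data content here before reusing the page.)
            allocated.append(page_id)
    return allocated

print(allocate_pages(3, ["f1"], [(5, "u1"), (9, "u2"), (1, "u3")]))
# ['f1', 'u2', 'u1']: one free page, then the two highest-priority unit pages
```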
S204, sequentially writing the data to be written into the applied-for occupied pages to obtain write pages bearing a write mark, wherein each write page contains the file address to which its written data corresponds in the target file.
After the occupied pages required by the input stream operation have been obtained through the above steps, the external data to be written can be written into the occupied pages one by one; in general, data is written into the occupied pages from left to right. In this embodiment, the data writing of the input stream operation uses a pointer to determine which occupied page is currently being written, that is, the data currently to be written is written into the occupied page the input stream operation's pointer currently points to. An occupied page that has had data written into it is recorded as an operated write page, and a write mark is added to it; a write page bearing a write mark can neither have its data content released nor participate in release-priority determination.
In addition, each written page contains, in addition to the written data itself, the file address corresponding to that data in the target file. The file address can be understood as the position of the written data within the target file, and can specifically be used as the write address index when the written data is written back to the target file.
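The pointer-driven writing of S204, including the write mark and per-page file address, might be sketched as follows. All names are illustrative assumptions, and the 4-byte page size is chosen only to keep the example small:

```python
# Illustrative sketch of S204: writing data left to right into occupied pages
# via a stream pointer, marking each page and recording its file address.
PAGE_SIZE = 4  # assumed tiny page size for illustration only

def write_stream(occupied_pages, data, file_start):
    """Write `data` into the occupied pages, marking each written page."""
    pointer = 0                                   # stream pointer: next occupied page
    for off in range(0, len(data), PAGE_SIZE):
        page = occupied_pages[pointer]
        page["data"] = data[off:off + PAGE_SIZE]
        page["write_mark"] = True                 # marked pages cannot be released
        page["file_address"] = file_start + off   # write-back index into target file
        pointer += 1
    return occupied_pages

pages = [{"id": i} for i in range(3)]
write_stream(pages, b"abcdefghij", file_start=100)
# Pages receive file addresses 100, 104, 108; all carry a write mark.
```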
And S205, when the data writing back condition is met, writing back the data content of the writing page with the writing mark in the file buffer area to the target file based on the corresponding file address, and clearing the writing mark of the writing page.
In this embodiment, after the data writing of the input stream to the file buffer is implemented based on the above S201-S204, the data content written to the file buffer needs to be written back to the target file later. This step describes the write-back of the written data content in the file buffer to the target file.
Specifically, the data content is not written back to the target file immediately after being written into the file buffer; it must first be determined whether a data write-back condition has been reached. The data write-back condition may be, for example, that the amount of written data content reaches a certain threshold, or that the elapsed time since the last write-back to the target file reaches a set interval value. Once the write-back condition is met, the data content of each write page currently carrying a write mark in the file buffer can be written back to the target file; each such write page determines the specific write-back position of its data content within the target file based on its contained file address, and the contained data content is then written back to the determined write-back position. Afterwards, the write mark added to the write page may be cleared. It should be noted that only the write mark on the write page is cleared here; the specific data content in the write page is not cleared, so that when the data content corresponding to the file address needs to be read, it can still be read from the write page. Moreover, a write page whose write mark has been cleared may again participate in the determination of release priority.
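A minimal sketch of the S205 write-back, under the assumption that the target file can be addressed by `seek`/`write`; an in-memory file stands in for the real target file, and all names are illustrative:

```python
# Illustrative sketch of S205: flushing marked write pages back to the target
# file, clearing their write marks while keeping their cached content.
import io

def write_back(pages, target_file):
    """Flush every marked page to the target file at its file address."""
    for page in pages:
        if page.get("write_mark"):
            target_file.seek(page["file_address"])   # file address as write index
            target_file.write(page["data"])
            page["write_mark"] = False               # content stays readable in cache

target = io.BytesIO(b"\x00" * 16)                    # stand-in for the target file
pages = [{"data": b"abcd", "file_address": 4, "write_mark": True},
         {"data": b"wxyz", "file_address": 12, "write_mark": False}]
write_back(pages, target)
# Only the marked page is flushed; its mark is cleared but its data remains cached.
```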
Further, fig. 3 shows a schematic flow chart of an implementation of the output flow operation in the embodiment of the present invention, and referring to fig. 3, the implementation specifically includes the following operations:
s301, acquiring the read data size of the output stream operation, the file start read address and the page size of a unit page in a file buffer.
In this embodiment, the output stream operation may also be triggered by an upper-layer application, and may specifically be used to implement the reading of data content from the target file by that application. When the upper-layer application applies for data reading and thereby triggers the output stream operation, the operation information corresponding to the output stream operation includes the read data size of the data to be read. This step can acquire the read data size of the output stream operation, the file start read address of the data content to be read by the upper-layer application in the target file, and the page size of the unit page relied on when the file buffer corresponding to the target file in memory performs page-based read and write operations.
In this embodiment, the file start read address may be understood as the start read byte of the data content to be read in the target file, and may be determined from the operation information when the upper-layer application applies for the output stream operation. Illustratively, assume the upper-layer application declares the following when applying for an output stream operation: read 1000 bytes of data content starting from the quarter point of the data content contained in the target file. The read data size of the data content to be read can then be determined to be 1000 bytes, and the specific byte pointed to by the quarter point can also be determined once the total size of the target file's data content is known (with a total data content size of 2000 kb, the quarter point corresponds to the 500 kb position of the data content; that is, the file start read address is a read starting from the 500 kb point). If the data content of the target file is divided into fixed-size blocks of the same page size, the data start byte and data end byte of each resulting data block can be determined. It is understood that this fixed-size partitioning of the target file should be performed prior to the output stream operation.
S302, determining the number of read pages required by the output stream operation according to the read data size and the page size.
In this embodiment, knowing the read data size and page size, the number of read pages required for data reading by the output stream operation can be determined. Similarly, assuming that the read data size to be read by the upper layer application is 100kb and the page size of a unit page is 4kb, the number of read pages to be read can be determined to be 25 pages.
It will be appreciated that the upper layer application needs to load the data to be read into the file buffer based on the output stream operation for reading the data content of the target file, and finally transfers the data content read from the file buffer to the upper layer application. Thus, this step requires determining the number of read pages required for the output stream operation to load the data content in the file buffer, and it should be noted that the data content to be read by the upper layer application may already exist in the file buffer, and at this time, the data content needs to be read from the file buffer and transferred to the upper layer application.
S303, determining the file reading address of the target file corresponding to each page to be read according to the file initial reading address and the reading page number.
In this embodiment, based on the above description, the file start read address is specifically the data start read byte of the data content to be read in the target file. Knowing the number of read pages and the page size of each page, the data start byte of the data content contained in each page to be read can be determined in combination with the file start read address; in this embodiment, this data start byte is recorded as the file read address of each page to be read relative to the target file. For example, assuming the file start read address is the 500 kb point of the data content, the number of read pages is 25, and the page size of each page is 4 kb, then the file read address of the first page to be read is 500 kb, that of the second page to be read is 504 kb, and, accumulating sequentially, that of the last page to be read is 596 kb.
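The two calculations of S302 and S303 can be reproduced directly. The sketch below uses the figures from the examples above (the function name is an illustrative assumption; sizes are expressed in kb):

```python
# Illustrative sketch of S302/S303: number of read pages and the per-page
# file read addresses for an output stream operation.
import math

def read_plan(read_size, page_size, file_start):
    """Return the read-page count and each page's file read address (in kb)."""
    pages = math.ceil(read_size / page_size)        # round up to whole unit pages
    addrs = [file_start + i * page_size for i in range(pages)]
    return pages, addrs

n, addrs = read_plan(100, 4, 500)
# n == 25; the addresses run 500, 504, ..., 596, matching the example above.
```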
S304, positioning a target read page serving as a corresponding occupied page from the file buffer area according to each file read address, and reading the data content in each target read page.
In this embodiment, after multiple stream operations, the file buffer already contains data content cached from the target file; the data content is cached in unit-page form, and each unit page in which data content is cached further contains the file address of that data content within the target file. This step may first perform a locating search in the file buffer according to the determined file read address of each page to be read, so as to determine whether the data content to be read is already contained in the file buffer. If a target read page matching the file read address is found and taken as an occupied page of the output stream operation, the data content can be read directly from that target read page. If some file read address cannot be matched, the data content is first loaded from the target file according to that file read address and cached into a newly allocated target read page serving as an occupied page, and the data content is then read from the newly allocated target read page.
In this embodiment, the target read page that is the corresponding occupied page is further located in the file buffer, specifically, fig. 4 shows a flowchart of an implementation of determining the target read page corresponding to the output stream operation in the embodiment of the present invention, and referring to fig. 4, it can be known that "locating the target read page that is the corresponding occupied page from the file buffer" specifically includes the following steps:
s401, reading the address for each file, and traversing the unit page containing the file address in the file buffer area.
In this embodiment, if the data content is cached in a unit page of the file buffer, the unit page also includes a file address corresponding to the cached data content in the target file. The step may traverse the unit page containing the file address for each determined file read address, thereby determining whether there is a unit page containing the file address matching the file read address.
S402, determining whether a target file address matched with the file reading address exists, if so, executing S403; if not, S404 is performed.
Through the above traversal, it can be determined whether there is a target file address matching the file read address in the file buffer, and if so, S403 is further performed; otherwise, S404 needs to be performed.
S403, taking the target read page containing the target file address as an occupied page corresponding to the file read address.
After a target file address matching the file read address has been determined to exist based on the above operation, the target read page containing that target file address may be determined by this step as the occupied page corresponding to the file read address.
S404, applying for a new free page from the file buffer.
In this embodiment, if it is determined that there is no file address matching the file read address in the file buffer, that is, there is no unit page in the file buffer containing the data content corresponding to the file read address, a new free page needs to be applied from the file buffer as a target read page for caching the data content corresponding to the file read address.
Optionally, the embodiment embodies applying for a new free page from the file buffer as follows: determining whether the total number of unit pages contained in the file buffer reaches a set total amount; if not, directly applying for a new idle page; if yes, selecting the unit page with the highest release priority, releasing the current contained data content, and taking the released unit page as a new free page.
In this embodiment, when applying for a new free page, it must be determined whether the total number of unit pages currently present in the file buffer reaches the set total amount allowed for the file buffer. If the set total amount has not been reached, a new free page for data caching can be applied for directly; if it has been reached, the unit page with the highest release priority is selected from the file buffer, the data content it currently contains is released, and the released unit page is then used as the new free page for caching the data content corresponding to the file read address.
S405, caching the target data content loaded from the target file into a new free page to form a target read page serving as an occupied page, and taking the file read address as the file address of the target read page.
In this embodiment, after a new free page has been applied for, the target data content corresponding to the file read address may be loaded from the target file and cached into the newly applied free page, thereby forming a target read page serving as an occupied page of the output stream operation. It will be appreciated that the target read page also needs to contain the file address of the data content in the target file, which is in fact identical to the file read address.
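The lookup-or-allocate flow of S401–S405, together with the release-priority eviction of the optional embodiment above, might be sketched as follows. All names are illustrative assumptions; `load_from_file` stands in for loading target data content from the target file:

```python
# Illustrative sketch of S401-S405: locate a cached unit page by file address,
# otherwise apply for a new free page (evicting by release priority if full).

def locate_target_page(buffer_pages, file_read_addr, max_pages, load_from_file):
    """Return the target read page serving as the occupied page for the address."""
    for page in buffer_pages:                         # S401/S402: traverse addresses
        if page.get("file_address") == file_read_addr:
            return page                               # S403: matching target read page
    if len(buffer_pages) >= max_pages:                # S404: buffer at set total amount
        victim = max(buffer_pages, key=lambda p: p["release_priority"])
        buffer_pages.remove(victim)                   # release highest-priority page
    page = {"file_address": file_read_addr,           # S405: cache loaded content;
            "data": load_from_file(file_read_addr),   # file read address becomes
            "release_priority": 0}                    # the page's file address
    buffer_pages.append(page)
    return page

buf = [{"file_address": 500, "data": b"hit", "release_priority": 7}]
hit = locate_target_page(buf, 500, max_pages=2, load_from_file=lambda a: b"load")
miss = locate_target_page(buf, 504, max_pages=2, load_from_file=lambda a: b"load")
# The first call matches the cached page; the second allocates and loads a new one.
```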
In summary, the above-mentioned alternative embodiments of the present invention specifically provide the implementation process of an input stream operation and an output stream operation on the same target file, mainly involving the determination of the occupied pages corresponding to the input stream operation and the output stream operation, and the writing into or reading from those occupied pages. According to the stream operation implementation method provided by the embodiments of the present invention, by setting a file buffer for the target file, the problems of memory waste, unnecessary stream operation time consumption, frequent disk reads and the like when multiple streams access the same file are solved, and fine-grained control of the reading and writing of data content during stream operations is realized.
It will be appreciated that in the above described process of performing an input stream or output stream operation, the required occupied pages for each stream operation may be determined based on the release priority of a unit page, which should also be determined in real time each time the data content needs to be released. In an alternative embodiment of the present invention, a process for determining the release priority of a unit page in a file buffer is specifically provided, and fig. 5 shows a flowchart for implementing the determination of the release priority of a unit page in a file buffer in an embodiment of the present invention, and referring to fig. 5, the determination of the release priority of a unit page specifically includes the following operations:
s501, determining the current existing stream operation in the file buffer area, and determining the page occupation information of each stream operation application.
In this embodiment, the stream operations currently acting on the same target file may be determined in the file buffer corresponding to that target file, and the page occupation information applied for in the file buffer by each stream operation may be obtained.
S502, acquiring pointing information of pointers of each stream operation, and executing the following operation for each unit page in a file buffer.
It should be noted that when a stream operation performs a specific read or write in the file buffer, the occupied page for the next write or read is determined mainly by its pointer, and the pointing information of the pointer of a stream operation is determined correspondingly when its occupied pages are applied for; therefore, this step can obtain the pointing information of the pointer of each stream operation.
S503, determining whether the current unit page is an occupied page of any stream operation, if so, executing S504; if not, S506 is performed.
In this embodiment, the release priority is specifically used to determine which unit page containing data content should be released; thus, the targets of release priority determination are the unit pages containing data content. For each unit page containing data content, this embodiment records the unit page currently requiring release priority determination as the current unit page. Based on this step and the occupied-page information of the stream operations determined above, it can be determined whether the current unit page is an occupied page of any stream operation; if so, the operation of S504 may be performed, and if not, the operation of S506 may be performed.
S504, determining whether the current unit page is an unoperated page or not based on the pointing information of the flow operation, if so, executing S505; if not, S506 is performed.
In this embodiment, any stream operation determines the occupied page of its next operation based on the pointing information of its pointer. Therefore, when the current unit page is an occupied page, it can be determined from the pointing information and the current position of the pointer whether that occupied page has already been operated on: if it has, it is equivalent to an operated page; otherwise, it is an unoperated page. This step performs S505 when the current unit page, as an occupied page, is determined to be an unoperated page; otherwise, if it is an operated page, S506 is performed.
S505, determining the release priority of the current unit page as a set first priority value.
When a unit page that is an occupied page is an unoperated page, its release priority may be set to the first priority value. Illustratively, the first priority value should be a low value, preferably 0, indicating that the unit page currently has the lowest release priority among releasable pages.
S506, determining the release priority of the current unit page based on the priority determination rule.
In this embodiment, the priority determination rule is specifically equivalent to a priority determination policy. In accordance with the operational characteristic that stream operations proceed from left to right in the file buffer, this step may take the number of unit pages from the current unit page to the last occupied unit page of the nearest stream operation on its left as the release priority of the unit page.
Optionally, the determining the release priority of the unit page based on the priority determining rule according to this embodiment is embodied as: when it is determined that the unit page does not have a write flag, determining whether a preamble flow operation exists before the unit page; if so, determining a distance value from the unit page to the last occupied page contained in the preamble stream operation, and determining the distance value as the release priority of the unit page; if not, the set second priority value is determined as the release priority of the unit page.
It should be noted that, in the present embodiment, for a unit page carrying a write mark, the written content of that page has not yet been written back to the target file and its data content cannot be released; therefore, this embodiment does not consider release priority determination for unit pages carrying a write mark. The unit page in this embodiment specifically refers to the unit page currently undergoing release priority determination, and the preamble stream operation specifically refers to the stream operation located to the left of the unit page in the file buffer and closest to it. If there is no preamble stream operation to the left of the unit page, there is little possibility of data content being read from or written to the unit page, and it can thus be regarded as the page most suitable for release. Accordingly, this embodiment may directly assign a set second priority value to the unit page as its release priority; a higher value is selected as the second priority value, preferably a value greater than the total number of unit pages into which the file buffer can be divided.
In addition, if a preamble stream operation exists to the left of the unit page, the distance value from the unit page to the last occupied page contained in the preamble stream operation is determined and taken as the release priority of the unit page, where the distance value specifically corresponds to the number of unit pages separating the unit page from that last occupied page.
By way of example, fig. 6 shows an exemplary diagram for determining the release priority of unit pages in a file buffer according to an embodiment of the present invention. Referring to fig. 6, a visualized file buffer 61 is provided; the file buffer 61 is considered to be divided into unit pages of fixed bytes (e.g., 4 kb), the total number of unit pages into which the file buffer 61 can be divided is set to 40 (page numbers 0-39), and traversing the file buffer reveals that two stream operations currently exist, namely an input stream side and an output stream side; the page numbers of the occupied pages applied for by the input stream side are 5-9, and those applied for by the output stream side are 28-32. When determining release priorities, the following steps may be performed starting from the unit page with page number 0. First, determine whether the unit page is an occupied page of any stream operation (input stream or output stream). If it is an occupied page and is an unoperated page of that stream operation, its release priority is determined to be 0 (the first priority value); if it is an operated page of that stream operation, check whether a preamble stream operation exists before the unit page. If no preamble stream exists, the release priority of the unit page may be set to a maximum value M (e.g., the unit page with page number 3, before which no preamble stream exists, has release priority M), where M may be any value not less than the total number of unit pages, such as 40; if a preamble stream does exist, the distance value from the unit page to the last occupied page of that preamble stream operation must be determined. If the unit page is not an occupied page, its release priority may be determined directly by the method used for an operated page of a stream operation. By way of example, assume the unit page has page number 27: in fig. 6 a preamble stream exists before page 27, and the page number of the last occupied page of that preamble stream is 9, so the distance value from page 27 to occupied page 9 is the difference between 27 and 9; that is, the release priority of the unit page with page number 27 is determined to be 18. As another example, assume the unit page has page number 10: a preamble stream likewise exists before page 10, with its last occupied page at page number 9, so the distance value is the difference between 10 and 9; that is, the release priority of the unit page with page number 10 is determined to be 1.
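The priority rules of S503–S506, applied to the fig. 6 example, can be sketched and checked as follows. All names are illustrative assumptions; the sketch assumes no occupied page has yet been operated on, and follows the example in setting M to 40:

```python
# Illustrative sketch of the release-priority rules for the fig. 6 layout.
M = 40  # second priority value: per the example, a value >= the 40 unit pages

def release_priority(page_no, streams, operated):
    """streams: (first, last) occupied-page ranges, ordered left to right.
    operated: page numbers of occupied pages the stream pointer has passed."""
    for first, last in streams:
        if first <= page_no <= last:          # occupied page of some stream
            if page_no not in operated:
                return 0                      # first priority value: keep cached
            break                             # operated page: fall through to rule
    preceding = [last for _, last in streams if last < page_no]
    if not preceding:
        return M                              # no preamble stream to the left
    return page_no - max(preceding)           # distance to its last occupied page

streams = [(5, 9), (28, 32)]   # input-stream and output-stream occupied pages
operated = set()               # assume no occupied page has been operated on yet
# Matches the worked example: page 3 -> M, page 27 -> 18, page 10 -> 1, page 6 -> 0.
```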
In summary, the above embodiment of the present invention specifically provides a method for determining the release priority of a unit page in a file buffer. By setting release priorities, this embodiment better solves the problem that stream operations cannot be performed effectively when the buffer space of the file buffer is limited. By this method, the data content least associated with the stream operations can be released, reducing the influence of the released content on the stream operations; releasing data content in this way also effectively improves the read and write speed of stream operations, reduces frequent reads of the disk file, and shortens the time consumed in implementing stream operations.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments.
Fig. 7 is a schematic block diagram of a flow operation implementation apparatus according to an embodiment of the present invention. The implementation device is suitable for the situation of responding to the streaming operation called by the upper application, wherein the device can be implemented by software and/or hardware and is generally integrated on computer equipment. As shown in fig. 7, the apparatus includes: a stream acquisition module 71 and a stream execution module 72.
Wherein, the flow obtaining module 71 is configured to obtain operation information of at least two flow operations, where the operation information includes: operation type and target file information;
and the stream execution module 72 is configured to execute, when the target file information of the at least two stream operations is the same, a corresponding operation in a file buffer corresponding to the target file based on the operation type of each stream operation.
Further, the flow execution module 72 includes:
an input stream execution unit, configured to execute a data write operation in a file buffer corresponding to the target file when there is an input stream operation whose operation type is file input;
the output stream execution unit is used for executing data reading operation in a file buffer area corresponding to the target file when the output stream operation with the operation type of file output exists;
and the file buffer area performs data read-write operation by taking a page as a unit.
Further, the input stream execution unit is specifically configured to:
when an input stream operation with an operation type of file input exists, acquiring the write-in data size of the input stream operation and the page size of a unit page in the file buffer, wherein the write-in data size is contained in operation information of the input stream operation; determining the number of write pages required for the input stream operation write according to the write data size and the page size; applying for the occupied page of the written page number for the input stream operation according to the written page number and the idle page remaining amount of the file buffer area; and writing the data to be written into the occupied pages of the application to obtain the written pages containing the written marks, wherein each written page comprises a file address corresponding to the written data in the target file.
The input stream execution unit is further configured to:
after the data writing operation is executed in the file buffer corresponding to the target file, when the data writing back condition is reached, writing back the data content of the writing page with the writing mark in the file buffer to the target file based on the corresponding file address, and clearing the writing mark of the writing page.
Further, according to the number of written pages and the remaining amount of free pages in the file buffer, applying for the input stream operation to occupy pages with the number of written pages includes:
if the number of the written pages is smaller than or equal to the remaining free pages, directly applying for the occupied pages of the written pages for the input stream operation from the remaining free pages; otherwise, the rest free pages are used as occupied pages to be distributed to the input stream operation, and the rest occupied pages are applied for the input stream operation from unit pages with release priorities.
Further, the applying for the remaining occupied pages for the input stream operation from the unit pages with release priority includes:
taking the difference between the number of written pages and the remaining amount of free pages as the number of remaining pages required for the input stream operation; sorting the corresponding unit pages from high to low based on release priority and then sequentially selecting candidate unit pages in that remaining number; and releasing the data content currently contained in each candidate unit page, and applying each released candidate unit page as a remaining occupied page of the input stream operation.
Optionally, the output stream execution unit is specifically configured to:
when an output stream operation with the operation type of file output exists, acquiring the read data size of the output stream operation, the file initial read address and the page size of a unit page in the file buffer area; determining the number of read pages required by the output stream operation according to the read data size and the page size; determining a file reading address of a target file corresponding to each page to be read according to the file initial reading address and the reading page number; and positioning a target read page serving as a corresponding occupied page from the file buffer according to each file read address, and reading the data content in each target read page.
Optionally, locating the target read page as the corresponding occupied page from the file buffer includes:
traversing a unit page containing a file address in the file buffer area aiming at each file reading address; if the target file address matched with the file reading address exists, taking a target reading page containing the target file address as an occupied page corresponding to the file reading address; otherwise, applying for a new free page from the file buffer area, caching target data content loaded from the target file into the new free page to form a target read page serving as an occupied page, and taking the file read address as the file address of the target read page, wherein the target data content is determined based on the file read address.
Optionally, the applying for a new free page from the file buffer area includes:
determining whether the total number of unit pages contained in the file buffer reaches a set total amount; if not, directly applying for a new idle page; if yes, selecting the unit page with the highest release priority, releasing the current contained data content, and taking the released unit page as a new free page.
Optionally, the release priority of the unit pages in the file buffer is determined by:
determining current existing stream operation in the file buffer area, and determining page occupation information of each stream operation application; acquiring pointing information of pointers of each stream operation; for each unit page in the file buffer area, determining whether the unit page is an occupied page of any stream operation; if so, determining that the release priority of the unit page is a set first priority value when the unit page is determined to be an unoperated page based on the pointing information of the belonging stream operation, and determining the release priority of the unit page based on a priority determination rule when the unit page is determined to be an operated page; if not, determining the release priority of the unit page based on a priority determination rule.
On the basis of the optimization, the determining the release priority of the unit page based on the priority determining rule includes:
when it is determined that the unit page does not have a write flag, determining whether a preamble flow operation exists before the unit page; if so, determining a distance value from the unit page to the last occupied page contained in the preamble stream operation, and determining the distance value as the release priority of the unit page; if not, the set second priority value is determined as the release priority of the unit page.
It should be noted that, the implementation device for stream operation provided by the embodiment of the present invention may execute the implementation method for stream operation provided by the embodiment of the present invention, and has the corresponding functions and beneficial effects of the execution method.
Fig. 8 shows a schematic hardware structure of a computer device according to an embodiment of the present invention. Specifically, the computer device includes a processor and a memory. At least one instruction is stored in the memory and executed by the processor to cause the computer device to perform the method for implementing a stream operation described in the method embodiments above.
Referring to fig. 8, the computer device may specifically include: a processor 80, a storage device 81, a display screen 82 with a touch function, an input device 83, an output device 84, and a communication device 85. The number of processors 80 in the computer device may be one or more; one processor 80 is taken as an example in fig. 8. The number of storage devices 81 in the computer device may be one or more; one storage device 81 is taken as an example in fig. 8. The processor 80, the storage device 81, the display screen 82, the input device 83, the output device 84, and the communication device 85 of the computer device may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 8.
The storage device 81 is a computer-readable storage medium and may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the embodiments of the present invention (for example, the stream obtaining module 71 and the stream execution module 72 in the apparatus for implementing a stream operation provided in the foregoing embodiments). The storage device 81 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the computer device, and the like. Further, the storage device 81 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage device 81 may further include memory located remotely from the processor 80, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display screen 82 is a touch-enabled display screen, which may be a capacitive screen, an electromagnetic screen, or an infrared screen. Generally, the display screen 82 is used for displaying data according to instructions of the processor 80, and is also used for receiving touch operations applied to the display screen 82 and transmitting the corresponding signals to the processor 80 or other devices. Optionally, when the display screen 82 is an infrared screen, it further includes an infrared touch frame disposed around the display screen 82, which may also be used to receive infrared signals and transmit them to the processor 80 or other computer devices.
The communication device 85 is used for establishing a communication connection with other computer devices, and may be a wired communication device and/or a wireless communication device.
The input device 83 may be used for receiving input digital or character information and generating key signal inputs related to user settings and function control of the computer device, and may also include a camera for capturing images and a microphone for capturing audio in video data. The output device 84 may include video output devices such as a display screen and audio output devices such as speakers. The specific composition of the input device 83 and the output device 84 may be set according to the actual situation.
The processor 80 executes various functional applications and data processing of the computer device, that is, implements the above-described method for implementing a stream operation, by running the software programs, instructions, and modules stored in the storage device 81.
Specifically, in this embodiment, when the processor 80 executes one or more programs stored in the storage device 81, the following operations are implemented: acquiring operation information of at least two stream operations, wherein the operation information includes an operation type and target file information; and when the target file information of the at least two stream operations is the same, executing corresponding operations in a file buffer corresponding to the target file based on the operation type of each of the stream operations.
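The core dispatch step above — routing operations that name the same target file to one shared file buffer — can be sketched briefly. The dict-of-lists shape and the `'in'`/`'out'` type tags are illustrative assumptions, not structures named by the patent:

```python
from collections import defaultdict

def dispatch(operations):
    """Group stream operations by target-file information; operations that
    share the same target-file information share one file buffer."""
    buffers = defaultdict(list)            # one file buffer per target file
    for op in operations:
        buffers[op['target']].append(op['type'])
    return dict(buffers)

ops = [
    {'type': 'in',  'target': '/tmp/a.dat'},   # input stream: write path
    {'type': 'out', 'target': '/tmp/a.dat'},   # output stream: read path
]
# Both operations land on the buffer keyed by '/tmp/a.dat'.
```

Grouping by target first is what lets the later read path find data the write path has already cached, instead of each stream maintaining its own copy.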
The embodiment of the present invention also provides a computer-readable storage medium. When the program in the storage medium is executed by a processor of a computer device, the computer device is enabled to perform the method for implementing a stream operation described in the above embodiments. Exemplarily, the method for implementing a stream operation described in the foregoing embodiments includes: acquiring operation information of at least two stream operations, wherein the operation information includes an operation type and target file information; and when the target file information of the at least two stream operations is the same, executing corresponding operations in a file buffer corresponding to the target file based on the operation type of each of the stream operations.
It should be noted that, for the apparatus, computer device, and storage medium embodiments, since they are substantially similar to the method embodiments, the description thereof is relatively simple, and for relevant points, reference may be made to the description of the method embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by means of software plus necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the preferred embodiment. Based on such an understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may essentially be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and which comprises several instructions for causing a computer device (which may be a robot, a personal computer, a server, or a network device, etc.) to perform the method for implementing a stream operation according to any embodiment of the present invention.
It should be noted that, in the above apparatus for implementing a stream operation, the units and modules included are only divided according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for distinguishing them from each other and are not used to limit the protection scope of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution device. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field-Programmable Gate Arrays (FPGAs), and the like.
In the description of this specification, a description referring to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made by those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to those embodiments, but may be embodied in many other equivalent forms without departing from the spirit of the invention, the scope of which is set forth in the following claims.

Claims (13)

1. A method for implementing a stream operation, comprising:
acquiring operation information of at least two stream operations, wherein the operation information comprises: operation type and target file information; the target file information at least comprises a file name of the target file or a storage path of the target file;
when the target file information of the at least two stream operations is the same, executing corresponding operations in a file buffer corresponding to the target file based on the operation type of each of the stream operations; wherein a file buffer is set with respect to the target file, and the file buffer carries out corresponding operations in units of pages;
determining the occupied pages required for each stream operation according to the release priority of the unit pages;
the release priority of the unit pages in the file buffer is determined by:
determining the stream operations currently existing in the file buffer, and determining the page occupation information applied for by each stream operation;
acquiring pointing information of pointers of each stream operation;
for each unit page in the file buffer area, determining whether the unit page is an occupied page of any stream operation;
if so, determining that the release priority of the unit page is a set first priority value when the unit page is determined to be an unoperated page based on the pointing information of the stream operation to which it belongs, and determining the release priority of the unit page based on a priority determination rule when the unit page is determined to be an operated page;
if not, determining the release priority of the unit page based on a priority determination rule.
2. The method according to claim 1, wherein the performing, based on the operation type of each of the stream operations, a corresponding operation in a file buffer corresponding to the target file includes:
if the input stream operation with the operation type of file input exists, executing data writing operation in a file buffer corresponding to the target file;
and if an output stream operation with the operation type of file output exists, executing a data reading operation in the file buffer corresponding to the target file.
3. The method of claim 2, wherein performing the data write operation in the file buffer corresponding to the target file comprises:
obtaining the write data size of an input stream operation and the page size of a unit page in the file buffer, wherein the write data size is contained in operation information of the input stream operation;
determining the number of write pages required for the input stream operation according to the write data size and the page size;
applying for occupied pages as many as the number of write pages for the input stream operation according to the number of write pages and the remaining amount of free pages in the file buffer;
and writing the data to be written into the applied occupied pages to obtain write pages carrying a write flag, wherein each write page includes a file address corresponding to the written data in the target file.
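The page-count step in this write path is plain ceiling division. A one-line sketch (the function name is assumed for illustration):

```python
def write_page_count(write_size: int, page_size: int) -> int:
    """Number of occupied pages to apply for when writing `write_size`
    bytes into a buffer whose unit pages hold `page_size` bytes each:
    ceiling division, since a partial final page still needs a whole page."""
    return -(-write_size // page_size)
```

For example, a 10,000-byte write with 4096-byte unit pages needs three pages, the third of which is only partially filled.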
4. The method of claim 3, further comprising, after performing the data write operation in the file buffer corresponding to the target file:
and when the data write-back condition is met, writing back the data content of the write pages carrying the write flag in the file buffer to the target file based on the corresponding file addresses, and clearing the write flags of the write pages.
5. The method of claim 3, wherein applying for occupied pages as many as the number of write pages for the input stream operation according to the number of write pages and the remaining amount of free pages in the file buffer comprises:
if the number of write pages is smaller than or equal to the remaining amount of free pages, directly applying for occupied pages of the number of write pages for the input stream operation from the remaining free pages; otherwise,
allocating the remaining free pages as occupied pages to the input stream operation, and applying for the remaining occupied pages for the input stream operation from the unit pages having release priorities.
6. The method of claim 5, wherein the applying for remaining occupied pages for the input stream operation from among the unit pages having the release priority comprises:
taking the difference between the number of write pages and the remaining amount of free pages as the number of remaining pages required for the input stream operation;
sorting the corresponding unit pages from high to low based on the release priority, and then sequentially selecting candidate unit pages of the remaining number of pages;
and releasing the data content currently contained in each candidate unit page, and applying for each released candidate unit page as a remaining occupied page of the input stream operation.
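The allocation scheme of claims 5 and 6 — free pages first, then the highest-release-priority pages to cover the shortfall — can be sketched as below. The data shapes (page ids, `(page_id, priority)` pairs) are illustrative assumptions:

```python
def apply_pages(needed, free_pages, prioritized):
    """free_pages: ids of currently free unit pages.
    prioritized: (page_id, release_priority) pairs; higher releases first.
    Returns the ids of the occupied pages granted to the input stream."""
    if needed <= len(free_pages):
        # Enough free pages: apply for them directly.
        return free_pages[:needed]
    remaining = needed - len(free_pages)          # shortfall to cover
    # Sort candidates from highest to lowest release priority and take
    # the first `remaining`; releasing them would drop their contents.
    victims = sorted(prioritized, key=lambda p: p[1], reverse=True)[:remaining]
    return free_pages + [pid for pid, _ in victims]
```

Sorting by release priority rather than, say, pure recency is what lets the buffer protect pages a stream still needs while reclaiming those far behind any stream's pointer.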
7. The method of claim 2, wherein performing a data read operation in a file buffer corresponding to the target file comprises:
acquiring the read data size and the file initial read address of an output stream operation, and the page size of a unit page in the file buffer, wherein the read data size and the file initial read address are contained in the operation information of the output stream operation;
determining the number of read pages required by the output stream operation according to the read data size and the page size;
determining the file read address in the target file corresponding to each page to be read according to the file initial read address and the number of read pages;
and locating, from the file buffer according to each file read address, a target read page serving as the corresponding occupied page, and reading the data content in each target read page.
8. The method of claim 7, wherein locating the target read page from the file buffer as the corresponding occupied page comprises:
traversing the unit pages containing file addresses in the file buffer for each file read address;
if a target file address matched with the file read address exists, taking the target read page containing the target file address as the occupied page corresponding to the file read address; otherwise,
applying for a new free page from the file buffer area, caching target data content loaded from the target file into the new free page to form a target read page serving as an occupied page, and taking the file read address as the file address of the target read page, wherein the target data content is determined based on the file read address.
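Claim 8's lookup is a cache hit/miss check keyed by file address. A minimal sketch, assuming a dict-based buffer and a caller-supplied `load` callback (both illustrative, not the patent's structures):

```python
def locate_read_page(buffer, read_addr, load):
    """buffer maps a cached file address to that page's data content;
    `load` fetches the target data content from the target file on a miss."""
    if read_addr in buffer:
        # Hit: a unit page already caches this file address.
        return buffer[read_addr]
    data = load(read_addr)        # miss: load from the target file
    buffer[read_addr] = data      # the new page takes the file read address
    return data
```

On the second read of the same address the data comes from the buffer, so `load` runs only once per address; this is where an output stream benefits from pages an input stream has already populated.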
9. The method of claim 8, wherein applying for a new free page from the file buffer comprises:
determining whether the total number of unit pages contained in the file buffer reaches a set total amount;
if not, directly applying for a new free page;
if yes, selecting the unit page with the highest release priority, releasing the data content it currently contains, and taking the released unit page as the new free page.
10. The method of claim 1, wherein the determining the release priority of the unit page based on a priority determination rule comprises:
when it is determined that the unit page does not carry a write flag, determining whether a preceding stream operation exists before the unit page;
if so, determining the distance value from the unit page to the last occupied page contained in the preceding stream operation, and taking the distance value as the release priority of the unit page;
if not, the set second priority value is determined as the release priority of the unit page.
11. An apparatus for implementing a stream operation, comprising:
a stream obtaining module, configured to obtain operation information of at least two stream operations, wherein the operation information includes an operation type and target file information; the target file information at least includes a file name of the target file or a storage path of the target file;
a stream execution module, configured to execute, when the target file information of the at least two stream operations is the same, corresponding operations in a file buffer corresponding to the target file based on the operation type of each of the stream operations; wherein a file buffer is set with respect to the target file, and the file buffer carries out corresponding operations in units of pages;
determining the occupied pages required for each stream operation according to the release priority of the unit pages;
wherein the release priority of the unit pages in the file buffer is determined by:
determining the stream operations currently existing in the file buffer, and determining the page occupation information applied for by each stream operation; acquiring the pointing information of the pointer of each stream operation; for each unit page in the file buffer, determining whether the unit page is an occupied page of any stream operation; if so, determining that the release priority of the unit page is a set first priority value when the unit page is determined to be an unoperated page based on the pointing information of the stream operation to which it belongs, and determining the release priority of the unit page based on a priority determination rule when the unit page is determined to be an operated page; if not, determining the release priority of the unit page based on the priority determination rule.
12. A computer device, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs being executed by the one or more processors to cause the one or more processors to implement the method for implementing a stream operation according to any one of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method for implementing a stream operation according to any one of claims 1-10.
CN201811495092.XA 2018-12-07 2018-12-07 Method, device, equipment and storage medium for realizing stream operation Active CN109634877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811495092.XA CN109634877B (en) 2018-12-07 2018-12-07 Method, device, equipment and storage medium for realizing stream operation

Publications (2)

Publication Number Publication Date
CN109634877A CN109634877A (en) 2019-04-16
CN109634877B true CN109634877B (en) 2023-07-21

Family

ID=66072001



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154192A (en) * 2006-09-29 2008-04-02 国际商业机器公司 Administering an access conflict in a computer memory cache
CN102006368A (en) * 2010-12-03 2011-04-06 重庆新媒农信科技有限公司 Streaming media audio file play method based on mobile terminal memory card cache technology
CN103488772A (en) * 2013-09-27 2014-01-01 珠海金山网络游戏科技有限公司 Method, system and equipment for caching files through external storage
CN105847942A (en) * 2016-04-01 2016-08-10 青岛海信宽带多媒体技术有限公司 Media data buffering method, media data buffering device and intelligent television




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant