CN112395083A - Resource file release method and device - Google Patents

Resource file release method and device

Info

Publication number
CN112395083A
Authority
CN
China
Prior art keywords
resource file
file
channel
memory
data block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011064329.6A
Other languages
Chinese (zh)
Other versions
CN112395083B (en)
Inventor
宫冰川
王嫒婷
殷科君
徐志宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011064329.6A priority Critical patent/CN112395083B/en
Publication of CN112395083A publication Critical patent/CN112395083A/en
Application granted granted Critical
Publication of CN112395083B publication Critical patent/CN112395083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5022 Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a method and a device for releasing a resource file, an electronic device and a computer-readable storage medium; the method comprises the following steps: creating a one-to-one associated channel for each source resource file to be released in a disk; registering a plurality of channels into the same selector, and querying the states of the channels through the selector; reading data blocks from the associated source resource file into a buffer of a memory through a queried channel in the ready state, and writing the data blocks in the buffer of the memory into the corresponding target resource file in the disk; and when all the data blocks in the associated source resource file have been read and written into the corresponding target resource file, closing the channel in the ready state. By the method and the device, resource files can be released efficiently.

Description

Resource file release method and device
Technical Field
The present application relates to computer technologies, and in particular, to a method and an apparatus for releasing a resource file, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of the software industry, users place ever higher performance requirements on application software. Taking the release of a resource file as an example, in the related art the data stream is input and output in a blocking manner to release the resource file. Once blocking occurs, the thread is suspended and loses the right to use the CPU, and once that right is lost there is no way to control when the CPU will next be able to process the resource file currently being released, which reduces the efficiency of releasing the file; moreover, for larger resource files, the release speed depends entirely on how fast the operating system switches between the user space and the kernel address space. This approach therefore has significant drawbacks in both performance and controllability.
Therefore, there is no effective solution for how to efficiently release resource files in the related art.
Disclosure of Invention
The embodiment of the application provides a method and a device for releasing a resource file, electronic equipment and a computer readable storage medium, which can efficiently release the resource file.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for releasing a resource file, which comprises the following steps:
creating, for each source resource file to be released in a disk, a channel associated with it on a one-to-one basis;
registering a plurality of channels into the same selector, and inquiring the states of the channels through the selector;
reading data blocks from the associated source resource files to a buffer area of a memory through the inquired channel in the ready state, and writing the data blocks in the buffer area of the memory into a corresponding target resource file in the disk;
and when the data blocks in the associated source resource file are all read and written into the corresponding target resource file, closing the channel in the ready state.
An embodiment of the present application provides a release apparatus for a resource file, including:
the first creating module is used for creating, for each source resource file to be released in the disk, a channel associated with it on a one-to-one basis;
the query module is used for registering the channels into the same selector and querying the states of the channels through the selector;
a releasing module, configured to read a data block from the associated source resource file to a buffer of a memory through the queried channel in the ready state, and write the data block in the buffer of the memory into a corresponding target resource file in the disk;
and the closing module is used for closing the channel in the ready state when the data blocks in the associated source resource file are all read and written into the corresponding target resource file.
In the foregoing solution, an apparatus for releasing a resource file provided in an embodiment of the present application further includes:
the second creating module is used for acquiring the path of each source resource file and creating a source resource file object pointing to the path of the source resource file;
the source resource file object is used for acquiring a reading source of a data block in a buffer area of the memory;
acquiring a path of each target resource file, and creating a target resource file object pointing to the path of the target resource file;
and the target resource file object is used for acquiring a write address of a data block in a buffer area of the memory.
In the above scheme, the types of the channels include a file reading channel and a file writing channel;
the first creating module is further configured to, for each source resource file to be released in the disk, perform the following operations:
creating a file input stream object named by the source resource file object in the memory, and creating a file read channel associated with the source resource file through the file input stream object;
and creating a file output stream object named by the target resource file object in the memory, and creating a file writing channel associated with the target resource file through the file output stream object.
In the above solution, the types of the states of the channel include: ready state, not ready state;
and the query module is also used for immediately returning and querying the next channel when the queried channel is in the non-ready state until the channel in the ready state is queried.
In the above scheme, the releasing module is further configured to, when any one of the file reading channels in the ready state is queried, read a data block in the source resource file associated with the channel in the ready state into a buffer of the memory through the file reading channel in the ready state;
when any one file writing channel in the ready state is inquired, writing the data block in the source resource file associated with the file writing channel in the ready state in the buffer area of the memory into the corresponding target resource file in the disk through the file writing channel in the ready state.
In the foregoing solution, when a plurality of file read channels and/or file write channels in the ready state are queried at the same time, the apparatus for releasing a resource file according to an embodiment of the present application further includes:
the selection module is used for acquiring the release progress of each source resource file;
determining a corresponding release progress for the source resource file associated with the file reading channel and/or the file writing channel in the ready state;
and selecting a file reading channel and/or a file writing channel associated with the source resource file with the minimum release progress as a channel to be executed.
In the above scheme, the releasing module is further configured to update the position parameter in the buffer of the memory when the first data block in the buffer of the memory is written into the corresponding target resource file in the disk;
wherein the position parameter is used for indicating a starting position for reading the second data block;
when reading the second data block from the source resource file, reading the second data block from the starting position indicated by the position parameter.
In the foregoing solution, an apparatus for releasing a resource file provided in an embodiment of the present application further includes:
the modification module is used for acquiring the position parameter of the data block in the buffer of the memory;
modifying the position parameters according to the designated position;
and reading the data block from the starting position corresponding to the modified position parameter.
In the above scheme, the buffer area of the memory is a virtual memory space mapped to a physical memory space;
the release module is further configured to obtain an access address of the buffer of the memory, and search the access address in the physical memory space;
when the access address exists in the physical memory space, directly writing the data block into a corresponding target resource file from the physical memory space;
when the access address does not exist in the physical memory space, searching a disk address corresponding to the access address in a disk space;
and exchanging the disk address and the physical memory address to write the data block into a corresponding target resource file from the physical memory space.
An embodiment of the present application provides a release apparatus for a resource file, including:
a memory for storing executable instructions;
and the processor is used for realizing the resource file release method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions and is used for implementing the method for releasing a resource file provided by the embodiment of the present application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
by having a selector poll the channel of each source resource file to be released, data is exchanged between the channels and the buffer in units of data blocks, so that a plurality of source resource files are quickly released to the target resource files; the single thread occupied by the selector can efficiently serve a plurality of channels, so the resource files are released quickly while occupying only a small amount of operating-system resources, and the release efficiency is improved.
Drawings
Fig. 1 is a schematic architecture diagram of a resource file release system 100 according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal 400 for releasing a resource file according to an embodiment of the present application;
fig. 3 is a schematic diagram of a resource file releasing apparatus releasing a resource file according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a multi-channel release resource file according to an embodiment of the present disclosure;
fig. 5A is a schematic flowchart of a resource file release method according to an embodiment of the present application;
fig. 5B is a flowchart illustrating a resource file release method according to an embodiment of the present application;
fig. 5C is a schematic flowchart of a resource file release method according to an embodiment of the present application;
FIG. 6 is a diagram illustrating accessing data in a buffer of a memory according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating an operation of a buffer for read/write events according to an embodiment of the present disclosure;
fig. 8 is a schematic comparison diagram of a resource file release method provided in an embodiment of the present application and a resource file release method in the related art;
FIG. 9 is a schematic diagram of releasing a resource file in one complete pass in the related art;
fig. 10 is a flowchart of a method for releasing a map file according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating reading and writing of a cache region of a memory according to an embodiment of the present disclosure;
fig. 12 is a comparison graph of technical effects achieved by the technical solutions provided in the embodiments of the present application and technical effects achieved by the related art.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first/second/third" are only used to distinguish similar objects and do not denote a particular order; it should be understood that "first/second/third" may be interchanged in a specific order or sequence, where appropriate, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) A byte is a unit of measurement used in computer information technology to measure storage capacity; it is a binary string processed as a unit and is a small unit of which information is composed. The most common byte is the eight-bit byte, i.e., one containing eight binary digits.
2) A data block is a group of several records arranged consecutively in sequence, and is a unit of data transferred between a disk and an input device and an output device.
3) Stream, which is an abstraction of a byte sequence in Java. Like "water flow", a stream in Java also has a "direction of flow", and an object from which a byte sequence can be read is generally called an input stream (InputStream); an object to which a byte sequence can be written is called an output stream (OutputStream).
4) Input stream/output stream (IO) performs data exchange in units of bytes or characters, completes reading from the original data source and outputs the original data source to the target medium.
5) Buffer: a buffer has two modes of operation, a write mode and a read mode. In read mode the application can only read data from the Buffer, and in write mode the application can only write data into it. A Buffer is a container object holding the data to be written or read, i.e., it is used to store data units.
6) A File (File class) is a File or directory path name represented in Java.
7) The Channel (Channel) is mainly used for effectively transmitting data between the buffer and an entity located on the other side of the Channel (each Channel occupies a certain memory bandwidth, and the memory bandwidth determines the data exchange speed between the memory and the CPU). Through which data can be read and written for data exchange in units of data blocks.
8) And a Selector (Selector) which manages a plurality of channels registered to the Selector, inquires the status of the plurality of channels, and executes a task on the channel in a ready state.
9) Byte-code (Byte-code), a binary file containing an executable program, consisting of a sequence of code/data pairs.
10) Java, an object-oriented programming language.
11) A Java Virtual Machine (JVM) is an imaginary computer, implemented by simulating various computer functions on an actual computer. The Java virtual machine has its own complete hardware architecture, such as a processor, a stack and registers, as well as a corresponding instruction system.
The related art releases a resource file by performing the input and output of a stream in a blocking manner. Once blocking occurs, the thread that handles the reads and writes is suspended; during that time it can do nothing, and it loses the right to use the CPU. Once CPU usage is lost, it is unknown when the CPU will have free time to process the resource file currently being released. Such a release is unacceptable where performance is required. When writing data, if too much data is written at once it must be divided according to its length, and in this process the user space and the kernel address space need to be switched, a switch that is controlled by the operating system. The implementation of the related art therefore has great defects in both performance and controllability.
In view of the foregoing problems, an embodiment of the present application provides a method and an apparatus for releasing a resource file, an electronic device, and a computer-readable storage medium, which are capable of releasing a resource file efficiently, and an exemplary application of the electronic device for releasing a resource file provided in the embodiment of the present application is described below. Next, an exemplary application when the electronic device is implemented as a terminal device will be explained.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a system 100 for releasing a resource file according to an embodiment of the present application, where the system 100 for releasing a resource file includes: a server 200, a network 300 and terminals (terminal 400-1 and terminal 400-2 are exemplarily shown), the terminals are connected to the server 200 through the network 300, and the network 300 can be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking as an example a terminal running a map program (developed based on an object-oriented programming language such as Java), the terminal receives a user download instruction, downloads a map package from the network 300; the server 200 returns a map packet; the terminal stores the downloaded offline map package in a directory corresponding to the project engineering, and when a user starts or uses the map application, the terminal calls an interface provided by an operating system through a map program to release the downloaded offline map package (namely, a source resource file) to a map file (namely, a target resource file) at a position which can be read by the map program, so that the map program can load map data in the offline map package through the released map file to present a map of an area requested by the user at the terminal.
With reference to the above exemplary application and implementation of the terminal device provided by the embodiment of the present application, it can be understood that the method for releasing a resource file provided by the embodiment of the present application can be widely applied to scenarios of releasing resource files. For example, in a voice broadcast application program, a voice packet (i.e., a source resource file) is stored as a "resource" in a directory corresponding to the project, and when the voice broadcast application program is started or used, it calls an interface provided by the operating system to release the offline voice packet (i.e., the source resource file) to a voice file (i.e., a target resource file) at a position that the program can read, thereby implementing voice broadcast. Any other scenario involving the release of resource files likewise belongs to the potential application scenarios of the embodiments of the present application.
In some embodiments, the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like, but is not limited thereto, and the embodiments of the present application are not limited thereto.
The following describes in detail a hardware structure of an electronic device of the method for releasing a resource file according to the embodiment of the present application. Taking an electronic device as the terminal 400 shown in fig. 1 as an example, referring to fig. 2, fig. 2 is a schematic structural diagram of the terminal 400 for releasing a resource file provided in an embodiment of the present application, where the terminal 400 shown in fig. 2 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in fig. 2.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating to other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the resource file releasing device provided in this embodiment of the present application may be implemented in software, and fig. 2 illustrates a resource file releasing device 455 stored in the memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: a first creation module 4551, a query module 4552, a release module 4553 and a close module 4554. These modules are logical and thus may be combined or further split according to the functionality implemented, the functionality of the individual modules being described below.
In other embodiments, the resource file releasing Device provided in this embodiment may be implemented in hardware, and for example, the resource file releasing Device provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the resource file releasing method provided in this embodiment, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The following provides a detailed description of the principle by which a resource file releasing apparatus according to an embodiment of the present application releases a resource file. Referring to fig. 3, fig. 3 is a schematic diagram of a resource file releasing apparatus releasing a resource file according to an embodiment of the present application. The resource file releasing device is the resource file releasing device 455 shown in fig. 2. The device releases the resource file by using a packaged Java program: the packaged Java program is compiled into Java bytecode, and a Java Virtual Machine (JVM) installed on the operating system platform executes the Java bytecode to run the Java program that releases the resource file.
In some embodiments, taking a Java program as an example of a map program, according to an operation of a user triggering the map program on a map application, the map program calls a method in a JVM library of an operating system to implement logic of the Java program to perform a release-related operation, that is, release a downloaded offline map package (i.e., a source resource file) to a map file (i.e., a target resource file) of a location that can be read by the map program.
It should be noted that the Java bytecode is not specific to a specific operating platform, but is specific to an abstract JVM virtual machine. The Java virtual machine shields the difference of different operating platforms, and a Java program can be run on different operating platforms through the JVM virtual machine.
The schematic diagram of releasing resource files through multiple channels provided in the embodiment of the present application is described in detail below. Referring to fig. 4, fig. 4 is a schematic diagram of releasing a resource file through multiple channels according to an embodiment of the present application. The core components include: a channel, a buffer, and a selector.
In some embodiments, the release of resource files over multiple channels is implemented by one thread. For example, multiple channels are registered to the same selector, and the selector runs in one thread; the selector queries whether the events (e.g., write events and read events) on the multiple channels are ready, and the events on the channels in the ready state are executed according to a selection policy. Read and write events read or write data in units of data blocks: reading a data block from a source resource file into the buffer of the memory is regarded as one read event, and writing a data block from the buffer of the memory into the target resource file is regarded as one write event; one read event plus one write event is equivalent to the release operation of one data block of the resource file. The selection policy comprises: when one channel is in the ready state, executing the read or write event on that channel; and when a plurality of channels are in the ready state, selecting the channel to be executed according to the release progress of the resource files.
In some embodiments, multiple channels may correspond to one resource file, i.e., one resource file is released over multiple channels; the resource file is partitioned into a plurality of data blocks, and each channel releases a different data block of that resource file.
In other embodiments, the multiple channels may correspond to multiple resource files, that is, multiple resource files are released over the multiple channels, with the channels corresponding to the resource files one to one. For example, when three resource files are released on three channels, the first resource file is partitioned into a first data block and a second data block, the second resource file is partitioned into a first data block and a second data block, and the third resource file is partitioned into a first data block and a second data block; when the first channel is in the ready state, the first data block of the first resource file is released; when the second channel is in the ready state, the first data block of the second resource file is released; when the third channel is in the ready state, the first data block of the third resource file is released; when the first channel is again in the ready state, the second data block of the first resource file is released, and so on.
The following describes the method for releasing a resource file provided by the embodiment of the present application, with reference to the exemplary application of a terminal that implements this method. Referring to fig. 5A, fig. 5A is a schematic flowchart of a method for releasing a resource file according to an embodiment of the present application, and the following description will be made with reference to the steps shown in fig. 5A.
In step 101, a one-to-one associated channel is created for each source resource file to be released in the disk.
In some embodiments, the types of channels include a file read channel and a file write channel. Creating a one-to-one associated channel for each source resource file to be released in the disk can be achieved as follows: for each source resource file to be released in the disk, the following operations are performed: creating a file input stream object named after the source resource file object in the memory, and creating a file read channel associated with the source resource file through the file input stream object; and creating a file output stream object named after the target resource file object in the memory, and creating a file write channel associated with the target resource file through the file output stream object.
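As an illustration of step 101, the following minimal Java sketch creates a file read channel and a file write channel from stream objects; the classes and methods below are from the standard java.io/java.nio libraries, while the helper class itself and its parameter names are assumptions introduced here for clarity.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class ChannelFactory {
    // Create a file read channel associated with the source resource file:
    // the file input stream object is constructed from the source file object,
    // and getChannel() returns the channel bound to that stream.
    static FileChannel openReadChannel(File sourceFile) throws IOException {
        return new FileInputStream(sourceFile).getChannel();
    }

    // Create a file write channel associated with the target resource file,
    // through a file output stream object constructed from the target file object.
    static FileChannel openWriteChannel(File targetFile) throws IOException {
        return new FileOutputStream(targetFile).getChannel();
    }
}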
In step 102, multiple channels are registered in the same selector, and the state of the multiple channels is queried through the selector.
In some embodiments, the types of states of the channel include: ready state, not ready state; querying the states of the plurality of channels through the selector can be realized by the following steps: when the state of the inquired channel is not in the ready state, returning and inquiring the next channel immediately until the channel in the ready state is inquired.
It should be noted that the ready state here means that there is data that can be read or written (a read event reads a data block from a source resource file into the buffer of the memory; a write event writes a data block from the buffer of the memory into the target resource file); the non-ready state means that data has not arrived yet and a read/write event cannot be carried out yet. The object being registered with is the selector; the selector is run by one thread, so multiple channels can be managed by one thread.
In other embodiments, multiple channels may also be managed by multiple threads that execute the read/write events on the channels. The created file read channels and file write channels are registered into the same selector of the same main thread, and the main thread traverses the read/write events in the multiple channels; when the read/write event in a channel is not ready (data has not arrived), 0 is returned and the next channel is polled; when the read/write event in a channel is ready (data has arrived), 1 is returned, the main thread sends a read/write operation instruction to the corresponding channel and immediately returns to polling the other channels, and a sub-thread controls the channel in the ready state to execute the read/write event. In the embodiment of the application, the main thread is only responsible for polling, so it is never blocked; the read/write events on the channels are executed by sub-threads, and the channels run in parallel, so the read/write events are executed more efficiently.
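A minimal sketch of this main-thread-polls / sub-threads-execute pattern follows. Note that in the standard JDK only SelectableChannel subclasses (such as SocketChannel) can be registered with java.nio.channels.Selector, and FileChannel is not one of them, so the sketch hand-rolls the polling loop over a hypothetical ReleaseChannel abstraction purely to illustrate the pattern described above; it is a sketch under these assumptions, not the implementation of this application.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical abstraction of one registered channel (read or write) and its state.
interface ReleaseChannel {
    boolean isReady();              // data has arrived; a read/write event can be executed
    boolean isFinished();           // all data blocks of the associated file have been released
    void performReadWriteEvent();   // read one data block into the buffer, or write one block out
}

class PollingMainThread {
    private final ExecutorService subThreads = Executors.newFixedThreadPool(4);

    void run(List<ReleaseChannel> channels) {
        // The main thread only polls and dispatches, so it never blocks.
        while (channels.stream().anyMatch(c -> !c.isFinished())) {
            for (ReleaseChannel channel : channels) {
                if (channel.isReady()) {
                    // dispatch the ready channel to a sub-thread, then keep polling immediately
                    subThreads.execute(channel::performReadWriteEvent);
                }
                // not ready: return at once and query the next channel
            }
        }
        subThreads.shutdown();
    }
}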
In step 103, the data blocks are read from the associated source resource file to the buffer area of the memory through the queried channel in the ready state, and the data blocks in the buffer area of the memory are written into the corresponding target resource file in the disk.
The association here refers to the resource file associated with the channel in the ready state.
In some embodiments, based on fig. 5A, referring to fig. 5B, fig. 5B is a flowchart of a method for releasing a resource file provided in an embodiment of the present application, and step 103 shown in fig. 5B may be implemented by step 1031 and step 1032, which will be described with reference to the step shown in fig. 5B.
In step 1031, when any one of the file reading channels in the ready state is queried, the data blocks in the source resource file associated with the channel in the ready state are read into the buffer area of the memory through the file reading channel in the ready state.
In step 1032, when any file write channel in the ready state is queried, the data block in the source resource file associated with the file write channel in the ready state in the buffer of the memory is written into the corresponding target resource file in the disk through the file write channel in the ready state.
It should be noted that the buffer of the memory is a buffer requested in the memory for storing data of the source resource file before each read/write, and the size of the requested buffer may be determined according to the size of the source resource file to be read or written.
In some embodiments, the buffer of memory is a virtual memory space mapped to a physical memory space; writing the data blocks in the buffer area of the memory into the corresponding target resource file in the disk, which can be realized by the following method: obtaining an access address of a buffer area of a memory, and searching the access address in a physical memory space; when the access address exists in the physical memory space, directly writing the data block into the corresponding target resource file from the physical memory space; when the access address does not exist in the physical memory space, searching a disk address corresponding to the access address in a disk space; and exchanging the disk address with the physical memory address so as to write the data block into the corresponding target resource file from the physical memory space.
For example, referring to fig. 6, fig. 6 is a schematic diagram illustrating access to data in a buffer of a memory according to an embodiment of the present disclosure. When the user program writes the buffer of the memory into the target resource file, the virtual memory system searches the physical memory space for the virtual address of the buffer. If the address exists, as shown by A in the virtual memory space of fig. 6, the data block is written into the target resource file directly from the physical memory space; if it does not exist, a signal is raised for D in the disk space shown in fig. 6. The virtual memory system then searches the disk space for the virtual address, places it into the physical memory space after finding it, and exchanges D in the disk space with C in the physical memory space, so that the user program can use the data of D in the physical memory space.
In the embodiment of the application, the virtual memory space can be larger than the physical memory space, which ensures that a user program can read some large files.
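A minimal sketch of a memory buffer backed by virtual memory is shown below, using the standard FileChannel.map() call, which returns a MappedByteBuffer whose pages are loaded from disk by the operating system on first access; the file name is a placeholder introduced for illustration, not one used in this application.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBufferExample {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("source.pak", "r");
             FileChannel channel = file.getChannel()) {
            // Map the whole file into virtual memory; the mapping may exceed physical memory.
            MappedByteBuffer mapped =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            // The first access to a page not yet in physical memory causes the operating
            // system to load the corresponding disk page before returning the data.
            byte firstByte = mapped.get(0);
            System.out.println("first byte: " + firstByte);
        }
    }
}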
In some embodiments, reading a data block from an associated source resource file into a buffer area of a memory, and writing the data block in the buffer area of the memory into a corresponding target resource file in a disk, may be implemented by: when a first data block in a buffer area of the memory is written into a corresponding target resource file in a disk, updating a position parameter in the buffer area of the memory; the position parameter is used for indicating the starting position of reading the second data block; when reading the second data block from the source resource file, the second data block is read starting from the starting position indicated by the position parameter. Here, the first data block of the source resource file is read first, and then the second data block of the source resource file is read.
The following describes in detail an operation mechanism of a buffer during reading and writing, referring to fig. 7, where fig. 7 is a schematic diagram of an operation of performing a reading and writing event on the buffer according to an embodiment of the present application.
The buffer contains three parameters: a capacity parameter, a position parameter, and a tail qualifier (limit). The capacity parameter represents the maximum capacity of the buffer; its value is determined by the size of the requested buffer, and when the buffer is newly created the value of the capacity parameter equals the value of the tail qualifier. The position parameter tracks how much data has been written or read, and points to the starting position of the next data block to be read or written. The tail qualifier represents how much data can still be read or how much space can still be written, and its value is less than or equal to the capacity parameter. The position parameter and the tail qualifier depend on whether the buffer is currently in read mode or write mode, whereas the capacity parameter is the same in both modes.
When a data block is read from the source resource file into the buffer, the position parameter is set to 0, so that it points to the very beginning of the data, and the data block can be read from the source resource file into the buffer starting from the position pointed to by the position parameter. When the data blocks in the buffer are written into the target resource file, the flip method is called to set the position parameter to 0 and the tail qualifier to the value of the old position parameter (the position parameter at the end of the last read); the position parameter now points to the starting position for writing the data blocks in the buffer into the target resource file, and the data blocks can be written into the target resource file starting from that position. After one round of reading and writing is finished, the clear method is called to reset these parameters in preparation for the next read.
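The effect of the flip and clear methods on the position parameter and the tail qualifier (limit) can be seen in the following minimal java.nio.ByteBuffer snippet; the 16-byte capacity is an arbitrary assumption.

import java.nio.ByteBuffer;

public class FlipClearDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(16); // capacity = 16, position = 0, limit = 16
        buffer.put(new byte[]{1, 2, 3});             // write mode: position advances to 3
        buffer.flip();                               // limit = 3 (old position), position = 0: read mode
        System.out.println(buffer.remaining());      // 3 bytes are available to be written out
        buffer.clear();                              // position = 0, limit = capacity: ready for the next read
    }
}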
In the embodiment of the present application, a program generally runs on a CPU whose speed far exceeds that of an ordinary disk. When data is written to the disk, the program has to wait and can do nothing, i.e., the thread is blocked. By placing the data into a buffer (the read/write speed of the memory is far higher than that of the hard disk), the program can continue executing and choose, according to conditions, when to write the data in the buffer to the disk; this reduces the number of times the program has to wait, and the buffer coordinates the low-speed read/write operations with the high-speed running program. The buffer does not write data to the disk immediately: when a resource file is released, data in the source resource file is read into the buffer, and a data block may be written into the target resource file right after it is read, or only after the buffer is full, which reduces the number of disk reads and writes and extends the service life of the disk.
In step 104, when all data blocks in the associated source resource file are read and written into the corresponding target resource file, the channel in the ready state is closed.
It should be noted that the JVM is a virtual platform running on top of the operating system, and a Java program can only run on an operating system on which the JVM is installed. The Java garbage collection mechanism releases resources inside the JVM and has no right to intervene with resources outside the JVM. Therefore, when the release of the source resource file is finished, the program calls the JVM library to execute a close method so as to release the operating-system resources outside the JVM.
In some embodiments, when a plurality of file read channels and/or file write channels in the ready state are queried at the same time, the policy for selecting the channel to be executed may be implemented as follows: obtaining the release progress of each source resource file; determining the corresponding release progress for the source resource files associated with the file read channels and/or file write channels in the ready state; and selecting the file read channel and/or file write channel associated with the source resource file with the minimum release progress as the channel to be executed.
In other embodiments, the policy for selecting a channel to be executed may also be implemented as follows: setting the priority of each source resource file; determining corresponding priority for source resource files associated with a file reading channel and/or a file writing channel in a ready state; and selecting a file reading channel and/or a file writing channel associated with the source resource file with the highest priority as a channel to be executed.
In the embodiment of the application, the channel to be executed is selected according to the release progress of the resource files, the release progress of each resource file is balanced, and the overall release efficiency is ensured.
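A hedged sketch of this selection policy follows: among the channels that are ready at the same time, pick the one whose associated source resource file has the smallest release progress. The ChannelPicker class and the progress map are hypothetical names introduced only for illustration.

import java.util.Comparator;
import java.util.List;
import java.util.Map;

class ChannelPicker {
    // Select the channel to be executed: the one whose source resource file has the
    // minimum release progress (progress expressed as a fraction between 0.0 and 1.0).
    static <C> C pickChannelToExecute(List<C> readyChannels, Map<C, Double> releaseProgress) {
        return readyChannels.stream()
                .min(Comparator.comparingDouble((C c) -> releaseProgress.getOrDefault(c, 0.0)))
                .orElseThrow(() -> new IllegalArgumentException("no ready channel"));
    }
}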
In some embodiments, based on fig. 5A, referring to fig. 5C, fig. 5C is a schematic flowchart of a resource file release method provided in an embodiment of the present application, and fig. 5C shows that step 105 to step 107 may be further performed before step 103, which will be described below with reference to each step.
In step 105, acquiring a location parameter of the data block in a cache region of the memory;
in step 106, the location parameters are modified according to the specified location.
In step 107, the data block is read from the start position corresponding to the modified position parameter.
In some examples, the specified position may be set arbitrarily according to the user's requirement. The position parameter of the file channel FileChannel may be obtained by invoking the position method, where the position parameter indicates the starting position for reading the next data block; the position parameter of the FileChannel is then modified by invoking the position method, thereby modifying the position of the data block in the buffer, so that the data block can be read from the specified position of the buffer.
In the embodiment of the application, when the application later processes a data block in the buffer for reading or writing, the position of the data block in the buffer can be moved by modifying the position parameter, which increases the flexibility of the read/write processing.
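A minimal sketch of steps 105 to 107 under the above description: the position method of the standard FileChannel is used to query and then modify the position parameter before reading the next data block. The file name and the byte offset are illustrative assumptions.

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class RepositionedRead {
    public static void main(String[] args) throws Exception {
        try (FileChannel channel = new RandomAccessFile("source.pak", "r").getChannel()) {
            long position = channel.position();   // step 105: obtain the position parameter
            channel.position(position + 1024);    // step 106: modify it to the specified location
            ByteBuffer block = ByteBuffer.allocate(4096);
            channel.read(block);                  // step 107: read the data block from the modified start
        }
    }
}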
In the release method of the resource file in the related technology, after the request is initiated, if the data is not ready for reading and writing, the thread is blocked until the readable and writable data is returned. The following describes a method for releasing a resource file and a method for releasing a resource file in the related art in detail. Referring to fig. 8, fig. 8 is a schematic comparison diagram of a resource file release method provided in an embodiment of the present application and a resource file release method in the related art.
According to the method for releasing a resource file provided by the embodiment of the present application, the thread sends query requests to the registered channels while waiting for data, to check whether any channel is in the ready state, and the thread can continue to perform other operations until readable or writable data is returned, so the thread is non-blocking during the data-waiting stage. If a channel in the ready state exists, a data block is read from the source resource file associated with that channel into the buffer of the memory, or a data block in the buffer of the memory is written into the target resource file; if no channel is in the ready state, 0 is returned directly and the thread is never blocked.
The resource file release method provided by the embodiment of the application realizes that a plurality of read-write channels are monitored simultaneously through one thread, and non-blocking continuous inquiry is carried out to find the channel with the read-write event so as to release the resource file, thereby reducing the time consumed by releasing the resource file and improving the speed of releasing the resource file.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. Referring to fig. 9, fig. 9 is a schematic diagram of releasing a resource file completely at a time in the related art, and a process of releasing a resource file completed at a time in the related art includes: first, two resource file paths are introduced, wherein one resource file path is a path of a resource file to be released (such as a source resource file shown in fig. 9) and the other resource file path is a file path to be written to a target (such as a target resource file shown in fig. 9), and then the program respectively creates a source resource file object and a target resource file object, and then creates an input stream object and an output stream object which can really read and write the file according to the file objects. The input stream object/output stream object and the file establish a path between them through the operating system. The input stream object continuously reads data from the source resource file to the memory area, while the output stream object continuously reads data from the memory area, and then writes the read data into the target resource file.
In the implementation process of the embodiment of the application, the following problems are found in the related art: since the input stream and the output stream are executed by threads occupying the CPU, it will cause the possibility of thread blocking when data is written into the output stream or read from the input stream, and the use right of the CPU is lost once thread blocking occurs, which is absolutely not allowed under the current performance requirement. Moreover, the data in the memory area is not transparent to the user, i.e. the user cannot move the data in the memory area back and forth, and the operability is poor.
In view of the foregoing problems in the related art, embodiments of the present application provide a method for releasing a map file, which releases an offline map package as a local file to display an updated map. Referring to fig. 10, fig. 10 is a flowchart of a method for releasing a map file according to an embodiment of the present application. The flowchart is described in detail below; the steps of releasing the map file provided in the embodiment of the present application may be implemented in the following manner:
step 201: and downloading the off-line map package. Here, the offline map package is the source resource file.
Step 202: file objects of an offline map package are created. Creating a file object of the offline map package based on the position of the offline map package; here, the offline map package is a resource file that needs to be released, and the number of file objects corresponds to the number of channels one to one.
Step 203: a file input stream object is created. Based on the file object created in step 202, an operating system services interface is invoked to create a file input stream object.
Step 204: a file read channel is created. Based on the file input stream object created in step 203, an operating system services interface is invoked to create a file read channel.
Step 205: a file object of the target resource file is created. The file object of the target resource file is created based on the location of the target resource file; here, the location of the target resource file is the target position to which the offline map package needs to be released, and the map program can read the map file at that position.
Step 206: a file output stream object is created. Based on the file object of the target resource file created in step 205, an operating system services interface is invoked to create a file output stream object.
Step 207: a file write channel is created. Based on the file output stream object created in step 206, a system services interface is invoked to create a file write channel.
Step 208: a buffer is applied to the operating system. And applying for opening up a section of memory area as a buffer area to the operating system.
Step 209: data is read from the file read channel into the buffer, and whether reading is finished is judged. The created file read channel is registered into the selector of the same thread, and the selector queries the file read channel; when the file read channel is in the ready state (data has arrived), 1 is returned and a read event is executed through the channel, reading data from the offline map package into the buffer requested in step 208 through the file read channel created in step 204. Whether the end of the file has been read (i.e., whether reading is finished) is then determined: if so, step 211 is executed; otherwise, step 210 is executed.
Step 210: and writing the data of the buffer area into the target resource file through the file writing channel, and resetting the starting position of the next reading position in the buffer area. Writing the data read to the buffer in step 209 to the target resource file through the file writing channel created in step 207, and resetting the starting position of the next reading position in the buffer to execute step 209;
step 211: and judging whether the buffer area has data. If yes, go to step 212; otherwise, step 213 is performed.
Step 212: and writing the data to the target position through the file writing channel. The data in the buffer is written into the target resource file through the file write channel created in step 207.
Step 213: and closing the file reading channel and the file writing channel. To avoid memory leakage.
Step 214: and reading the released map file and displaying a map page.
It should be noted that, in step 202, an operating system File class object is called, and a handle associated with the offline map package object is created in the device memory through the new keyword. In step 203, a file input stream object is obtained through the created file object handle: specifically, the operating system file input stream FileInputStream class object is called, and the file input stream object is created in the memory by means of the new keyword. In step 204, the getChannel method for obtaining a channel under that class is called through the FileInputStream object to create the file read channel. In step 205, the system File class object is called, and a handle associated with the target file object is created in the device memory through the new keyword. In step 206, a file output stream object is obtained through the created file object handle: specifically, the operating system file output stream FileOutputStream class object is called, and the file output stream object is created in the memory by means of the new keyword. In step 207, the getChannel method under the class is called through the file output stream FileOutputStream object to create the file write channel. In step 208, a buffer is created in the operating system, mainly used for transferring the data read from the file read channel and writing it into the target resource file. Step 209 judges, through a loop condition, whether the index of the currently read data has reached the end of the file; while it has not, the transfer method provided by the operating system is called continuously and the file keeps being released. Step 210 assigns the length of each released portion and the remaining amount of the resource until the offline map package is completely released. The purpose of step 211 is to ensure safety, so that no residual file data is left unreleased. Step 213 calls the close method provided by the operating system to close the file read and write channels, thereby avoiding memory leakage.
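Steps 202 to 213 can be wired together in a single Java sketch as follows; the paths and the 64 KB buffer size are placeholders rather than values given in this application, and the try-with-resources block stands in for the explicit close calls of step 213.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class MapPackageRelease {
    public static void main(String[] args) throws Exception {
        File offlineMapPackage = new File("offline_map.pak"); // step 202: file object of the offline map package
        File targetMapFile = new File("released_map.dat");    // step 205: file object of the target resource file
        try (FileChannel readChannel = new FileInputStream(offlineMapPackage).getChannel();   // steps 203-204
             FileChannel writeChannel = new FileOutputStream(targetMapFile).getChannel()) {   // steps 206-207
            ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);                               // step 208
            while (readChannel.read(buffer) != -1) {  // step 209: read a data block until the end of the file
                buffer.flip();
                writeChannel.write(buffer);           // step 210: write the block to the target resource file
                buffer.clear();                       // reset the starting position for the next read
            }
            if (buffer.position() > 0) {              // step 211: is residual data left in the buffer?
                buffer.flip();
                writeChannel.write(buffer);           // step 212: write it to the target position
            }
        }                                             // step 213: both channels are closed here
    }
}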
Referring to fig. 11, fig. 11 is a schematic diagram of reading and writing a buffer of a memory according to an embodiment of the present application. Taking reading data from the source resource file into the buffer of the memory as an example, the process is as follows: the kernel first reads the disk data into a buffer in the kernel space, and the program then reads the data from the kernel-space buffer into its own user address space. The kernel space is the space in which the operating system kernel runs, reserved specifically for the kernel so that it can run safely and stably; the user space is the space in which user programs run. It should be noted that, since a program in the user space cannot read data directly from the disk, the data must be obtained through the kernel space; in addition, the memory pages of the user space are not aligned with the disk space, so a kernel-space buffer needs to be inserted between the disk space and the user space to perform a layer of transfer processing in the middle.
In the embodiment of the present application, the user space and the kernel space are distinguished by means of virtual memory, and both reside in the virtual memory. Virtual memory allows multiple virtual memory addresses to point to the same physical memory; when a resource file is released, the buffer of the user space and the buffer of the kernel space can point to the same physical memory, so that the program in the user space can access the data directly without going through the kernel space to fetch it again, thereby saving memory space.
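As an illustration of how a user-space buffer and the kernel can share the same physical memory, the sketch below uses Java memory-mapped files, in which a region of a FileChannel is mapped into the process's virtual address space. The class name, file paths and the whole-file mapping are assumptions made only for this example and are not part of the original embodiment.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedRelease {

        // Copies a source file to a target file through memory-mapped buffers, so the
        // user-space program and the kernel operate on the same physical pages.
        public static void releaseMapped(String sourcePath, String targetPath) throws IOException {
            try (RandomAccessFile source = new RandomAccessFile(sourcePath, "r");
                 RandomAccessFile target = new RandomAccessFile(targetPath, "rw");
                 FileChannel in = source.getChannel();
                 FileChannel out = target.getChannel()) {

                long size = in.size();
                // For simplicity the whole file is mapped at once; large files would be mapped in chunks.
                MappedByteBuffer src = in.map(FileChannel.MapMode.READ_ONLY, 0, size);
                MappedByteBuffer dst = out.map(FileChannel.MapMode.READ_WRITE, 0, size);

                dst.put(src);   // data moves between mappings without an explicit kernel-space copy loop
                dst.force();    // flush the mapped pages back to the target file on disk
            }
        }
    }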
In some embodiments, referring to fig. 12, fig. 12 is a comparison graph of the technical effect achieved by the technical solution provided in the embodiments of the present application and the technical effect achieved by the related art. In the specific implementation process, the same resource file (approximately 13 MB) is released simultaneously by both methods. As shown in fig. 12, d4dacb241b09899a25879a8316f252a.mp4 is the source resource file, IO_d4dacb241b09899a25879a8316f252a.mp4 is released by the resource file release method of the related art, and NIO_d4dacb241b09899a25879a8316f252a.mp4 is released by the resource file release method provided in the embodiments of the present application. As can be seen from the comparison results, the release speed of the embodiment of the present application is about 50% faster than that of the related art. It should be noted that the release speed is related to the size of the resource file.
Continuing with the exemplary structure in which the resource file releasing device 455 provided in the embodiments of the present application is implemented as software modules, in some embodiments, as shown in fig. 2, the software modules of the resource file releasing device 455 stored in the memory 440 may include:
a first creating module 4551, configured to create channels associated with one another for each source resource file to be released in the disk; a query module 4552, configured to register multiple channels in the same selector, and query the statuses of the multiple channels through the selector; a releasing module 4553, configured to read a data block from the associated source resource file into a buffer of a memory through the queried channel in the ready state, and write the data block in the buffer of the memory into a corresponding target resource file in the disk; a closing module 4554, configured to close the channel in the ready state when all data blocks in the associated source resource file are read and written into the corresponding target resource file.
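The division of labour among these modules can be pictured with the following simplified single-thread dispatcher. Note that standard Java FileChannels cannot be registered with a java.nio.channels.Selector, so the sketch approximates the selector-driven polling by iterating over the pending release tasks itself; the ReleaseTask class, its field names and the buffer size are assumptions introduced only for illustration.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class ReleaseDispatcher {

        // One in-flight release: a read channel, a write channel and a transfer buffer.
        static class ReleaseTask {
            final FileChannel readChannel;
            final FileChannel writeChannel;
            final ByteBuffer buffer = ByteBuffer.allocate(8 * 1024);
            boolean endOfFile = false;

            ReleaseTask(FileChannel readChannel, FileChannel writeChannel) {
                this.readChannel = readChannel;
                this.writeChannel = writeChannel;
            }
        }

        // Single thread servicing every release task until all data blocks are transferred.
        public static void run(List<ReleaseTask> tasks) throws IOException {
            List<ReleaseTask> pending = new ArrayList<>(tasks);
            while (!pending.isEmpty()) {
                Iterator<ReleaseTask> it = pending.iterator();
                while (it.hasNext()) {
                    ReleaseTask task = it.next();
                    if (!task.endOfFile && task.readChannel.read(task.buffer) == -1) {
                        task.endOfFile = true;                 // all data blocks of the source file read
                    }
                    task.buffer.flip();
                    task.writeChannel.write(task.buffer);      // write the buffered block to the target file
                    task.buffer.compact();
                    if (task.endOfFile && task.buffer.position() == 0) {
                        task.readChannel.close();              // close channels once the release finishes
                        task.writeChannel.close();
                        it.remove();
                    }
                }
            }
        }
    }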
In some embodiments, an apparatus for releasing a resource file provided in an embodiment of the present application further includes: the second creating module is used for acquiring the path of each source resource file and creating a source resource file object pointing to the path of the source resource file; the source resource file object is used for acquiring a reading source of a data block in a buffer area of the memory; acquiring a path of each target resource file, and creating a target resource file object pointing to the path of the target resource file; and the target resource file object is used for acquiring a write address of a data block in a buffer area of the memory.
In some embodiments, the types of channels include a file read channel and a file write channel; the first creating module 4551 is further configured to, for each source resource file to be released in the disk, perform the following operations: creating a file input stream object named by the source resource file object in the memory, and creating a file read channel associated with the source resource file through the file input stream object; and creating a file output stream object named by the target resource file object in the memory, and creating a file writing channel associated with the target resource file through the file output stream object.
In some embodiments, the types of states of the channel include: the ready state and the not-ready state; the query module 4552 is further configured to, when the queried channel is in the not-ready state, immediately return and query the next channel until a channel in the ready state is queried.
In some embodiments, the releasing module 4553 is further configured to, when any one of the file read channels in the ready state is queried, read, through the file read channel in the ready state, a data block in the source resource file associated with the channel in the ready state into a buffer of the memory; when any one file writing channel in the ready state is inquired, writing the data block in the source resource file associated with the file writing channel in the ready state in the buffer area of the memory into the corresponding target resource file in the disk through the file writing channel in the ready state.
In some embodiments, when a plurality of file read channels and/or file write channels in the ready state are queried at the same time, the apparatus for releasing a resource file provided in the embodiments of the present application further includes: a selection module, configured to obtain the release progress of each source resource file; determine the corresponding release progress of the source resource file associated with each file read channel and/or file write channel in the ready state; and select the file read channel and/or file write channel associated with the source resource file having the smallest release progress as the channel to be executed.
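A simple way to realise this "smallest release progress first" strategy is sketched below; the ReadyChannel record, its fields and the progress calculation are hypothetical names introduced only for this illustration.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class ChannelSelectionStrategy {

        // Hypothetical view of one in-flight release: how many bytes of the source
        // resource file have already been transferred, out of its total size.
        public record ReadyChannel(String sourceFile, long releasedBytes, long totalBytes) {
            double progress() {
                return totalBytes == 0 ? 1.0 : (double) releasedBytes / totalBytes;
            }
        }

        // Pick the ready channel whose associated source resource file has the
        // smallest release progress as the next channel to be executed.
        public static Optional<ReadyChannel> pickNext(List<ReadyChannel> readyChannels) {
            return readyChannels.stream()
                    .min(Comparator.comparingDouble(ReadyChannel::progress));
        }
    }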
In some embodiments, the releasing module 4553 is further configured to update a position parameter in the buffer of the memory when a first data block in the buffer of the memory is written into the corresponding target resource file in the disk; the position parameter is used for indicating the starting position for reading a second data block; when the second data block is read from the source resource file, it is read starting from the position indicated by the position parameter.
In some embodiments, the apparatus for releasing a resource file provided in the embodiments of the present application further includes: a modification module, configured to acquire the position parameter of a data block in the buffer of the memory; modify the position parameter according to a designated position; and read the data block from the starting position corresponding to the modified position parameter.
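The position-parameter handling described in the two preceding paragraphs corresponds closely to position handling on a Java ByteBuffer; the snippet below is a minimal sketch in which the buffer size, block contents and the designated offset are all illustrative values.

    import java.nio.ByteBuffer;

    public class BufferPositionDemo {

        public static void main(String[] args) {
            ByteBuffer buffer = ByteBuffer.allocate(1024);   // buffer in memory (size is illustrative)

            buffer.put(new byte[]{1, 2, 3, 4});              // a first data block has been read in
            buffer.flip();                                   // prepare to write the block out
            buffer.get(new byte[2]);                         // suppose only part of it was written out

            buffer.compact();                                // keep the unwritten bytes and update the
                                                             // position so the next read appends after them

            int current = buffer.position();                 // acquire the current position parameter
            buffer.position(current + 8);                    // modify it to a designated position, so the
                                                             // next data block is read from that offset
            System.out.println("next read starts at " + buffer.position());
        }
    }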
In some embodiments, the buffer of the memory is a virtual memory space mapped to a physical memory space; the release module 4553 is further configured to obtain an access address of the buffer of the memory, and search the access address in the physical memory space; when the access address exists in the physical memory space, directly writing the data block into a corresponding target resource file from the physical memory space; when the access address does not exist in the physical memory space, searching a disk address corresponding to the access address in a disk space; and exchanging the disk address and the physical memory address to write the data block into a corresponding target resource file from the physical memory space.
Embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the method for releasing a resource file according to the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the method for releasing a resource file provided by the embodiments of the present application, for example, the method for releasing a resource file shown in figs. 5A, 5B, and 5C.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to a file in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiments of the present application, only one thread is needed to manage a plurality of channels, and the thread does not block while waiting for a channel to enter the ready state, so that resource files are released quickly while occupying only a small amount of operating system resources; releasing the resource file in data blocks improves release efficiency; read and write events on a plurality of channels can be executed by a plurality of sub-threads in parallel, so the read and write events are handled more efficiently; through virtual mapping, the virtual memory space is larger than the physical memory space, so user programs can read large files; staging data in the buffer reduces the number of times the program has to wait, coordinating low-speed read-write operations with high-speed program execution; a data block is written into the target resource file only after the buffer is full, which reduces the number of disk reads and writes and prolongs the service life of the disk; the strategy for selecting the channel to be executed is formulated according to the actual requirements of users, so that user requirements are better met and user experience and satisfaction are improved; and the position of a data block in the buffer can be moved by modifying the position parameter, which increases flexibility in the processing of read and write operations.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (10)

1. A method for releasing a resource file is characterized by comprising the following steps:
respectively creating channels in one-to-one association aiming at each source resource file to be released in a disk;
registering a plurality of channels into the same selector, and inquiring the states of the channels through the selector;
reading data blocks from the associated source resource files to a buffer area of a memory through the inquired channel in the ready state, and writing the data blocks in the buffer area of the memory into a corresponding target resource file in the disk;
and when the data blocks in the associated source resource file are all read and written into the corresponding target resource file, closing the channel in the ready state.
2. The method of claim 1,
when the channels associated one to one are respectively created for each source resource file to be released in the disk, the method further includes:
acquiring a path of each source resource file, and creating a source resource file object pointing to the path of the source resource file;
the source resource file object is used for acquiring a reading source of a data block in a buffer area of the memory;
acquiring a path of each target resource file, and creating a target resource file object pointing to the path of the target resource file;
and the target resource file object is used for acquiring a write address of a data block in a buffer area of the memory.
3. The method of claim 2, wherein the types of channels include a file read channel and a file write channel;
the creating a channel associated with each source resource file to be released in the disk respectively includes:
for each source resource file to be released in the disk, executing the following operations:
creating a file input stream object named by the source resource file object in the memory, and creating a file read channel associated with the source resource file through the file input stream object;
and creating a file output stream object named by the target resource file object in the memory, and creating a file writing channel associated with the target resource file through the file output stream object.
4. The method according to any one of claims 1 to 3,
the types of states of the channel include: ready state, not ready state;
said querying, by said selector, a status of a plurality of said channels, comprising:
and when the inquired channel is in the non-ready state, immediately returning and inquiring the next channel until the channel in the ready state is inquired.
5. The method as claimed in claim 2, wherein the reading, by the queried channel in the ready state, a data block from the associated source resource file into a buffer of a memory, and writing the data block in the buffer of the memory into a corresponding target resource file in the disk comprises:
when any one file reading channel in the ready state is inquired, reading the data block in the source resource file associated with the channel in the ready state into a buffer area of the memory through the file reading channel in the ready state;
when any one file writing channel in the ready state is inquired, writing the data block in the source resource file associated with the file writing channel in the ready state in the buffer area of the memory into the corresponding target resource file in the disk through the file writing channel in the ready state.
6. The method according to claim 5, wherein when a plurality of the file read channels and/or the file write channels in the ready state are queried, the method further comprises:
obtaining the release progress of each source resource file;
determining a corresponding release progress for the source resource file associated with the file reading channel and/or the file writing channel in the ready state;
and selecting a file reading channel and/or a file writing channel associated with the source resource file with the minimum release progress as a channel to be executed.
7. The method according to claim 1, wherein the reading of the data blocks from the associated source resource file into a buffer of a memory, and the writing of the data blocks in the buffer of the memory into a corresponding target resource file in the disk, comprises:
when a first data block in the buffer area of the memory is written into a corresponding target resource file in the disk, updating a position parameter in the buffer area of the memory;
wherein the position parameter is used for indicating a starting position for reading the second data block;
when reading the second data block from the source resource file, reading the second data block from the starting position indicated by the position parameter.
8. The method of claim 7, further comprising, prior to the reading the data block from the associated source resource file into a buffer of memory:
acquiring the position parameters of the data block in a cache region of the memory;
modifying the position parameters according to the designated position;
and reading the data block from the starting position corresponding to the modified position parameter.
9. The method according to any one of claims 1 to 8,
the buffer area of the memory is a virtual memory space mapped to a physical memory space;
the writing the data block in the buffer area of the memory into the corresponding target resource file in the disk includes:
obtaining an access address of a buffer area of the memory, and searching the access address in the physical memory space;
when the access address exists in the physical memory space, directly writing the data block into a corresponding target resource file from the physical memory space;
when the access address does not exist in the physical memory space, searching a disk address corresponding to the access address in a disk space;
and exchanging the disk address and the physical memory address to write the data block into a corresponding target resource file from the physical memory space.
10. An apparatus for releasing a resource file, comprising:
the first establishing module is used for respectively establishing channels which are associated one by one aiming at each source resource file to be released in the disk;
the query module is used for registering the channels into the same selector and querying the states of the channels through the selector;
a releasing module, configured to read a data block from the associated source resource file to a buffer of a memory through the queried channel in the ready state, and write the data block in the buffer of the memory into a corresponding target resource file in the disk;
and the closing module is used for closing the channel in the ready state when all the data blocks in the associated source resource file are read and written into the corresponding target resource file.
CN202011064329.6A 2020-09-30 2020-09-30 Resource file release method and device and computer readable storage medium Active CN112395083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011064329.6A CN112395083B (en) 2020-09-30 2020-09-30 Resource file release method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011064329.6A CN112395083B (en) 2020-09-30 2020-09-30 Resource file release method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112395083A true CN112395083A (en) 2021-02-23
CN112395083B CN112395083B (en) 2022-03-15

Family

ID=74595778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011064329.6A Active CN112395083B (en) 2020-09-30 2020-09-30 Resource file release method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112395083B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101188544A (en) * 2007-12-04 2008-05-28 浙江大学 File transfer method for distributed file server based on buffer
CN101478785A (en) * 2009-01-21 2009-07-08 华为技术有限公司 Resource pool management system and signal processing method
CN101814032A (en) * 2010-02-08 2010-08-25 河南大学 Resource encapsulation method utilizing Delphi resource file to generate Windows application program
US20140068127A1 (en) * 2012-09-04 2014-03-06 Red Hat Israel, Ltd. Shared locking mechanism for storage centric leases
CN104079398A (en) * 2013-03-28 2014-10-01 腾讯科技(深圳)有限公司 Data communication method, device and system
CN103631565A (en) * 2013-11-13 2014-03-12 北京像素软件科技股份有限公司 Loading method and device for scene resources
CN104217140A (en) * 2014-08-29 2014-12-17 北京奇虎科技有限公司 Method and device for reinforcing application program
CN104917817A (en) * 2015-04-23 2015-09-16 四川师范大学 Client side and data communication method
CN105068832A (en) * 2015-07-30 2015-11-18 北京奇虎科技有限公司 Method and apparatus for generating executable file
CN108111578A (en) * 2017-11-28 2018-06-01 国电南瑞科技股份有限公司 The method of distribution terminal data acquisition platform access terminal equipment based on NIO
CN108762833A (en) * 2018-05-16 2018-11-06 北京安云世纪科技有限公司 Application in Android system starts method and apparatus
CN109408203A (en) * 2018-11-01 2019-03-01 无锡华云数据技术服务有限公司 A kind of implementation method, device, the computing system of queue message consistency
CN110134534A (en) * 2019-05-17 2019-08-16 普元信息技术股份有限公司 The system and method for Message Processing optimization is carried out for big data distributed system based on NIO

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
L. VELOSO ET AL.: "Big data resources for EEGs: Enabling deep learning research", 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) *
LIU QING: "Distributed Data Management and Access Optimization for Rendering Applications", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Section *
YAN YANG: "Research on Caching Strategies of Distributed Object File Systems", Wanfang Data *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114328565A (en) * 2021-12-29 2022-04-12 上海达梦数据库有限公司 Large-field data release method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112395083B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
US20240095043A1 (en) Execution of sub-application processes within application program
CN100461096C (en) Dynamic registry partitioning
JP5295379B2 (en) Dynamic linking method of program in embedded platform and embedded platform
CN101421711B (en) Virtual execution system for resource-constrained devices
CN108170503A (en) A kind of method, terminal and the storage medium of cross-system operation Android application
CN108572818A (en) A kind of user interface rendering intent and device
CN102411506A (en) Java system service unit plug-in type management system and service function dynamic change method
CN102667714B (en) Support the method and system that the function provided by the resource outside operating system environment is provided
JP2009059349A (en) Shared type java (r) jar file
CN107368379B (en) EVP-oriented cross Guest OS inter-process communication method and system
WO2023124968A1 (en) Method for calling android dynamic library hal interface by software operating system, device and medium
US9176713B2 (en) Method, apparatus and program storage device that provides a user mode device interface
US20070239890A1 (en) Method, system and program storage device for preventing a real-time application from running out of free threads when the real-time application receives a device interface request
CN111090823A (en) Integration platform of page application and application access method, device and equipment
JP3034873B2 (en) Information processing device
CN112395083B (en) Resource file release method and device and computer readable storage medium
JP2006294028A (en) System for providing direct execution function, computer system, method and program
JP2007510211A (en) Mapping dynamic link libraries on computer equipment
CN115421787A (en) Instruction execution method, apparatus, device, system, program product, and medium
WO2024098888A1 (en) Model storage optimization method, and electronic device
CN109783145B (en) Method for creating multi-image-based multifunctional embedded system
US20040230556A1 (en) Generic data persistence application program interface
CN111737166A (en) Data object processing method, device and equipment
US6915408B2 (en) Implementation of thread-static data in multi-threaded computer systems
US9418175B2 (en) Enumeration of a concurrent data structure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant