CN114153599A - Memory write-in optimization method, device, equipment and medium - Google Patents


Info

Publication number
CN114153599A
Authority
CN
China
Prior art keywords
data
memory
memory space
write
data packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111362594.7A
Other languages
Chinese (zh)
Inventor
王帅阳
李文鹏
李旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202111362594.7A priority Critical patent/CN114153599A/en
Publication of CN114153599A publication Critical patent/CN114153599A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5022: Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a memory write optimization method applied to a distributed file system service, comprising the following steps: receiving, from a network stream, a data packet to be written into memory, and acquiring the message header information in the data packet; acquiring the length of the data in the data packet according to the message header information, and applying in the C library for a memory space corresponding to that data length; sequentially writing the data in the data packet into the applied memory space, and passing the current write parameters into the C library; and encapsulating the data in the memory space according to the received write request and the current write parameters, and inserting the encapsulated data into a cache queue to be flushed until all the data in the data packet has been inserted. The invention also provides a memory write optimization apparatus, device, and medium, which effectively reduce the time consumed by memory writes in the C library and improve the memory write efficiency in the C library.

Description

Memory write-in optimization method, device, equipment and medium
Technical Field
The present invention relates to the field of memory optimization, and in particular, to a memory write optimization method, apparatus, device, and medium.
Background
In the process of reading from and writing to a distributed file system (object storage) through the HDFS (Hadoop Distributed File System) service, the service is composed of Java code and C++ code; the Java code calls the C++ code through a JNA interface (Java Native Access, an interface that enables dynamic access to a system's native libraries by mapping Java interfaces onto them), realizing information interaction between the Java code and the C++ code.
In the prior art, network stream data acquired by the Java code is first written into the C++ module (i.e., the C library), and the C++ module then copies the written data into memory that it has applied for itself. This involves network stream data writing, memory space application, data copying, and other processes, so memory writes take a long time and the waiting time of the C++ module increases, reducing the performance of the C++ module.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and innovatively provides a memory write optimization method, apparatus, device, and medium, which effectively solve the long write times and low memory write efficiency in the C library caused by the prior art, reducing the time consumed by memory writes in the C library and improving the C library's memory write efficiency.
The first aspect of the present invention provides a memory write optimization method, which is applied to a distributed file system service, and includes:
receiving a data packet to be written into a memory in a network flow, and acquiring message header information in the data packet;
acquiring the length of the data in the data packet according to the message header information, and applying in the C library for a memory space corresponding to the length of the data in the data packet;
sequentially writing the data in the data packet into the applied memory space, and transmitting the current write-in parameters into a C library;
and packaging the data in the memory space according to the received write request and the current write parameters, and inserting the packaged data in the memory space into a cache queue to be flushed until all the data in the data packet is inserted into the cache queue to be flushed.
Optionally, the step of sequentially writing the data in the data packet into the applied memory space specifically includes: and sequentially reading the data with the same size in the data packets, and respectively writing the data with the same size in the sequentially read data packets into the applied memory space.
Optionally, the current write parameters include, but are not limited to, a memory address, a currently written memory data offset, and a write length of currently written data.
Further, the memory data offset is a difference between a write length of currently written data in the memory data block and an offset position of initially written data in the data packet.
Optionally, the memory reference count parameter is set to dynamically manage the memory space applied in the C library.
Further, still include:
flushing data in a memory space in a cache queue to be flushed;
calculating the current memory reference count from the memory reference count before flushing;
and determining whether to remove the data in the memory space from the cache queue to be flushed according to the current memory reference count, and releasing the memory space.
Further, if the current memory reference count is a preset value, removing data in the memory space from the cache queue to be flushed, and releasing the memory space; and if the current memory reference count is not a preset value, waiting for removing the data in the memory space from the cache queue to be flushed, and releasing the memory space.
The second aspect of the present invention provides a memory write optimization apparatus, which is applied to a distributed file system service, and includes:
the receiving module is used for receiving a data packet to be written into the memory in the network flow and acquiring message header information in the data packet;
the application module acquires the length of the data in the data packet according to the message header information and applies for the memory space corresponding to the length of the data in the data packet in the C library;
the writing module is used for sequentially writing the data in the data packet into the applied memory space and transmitting the current writing parameters into the C library;
and the encapsulation and insertion module encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data in the memory space into the cache queue to be flushed until all the data in the data packet is inserted into the cache queue to be flushed.
A third aspect of the present invention provides an electronic device comprising: a memory for storing a computer program; a processor, configured to implement the steps of the memory write optimization method according to the first aspect of the present invention when executing the computer program.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a memory write optimization method according to the first aspect of the present invention.
The technical scheme adopted by the invention comprises the following technical effects:
1. For the HDFS service write flow, the memory is applied for by the C library and the Java code writes the network stream data directly into the C library, eliminating one memory copy. This reduces the operating pressure on the C library and improves its performance, effectively solving the long write times and low memory write efficiency in the C library caused by the prior art, reducing the time consumed by memory writes in the C library and improving its memory write efficiency.
2. In the technical scheme, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data into the cache queue to be flushed, so that the C library can automatically identify the written data from the encapsulated data structure in the memory space, ensuring the accuracy and reliability of memory writes in the C library.
3. In the technical scheme, the memory reference count parameter is set to dynamically manage the applied memory space in the C library; whether the data in the memory space is removed from the cache queue to be flushed is determined according to the current memory reference count, and the memory space is then released, achieving automatic, convenient, and efficient management of memory release.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without any creative effort.
Fig. 1 is a flow chart illustrating a method for optimizing memory write according to an embodiment of the present invention;
fig. 2 is another schematic flow chart illustrating a method for optimizing memory write according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a memory write optimization apparatus according to a second embodiment of the present invention;
fig. 4 is another schematic structural diagram of a memory write optimization apparatus according to a second embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a memory write optimization device according to a third embodiment of the present disclosure.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example one
As shown in fig. 1, the present invention provides a memory write optimization method, which is applied to a distributed file system service, and includes:
s1, receiving a data packet to be written into the memory in the network flow, and acquiring message header information in the data packet;
s2, acquiring the length of the data in the data packet according to the message header information, and applying for a memory space corresponding to the length of the data in the data packet in the C library;
s3, writing the data in the data packet into the applied memory space in sequence, and transmitting the current write-in parameters into a C library;
and S4, encapsulating the data in the memory space according to the received write request and the current write parameters, and inserting the encapsulated data in the memory space into a cache queue to be flushed until all the data in the data packet is inserted into the cache queue to be flushed.
In step S1, the HDFS service receives a data packet to be written from the Java network stream, and parses the header information (header) of the packet in the network stream according to the protocol sequence. A data packet generally includes a header, data (data), check data, and the like. Preferably, checksum data (the check data information contained in the packet) can also be acquired and used to verify whether the packet to be written meets preset requirements (data security, data format, data integrity, and the like), improving the reliability of packet writing.
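The header parse in step S1 can be sketched as follows. The patent does not define the wire layout, so this sketch assumes a hypothetical 4-byte big-endian data-length field at the start of the packet, followed by the payload and check data:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical header layout: a 4-byte big-endian data-length field.
struct PacketHeader {
    uint32_t data_len;  // length of the data section, read from the header
};

// Parse the header from the raw packet bytes (step S1).
inline PacketHeader parse_header(const std::vector<uint8_t>& pkt) {
    PacketHeader h{};
    h.data_len = (uint32_t(pkt[0]) << 24) | (uint32_t(pkt[1]) << 16) |
                 (uint32_t(pkt[2]) << 8)  |  uint32_t(pkt[3]);
    return h;
}
```

The extracted `data_len` is what the service would hand to the allocation step that follows.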
In step S2, the HDFS service acquires the length of the data in the packet based on the header information, and applies, through a pre-packaged first JNA interface, for a memory space corresponding to the data length len in the C++ module (the C library).
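The C-library side of step S2 might expose a C-linkage allocation function that the Java side binds through JNA. The function names and the use of `malloc` are illustrative assumptions, not the patent's interface:

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical C-linkage entry points the first JNA interface could bind.
// malloc/free stand in for whatever allocator the C++ module actually uses.
extern "C" uint8_t* hdfs_mem_apply(uint64_t len) {
    // Apply for a memory space of `len` bytes inside the C library.
    return static_cast<uint8_t*>(std::malloc(len));
}

extern "C" void hdfs_mem_release(uint8_t* p) {
    std::free(p);
}
```

Because the buffer lives in the C library from the start, the Java side can write network-stream bytes straight into it, which is the copy the scheme eliminates.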
In step S3, the data in the data packet is written sequentially into the applied memory space as follows: data of equal size is read from the packet in sequence, and each equally sized piece is written in turn into the applied memory space. Each read may be 4096 bytes; that is, the HDFS service reads the Java network stream 4096 bytes at a time, and writes the sequentially read data into the memory space applied for by the C library through a pre-packaged second JNA interface.
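The chunked write of step S3 can be sketched as below. `write_chunked` is a hypothetical name, and the destination pointer stands in for the memory space obtained through the JNA interface:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr std::size_t kChunk = 4096;  // bytes read per pass, as in the text

// Copy the packet's data section into the applied memory space in
// fixed 4096-byte chunks; returns the total number of bytes written.
inline std::size_t write_chunked(const std::vector<uint8_t>& data,
                                 uint8_t* dest) {
    std::size_t written = 0;
    while (written < data.size()) {
        std::size_t n = std::min(kChunk, data.size() - written);
        std::memcpy(dest + written, data.data() + written, n);
        written += n;
    }
    return written;
}
```

The final, shorter chunk falls out of the `std::min` naturally, so no special-case tail handling is needed.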
The HDFS service then acquires the current write parameters from the write records and passes them into the C library. The current write parameters include, but are not limited to, the memory address, the currently written memory data offset, and the write length of the currently written data. The memory data offset (the offset of the packet data within the memory data, i.e., the starting index for the memory write, equivalent to a memory write pointer) is the difference between the write length of the currently written data in the memory data block and the offset position of the initially written data in the data packet. It should be noted that the write length of the currently written data may differ from the length of the packet, because the data read from the packet may contain duplicates that need to be removed. The HDFS service therefore passes the ino number of the file currently being written (the file system's index number, unique within the file system and used to identify different files), the initial location of the current memory write, the memory address, the currently written memory data offset (offset), and the write length len of the currently written data into the C library call.
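The write parameters above can be sketched as a small struct; the field names and types are assumptions, not the patent's API, and the offset helper follows the difference stated in the description:

```cpp
#include <cstdint>

// Hypothetical shape of the current write parameters passed to the C library.
struct WriteParams {
    uint64_t ino;       // ino number of the file currently being written
    void*    mem_addr;  // address of the applied memory space
    uint64_t offset;    // currently written memory data offset
    uint64_t len;       // write length of the currently written data
};

// Per the description: the memory data offset is the difference between
// the write length already in the memory data block and the offset
// position of the initially written data in the packet.
inline uint64_t memory_data_offset(uint64_t written_in_block,
                                   uint64_t initial_packet_offset) {
    return written_in_block - initial_packet_offset;
}
```

For example, with 4096 bytes written into the block and an initial packet offset of 512, the offset would be 3584.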
In step S4, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data into the cache queue to be flushed; once all the data in the data packet has been inserted into the cache queue, the call into the C++ module returns and the Java write flow completes.
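The encapsulation and queue insertion of step S4 can be sketched as follows. The patent does not fix the data structure, so the descriptor fields and type names here are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

// Hypothetical descriptor the C library wraps around each written region.
struct FlushEntry {
    uint64_t ino;        // file identity
    uint64_t offset;     // memory data offset of this region
    uint64_t len;        // length of this region
    const uint8_t* mem;  // start of the applied memory space
};

// Cache queue of regions awaiting flush.
struct FlushQueue {
    std::deque<FlushEntry> pending;

    void insert(FlushEntry e) { pending.push_back(e); }

    // Pops and flushes one entry; returns how many entries remain queued.
    std::size_t flush_one() {
        if (!pending.empty()) pending.pop_front();
        return pending.size();
    }
};
```

Because each entry carries the ino, offset, and length, the flusher can identify the written data without another copy of the payload.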
Furthermore, in the technical scheme of the invention, the memory reference counting parameter can be set for dynamically managing the applied memory space in the C library.
As shown in fig. 2, the method for optimizing memory write provided by the technical solution of the present invention further includes:
s5, the data in the memory space in the cache queue to be refreshed is refreshed;
s6, calculating the current memory reference count according to the memory reference count when the memory is not refreshed;
and S7, determining whether to remove the data in the memory space from the cache queue to be flushed according to the current memory reference count, and releasing the memory space.
In step S5, before the data in the encapsulated memory space is inserted into the cache queue to be flushed, the initial value of the memory reference count is set to 1; after the data is inserted into the cache queue, the count is incremented by 1, correspondingly becoming 2. After all the data in the data packet has been inserted into the cache queue, the count is automatically decremented by 1, correspondingly becoming 1. That is, the memory reference count before flushing is 1. The data in the memory space in the cache queue is then flushed.
In step S6, the current memory reference count is calculated from the pre-flush memory reference count as follows: after all the data in the memory space in the cache queue has been flushed, the count is decremented by 1; that is, the current memory reference count = 1 (the pre-flush count) − 1 = 0.
In step S7, if the current memory reference count equals the preset value (namely 0), the data in the memory space is removed from the cache queue to be flushed and the memory space is released; if the current memory reference count does not equal the preset value (is not 0), removal from the cache queue and release of the memory space wait. Introducing the memory reference count parameter thus manages the release of memory space and improves management efficiency.
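The reference-count lifecycle of steps S5 to S7 can be sketched as below: the count starts at 1, goes to 2 on insertion into the flush queue, back to 1 once the whole packet has been inserted, and to 0 when the flush completes, at which point the space may be released. `RefCountedBlock` is an illustrative name, not the patent's type:

```cpp
// Minimal sketch of the memory reference count lifecycle.
struct RefCountedBlock {
    int refs = 1;           // initial value, set before queue insertion
    bool released = false;  // true once the space has been freed

    void on_insert()      { ++refs; }  // inserted into the flush queue (2)
    void on_packet_done() { --refs; }  // all packet data inserted (back to 1)
    void on_flushed() {
        --refs;                         // flush of this block finished
        if (refs == 0) released = true; // preset value 0: release the space
    }
};
```

A block still referenced elsewhere (count above 0 after the flush) simply waits; release happens only when the count reaches the preset value.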
It should be noted that, in the technical solution of the present invention, steps S1-S7 may all be implemented by hardware or by software programming; the implementation corresponds to the steps of the solution and may also be realized in other ways, which are not limited here.
For the HDFS service write flow, the memory is applied for by the C library and the Java code writes the network stream data directly into the C library, eliminating one memory copy. This reduces the operating pressure on the C library and improves its performance, effectively solving the long write times and low memory write efficiency in the C library caused by the prior art, reducing the time consumed by memory writes in the C library and improving its memory write efficiency.
In the technical scheme, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data into the cache queue to be flushed, so that the C library can automatically identify the written data from the encapsulated data structure in the memory space, ensuring the accuracy and reliability of memory writes in the C library.
In the technical scheme, the memory reference count parameter is set to dynamically manage the applied memory space in the C library; whether the data in the memory space is removed from the cache queue to be flushed is determined according to the current memory reference count, and the memory space is then released, achieving automatic, convenient, and efficient management of memory release.
Example two
As shown in fig. 3, the technical solution of the present invention further provides a memory write optimization apparatus, which is applied to a distributed file system service, and includes:
the receiving module 101 is configured to receive a data packet to be written into a memory in a network stream, and obtain header information in the data packet;
the application module 102 is configured to obtain the length of data in the data packet according to the message header information, and apply for a memory space corresponding to the length of the data in the data packet in the C library;
the write-in module 103 is used for sequentially writing the data in the data packet into the applied memory space and transmitting the current write-in parameters into the C library;
and the encapsulation and insertion module 104 encapsulates the data in the memory space according to the received write request and the current write parameter, and inserts the encapsulated data in the memory space into the cache queue to be flushed until all the data in the data packet is inserted into the cache queue to be flushed.
In the receiving module 101, the HDFS service receives a data packet to be written from the Java network stream, and parses the header information (header) of the packet in the network stream according to the protocol sequence. A data packet generally includes a header, data (data), check data, and the like. Preferably, checksum data (the check data information contained in the packet) can also be acquired and used to verify whether the packet to be written meets preset requirements (data security, data format, data integrity, and the like), improving the reliability of packet writing.
In the application module 102, the HDFS service acquires the length of the data in the packet based on the header information, and applies, through a pre-packaged first JNA interface, for a memory space corresponding to the data length len in the C++ module (the C library).
In the write module 103, the data in the data packet is written sequentially into the applied memory space as follows: data of equal size is read from the packet in sequence, and each equally sized piece is written in turn into the applied memory space. Each read may be 4096 bytes; that is, the HDFS service reads the Java network stream 4096 bytes at a time, and writes the sequentially read data into the memory space applied for by the C library through a pre-packaged second JNA interface.
The HDFS service then acquires the current write parameters from the write records and passes them into the C library. The current write parameters include, but are not limited to, the memory address, the currently written memory data offset, and the write length of the currently written data. The memory data offset (the offset of the packet data within the memory data, i.e., the starting index for the memory write, equivalent to a memory write pointer) is the difference between the write length of the currently written data in the memory data block and the offset position of the initially written data in the data packet. It should be noted that the write length of the currently written data may differ from the length of the packet, because the data read from the packet may contain duplicates that need to be removed. The HDFS service therefore passes the ino number of the file currently being written (the file system's index number, unique within the file system and used to identify different files), the initial location of the current memory write, the memory address, the currently written memory data offset (offset), and the write length len of the currently written data into the C library call.
In the encapsulation and insertion module 104, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data into the cache queue to be flushed; once all the data in the data packet has been inserted into the cache queue, the call into the C++ module returns and the Java write flow completes.
Furthermore, in the technical scheme of the invention, the memory reference counting parameter can be set for dynamically managing the applied memory space in the C library.
As shown in fig. 4, the memory write optimization apparatus provided by the technical solution of the present invention further includes:
the flushing module 105 flushes data in the memory space in the cache queue to be flushed;
the calculating module 106 calculates the current memory reference count according to the memory reference count when the memory is not flushed;
the determining module 107 determines whether to remove data in the memory space from the cache queue to be flushed according to the current memory reference count, and releases the memory space.
In the flushing module 105, before the data in the encapsulated memory space is inserted into the cache queue to be flushed, the initial value of the memory reference count is set to 1; after the data is inserted into the cache queue, the count is incremented by 1, correspondingly becoming 2. After all the data in the data packet has been inserted into the cache queue, the count is automatically decremented by 1, correspondingly becoming 1. That is, the memory reference count before flushing is 1. The data in the memory space in the cache queue is then flushed.
In the calculating module 106, the current memory reference count is calculated from the pre-flush memory reference count as follows: after all the data in the memory space in the cache queue has been flushed, the count is decremented by 1; that is, the current memory reference count = 1 (the pre-flush count) − 1 = 0.
In the determining module 107, if the current memory reference count equals the preset value (namely 0), the data in the memory space is removed from the cache queue to be flushed and the memory space is released; if the current memory reference count does not equal the preset value (is not 0), removal from the cache queue and release of the memory space wait. Introducing the memory reference count parameter thus manages the release of memory space and improves management efficiency.
For the HDFS service write flow, the memory is applied for by the C library and the Java code writes the network stream data directly into the C library, eliminating one memory copy. This reduces the operating pressure on the C library and improves its performance, effectively solving the long write times and low memory write efficiency in the C library caused by the prior art, reducing the time consumed by memory writes in the C library and improving its memory write efficiency.
In the technical scheme, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated data into the cache queue to be flushed, so that the C library can automatically identify the written data from the encapsulated data structure in the memory space, ensuring the accuracy and reliability of memory writes in the C library.
In the technical scheme, the memory reference count parameter is set to dynamically manage the applied memory space in the C library; whether the data in the memory space is removed from the cache queue to be flushed is determined according to the current memory reference count, and the memory space is then released, achieving automatic, convenient, and efficient management of memory release.
EXAMPLE III
As shown in fig. 5, the present invention further provides an electronic device, including: a memory 201 for storing a computer program; the processor 202 is configured to implement the steps of the memory write optimization method in the first embodiment when executing the computer program.
The memory 201 in the embodiments of the present application is used to store various types of data to support the operation of the electronic device. Examples of such data include: any computer program for operating on the electronic device. It will be appreciated that the memory 201 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 201 described in the embodiments herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present application may be applied to, or implemented by, the processor 202. The processor 202 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 202 or by instructions in the form of software. The processor 202 may be a general-purpose processor, a DSP (Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. The processor 202 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may reside in a storage medium located in the memory 201; the processor 202 reads the program in the memory 201 and, in combination with its hardware, performs the steps of the foregoing method. When the processor 202 executes the program, the corresponding processes of the methods according to the embodiments of the present application are realized; for brevity, they are not described here again.
In the HDFS service write flow, the memory is allocated by the C library, and the Java code writes the data from the network stream directly into that C-library memory. This removes one memory copy, reduces the operating pressure on the C library, and improves its performance, effectively addressing the long latency and low efficiency of C-library memory writes in the prior art: the time consumed by writes into C-library memory is effectively reduced, and C-library memory write efficiency is improved.
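The copy-saving idea above can be pictured in a minimal Java sketch (names and structure are invented for illustration; the patent does not publish its implementation). Off-heap memory allocated for native code can be filled directly from Java, so a C library can read the bytes via JNI without an intermediate Java-heap copy; here a direct ByteBuffer stands in for the memory space requested in the C library:

```java
import java.nio.ByteBuffer;

public class DirectWriteSketch {
    // Stand-in for the memory space the C library would allocate: a direct
    // (off-heap) buffer, which native code can reach through JNI's
    // GetDirectBufferAddress without copying through the Java heap.
    public static ByteBuffer writePacket(byte[] networkPayload) {
        // Steps 1-2: the packet header tells us the payload length, and a
        // matching amount of native memory is requested.
        ByteBuffer nativeBuf = ByteBuffer.allocateDirect(networkPayload.length);
        // Step 3: the network-stream bytes are written straight into the
        // native buffer, removing the extra copy the scheme targets.
        nativeBuf.put(networkPayload);
        nativeBuf.flip(); // make the written region readable by the consumer
        return nativeBuf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = writePacket("hello-hdfs".getBytes());
        System.out.println(buf.isDirect() + " " + buf.remaining());
    }
}
```

In a real JNI setup the native side would obtain the buffer's address once and read the payload in place; the sketch only shows the Java-side contract.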
In this technical scheme, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated memory-space data into the cache queue to be flushed. The C library can therefore automatically identify the data written into the memory space from the encapsulated data structure, ensuring the accuracy and reliability of C-library memory writes.
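The encapsulate-and-enqueue step can be sketched as follows (hypothetical names throughout; the write parameters carried by the record follow claim 3: memory address, memory data offset, write length):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FlushQueueSketch {
    // Encapsulation of one written region, carrying the claim-3 write
    // parameters so the flusher can identify the data later.
    public static final class WriteRecord {
        public final long memoryAddress; // address of the C-library block (stand-in value)
        public final int offset;         // offset of this write inside the block
        public final int length;         // number of bytes written
        public WriteRecord(long memoryAddress, int offset, int length) {
            this.memoryAddress = memoryAddress;
            this.offset = offset;
            this.length = length;
        }
    }

    private final Queue<WriteRecord> cacheQueueToBeFlushed = new ArrayDeque<>();

    // Encapsulate the written region and insert it into the cache queue
    // to be flushed.
    public void onWrite(long address, int offset, int length) {
        cacheQueueToBeFlushed.add(new WriteRecord(address, offset, length));
    }

    public int pending() {
        return cacheQueueToBeFlushed.size();
    }
}
```

Each write of a segment produces one record, so the queue accumulates records until all data of the packet has been inserted.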
In this technical scheme, a memory reference count parameter is set to dynamically manage the memory space requested in the C library: according to the current memory reference count it is determined whether the data in the memory space is removed from the cache queue to be flushed and the memory space is released. Memory release is thus managed automatically, which is convenient and efficient.
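The reference-count rule can be sketched like this — an illustrative model, not the patented implementation, with zero assumed as the preset value at which release happens:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedSpace {
    private final AtomicInteger refCount = new AtomicInteger(0);
    private volatile boolean released = false;

    // A reference is taken whenever the memory space's data is enqueued
    // for flushing.
    public void retain() {
        refCount.incrementAndGet();
    }

    // Called after one flush of the space's data completes: the count drops,
    // and only when it reaches the preset value (0 here) is the space removed
    // from the cache queue to be flushed and its memory released.
    public boolean releaseAfterFlush() {
        if (refCount.decrementAndGet() == 0) {
            released = true; // here the C-library memory would actually be freed
            return true;
        }
        return false; // still referenced: wait before removing and releasing
    }

    public boolean isReleased() {
        return released;
    }
}
```

With two outstanding references, the first completed flush leaves the space alive and only the second triggers release, which matches the wait-then-release behavior described above.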
EXAMPLE IV
The technical solution of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the memory write optimization method of the first embodiment.
For example, the medium may comprise the memory 201 storing a computer program executable by the processor 202 to perform the steps of the method described above. The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, an optical disc, or CD-ROM.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware driven by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. Alternatively, if the integrated units described above are implemented as software functional modules and sold or used as independent products, they may likewise be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, or the portions thereof that contribute over the prior art, may be embodied as a software product stored in a storage medium and containing several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage media include: removable storage devices, ROM, RAM, magnetic or optical disks, and various other media that can store program code.
In the HDFS service write flow, the memory is allocated by the C library, and the Java code writes the data from the network stream directly into that C-library memory. This removes one memory copy, reduces the operating pressure on the C library, and improves its performance, effectively addressing the long latency and low efficiency of C-library memory writes in the prior art: the time consumed by writes into C-library memory is effectively reduced, and C-library memory write efficiency is improved.
In this technical scheme, the C library encapsulates the data in the memory space according to the received write request and the current write parameters, and inserts the encapsulated memory-space data into the cache queue to be flushed. The C library can therefore automatically identify the data written into the memory space from the encapsulated data structure, ensuring the accuracy and reliability of C-library memory writes.
In this technical scheme, a memory reference count parameter is set to dynamically manage the memory space requested in the C library: according to the current memory reference count it is determined whether the data in the memory space is removed from the cache queue to be flushed and the memory space is released. Memory release is thus managed automatically, which is convenient and efficient.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the present invention; it should be understood that those skilled in the art may make various modifications and variations based on the technical solution of the present invention without inventive effort.

Claims (10)

1. A memory write optimization method, applied to a distributed file system service, comprising the following steps:
receiving a data packet to be written into memory from the network stream, and acquiring the message header information in the data packet;
acquiring the length of the data in the data packet from the message header information, and requesting in the C library a memory space corresponding to that data length;
sequentially writing the data in the data packet into the requested memory space, and passing the current write parameters to the C library;
and encapsulating the data in the memory space according to the received write request and the current write parameters, and inserting the encapsulated memory-space data into the cache queue to be flushed, until all the data in the data packet has been inserted into the cache queue to be flushed.
2. The method according to claim 1, wherein sequentially writing the data in the data packet into the requested memory space comprises: sequentially reading equal-sized segments of the data in the data packet, and writing each segment in turn into the requested memory space.
3. The method of claim 1, wherein the current write parameters include, but are not limited to, the memory address, the memory data offset of the current write, and the write length of the currently written data.
4. The method as claimed in claim 3, wherein the memory data offset is the difference between the write length of the data currently written in the memory data block and the offset position of the initially written data in the data packet.
5. The method as claimed in claim 1, wherein a memory reference count parameter is set for dynamically managing the memory space requested in the C library.
6. The method of claim 5, further comprising:
flushing the data of a memory space in the cache queue to be flushed;
calculating the current memory reference count from the memory reference count before the flush;
and determining, according to the current memory reference count, whether to remove the data in the memory space from the cache queue to be flushed and release the memory space.
7. The method according to claim 6, wherein if the current memory reference count equals a preset value, the data in the memory space is removed from the cache queue to be flushed and the memory space is released; if the current memory reference count does not equal the preset value, removal of the data from the cache queue to be flushed and release of the memory space are deferred.
8. A memory write optimization apparatus, applied to a distributed file system service, comprising:
a receiving module, configured to receive a data packet to be written into memory from the network stream and to acquire the message header information in the data packet;
a request module, configured to acquire the length of the data in the data packet from the message header information and to request in the C library a memory space corresponding to that data length;
a writing module, configured to sequentially write the data in the data packet into the requested memory space and to pass the current write parameters to the C library;
and an encapsulation-and-insertion module, configured to encapsulate the data in the memory space according to the received write request and the current write parameters, and to insert the encapsulated memory-space data into the cache queue to be flushed, until all the data in the data packet has been inserted into the cache queue to be flushed.
9. An electronic device, comprising: a memory for storing a computer program; a processor for implementing the steps of a memory write optimization method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of a method of memory write optimization according to any one of claims 1 to 7.
CN202111362594.7A 2021-11-17 2021-11-17 Memory write-in optimization method, device, equipment and medium Withdrawn CN114153599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111362594.7A CN114153599A (en) 2021-11-17 2021-11-17 Memory write-in optimization method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111362594.7A CN114153599A (en) 2021-11-17 2021-11-17 Memory write-in optimization method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114153599A true CN114153599A (en) 2022-03-08

Family

ID=80456532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111362594.7A Withdrawn CN114153599A (en) 2021-11-17 2021-11-17 Memory write-in optimization method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114153599A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587070A (en) * 2022-11-30 2023-01-10 摩尔线程智能科技(北京)有限责任公司 Apparatus, method, computing device, and storage medium for managing storage of numerical values


Similar Documents

Publication Publication Date Title
CN108062235B (en) Data processing method and device
WO2022062833A1 (en) Memory allocation method and related device
CN111427859A (en) Message processing method and device, electronic equipment and storage medium
CN114153599A (en) Memory write-in optimization method, device, equipment and medium
CN114217738A (en) Dynamic queue type cyclic storage method, device, equipment and medium
CN112286454B (en) Bitmap synchronization method and device, electronic equipment and storage medium
CN115827506A (en) Data writing method, data reading method, device, processing core and processor
US20240045763A1 (en) A data reconstruction method based on erasure coding, an apparatus, a device and a storage medium
CN110955639A (en) Data processing method and device
CN112688885B (en) Message processing method and device
US20120246264A1 (en) Data Exchange Between Communicating Computing Equipment Using Differential Information
US20230106217A1 (en) Web-end video playing method and apparatus, and computer device
CN111339056B (en) Method and system for improving writing performance of Samba processing large file
CN111435323B (en) Information transmission method, device, terminal, server and storage medium
CN111770054A (en) Interaction acceleration method and system for SMB protocol read request
CN111984591A (en) File storage method, file reading method, file storage device, file reading device, equipment and computer readable storage medium
CN114205115A (en) Data packet processing optimization method, device, equipment and medium
CN113808711A (en) DICOM file processing method, DICOM file processing device, computer equipment and storage medium
CN109947978B (en) Audio storage and playing method and device
CN113641512B (en) Method, system, equipment and storage medium for processing Ajax requests in merging mode
CN111600943A (en) Method and equipment for acquiring target data
CN112131193B (en) Application program compression method and device
CN115878351B (en) Message transmission method and device, storage medium and electronic device
CN111367462B (en) Data processing method and device
CN115174446B (en) Network traffic statistics method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220308