CN109144892A - Buffer linked list data structure design method for managing high-frequency changing data in memory - Google Patents

Buffer linked list data structure design method for managing high-frequency changing data in memory

Info

Publication number
CN109144892A
CN109144892A
Authority
CN
China
Prior art keywords
data
listcurrent
buffering
listspare
high frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810980002.XA
Other languages
Chinese (zh)
Inventor
经玉健
吴小俊
王惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Guodian Nanzi Railway Traffic Engineering Co Ltd
Original Assignee
Nanjing Guodian Nanzi Railway Traffic Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Guodian Nanzi Railway Traffic Engineering Co Ltd
Priority to CN201810980002.XA
Publication of CN109144892A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses a buffer linked list data structure design method for managing high-frequency changing data in memory. A buffer linked list (CacheList) is added on top of the ordinary linked list structure, so that all add and delete operations on the list do not act directly on memory but on the buffer linked list (CacheList); the method includes an initialization step, an add-data step and a delete-data step. The invention applies for memory space and releases it in a unified manner: while the application is running, the process's software and hardware resources are devoted as far as possible to processing real-time data, the number of system interactions is reduced to a minimum, and the memory space allocated once at initialization is reused repeatedly, providing a good buffer between the dynamic data container management of the application process and the memory space.

Description

Buffer linked list data structure design method for managing high-frequency changing data in memory
Technical field
An industrial monitoring system monitors, controls and manages the operation of an external system. Many applications not only need to maintain a large amount of shared data, but also need to acquire specified data from the external system at a certain moment or within a certain time limit, handle the data according to specified requirements, and then respond to the external system in a timely manner. The present invention relates to a buffer linked list data structure design method for managing high-frequency changing data in memory.
Background technique
In the current metro automation field, an environment and equipment monitoring (BAS) system can provide a platform for information interconnection and resource sharing. The design uses a modular, building-block style multi-level software development platform to customize the application software, and uses general open hardware interfaces and software communication protocols to realize information exchange with each connected subsystem in an integrated and interconnected manner, finally achieving centralized monitoring of the related electromechanical equipment, information interconnection between the systems, information sharing, and coordination and linkage functions. In actual operation and management, each concrete application program based on the system platform differs in the data content it is interested in according to its own functions, but the workflow is roughly the same: the application starts → registers data (tells the system which data this application is interested in) → receives messages pushed by the system about changes to the data of interest → stores the data and processes it according to the functional requirements.
Considering that, in an industrial environment, the data pushed by the system platform to an application are often high-frequency and enormous in quantity, and in order not to occupy excessive system resources and overload the system platform, the push data message queue space that the system platform allocates to a particular application is limited. This means that if the application cannot extract and handle the system messages in the push message queue in time, new data messages cannot be received because the message queue is full, causing information loss; this is unacceptable in industrial automation, where high requirements are placed on data timeliness and accuracy.
How to extract messages simply and quickly, handle the corresponding data content, and digest the system platform's push message queue in time places high requirements on the data structure used by the program: it should be simple and convenient, facilitate data extraction and processing, and at the same time take both stability and speed into account.
There are many common data management containers, such as Array, Stack, Queue and List, among which the linked list is one of the typical representatives of dynamic containers. A linked list is a storage structure that is non-contiguous and non-sequential on the physical storage units; the logical order of the data elements is realized through the pointer links in the list. A linked list is usually composed of a series of nodes (each member of the list is called a node), and nodes can be generated dynamically at run time. Each node consists of two parts: a data field that stores the data element, and a pointer field that stores the address of the next node. A linked list structure overcomes the drawback of an array, which must know the size of the data in advance; it can make full use of the computer's memory space and realize flexible dynamic memory management. On the basis of the linked list, the present invention provides a buffer linked list data structure design method for managing high-frequency changing data in memory, which specially handles the massive high-frequency changing data brought by message queues and is more efficient and more stable.
Summary of the invention
Aiming at the problems existing in the prior art, the present invention provides a buffer linked list data structure design method for managing high-frequency changing data in memory.
To achieve the above goal, the technical scheme of the present invention is as follows.
A buffer linked list data structure design method for managing high-frequency changing data in memory, characterized in that: a buffer linked list (CacheList) is added on the basis of the linked list structure, and all add and delete operations on the list do not act directly on memory but on the buffer linked list (CacheList); the method includes an initialization step, an add-data step and a delete-data step.
In the above buffer linked list data structure design method for managing high-frequency changing data in memory, a further feature is: the buffer linked list (CacheList) includes two member variables of List type, m_listCurrent and m_listSpare; m_listSpare manages the memory space and is responsible for applying for memory space and releasing memory in a unified manner; m_listCurrent directly faces the application program, and all add and delete operations on data (pointers) are completed through m_listCurrent; when m_listCurrent adds data (a pointer), memory is requested from m_listSpare rather than from the operating system; when data is deleted, the memory space is not released directly, but the memory space occupied by the processed data is "returned" to m_listSpare.
In the above buffer linked list data structure design method for managing high-frequency changing data in memory, a further feature is: in the initialization step, m_listCurrent is initialized as an empty list without any data, while m_listSpare applies for and allocates the memory space of N PII data objects right at initialization.
In the above buffer linked list data structure design method for managing high-frequency changing data in memory, a further feature is: the add-data step includes checking, when new data is encountered, whether m_listSpare is empty; if it is not empty, the first data element pointerDataT1 of m_listSpare is taken out for use by m_listCurrent, the new data is first assigned and saved, and then pointerDataT1 is added to m_listCurrent.
In the above buffer linked list data structure design method for managing high-frequency changing data in memory, a further feature is: the delete-data step includes the application process extracting the data element pointer pointerDataT1 from m_listCurrent and, after processing, deleting it from m_listCurrent; the memory space is not released after the deletion, but pointerDataT1 is added to m_listSpare to remain spare, waiting to be taken out by m_listCurrent again when needed, and this cycle repeats.
Beneficial effects:
Compared with the prior art, the buffer linked list data structure design method for managing high-frequency changing data in memory of the present invention differs from a conventional dynamic data container, which applies for and releases memory on demand in real time. The present invention applies for memory space and releases it in a unified manner: while the system application is running, the process's software and hardware resources are devoted as far as possible to processing real-time data, the number of system interactions is reduced to a minimum, and the memory (heap) space allocated once at initialization is reused repeatedly, providing a good buffer between the dynamic data container management of the application process and the memory space.
Brief description of the drawings
Fig. 1 is a comparison diagram of a conventional linked list and the present application (initial state).
Fig. 2 is a comparison diagram of a conventional linked list and the present application (adding data).
Fig. 3 is a comparison diagram of a conventional linked list and the present application (deleting data).
Specific embodiment
The invention is further described in detail below with reference to specific embodiments.
A linked list is a data structure that is non-contiguous in physical memory and is accessed in a chained manner; it stores the data elements of a linear list in a set of storage units with arbitrary addresses. The data in a linked list are represented by nodes, and each node consists of an element (the image of the data element) plus a pointer (indicating the storage location of the next element); the element is the storage unit that stores the data, and the pointer is the address data that links the nodes. Common linked list forms are the singly linked list and the doubly linked list.
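For reference, a node of such a list can be sketched in C++ as follows (DataT stands for the application-defined data element; the names are illustrative only):

struct DataT;                // application-defined data element (declared elsewhere)

struct ListNode {
    DataT*    element;       // element: stores (a pointer to) the data element
    ListNode* next;          // pointer field: stores the address of the next node
    ListNode* prev;          // present in a doubly linked list, absent in a singly linked one
};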
The buffer linked list data structure design method for managing high-frequency changing data in memory of the present invention differs in that an intermediate list is added on the basis of a doubly linked list: all operations on the list, such as adding and deleting, do not act directly on memory but on the intermediate list. The intermediate list acts like a cache between the memory and the program's list, and is therefore called a buffer linked list (CacheList).
The class can be implemented in C++ schematically as follows:
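The listing below is a minimal reconstruction of such a class based on the textual description, assuming std::list<DataT*> as the underlying List type; DataT is a placeholder for the application's data object, and the method signatures are assumptions except where they match the code fragment quoted in the delete-data step further below.

#include <cstddef>
#include <list>

// Placeholder for the application's data object; SetDataT is the assignment
// helper referred to in the description.
struct DataT {
    // ... application data fields ...
    void SetDataT(const DataT& src) { *this = src; }
};

class CacheList {
public:
    explicit CacheList(std::size_t n);   // initialization step: pre-allocate n spare data objects
    ~CacheList();                        // release all pooled memory in one batch

    void    AddData(const DataT& newDataT);                          // add-data step
    DataT** FetchFirstCurrentItem(std::list<DataT*>& listCurrent);   // fetch the first pending item
    void    ReserveItem(DataT** ppItem);                             // delete-data step: return the item to m_listSpare

private:
    std::list<DataT*> m_listCurrent;     // faces the application: all adds and deletes go through here
    std::list<DataT*> m_listSpare;       // actually manages memory: holds the pre-allocated spare objects
};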
The class CacheList has two member variables of List type, m_listCurrent and m_listSpare. m_listCurrent directly faces the application program, while the one that actually manages the memory space is m_listSpare. m_listSpare is responsible for applying for memory space and releasing memory in a unified manner, and all add and delete operations on data (pointers) are completed through m_listCurrent. When m_listCurrent adds data (a pointer), memory is requested from m_listSpare rather than from the operating system; when data is deleted, the memory space is likewise not released directly, but the memory space occupied by the processed data is "returned" to m_listSpare. The specific implementation details are as follows.
Initialization step
In general, a basic data structure such as a linked list holds no data at initialization and is an empty list; apart from the overhead of the list itself, it occupies no data memory. As the application process runs and its data grows and shrinks dynamically, the list requests or releases memory on demand in real time.
Fig. 1 shows the comparison between a conventional linked list and the present application in the initial state.
Of the two List member variables in this design, m_listCurrent is identical to a conventional linked list: it is initialized as an empty list without any data. m_listSpare, by contrast, applies for and allocates the memory space of N PII data objects right at initialization for subsequent use (the value of N is related to the memory bus bandwidth and is not elaborated here).
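Continuing the CacheList sketch above, the initialization might look as follows (assuming DataT is default-constructible and N is supplied by the caller):

// m_listCurrent starts as an empty list, exactly like a conventional linked list.
// m_listSpare is filled with N pre-allocated data objects so that no further heap
// requests are needed while real-time data is being pushed.
CacheList::CacheList(std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        m_listSpare.push_back(new DataT());
}

// All pooled memory is released in one batch when the container is destroyed.
CacheList::~CacheList() {
    for (DataT* p : m_listCurrent) delete p;
    for (DataT* p : m_listSpare)   delete p;
}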
Add data step
When a conventional linked list (an ordinary data structure) needs to add new data, it first requests data space from memory (the heap): DataT* pointerDataT1 = new DataT; the new data is then assigned and saved into that memory, SetDataT(newDataT), and finally the obtained memory pointer pointerDataT1 is added to m_List, as sketched below.
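In code, the conventional sequence is roughly the following sketch (m_List stands for an ordinary std::list<DataT*>, SetDataT is the assignment helper mentioned above, and the wrapper function is illustrative only):

#include <list>

// Conventional linked list: every new datum triggers a heap request.
void AddConventional(std::list<DataT*>& m_List, const DataT& newDataT) {
    DataT* pointerDataT1 = new DataT;    // 1. request data space from memory (the heap)
    pointerDataT1->SetDataT(newDataT);   // 2. assign and save the new data into that memory
    m_List.push_back(pointerDataT1);     // 3. add the obtained memory pointer to m_List
}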
Fig. 2 shows the comparison between a conventional linked list and the present application when adding data.
Unlike the conventional linked list, this design does not request memory space as the first step when new data is encountered; instead it checks whether m_listSpare is empty. If it is not empty, the first data element pointerDataT1 of m_listSpare is taken out for use by m_listCurrent: the new data is first assigned and saved, and then pointerDataT1 is added to m_listCurrent.
Simplified implementation code is shown schematically as follows:
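A minimal sketch of this add step, continuing the CacheList class above (the description covers only the case where m_listSpare is not empty; the fallback to new below is an added assumption):

// Add step: reuse a pre-allocated object from m_listSpare instead of asking
// the operating system for memory.
void CacheList::AddData(const DataT& newDataT) {
    DataT* pointerDataT1 = nullptr;
    if (!m_listSpare.empty()) {
        pointerDataT1 = m_listSpare.front();   // take the first spare data element
        m_listSpare.pop_front();
    } else {
        pointerDataT1 = new DataT();           // assumed fallback, not described in the text
    }
    pointerDataT1->SetDataT(newDataT);         // first assign and save the new data
    m_listCurrent.push_back(pointerDataT1);    // then add pointerDataT1 to m_listCurrent
}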
This has two advantages. First, applying to the operating system for memory allocation is itself a form of system interaction: it has to wait for the system to return, and the request may even fail and return a null pointer. In an industrial automation environment with massive high-frequency pushed data, repeatedly waiting for the system to return may cause delays or even loss of data synchronization, and a failed request is more likely to crash the application process. This design places the memory application uniformly at initialization, so when data is pushed in real time there is no need to wait for the system to allocate memory; the already-prepared memory space is used directly, extreme situations such as null pointers are far less likely, and operation is fast and stable. Second, even for the same total amount of requested memory, allocating memory in real time on demand is less efficient than allocating it in a unified manner as in this design.
Delete data step
When deleting data, the application process extracts data from a conventional linked list (an ordinary data structure); after processing, the data (pointer) is deleted from the list, and the memory (heap) space that stored the data is released.
Fig. 3 shows the comparison between a conventional linked list and the present application when deleting data.
In this design, the delete operation proceeds as shown in Fig. 3:
The application process extracts the data element pointer pointerDataT1 from m_listCurrent; after processing, it is deleted from m_listCurrent. The memory space is not released after the deletion; instead, pointerDataT1 is added to m_listSpare to remain spare, waiting to be taken out by m_listCurrent again when needed, and this cycle repeats.
Simplified implementation code is shown schematically as follows:
// Take the data to be processed out of m_listCurrent
DataT** ppTmpDataT = FetchFirstCurrentItem(m_listCurrent);
// Process the data
HandleData(ppTmpDataT);
// Return the processed data object pointer to m_listSpare
ReserveItem(ppTmpDataT);
The above embodiment is only a preferred embodiment disclosed according to the technical scheme of the present invention, and it does not limit the protection scope of the present invention; any change made on the basis of the technical scheme according to the technical idea provided by the present invention falls within the protection scope of the claims of the present invention.

Claims (5)

1. A buffer linked list data structure design method for managing high-frequency changing data in memory, characterized in that: a buffer linked list CacheList is added on the basis of the linked list structure, and all add and delete operations on the list do not act directly on memory but on the buffer linked list CacheList; the method includes an initialization step, an add-data step and a delete-data step.
2. The buffer linked list data structure design method for managing high-frequency changing data in memory according to claim 1, characterized in that: the buffer linked list CacheList includes two member variables of List type, m_listCurrent and m_listSpare; m_listSpare manages the memory space and is responsible for applying for memory space and releasing memory in a unified manner; m_listCurrent directly faces the application program, and all add and delete operations on data (pointers) are completed through m_listCurrent; when m_listCurrent adds data (a pointer), memory is requested from m_listSpare; when data is deleted, the memory space is not released directly, but the memory space occupied by the processed data is "returned" to m_listSpare.
3. The buffer linked list data structure design method for managing high-frequency changing data in memory according to claim 2, characterized in that: in the initialization step, m_listCurrent is initialized as an empty list without any data, while m_listSpare applies for and allocates the memory space of N PII data objects right at initialization.
4. The buffer linked list data structure design method for managing high-frequency changing data in memory according to claim 3, characterized in that: the add-data step includes checking, when new data is encountered, whether m_listSpare is empty; if it is not empty, the first data element pointerDataT1 of m_listSpare is taken out for use by m_listCurrent, the new data is first assigned and saved, and then pointerDataT1 is added to m_listCurrent.
5. The buffer linked list data structure design method for managing high-frequency changing data in memory according to claim 4, characterized in that: the delete-data step includes the application process extracting the data element pointer pointerDataT1 from m_listCurrent and, after processing, deleting it from m_listCurrent; the memory space is not released after the deletion, but pointerDataT1 is added to m_listSpare to remain spare, waiting to be taken out by m_listCurrent again when needed, and this cycle repeats.
CN201810980002.XA 2018-08-27 2018-08-27 Buffer linked list data structure design method for managing high-frequency changing data in memory Pending CN109144892A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810980002.XA CN109144892A (en) 2018-08-27 2018-08-27 Buffer linked list data structure design method for managing high-frequency changing data in memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810980002.XA CN109144892A (en) 2018-08-27 2018-08-27 Buffer linked list data structure design method for managing high-frequency changing data in memory

Publications (1)

Publication Number Publication Date
CN109144892A 2019-01-04

Family

ID=64828206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810980002.XA Pending CN109144892A (en) 2018-08-27 2018-08-27 Buffer linked list data structure design method for managing high-frequency changing data in memory

Country Status (1)

Country Link
CN (1) CN109144892A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723250A (en) * 2020-05-22 2020-09-29 长沙新弘软件有限公司 Linked list management method based on reference counting
CN113254364A (en) * 2021-05-24 2021-08-13 山东创恒科技发展有限公司 Information storage device for embedded system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075214A (en) * 2007-06-28 2007-11-21 腾讯科技(深圳)有限公司 Method and system for managing memory
CN103246567A (en) * 2013-03-26 2013-08-14 中国科学院电子学研究所 Queuing method for target tracking internal memory management
CN105302739A (en) * 2014-07-21 2016-02-03 深圳市中兴微电子技术有限公司 Memory management method and device
WO2017156683A1 (en) * 2016-03-14 2017-09-21 深圳创维-Rgb电子有限公司 Linked list-based application cache management method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075214A (en) * 2007-06-28 2007-11-21 腾讯科技(深圳)有限公司 Method and system for managing memory
CN103246567A (en) * 2013-03-26 2013-08-14 中国科学院电子学研究所 Queuing method for target tracking internal memory management
CN105302739A (en) * 2014-07-21 2016-02-03 深圳市中兴微电子技术有限公司 Memory management method and device
WO2017156683A1 (en) * 2016-03-14 2017-09-21 深圳创维-Rgb电子有限公司 Linked list-based application cache management method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余翔湛 et al.: "Dynamic shared memory buffer pool technology", Journal of Harbin Institute of Technology *
李健: "C Language Programming", University of Electronic Science and Technology of China Press *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723250A (en) * 2020-05-22 2020-09-29 长沙新弘软件有限公司 Linked list management method based on reference counting
CN111723250B (en) * 2020-05-22 2024-03-08 长沙新弘软件有限公司 Chain table management method based on reference counting
CN113254364A (en) * 2021-05-24 2021-08-13 山东创恒科技发展有限公司 Information storage device for embedded system

Similar Documents

Publication Publication Date Title
CN102968498B (en) Data processing method and device
CN111324445B (en) Task scheduling simulation system
CN108510082A (en) The method and device that machine learning model is handled
CN105025053A (en) Distributed file upload method based on cloud storage technology and system
CN103905537A (en) System for managing industry real-time data storage in distributed environment
CN103778212B (en) Parallel mass data processing method based on back end
CN103870338A (en) Distributive parallel computing platform and method based on CPU (central processing unit) core management
CN105094982A (en) Multi-satellite remote sensing data processing system
CN111381983A (en) Lightweight message middleware system and method of virtual test target range verification system
CN110134430A (en) A kind of data packing method, device, storage medium and server
CN105474177B (en) Distributed processing system(DPS), equipment, method and recording medium
CN105930417B (en) A kind of big data ETL interactive process platform based on cloud computing
CN104615487A (en) System and method for optimizing parallel tasks
CN109144892A (en) 2018-08-27 Buffer linked list data structure design method for managing high-frequency changing data in memory
CN113900810A (en) Distributed graph processing method, system and storage medium
CN104144202A (en) Hadoop distributed file system access method, system and device
CN113051102B (en) File backup method, device, system, storage medium and computer equipment
CN101110700B (en) Explorer in resource management platform
CN110134533B (en) System and method capable of scheduling data in batches
CN101645073A (en) Method for guiding prior database file into embedded type database
EP3958123A1 (en) Low latency queuing system
CN104281636A (en) Concurrent distributed processing method for mass report data
KR20220026603A (en) File handling methods, devices, electronic devices and storage media
CN116578353A (en) Application starting method and device, computer equipment and storage medium
CN110221778A (en) Processing method, system, storage medium and the electronic equipment of hotel's data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190104