CN102646079B - Disk data protection method oriented to Linux operating system


Info

Publication number
CN102646079B
Authority
CN
China
Prior art keywords
block
num
data
bitmap
group
Prior art date
Legal status
Expired - Fee Related
Application number
CN201210121227.2A
Other languages
Chinese (zh)
Other versions
CN102646079A (en)
Inventor
汪黎
王开宇
梁镇
吴庆波
戴华东
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201210121227.2A
Publication of CN102646079A
Application granted
Publication of CN102646079B


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a disk data protection method oriented to the Linux operating system. The technical scheme is: modify four functions of the loop device driver, adding DSP_init() at the beginning of loop_init, DSP_receive() at the beginning of lo_receive, DSP_send() at the beginning of lo_send, and DSP_exit() at the beginning of loop_exit; compile the modified driver into loop.ko; load loop.ko, which calls DSP_init() for initialization; when the user requests to read disk data, DSP_receive() modifies pos; when the user requests to write disk data, DSP_send() modifies pos; and when the user requests to exit, DSP_exit() is called during unloading. The invention achieves transparent protection of disk data.

Description

Disk data protection method oriented to Linux-like operating systems
Technical field
The present invention relates to the field of computer file systems and storage, and in particular to a disk data protection method for Linux-like operating systems.
Background technology
Among today's mainstream operating systems, Windows occupies most of the market, but its closed-source nature and its security problems are increasingly serious concerns. Linux-like operating systems, thanks to their high performance, strong security and open source code, are used more and more widely around the world. Especially in areas critical to national security such as government and the defense industry, many countries treat the adoption of independently controllable operating systems as an important national strategic project, and the number of domestic users of Linux-like systems keeps growing. China has also developed independently controllable domestic operating systems based on Linux-like systems, such as Kylin, Red Flag and Zhongbiao.
Disk data protection is an important part of improving operating system security. On Windows, the main data protection product is PowerShadow, which can protect the data on a hard disk, but it is a commercial product whose core technology is not disclosed. Moreover, because of fundamental differences in operating system architecture and implementation, data protection methods built for Windows cannot be applied to Linux-like operating systems, and no data protection method of this kind has been published for Linux-like systems. Therefore, as Linux-like systems come into wide use, studying data protection methods for them has important theoretical significance and practical value.
The structure of a Linux-like operating system is shown in Figure 1. The parts relevant to disk data protection are the file system interface, the file system module, and the block device driver module. A user's disk data operation enters the operating system kernel through the file system interface and is intercepted by the corresponding file system module, which then calls the corresponding block device driver module to perform the disk operation and returns the result to the user.
The mainstream file systems currently used by Linux-like operating systems are Ext2, Ext3 and Ext4. These three file systems maintain good compatibility with each other, and the method of the present invention applies to all three without modification. They are referred to collectively as Extx file systems.
The upper part of Figure 2 shows the on-disk layout of an Extx file system. Extx uses the block as its basic unit of reading, writing and allocation. The first block of a disk partition is reserved as the boot block and is not managed by the Extx file system; the remainder of the partition is managed by Extx and divided into N equal-sized block groups.
Each block group, as shown in the lower part of Figure 2, consists of six parts: the superblock, the group descriptor table, the block bitmap, the inode bitmap, the inode table, and the data area:
The superblock occupies one block and describes the whole file system; its content is identical in block groups 1 to N. It records the number of inodes in the file system, the size of each inode structure, the total number of blocks, the number of free blocks, the block size, the number of blocks per group, the number of inodes per group, and so on. An inode (index node) describes the metadata of one file.
The group descriptor table is made up of block group descriptors, and its content is identical in block groups 1 to N. Each group descriptor describes one block group and occupies 32 bytes; the number of descriptors in the table equals the number N of block groups in the partition. Each descriptor records the start address of the block bitmap, the start address of the inode bitmap and the start address of the inode table of its block group (all expressed as block numbers), as well as the number of free blocks, the number of free inodes and the number of directories in that block group (an illustrative descriptor structure follows this list).
The block bitmap describes the usage of the blocks in its block group: each bit represents one block, a value of 1 means the corresponding block is in use, and 0 means the block is free.
The inode bitmap occupies one block; each bit indicates whether one inode is free. A value of 1 means the corresponding inode is in use, and 0 means it is free.
The inode table contains all the inodes of its block group. Besides its data, a file also has descriptive information that must be stored, such as the file type (regular file, directory, symbolic link, etc.), permissions, file size, and creation/modification/access times; this information is kept in the inode. Each inode occupies 128 bytes. Every file corresponds to one inode, and all inodes of a block group form its inode table. The data in the inode table are also called the file metadata.
The data area stores file data.
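To make the 32-byte descriptor layout concrete, the following C structure sketches the fields listed above. It is an illustration based on this description only; the field names are invented for readability and are not claimed to match the kernel's own ext2/ext3/ext4 headers.

/* Illustrative 32-byte block group descriptor, following the description
 * above. Field names are invented for readability; block addresses are
 * expressed as block numbers. */
struct extx_group_desc_sketch {
    unsigned int   block_bitmap_start;  /* start block of the block bitmap */
    unsigned int   inode_bitmap_start;  /* start block of the inode bitmap */
    unsigned int   inode_table_start;   /* start block of the inode table  */
    unsigned short free_block_count;    /* free blocks in this block group */
    unsigned short free_inode_count;    /* free inodes in this block group */
    unsigned short used_dir_count;      /* directories in this block group */
    unsigned short pad;                 /* padding                         */
    unsigned char  reserved[12];        /* reserved; pads the record to 32 bytes */
};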
The disk read/write flow in a Linux-like operating system is shown in Figure 3: a user's data access request enters the kernel through the file system interface and is intercepted by the Extx file system module, which calls the corresponding block device driver module to perform the disk operation and returns the result to the user. Because the user can directly manipulate the data on the disk, this flow cannot protect the data: under malicious attack or system failure, data loss or information leakage may occur.
A Loop device (loopback device) is a virtual block device in the kernel of a Linux-like system. It can be associated with a concrete physical disk block device. Once associated, the Loop device looks to the file system like a standard disk block device: it can be formatted, read and written like an ordinary disk. A user's data access request to the Loop device is first intercepted by the Loop device driver, which then calls the driver of the associated physical disk to perform the actual read or write. An operation on the Loop device is therefore in fact an operation on its associated physical disk block device. In this way, the Loop device can intercept all operations on the physical disk associated with it, hiding the details of the physical disk driver from upper-layer applications.
In the Loop device driver code, the loop_init function is called when the device is loaded to initialize it, and the loop_exit function is called when the device is unloaded to free system resources; lo_receive is the read function for the associated underlying physical disk, and lo_send is the write function.
The lo_receive function takes three parameters: lo, the loop device descriptor, which holds loop-device-related information; bio, which describes the memory buffer where the data are stored; and pos, the position on disk of the data to be read. lo_receive reads the data at the position indicated by pos from the disk and fills them into the memory buffer described by bio.
The lo_send function performs the disk write and likewise takes three parameters: lo, the loop device descriptor; bio, which describes the memory buffer where the data are stored; and pos, the disk position to which the data are to be written. lo_send writes the data in the memory buffer described by bio to the position indicated by pos on the disk.
Therefore, if a Loop device is associated with the physical disk to be protected and the Loop device driver is modified so that, before the Loop device reads or writes the underlying physical disk, any operation that would modify existing disk data is transparently redirected as needed, disk data can be protected. No published work has so far used a Loop device for disk data protection.
Summary of the invention
The technical problem to be solved by the present invention is to protect existing disk data in a Linux-like system.
The technical scheme of the present invention is as follows:
In the first step, modify the loop_init, lo_receive, lo_send and loop_exit functions of the Loop device; the modified Loop device driver is called the data security protection module, or DSP (Data Security Protection) module for short. The modification is (a sketch of the hooked functions follows this list):
Add the function DSP_init() at the beginning of loop_init; DSP_init() initializes the data of the DSP module.
Add the function DSP_receive() at the beginning of lo_receive; DSP_receive() modifies the parameter pos of lo_receive as needed so that pos points to the data position after redirection.
Add the function DSP_send() at the beginning of lo_send; DSP_send() modifies the parameter pos of lo_send as needed so that pos points to unused disk space, to avoid destroying existing data.
Add the function DSP_exit() at the beginning of loop_exit; DSP_exit() releases the memory resources allocated during DSP module initialization and operation.
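The placement of the four hooks can be sketched as follows. This is a simplified illustration rather than the actual driver source: the parameter lists and types are placeholders (real loop-driver prototypes differ between kernel versions), and the original function bodies are elided.

/* Sketch of where the DSP_* hooks are inserted in the Loop driver.
 * Types and prototypes are simplified placeholders. */
struct loop_device;                 /* loop device descriptor            */
struct bio;                         /* in-memory buffer descriptor       */

void DSP_init(void);
void DSP_receive(struct loop_device *lo, struct bio *bio, long long *pos);
void DSP_send(struct loop_device *lo, struct bio *bio, long long *pos);
void DSP_exit(void);

static int loop_init(void)
{
    DSP_init();                     /* build blocklist, cache metadata   */
    /* ... original loop_init body: register the loop block device ...   */
    return 0;
}

static int lo_receive(struct loop_device *lo, struct bio *bio, long long pos)
{
    DSP_receive(lo, bio, &pos);     /* may redirect pos to a remapped block      */
    /* ... original lo_receive body: read from the backing disk at pos ...       */
    return 0;
}

static int lo_send(struct loop_device *lo, struct bio *bio, long long pos)
{
    DSP_send(lo, bio, &pos);        /* redirect writes away from protected data  */
    /* ... original lo_send body: write to the backing disk at pos ...           */
    return 0;
}

static void loop_exit(void)
{
    DSP_exit();                     /* free blocklist and cached bitmaps */
    /* ... original loop_exit body: unregister the loop block device ... */
}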
In the second step, compile and link the modified loop_init, lo_receive, lo_send and loop_exit functions to generate the new loop device driver module loop.ko.
In the third step, load the loop device driver module loop.ko; this calls the initialization function loop_init, which calls DSP_init() to perform the following initialization:
3.1 Build a linear linked list blocklist. Each element of blocklist stores a two-tuple (old_block_num, new_block_num), where old_block_num records the block number of the original disk request and new_block_num records the new block number after redirection. DSP_init() initializes blocklist as an empty list;
3.2 Read the superblock and the group descriptor table of the file system in the disk partition and cache them in memory. From the cached superblock and group descriptor table, obtain the following data:
3.2.1 Obtain from the group descriptor table the number of free blocks in block groups 1 to N and store them in the array free_block_count[]; free_block_count[m] stores the number of free blocks in block group m, 1≤m≤N.
3.2.2 Obtain from the superblock the number of blocks per group, block_count; every block group has the same number of blocks;
3.2.3 Obtain from the group descriptor table the starting block numbers block_bitmap_start[] of the block bitmaps of block groups 1 to N; block_bitmap_start[m] stores the starting block number of the block bitmap of block group m, 1≤m≤N;
3.2.4 Obtain from the group descriptor table the starting block numbers inode_table_start[] of the inode tables of block groups 1 to N; inode_table_start[m] stores the starting block number of the inode table of block group m, 1≤m≤N;
3.2.5 Obtain from the superblock the block size block_size of block groups 1 to N, in bytes; every block group has the same block size;
3.2.6 Obtain from the superblock the number of inodes per group, inode_count; every block group contains the same number of inodes;
3.2.7 Obtain from the group descriptor table the starting block numbers inode_bitmap_start[] of the inode bitmaps of block groups 1 to N; inode_bitmap_start[m] stores the starting block number of the inode bitmap of block group m, 1≤m≤N;
3.2.8 Obtain from the superblock the total number of blocks of the file system, total_block_count;
3.3 Compute the block bitmap size of block groups 1 to N, the inode table size, and the number of block groups:
3.3.1 The block bitmap size of block group m: block_bitmap_size[m] = (inode_bitmap_start[m] − block_bitmap_start[m]) × block_size, where inode_bitmap_start[m] is the starting block number of the inode bitmap of block group m, block_bitmap_start[m] is the starting block number of the block bitmap of block group m, and block_size is the block size in bytes;
3.3.2 The inode table size inode_table_size of a block group: inode_table_size = inode_count × 128 / block_size, where "/" denotes division and 128 is the size of each inode in bytes; the inode tables of all block groups have the same size;
3.3.3 The number of block groups: group_count = total_block_count / block_count;
3.4 Using the block bitmap starting blocks block_bitmap_start[] and the block bitmap sizes block_bitmap_size[] of block groups 1 to N, read the block bitmaps block_bitmap[] of block groups 1 to N from the disk;
3.5 The loop_init() function initializes the device; this step is the same as the device initialization performed by the loop_init function of the existing Loop device described in the background.

In the fourth step, wait for the user's data access request. If the request is to read disk data, go to the fifth step; if the request is to write disk data, go to the sixth step; if the request is to exit the disk protection module, go to the seventh step.
In the fifth step, lo_receive starts and calls DSP_receive() to modify the parameter pos of lo_receive, the disk position of this data access request, as follows (a sketch of this lookup follows step 5.3):
5.1 Compute the disk block number to be read: request_block_num = ⌊pos / block_size⌋, where ⌊x⌋ denotes rounding x down.
5.2 Query the linked list blocklist with request_block_num. If old_block_num in none of the two-tuples of blocklist equals request_block_num, the block has not been redirected, pos need not be modified, and 5.3 is executed. If there is a tuple (old_block_num, new_block_num) with old_block_num equal to request_block_num, the disk position pos to be read is changed to new_block_num × block_size + pos % block_size, where % is the modulo operation.
5.3 The lo_receive function reads the data at the position indicated by pos from the disk and fills them into the memory buffer of this data access request described by bio; bio is a parameter of lo_receive to which the kernel of the Linux-like system assigns a value when lo_receive starts. Go to the fourth step.
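A minimal sketch of the read-side redirection in steps 5.1 and 5.2, written in plain user-space C. The singly linked list standing in for blocklist, the function name dsp_receive_redirect and the default block size of 4096 bytes are illustrative assumptions; only the block arithmetic follows the description above.

#include <stddef.h>

/* One redirection record: the block old_block_num of the original request
 * has been remapped to new_block_num.  A plain singly linked list stands
 * in for blocklist here. */
struct remap_entry {
    unsigned long old_block_num;
    unsigned long new_block_num;
    struct remap_entry *next;
};

struct remap_entry *blocklist = NULL;   /* built up by the write path        */
unsigned long block_size = 4096;        /* read from the superblock (3.2.5)  */

/* Steps 5.1-5.2: if the requested block was redirected, rewrite the byte
 * position so the read lands on the remapped block while preserving the
 * offset inside the block; otherwise leave pos unchanged. */
long long dsp_receive_redirect(long long pos)
{
    unsigned long request_block_num = (unsigned long)(pos / block_size);
    struct remap_entry *e;

    for (e = blocklist; e != NULL; e = e->next) {
        if (e->old_block_num == request_block_num)
            return (long long)e->new_block_num * block_size
                   + pos % block_size;
    }
    return pos;     /* block never redirected: read the original data */
}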
In the sixth step, lo_send starts and calls DSP_send() to modify the parameter pos of lo_send, the disk position of this data access request, as follows:
6.1 Compute the disk block number to be written: request_block_num = ⌊pos / block_size⌋.
6.2 Query the linked list blocklist with request_block_num. If blocklist contains a two-tuple (old_block_num, new_block_num) with old_block_num equal to request_block_num, change the disk position pos to be written to new_block_num × block_size + pos % block_size and go to 6.7; if old_block_num in none of the two-tuples of blocklist equals request_block_num, execute 6.3.
6.3 Determine whether the data block corresponding to request_block_num lies in the data area, as follows:
6.3.1 Compute the number of the block group containing the data block corresponding to request_block_num: group_num = ⌊request_block_num / block_count⌋;
6.3.2 Compute the starting block number of the data area of block group group_num: data_start = inode_table_start[group_num] + inode_table_size;
6.3.3 If request_block_num < data_start, the data block corresponding to request_block_num lies in the file system metadata area that precedes the data area, i.e. it is not in the data area; otherwise it lies in the data area;
6.4 If the data block corresponding to request_block_num lies in the data area, go to 6.6; if it does not, go to 6.5;
6.5 The data block corresponding to request_block_num lies in the metadata area; the information in this region must be protected, so the write to the block corresponding to request_block_num is transparently redirected to an unused data block. Concretely: call the function findfree() to find a free block new_block_num. If findfree() returns an error, no free block is currently available; lo_send exits with an error, the subsequent write is not performed, and control returns to the fourth step; otherwise go to 6.7. The findfree() procedure is as follows:
6.5.1 Set k = −1;
6.5.2 Set k = k + 1. If k ≥ group_count (group_count is the number of block groups), the function reports an error and exits; otherwise check the value of free_block_count[k] to decide whether block group k still has free blocks: if free_block_count[k] == 0, go to 6.5.2, otherwise go to 6.5.3;
6.5.3 Set the loop variable i = 0;
6.5.4 Let block_bitmap[k][i] denote the i-th bit of the block bitmap of block group k. If block_bitmap[k][i] is 0, the corresponding block is free; then check whether the block corresponding to block_bitmap[k][i] lies in the data area. If it does, set block_bitmap[k][i] to 1 to mark the block as used, decrease free_block_count[k] by 1, and return the block new_block_num represented by this bit. If block_bitmap[k][i] is not 0, or the block corresponding to block_bitmap[k][i] is not in the data area, set i = i + 1; if i < block_bitmap_size[k] × 8 (8 being the number of bits per byte), go to 6.5.4, otherwise go to 6.5.2;
6.6 Query the block bitmap block_bitmap[group_num] of the block group containing request_block_num. If the block is free, set the bit corresponding to this block in block_bitmap[group_num] to 1 to mark it as used, decrease free_block_count[group_num] by 1, set new_block_num = request_block_num, and go to 6.7. Otherwise the block is already in use and its data must be protected: call findfree() to find a free block new_block_num. If findfree() returns an error, no free block is currently available; lo_send exits with an error, the subsequent write is not performed, and control returns to the fourth step. If findfree() found a free block, go to 6.7;
6.7 Add (request_block_num, new_block_num) to the linked list blocklist, and change the disk position pos to be written to new_block_num × block_size + pos % block_size;
6.8 The lo_send function writes the data in the memory buffer described by bio to the position indicated by pos on the disk and goes to the fourth step; bio is a parameter of lo_send to which the kernel of the Linux-like system assigns a value when lo_send starts.

In the seventh step, loop_exit starts to unload the module and calls DSP_exit(), which performs the following operations:
7.1 The DSP_exit() function releases the memory resources allocated during DSP module initialization and operation, preventing memory leaks;
7.2 Continue executing the loop_exit function to unload the Loop device; this step is the same as the Loop device unloading performed by the loop_exit function of the existing Loop device described in the background.

The present invention achieves the following technical effects:
The present invention protects disk data transparently. With the invention in place, the user can perform arbitrary read and write operations on the local disk without perceiving the presence of the data security protection module; every addition, modification or deletion of existing disk data is transparently redirected to free space on the disk, so the original disk data are not affected, and all such operations automatically lose effect after the operating system is restarted or the data security protection module is unloaded. For example, if a user deletes a file by mistake, or a virus or trojan maliciously deletes data, the data are intact after the data security protection module is unloaded. If a virus or trojan has invaded the system, or malware that is hard to remove has been installed, that software no longer exists after the data security protection module is unloaded, and the system returns to the clean state it was in before the disk was last mounted. Thus no virus, trojan or malware can harm the data on the disk. The invention is suitable for personal computers, Internet cafés and institutional computer rooms, simplifies administration, and has wide practical value.
Brief description of the drawings
Figure 1: schematic diagram, described in the background, of the relationships between the kernel interfaces and kernel modules of an existing Linux-like operating system.
Figure 2: on-disk layout of an existing Extx file system, as described in the background.
Figure 3: flow of disk data access in an existing Linux-like operating system, as described in the background.
Figure 4: flow of disk data access in a Linux-like operating system after the disk protection module of the present invention has been added.
Figure 5: overall flowchart of the present invention.
Embodiment
Figure 4 shows the flow of disk data access in a Linux-like operating system after the disk protection module (DSP module) of the present invention has been added. The DSP module resides in the driver code of the Loop device. A user's data access request enters the operating system kernel through the file system interface and is intercepted by the Extx file system module, which calls the Loop device driver module containing the DSP module; that module in turn calls the driver module of the associated underlying physical disk device to perform the disk operation and returns the result to the user.
Figure 5 is the overall flowchart of the present invention.
In the first step, modify the loop_init, lo_receive, lo_send and loop_exit functions of the Loop device; the modified Loop device driver is called the data security protection module, or DSP (Data Security Protection) module for short.
In the second step, compile the modified loop_init, lo_receive, lo_send and loop_exit functions to generate the new loop device driver module loop.ko. The function DSP_init() is added at the beginning of loop_init, DSP_receive() at the beginning of lo_receive, DSP_send() at the beginning of lo_send, and DSP_exit() at the beginning of loop_exit.
In the third step, load the Loop device driver module; the initialization function loop_init calls DSP_init() to initialize the DSP module before initializing the device, as sketched below.
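The initialization of the third step can be sketched in plain C as follows. The sketch assumes the superblock and group descriptor fields have already been copied into the arrays shown (reading and parsing the raw disk blocks is elided), indexes block groups from 0 rather than from 1, and takes the 128-byte inode size from the description above; the upper bound MAX_GROUPS and the function name dsp_init_derived are illustrative.

#define MAX_GROUPS 1024   /* illustrative upper bound on the number of block groups */

/* Values copied from the cached superblock and group descriptor table
 * (steps 3.2.1-3.2.8); filling them from the raw disk blocks is elided.
 * Block groups are indexed from 0 here (the description numbers them 1..N). */
unsigned long free_block_count[MAX_GROUPS];    /* free blocks per group           */
unsigned long block_bitmap_start[MAX_GROUPS];  /* block bitmap start block        */
unsigned long inode_bitmap_start[MAX_GROUPS];  /* inode bitmap start block        */
unsigned long inode_table_start[MAX_GROUPS];   /* inode table start block         */
unsigned long block_count;                     /* blocks per group                */
unsigned long block_size;                      /* block size in bytes             */
unsigned long inode_count;                     /* inodes per group                */
unsigned long total_block_count;               /* total blocks in the file system */

/* Quantities derived in step 3.3, plus the cached bitmaps of step 3.4. */
unsigned long block_bitmap_size[MAX_GROUPS];   /* block bitmap size, in bytes     */
unsigned long inode_table_size;                /* inode table size, in blocks     */
unsigned long group_count;                     /* number of block groups          */
unsigned char *block_bitmap[MAX_GROUPS];       /* block bitmaps read in step 3.4  */

void dsp_init_derived(void)
{
    unsigned long m;

    /* 3.3.3: number of block groups */
    group_count = total_block_count / block_count;

    /* 3.3.1: block bitmap size of group m, in bytes */
    for (m = 0; m < group_count; m++)
        block_bitmap_size[m] =
            (inode_bitmap_start[m] - block_bitmap_start[m]) * block_size;

    /* 3.3.2: inode table size in blocks (each inode occupies 128 bytes) */
    inode_table_size = inode_count * 128 / block_size;

    /* 3.4: the block bitmaps would now be read from disk into block_bitmap[];
     * 3.1: blocklist starts out as an empty list. */
}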
In the fourth step, wait for the user's request. If the data access request is to read disk data, go to the fifth step; if the request is to write disk data, go to the sixth step; if the request is to exit the disk protection module, go to the seventh step.
In the fifth step, lo_receive starts and calls DSP_receive() to modify pos.
In the sixth step, lo_send starts and calls DSP_send() to modify pos, as sketched below.
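The write-side redirection of steps 6.1-6.7 and the free-block search findfree() can be sketched as follows. This is user-space C standing in for kernel code (malloc rather than kmalloc), it reuses the state declared in the earlier sketches through extern declarations, block groups are indexed from 0, any first-data-block offset is ignored, and the wrapper name dsp_send_redirect with its -1 error convention is an illustrative assumption; findfree() itself is named in the description above.

#include <stdlib.h>

/* Reuses the metadata cached by the initialization sketch (third step)
 * and the remap_entry list of the read-side sketch (re-declared here). */
extern unsigned long free_block_count[];   /* free blocks per group         */
extern unsigned long inode_table_start[];  /* inode table start per group   */
extern unsigned long block_bitmap_size[];  /* block bitmap size per group   */
extern unsigned char *block_bitmap[];      /* cached block bitmaps          */
extern unsigned long block_count, block_size, inode_table_size, group_count;

struct remap_entry {                       /* as in the read-side sketch    */
    unsigned long old_block_num, new_block_num;
    struct remap_entry *next;
};
extern struct remap_entry *blocklist;

/* Step 6.3: does block b lie in the data area of its block group? */
static int in_data_area(unsigned long b)
{
    unsigned long group_num  = b / block_count;
    unsigned long data_start = inode_table_start[group_num] + inode_table_size;
    return b >= data_start;
}

/* findfree() (steps 6.5.1-6.5.4): scan the cached bitmaps for a free block
 * in a data area, mark it used, and return it; return -1 if none is left.
 * block_bitmap[k][i/8] holds bit i of group k's bitmap. */
static long findfree(void)
{
    unsigned long k, i;

    for (k = 0; k < group_count; k++) {
        if (free_block_count[k] == 0)
            continue;                                  /* no free block here */
        for (i = 0; i < block_bitmap_size[k] * 8; i++) {
            unsigned long b = k * block_count + i;     /* absolute block number */
            if (!(block_bitmap[k][i / 8] & (1u << (i % 8))) && in_data_area(b)) {
                block_bitmap[k][i / 8] |= (1u << (i % 8));   /* mark used   */
                free_block_count[k]--;
                return (long)b;
            }
        }
    }
    return -1;                                         /* no free block left */
}

/* Steps 6.1-6.7: look the target block up in blocklist and redirect the
 * write to an unused data block whenever it would overwrite protected
 * data.  Returns the rewritten byte position, or -1 if the write must fail. */
long long dsp_send_redirect(long long pos)
{
    unsigned long request_block_num = (unsigned long)(pos / block_size);
    unsigned long group_num = request_block_num / block_count;
    long new_block_num;
    struct remap_entry *e;

    for (e = blocklist; e != NULL; e = e->next)        /* step 6.2           */
        if (e->old_block_num == request_block_num)
            return (long long)e->new_block_num * block_size + pos % block_size;

    if (!in_data_area(request_block_num)) {            /* steps 6.3-6.5      */
        new_block_num = findfree();                    /* metadata block: always redirect */
    } else {                                           /* step 6.6           */
        unsigned long i = request_block_num - group_num * block_count;
        if (!(block_bitmap[group_num][i / 8] & (1u << (i % 8)))) {
            block_bitmap[group_num][i / 8] |= (1u << (i % 8));
            free_block_count[group_num]--;
            new_block_num = (long)request_block_num;   /* free block: write in place */
        } else {
            new_block_num = findfree();                /* occupied block: redirect   */
        }
    }
    if (new_block_num < 0)
        return -1;                                     /* no free block: fail the write */

    e = malloc(sizeof(*e));                            /* step 6.7: record the mapping  */
    if (e == NULL)
        return -1;
    e->old_block_num = request_block_num;
    e->new_block_num = (unsigned long)new_block_num;
    e->next = blocklist;
    blocklist = e;
    return (long long)new_block_num * block_size + pos % block_size;
}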
In the seventh step, loop_exit starts and calls DSP_exit() to release the memory resources allocated during DSP module initialization and operation, and the loop_exit function then unloads the Loop device. A sketch of this cleanup follows.
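A minimal sketch of the cleanup performed in the seventh step, under the same assumptions as the previous sketches: the list and bitmap arrays are those built by the earlier sketches, the function name dsp_exit_cleanup is illustrative, and a kernel module would use kfree rather than free.

#include <stdlib.h>

struct remap_entry {                    /* as in the earlier sketches      */
    unsigned long old_block_num, new_block_num;
    struct remap_entry *next;
};
extern struct remap_entry *blocklist;   /* redirection records             */
extern unsigned char *block_bitmap[];   /* bitmaps cached by DSP_init      */
extern unsigned long group_count;       /* number of block groups          */

/* Step 7.1: release everything the DSP module allocated so that unloading
 * the driver leaves no memory behind.  Discarding the redirection records
 * is also what makes all changes to protected data vanish on unload. */
void dsp_exit_cleanup(void)
{
    struct remap_entry *e = blocklist;
    unsigned long k;

    while (e != NULL) {
        struct remap_entry *next = e->next;
        free(e);
        e = next;
    }
    blocklist = NULL;

    for (k = 0; k < group_count; k++) {
        free(block_bitmap[k]);
        block_bitmap[k] = NULL;
    }
}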

Claims (3)

1. A disk data protection method for Linux-like operating systems, characterized by comprising the following steps:
In the first step, modify the loop_init, lo_receive, lo_send and loop_exit functions of the Loop device; the modified Loop device driver is called the data security protection module, the DSP module; the modification is:
Add the function DSP_init() at the beginning of loop_init; DSP_init() initializes the data of the DSP module;
Add the function DSP_receive() at the beginning of lo_receive; DSP_receive() modifies the parameter pos of lo_receive as needed so that pos points to the data position after redirection;
Add the function DSP_send() at the beginning of lo_send; DSP_send() modifies the parameter pos of lo_send as needed so that pos points to unused disk space;
Add the function DSP_exit() at the beginning of loop_exit; DSP_exit() releases the memory resources allocated during DSP module initialization and operation;
In the second step, compile and link the modified loop_init, lo_receive, lo_send and loop_exit functions to generate the new Loop device driver module loop.ko;
In the third step, load the Loop device driver module loop.ko; this calls the initialization function loop_init, which calls DSP_init() to perform the following initialization:
3.1 Build a linear linked list blocklist; each element of blocklist stores a two-tuple (old_block_num, new_block_num), where old_block_num records the block number of the original disk request and new_block_num records the new block number after redirection; DSP_init() initializes blocklist as an empty list;
3.2 Read the superblock and the group descriptor table of the file system in the disk partition, cache them in memory, and from the cached superblock and group descriptor table obtain the following data:
3.2.1 Obtain from the group descriptor table the number of free blocks in block groups 1 to N and store them in the array free_block_count[]; free_block_count[m] stores the number of free blocks in block group m, 1≤m≤N, N being a positive integer;
3.2.2 Obtain from the superblock the number of blocks per group, block_count; every block group has the same number of blocks;
3.2.3 Obtain from the group descriptor table the starting block numbers block_bitmap_start[] of the block bitmaps of block groups 1 to N; block_bitmap_start[m] stores the starting block number of the block bitmap of block group m;
3.2.4 Obtain from the group descriptor table the starting block numbers inode_table_start[] of the inode tables of block groups 1 to N; inode_table_start[m] stores the starting block number of the inode table of block group m;
3.2.5 Obtain from the superblock the block size block_size of block groups 1 to N, in bytes;
3.2.6 Obtain from the superblock the number of inodes per group, inode_count;
3.2.7 Obtain from the group descriptor table the starting block numbers inode_bitmap_start[] of the inode bitmaps of block groups 1 to N; inode_bitmap_start[m] stores the starting block number of the inode bitmap of block group m;
3.2.8 Obtain from the superblock the total number of blocks of the file system, total_block_count;
3.3 Compute the block bitmap size of block groups 1 to N, the inode table size, and the number of block groups:
3.3.1 The block bitmap size of block group m: block_bitmap_size[m] = (inode_bitmap_start[m] − block_bitmap_start[m]) × block_size, where inode_bitmap_start[m] is the starting block number of the inode bitmap of block group m, block_bitmap_start[m] is the starting block number of the block bitmap of block group m, and block_size is the block size in bytes;
3.3.2 The inode table size inode_table_size of a block group: inode_table_size = inode_count × 128 / block_size, where "/" denotes division and 128 is the size of each inode in bytes;
3.3.3 The number of block groups: group_count = total_block_count / block_count;
3.4 Using the block bitmap starting blocks block_bitmap_start[] and the block bitmap sizes block_bitmap_size[] of block groups 1 to N, read the block bitmaps block_bitmap[] of block groups 1 to N from the disk;
3.5 The loop_init() function initializes the device;
In the fourth step, wait for the user's data access request; if the request is to read disk data, go to the fifth step; if the request is to write disk data, go to the sixth step; if the request is to exit the disk protection module, go to the seventh step;
In the fifth step, lo_receive starts and calls DSP_receive() to modify the parameter pos of lo_receive, the disk position of this data access request, as follows:
5.1 Compute the disk block number to be read: request_block_num = ⌊pos / block_size⌋, where ⌊x⌋ denotes rounding x down;
5.2 Query the linked list blocklist with request_block_num; if old_block_num in none of the two-tuples of blocklist equals request_block_num, the block has not been redirected, pos need not be modified, and 5.3 is executed; if there is a tuple (old_block_num, new_block_num) with old_block_num equal to request_block_num, the disk position pos to be read is changed to new_block_num × block_size + pos % block_size, where % is the modulo operation;
5.3 The lo_receive function reads the data at the position indicated by pos from the disk and fills them into the memory buffer of this data access request described by bio; bio is a parameter of lo_receive to which the kernel of the Linux-like system assigns a value when lo_receive starts; go to the fourth step;
In the sixth step, lo_send starts and calls DSP_send() to modify the parameter pos of lo_send, the disk position of this data access request, as follows:
6.1 Compute the disk block number to be written: request_block_num = ⌊pos / block_size⌋;
6.2 Query the linked list blocklist with request_block_num; if blocklist contains a two-tuple (old_block_num, new_block_num) with old_block_num equal to request_block_num, change the disk position pos to be written to new_block_num × block_size + pos % block_size and go to 6.7; if old_block_num in none of the two-tuples of blocklist equals request_block_num, execute 6.3;
6.3 Determine whether the data block corresponding to request_block_num lies in the data area;
6.4 If the data block corresponding to request_block_num lies in the data area, go to 6.6; if it does not, go to 6.5;
6.5 Transparently redirect the write to the data block corresponding to request_block_num to an unused data block, as follows: find a free block new_block_num; if no free block is currently available, lo_send exits with an error, the subsequent write is not performed, and control returns to the fourth step; otherwise go to 6.7;
6.6 Query the block bitmap block_bitmap[group_num] of the block group containing request_block_num, group_num being the number of the block group containing the data block corresponding to request_block_num; if the block is free, set the bit corresponding to this block in block_bitmap[group_num] to 1, decrease free_block_count[group_num] by 1, set new_block_num = request_block_num, and go to 6.7; if the block is already in use, find a free block new_block_num; if no free block is currently available, lo_send exits with an error, the subsequent write is not performed, and control returns to the fourth step; if a free block is found, go to 6.7;
6.7 Add (request_block_num, new_block_num) to the linked list blocklist, and change the disk position pos to be written to new_block_num × block_size + pos % block_size;
6.8 The lo_send function writes the data in the memory buffer described by bio to the position indicated by pos on the disk and goes to the fourth step; bio is a parameter of lo_send to which the kernel of the Linux-like system assigns a value when lo_send starts;
In the seventh step, loop_exit starts to unload the module and calls DSP_exit(), which performs the following operations:
7.1 The DSP_exit() function releases the memory allocated during DSP module initialization and operation;
7.2 Continue executing the loop_exit function to unload the Loop device.
2. The disk data protection method for Linux-like operating systems of claim 1, characterized in that determining whether the data block corresponding to request_block_num lies in the data area is done as follows:
Step 2.1 Compute the number of the block group containing the data block corresponding to request_block_num: group_num = ⌊request_block_num / block_count⌋;
Step 2.2 Compute the starting block number of the data area of block group group_num: data_start = inode_table_start[group_num] + inode_table_size;
Step 2.3 If request_block_num < data_start, the data block corresponding to request_block_num lies in the file system metadata area that precedes the data area; otherwise it lies in the data area.
3. The disk data protection method for Linux-like operating systems of claim 1, characterized in that finding a free block new_block_num is done by calling the function findfree(), whose procedure is as follows:
Step 3.1 Set k = −1;
Step 3.2 Set k = k + 1; if k ≥ group_count (group_count is the number of block groups), the function reports an error and exits; otherwise check the value of free_block_count[k] to decide whether block group k still has free blocks; if free_block_count[k] == 0, go to step 3.2, otherwise go to step 3.3;
Step 3.3 Set the loop variable i = 0;
Step 3.4 Let block_bitmap[k][i] denote the i-th bit of the block bitmap of block group k; if block_bitmap[k][i] is 0, the corresponding block is free; then check whether the block corresponding to block_bitmap[k][i] lies in the data area; if it does, set block_bitmap[k][i] to 1 to mark the block as used, decrease free_block_count[k] by 1, and return the block new_block_num represented by this bit; if block_bitmap[k][i] is not 0, or the block corresponding to block_bitmap[k][i] is not in the data area, set i = i + 1; if i < block_bitmap_size[k] × 8 (8 being the number of bits per byte), go to step 3.4, otherwise go to step 3.2.
CN201210121227.2A 2012-04-23 2012-04-23 Disk data protection method oriented to Linux operating system Expired - Fee Related CN102646079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210121227.2A CN102646079B (en) 2012-04-23 2012-04-23 Disk data protection method oriented to Linux operating system


Publications (2)

Publication Number Publication Date
CN102646079A CN102646079A (en) 2012-08-22
CN102646079B (en) 2014-07-16

Family

ID=46658907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210121227.2A Expired - Fee Related CN102646079B (en) 2012-04-23 2012-04-23 Disk data protection method oriented to Linux operating system

Country Status (1)

Country Link
CN (1) CN102646079B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294958B (en) * 2013-05-21 2015-07-22 中国人民解放军国防科学技术大学 Kernel-level virtual polymerization and parallel encryption method for class-oriented Linux system
CN103309773B (en) * 2013-07-03 2016-08-10 厦门市美亚柏科信息股份有限公司 The data reconstruction method of the RAID0 under EXT3 file system
US10152412B2 (en) * 2014-09-23 2018-12-11 Oracle International Corporation Smart flash cache logger
CN104536903B (en) * 2014-12-25 2018-02-23 华中科技大学 A kind of mixing storage method and system stored classifiedly by data attribute
CN110633173B (en) * 2019-09-30 2022-12-23 郑州信大捷安信息技术股份有限公司 Write filtering system and method based on Linux system disk
CN111190550B (en) * 2019-12-31 2024-03-29 深圳市安云信息科技有限公司 Metadata acceleration method and device and storage equipment
CN116737055A (en) * 2022-03-03 2023-09-12 中兴通讯股份有限公司 File processing method, electronic device and computer storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2906624A1 (en) * 2006-10-03 2008-04-04 Bull S A S Soc Par Actions Sim Generated data storing system, has management module utilizing calculation result for managing priorities in accessing time by system for copying data of new volumes of virtual bookshop to virtual volumes of physical bookshop

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246458A (en) * 2008-02-29 2008-08-20 中国科学院计算技术研究所 Hard disk data protection method and system
CN101901313A (en) * 2010-06-10 2010-12-01 中科方德软件有限公司 Linux file protection system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fangyi Jiang, et al., "A Hardware-in-the-loop Simulation System of Diesel," Power and Energy Engineering Conference (APPEEC 2009), Asia-Pacific, March 2009, full text. *

Also Published As

Publication number Publication date
CN102646079A (en) 2012-08-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140716

Termination date: 20200423
