CN114443248A - Object life cycle management method and device, electronic equipment and storage medium - Google Patents

Object life cycle management method and device, electronic equipment and storage medium

Info

Publication number
CN114443248A
Authority
CN
China
Prior art keywords
processed
priority
processing
queue
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111680650.1A
Other languages
Chinese (zh)
Inventor
黄鹄
林洁琬
钟龙山
颜文强
黄润怀
李旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202111680650.1A
Publication of CN114443248A
Legal status: Pending

Classifications

    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F2209/5018 Indexing scheme relating to G06F9/50: Thread allocation
    • G06F2209/5021 Indexing scheme relating to G06F9/50: Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides an object life cycle management method and apparatus, an electronic device and a storage medium, relating to the technical field of data processing. For any processing queue, object information of an object to be processed can be read from that queue based on the work threads allocated to it, and the data of the object to be processed is processed according to the object information. A processing queue contains object information of at least one object to be processed, the object priority of each object matches the queue priority of the queue in which it is placed, and the work threads allocated to each processing queue depend on that queue's priority. Data belonging to objects of different priorities is therefore processed in isolation, so slow processing of one object's data no longer reduces the processing speed of all objects, and the efficiency of object life cycle management is improved.

Description

Object life cycle management method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for managing a life cycle of an object, an electronic device, and a storage medium.
Background
The ceph object store provides an object life cycle management function: according to life cycle configuration rules, expired objects can be deleted or migrated to a storage pool of a specified class. ceph scans the sharded index-pool data of a bucket, reads the modification time of each object one by one, compares it with the life cycle configuration rules, and deletes or migrates the object if the conditions are met.
However, index sharding is based on a hash rule, so the indexes of different services and different storage pools are mixed within the index pool. If one service is under heavy pressure or one storage pool fails, deletion or migration of the objects belonging to that service or pool slows down, which in turn reduces the efficiency of life cycle management for all objects.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present application provide a method and an apparatus for managing a life cycle of an object, an electronic device, and a storage medium, which can improve efficiency of managing life cycles of all objects.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for managing a lifecycle of an object, where the method includes:
for any processing queue, reading object information of an object to be processed from that processing queue based on a work thread allocated to that processing queue, and processing data of the object to be processed according to the read object information;
the processing queue comprises object information of at least one object to be processed, the object priority of the object to be processed in the processing queue matches the queue priority of the processing queue to which the object to be processed belongs, and the number of work threads of a processing queue with a higher queue priority is greater than that of a processing queue with a lower queue priority, or the start time of the work threads of a processing queue with a higher queue priority is earlier than that of the work threads of a processing queue with a lower queue priority.
According to the object life cycle management method provided by the embodiment of the application, for any processing queue, object information of an object to be processed can be read from that queue based on the work threads allocated to it, and the data of the object to be processed is processed according to the read object information. The processing queue may include object information of at least one object to be processed, the object priority of each object matches the queue priority of the queue to which it belongs, and a queue with a higher priority either has more work threads than a queue with a lower priority or has its work threads started earlier. Because the object information of objects with different object priorities is placed in processing queues with different queue priorities, and each object's data is processed by the work threads that read its object information from the corresponding queue, objects of different priorities are isolated by queue. Slow processing of one object's data therefore no longer reduces the processing speed of all objects, and the efficiency of object life cycle management is improved.
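As an illustration of the scheme described above, the following minimal Python sketch builds one processing queue per priority level and allocates more worker threads to queues with a higher priority. All names used here (QUEUE_PRIORITIES, THREADS_PER_PRIORITY, process_object) and the choice of Python's standard queue/threading modules are assumptions made for this sketch; they are not identifiers from the patent or from ceph.

```python
# Minimal sketch: one FIFO queue per priority level, with more worker threads
# allocated to higher-priority queues (priority 1 is the highest here).
import queue
import threading

QUEUE_PRIORITIES = [1, 2, 3]                 # assumed priority levels
THREADS_PER_PRIORITY = {1: 3, 2: 2, 3: 1}    # more workers for higher priority

processing_queues = {p: queue.Queue() for p in QUEUE_PRIORITIES}

def process_object(object_info):
    # Placeholder for the real work (look up the index, then delete or migrate).
    print("processed:", object_info)

def worker(q):
    while True:
        object_info = q.get()        # read object info from the assigned queue
        if object_info is None:      # sentinel used to stop the worker
            q.task_done()
            return
        process_object(object_info)
        q.task_done()

threads = []
for prio, q in processing_queues.items():
    for _ in range(THREADS_PER_PRIORITY[prio]):
        t = threading.Thread(target=worker, args=(q,))
        t.start()
        threads.append(t)

# Enqueue one work item per priority, then shut the workers down cleanly.
for prio, q in processing_queues.items():
    q.put({"object": f"obj-{prio}", "op": "delete"})
for prio, q in processing_queues.items():
    for _ in range(THREADS_PER_PRIORITY[prio]):
        q.put(None)
for t in threads:
    t.join()
```

Because queues of different priorities never share a worker thread in this sketch, a slow item in a low-priority queue cannot stall the high-priority queue, which is the isolation effect the method aims at.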
In an alternative embodiment, the object priority of the object to be processed is determined by:
and setting corresponding object priority for the object to be processed according to the system condition or the service requirement corresponding to the object to be processed.
In this embodiment, different object priorities may be set for the objects to be processed with different system conditions or business requirements, so that the objects to be processed may be processed according to the different object priorities, thereby improving the efficiency of processing the objects.
In an optional embodiment, the setting, according to a system condition or a service requirement corresponding to the object to be processed, a corresponding object priority for the object to be processed includes:
if the system condition corresponding to the object to be processed is normal, or the service requirement corresponding to the object to be processed is immediate processing, setting a first object priority for the object to be processed;
if the system condition corresponding to the object to be processed fails or the service requirement corresponding to the object to be processed is processed later, setting a second object priority for the object to be processed; the first object priority is higher than the second object priority.
In this embodiment, a first object priority is set for an object to be processed if the system backing it operates normally or its service requirement is immediate processing, and a second object priority is set if the system backing it has failed or its service requirement allows later processing, the first object priority being higher than the second object priority. Higher or lower object priorities are thus assigned to different objects according to whether the system runs normally or has failed and whether the service requires immediate or deferred processing, so the data of different objects can be processed according to their respective priorities, improving the overall efficiency of object processing.
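A small sketch of the priority assignment just described, assuming a boolean system-status flag and an immediate/deferred service requirement; the two numeric priority values, the parameter names, and the way overlapping cases are resolved are illustrative assumptions, not values taken from the patent.

```python
FIRST_OBJECT_PRIORITY = 1    # higher priority
SECOND_OBJECT_PRIORITY = 2   # lower priority

def assign_object_priority(system_normal: bool, immediate: bool) -> int:
    # First priority when the backing system runs normally or the service
    # requirement is immediate processing; second priority when the system has
    # failed or processing may be deferred.
    if system_normal or immediate:
        return FIRST_OBJECT_PRIORITY
    return SECOND_OBJECT_PRIORITY

print(assign_object_priority(system_normal=False, immediate=True))   # -> 1
print(assign_object_priority(system_normal=False, immediate=False))  # -> 2
```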
In an optional embodiment, before reading the object information of the object to be processed from any one of the processing queues, the method further includes:
determining a queue priority which is the same as the object priority according to the object priority of the object to be processed;
and placing the object information of the object to be processed in a processing queue corresponding to the queue priority.
In this embodiment, the queue priority that is the same as the object priority may be determined according to the object priority of the object to be processed, and the object information of the object to be processed is placed in the processing queue corresponding to the queue priority. Therefore, all objects to be processed can be isolated through the processing queues with different queue priorities, and then the data of the objects to be processed in the processing queues with different queue priorities can be processed respectively by adopting corresponding working threads, so that the speed of processing the objects is improved.
In an optional embodiment, the object information includes an identifier and a processing requirement corresponding to the object to be processed; the processing the data of the object to be processed according to the read object information of the object to be processed includes:
searching index information matched with the identification from an index memory according to the identification corresponding to the object to be processed;
and determining the original storage position of the data of the object to be processed in a data storage according to the index information, and processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed.
In this embodiment, according to the identifier corresponding to the object to be processed included in the object information, the index information matching with the identifier may be searched from the index memory, the original storage location of the data of the object to be processed in the data memory may be determined according to the searched index information, and the data of the object to be processed may be processed according to the processing requirement corresponding to the object to be processed included in the object information, so that the object to be processed may be processed more efficiently, and the efficiency of processing the object may be improved.
In an optional embodiment, the processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed includes:
if the processing requirement indicates that the data of the object to be processed is deleted, deleting the data of the object to be processed from the original storage position, and deleting the index information from the index memory;
if the processing requirement indicates that the data of the object to be processed is migrated, migrating the data of the object to be processed from the original storage position to a target storage position included in the object information, and updating the index information in the index memory.
In this embodiment, if the processing requirement corresponding to the object to be processed indicates deletion, the data of the object is deleted from its original storage location in the data memory and the corresponding index information is deleted from the index memory. If the processing requirement indicates migration, the data of the object is migrated from its original storage location in the data memory to the target storage location included in the object information, and the corresponding index information in the index memory is updated. Deletion and migration operations can thus each be performed according to the processing requirement carried with the object.
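The index lookup and the two operation types described above can be sketched with simple in-memory stand-ins for the index memory and the data memory; the dictionary layout, the identifier format, and the "delete"/"migrate" labels are assumptions made for illustration and do not reflect ceph's actual index or pool structures.

```python
# Toy index memory (identifier -> index info) and data memory (location -> data).
index_memory = {"bucket1/file.txt": {"pool": "hot-pool", "key": "obj-0001"}}
data_memory = {("hot-pool", "obj-0001"): b"object payload"}

def handle_object(identifier: str, requirement: str, target_pool: str = None):
    # Look up the index info matching the identifier, then locate the data.
    index_info = index_memory[identifier]
    original_location = (index_info["pool"], index_info["key"])

    if requirement == "delete":
        del data_memory[original_location]   # delete the object data
        del index_memory[identifier]         # delete its index entry
    elif requirement == "migrate":
        target_location = (target_pool, index_info["key"])
        data_memory[target_location] = data_memory.pop(original_location)  # move data
        index_memory[identifier] = {"pool": target_pool,
                                    "key": index_info["key"]}              # update index

handle_object("bucket1/file.txt", "migrate", target_pool="cold-pool")
print(index_memory, list(data_memory))
```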
In a second aspect, an embodiment of the present application further provides an apparatus for managing a life cycle of an object, where the apparatus includes:
an object information reading unit, configured to, for any processing queue, read object information of an object to be processed from that processing queue based on a work thread allocated to that processing queue; the processing queue comprises object information of at least one object to be processed, the object priority of the object to be processed in the processing queue matches the queue priority of the processing queue to which the object belongs, and the number of work threads of a processing queue with a higher queue priority is greater than that of a processing queue with a lower queue priority, or the start time of the work threads of a processing queue with a higher queue priority is earlier than that of the work threads of a processing queue with a lower queue priority;
and the data processing unit is used for processing the data of the object to be processed according to the read object information of the object to be processed.
In an optional embodiment, the apparatus further comprises a priority determining unit configured to:
and setting corresponding object priority for the object to be processed according to the system condition or the service requirement corresponding to the object to be processed.
In an optional embodiment, the priority determining unit is specifically configured to:
if the system condition corresponding to the object to be processed is normal, or the service requirement corresponding to the object to be processed is immediate processing, setting a first object priority for the object to be processed;
if the system condition corresponding to the object to be processed fails or the service requirement corresponding to the object to be processed is processed later, setting a second object priority for the object to be processed; the first object priority is higher than the second object priority.
In an optional embodiment, the apparatus further comprises a queue joining unit, configured to:
determining a queue priority which is the same as the object priority according to the object priority of the object to be processed;
and placing the object information of the object to be processed in a processing queue corresponding to the queue priority.
In an optional embodiment, the object information includes an identifier and a processing requirement corresponding to the object to be processed; the data processing unit is specifically configured to:
searching index information matched with the identification from an index memory according to the identification corresponding to the object to be processed;
and determining the original storage position of the data of the object to be processed in a data storage according to the index information, and processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed.
In an optional embodiment, the data processing unit is further configured to:
if the processing requirement indicates that the data of the object to be processed is deleted, deleting the data of the object to be processed from the original storage position, and deleting the index information from the index memory;
if the processing requirement indicates that the data of the object to be processed is subjected to the migration operation, migrating the data of the object to be processed from the original storage position to a target storage position included in the object information, and updating the index information in the index memory.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the life cycle management method for the object of the first aspect is implemented.
In a fourth aspect, this application further provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is enabled to implement the life cycle management method of the object of the first aspect.
For technical effects brought by any one implementation manner in the second aspect to the fourth aspect, reference may be made to technical effects brought by a corresponding implementation manner in the first aspect, and details are not described here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for managing a life cycle of an object according to an embodiment of the present application;
fig. 2 is a schematic diagram of a life cycle management method for an object according to an embodiment of the present application;
fig. 3 is a flowchart of another object life cycle management method according to an embodiment of the present application;
fig. 4 is a flowchart illustrating the processing flow of an object life cycle management system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an object life cycle management apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another object life cycle management apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that references in the specification of the present application to the terms "comprises" and "comprising," and variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
An embodiment of the present application provides a method for managing a life cycle of an object, as shown in fig. 1, including the following steps:
step S101 is to read object information of an object to be processed from any one of the processing queues based on the work thread allocated to any one of the processing queues.
The processing queue comprises object information of at least one object to be processed, the object priority of the object to be processed in the processing queue matches the queue priority of the processing queue to which the object belongs, and the number of work threads of a processing queue with a higher queue priority is greater than that of a processing queue with a lower queue priority, or the start time of the work threads of a processing queue with a higher queue priority is earlier than that of the work threads of a processing queue with a lower queue priority.
Optionally, a corresponding object priority may be set for the object to be processed according to a system condition or a service requirement corresponding to the object to be processed.
Specifically, if the system status corresponding to the object to be processed operates normally, or the service requirement corresponding to the object to be processed is immediate processing, the first object priority is set for the object to be processed. And if the system condition corresponding to the object to be processed fails or the service requirement corresponding to the object to be processed is processed later, setting a second object priority for the object to be processed. Wherein the first object priority is higher than the second object priority.
After the object priority of the object to be processed is set, the queue priority identical to the object priority can be determined according to the object priority of the object to be processed, and the object information of the object to be processed is placed in the processing queue corresponding to the queue priority.
For example, as shown in fig. 2, there are 3 objects to be processed in total: object A, object B and object C, whose object priorities are 1, 2 and 3 respectively. Here, object priority 1 is higher than object priority 2, which is higher than object priority 3.
According to the object priorities respectively corresponding to the objects to be processed, the object information of object A (object priority 1) can be placed in processing queue a corresponding to queue priority 1, the object information of object B (object priority 2) is placed in processing queue b corresponding to queue priority 2, and the object information of object C (object priority 3) is placed in processing queue c corresponding to queue priority 3.
According to the queue priority, 3 work threads including a work thread a1, a work thread a2 and a work thread a3 can be allocated to the processing queue a with the queue priority of 1, 2 work threads including a work thread b1 and a work thread b2 are allocated to the processing queue b with the queue priority of 2, and 1 work thread including a work thread c1 is allocated to the processing queue c with the queue priority of 3.
The start time at which worker threads a1, a2 and a3 read the object information of object A from processing queue a is earlier than the start time at which worker threads b1 and b2 read the object information of object B from processing queue b, which in turn is earlier than the start time at which worker thread c1 reads the object information of object C from processing queue c.
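Following the fig. 2 example above, the sketch below starts the worker threads of the higher-priority queue earlier, which is the alternative the method allows alongside allocating more threads. The queue names (a, b, c) and thread names (a1..a3, b1..b2, c1) follow the example in the text; everything else is an illustrative assumption.

```python
import queue
import threading

# Queues a, b, c with queue priorities 1, 2, 3 and thread counts 3, 2, 1.
queues = {"a": (1, queue.Queue()), "b": (2, queue.Queue()), "c": (3, queue.Queue())}
thread_counts = {"a": 3, "b": 2, "c": 1}

queues["a"][1].put("object A info")
queues["b"][1].put("object B info")
queues["c"][1].put("object C info")

def run_worker(name, q):
    while True:
        try:
            info = q.get_nowait()
        except queue.Empty:
            return                       # nothing left in this queue
        print(f"worker {name} handles {info}")

# Start workers queue by queue in ascending priority value, so the threads of
# the priority-1 queue (a1..a3) start before those of queue b, which start
# before the single thread of queue c.
for qname, (prio, q) in sorted(queues.items(), key=lambda kv: kv[1][0]):
    for i in range(thread_counts[qname]):
        threading.Thread(target=run_worker, args=(f"{qname}{i + 1}", q)).start()
```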
Step S102, processing the data of the object to be processed according to the read object information of the object to be processed.
The object information of the object to be processed comprises an identifier and a processing requirement corresponding to the object to be processed.
After reading the object information of the object to be processed, the working thread may search the index information matched with the identifier from the index memory according to the identifier corresponding to the object to be processed included in the object information, and determine the original storage location of the data of the object to be processed in the data memory according to the searched index information.
For example, the object to be processed may be a file, the identifier corresponding to the object to be processed may be a file name, the worker thread may search, according to the file name, index information matched with the file name from the index memory, where the index information may be storage information of the file, and according to the storage information of the file, an original storage location of the file in the data memory may be determined.
After the original storage position of the data of the object to be processed in the data memory is determined, if the processing requirement corresponding to the object to be processed indicates that the data of the object to be processed is deleted, the data of the object to be processed can be deleted from the original storage position, and the index information corresponding to the object to be processed is deleted from the index memory.
If the processing requirement corresponding to the object to be processed indicates that the data of the object to be processed is migrated, the data of the object to be processed may be migrated from the original storage location to the target storage location included in the object information of the object to be processed, and the index information corresponding to the object to be processed in the index storage may be updated.
In an embodiment, the object life cycle management method provided in this embodiment may also be implemented according to the process shown in fig. 3, which includes the following steps:
step S301, determining the object to be processed and the object priority of the object to be processed.
Before processing the data of the objects to be processed, it is necessary to set a corresponding object priority for each object to be processed according to the system status or business requirement corresponding to each object to be processed.
Step S302, according to the object priority of the object to be processed, determining the queue priority identical to the object priority, and placing the object information of the object to be processed in the processing queue corresponding to the queue priority.
After the object priority corresponding to each object to be processed is set, for one object to be processed, the queue priority identical to the object priority can be determined according to the object priority of the object to be processed, and the object information of the object to be processed is placed in the processing queue corresponding to the queue priority.
The number of queue priority levels is the same as the number of object priority levels, and the object information of an object to be processed with a given object priority is placed in the processing queue whose queue priority is the same as that object priority.
In step S303, object information of the object to be processed is read from the processing queue based on the assigned work thread.
After the object to be processed is placed in the processing queue, corresponding work threads may be allocated to the processing queue. The number of work threads of a processing queue with a higher queue priority is greater than that of a processing queue with a lower queue priority, or the start time of the work threads of a processing queue with a higher queue priority is earlier than that of the work threads of a processing queue with a lower queue priority.
Based on the assigned worker thread, object information of the object to be processed may be read from the processing queue.
Step S304, according to the identifier corresponding to the object to be processed included in the object information, searching the index memory for the index information matching that identifier.
The object information of the object to be processed may include an identifier corresponding to the object to be processed, and according to the identifier corresponding to the object to be processed, the index information matching the identifier may be searched from the index memory.
Step S305, according to the index information, determining the original storage position of the data of the object to be processed in the data memory.
According to the searched index information, the original storage position of the data of the object to be processed in the data memory can be determined.
Step S306, if the processing requirement corresponding to the object to be processed included in the object information indicates that the data of the object to be processed is to be deleted, deleting the data of the object to be processed from the original storage location, and deleting the index information from the index memory.
The object information of the object to be processed may further include a processing requirement corresponding to the object to be processed, and if the processing requirement indicates that the data of the object to be processed is to be deleted, the data of the object to be processed may be deleted from the original storage location in the data storage, and the index information may be deleted from the index storage.
Step S307, if the processing requirement indicates that the data of the object to be processed is to be migrated, migrating the data of the object to be processed from the original storage location to the target storage location included in the object information, and updating the index information in the index memory.
If the processing requirement indicates that the data of the object to be processed is subjected to the migration operation, the data of the object to be processed can be migrated from the original storage position in the data storage to the target storage position included in the object information, and the index information in the index storage is updated.
In some embodiments, the present application further provides an object life cycle management system, which mainly includes: a priority policy manager, an index memory, a data memory, an index scanner, processing queues, and a task scheduling manager.
Priority policy manager: receives a priority setting policy from the system administrator, sets priorities for different services (corresponding to buckets and data pools in the system), and manages the correspondence between services and processing queues.
Index memory: stores the bucket index data of the object store; when a bucket contains a large number of objects, the index can be hash-sharded.
Data memory: stores the data of the objects; it can be divided into different pools by service, and different pools use different hardware resources and do not affect one another.
Index scanner: a background thread that scans the index shards according to the policy set for the system (in what time period to run, how long a scan round takes, and so on), reads the object metadata from the index shards one by one, and, based on the bucket name, object name and modification time in the metadata together with the bucket's life cycle configuration rules and the priority configuration rules, stores the metadata of the objects to be processed and the operation type to be applied (delete or migrate) into the corresponding processing queue. The index scanner is the producer of the processing queues (a minimal producer-side sketch follows the component descriptions below).
Processing queues: message queues with different queue priorities that store the metadata of the objects to be processed produced by the index scanner.
Task scheduling manager: manages a set of work threads, which are the consumers of the processing queues; the work threads read object metadata from the processing queues and execute the corresponding operations according to the operation type, and the manager dynamically allocates the work threads according to the priority setting.
Specifically, the processing flow of each component in the object life cycle management system provided in the embodiment of the present application may be as shown in fig. 4. The system administrator sets a life cycle task priority policy according to system conditions and service requirements; the priority policy manager receives the priority setting policy sent by the system administrator, sets the priorities of objects and of the processing queues according to it, and persistently stores both the priority setting policy and the resulting priority assignments.
The index scanner can load a priority setting strategy and a life cycle rule, periodically scan according to system setting, acquire index fragment data from an index memory, analyze each object in the fragment according to the life cycle rule, write object information of the object to be processed into a corresponding processing queue if the object to be processed needs to be processed, and continue to analyze the next object if the object does not need to be processed. The object information of the object may include object metadata, an operation type, and the like.
The task scheduling manager can load the priority setting policy and allocate work threads to the different processing queues according to it. The work threads poll the processing queues assigned to them and process the data of the objects to be processed one by one: if the operation is a delete operation, the object's data is deleted from the data memory and the index is updated; if the operation is a migrate operation, the object's data is migrated from its original storage location to the target storage location in the data memory, and the index is updated.
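A consumer-side sketch of the work threads managed by the task scheduling manager, consuming work items in the same shape the scanner sketch above produces; the queue layout, the sentinel-based shutdown, and the simplified index update are assumptions for illustration, and the data-memory side of the delete/migrate steps is sketched earlier in this description.

```python
import queue
import threading

# Work items in the same shape the scanner sketch above produces.
work_queue = queue.Queue()
work_queue.put({"bucket": "logs", "object": "a.log", "op": "delete", "target": None})
work_queue.put({"bucket": "media", "object": "m.jpg", "op": "migrate", "target": "cold-pool"})
work_queue.put(None)                       # sentinel: no more work for this sketch

index_memory = {"logs/a.log": "hot-pool", "media/m.jpg": "hot-pool"}   # identifier -> pool

def worker(q):
    while True:
        item = q.get()
        if item is None:
            return
        key = item["bucket"] + "/" + item["object"]
        if item["op"] == "delete":
            index_memory.pop(key)                 # drop the index entry
            print("deleted", key)
        elif item["op"] == "migrate":
            index_memory[key] = item["target"]    # point the index at the target pool
            print("migrated", key, "to", item["target"])

t = threading.Thread(target=worker, args=(work_queue,))
t.start()
t.join()
```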
Compared with the related technology, the life cycle management system of the object has the following advantages:
1. The operability of the system is improved. Task scheduling can be carried out according to the specific conditions of the services and of the system, so the overall resource utilization of the system is more efficient.
2. The stability of the system is improved. The isolation provided by the task queues and work threads prevents the tasks of different services from affecting one another, so local oscillation of one service does not cause overall oscillation of the system.
The application provides an object life cycle management method that addresses a problem of existing ceph object storage, in which objects are processed strictly in sequence: if one service is under heavy pressure or one storage pool fails, the processing speed of all objects drops, which in turn reduces the efficiency of overall life cycle management. With the object life cycle management method provided by the application, ceph object storage can schedule life cycle management tasks precisely according to service characteristics, improving the operability of the system. At the same time, index scanning and processing are decoupled through the priority queues: the scan results of different services (or data pools) are stored in different queues, and the task manager schedules tasks according to service priority, so an abnormal task in one queue does not affect the execution of tasks in other queues. This isolates tasks between services, avoids overall oscillation of the system caused by local oscillation of a single service, and improves the stability of the system.
Based on the same inventive concept as the object life cycle management method shown in fig. 1, an embodiment of the present application further provides an object life cycle management apparatus. Because the apparatus corresponds to the object life cycle management method of the present application and the principle by which it solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Fig. 5 is a schematic structural diagram illustrating an object lifecycle management apparatus according to an embodiment of the present application, and as shown in fig. 5, the object lifecycle management apparatus includes an object information reading unit 501 and a data processing unit 502.
The object information reading unit 501 is configured to, for any one processing queue, read object information of an object to be processed from any one processing queue based on a work thread allocated to any one processing queue; the processing queue comprises at least one object information of an object to be processed, the object priority of the object to be processed in the processing queue is matched with the queue priority of the processing queue to which the object to be processed belongs, the number of the working threads of the processing queue with higher queue priority is more than that of the processing queue with lower queue priority, or the starting time of the working threads of the processing queue with higher queue priority is earlier than that of the working threads of the processing queue with lower queue priority;
a data processing unit 502, configured to process the data of the object to be processed according to the read object information of the object to be processed.
In an alternative embodiment, as shown in fig. 6, the apparatus may further include a priority determining unit 601, configured to:
and setting corresponding object priority for the object to be processed according to the system condition or the service requirement corresponding to the object to be processed.
In an alternative embodiment, the priority determining unit 601 is specifically configured to:
if the system condition corresponding to the object to be processed operates normally or the service requirement corresponding to the object to be processed is immediate processing, setting a first object priority for the object to be processed;
if the system condition corresponding to the object to be processed fails or the service requirement corresponding to the object to be processed is processed later, setting a second object priority for the object to be processed; the first object priority is higher than the second object priority.
In an alternative embodiment, as shown in fig. 6, the apparatus may further include a queue joining unit 602, configured to:
determining a queue priority which is the same as the object priority according to the object priority of the object to be processed;
and placing the object information of the object to be processed in a processing queue corresponding to the queue priority.
In an optional embodiment, the object information includes an identifier and a processing requirement corresponding to the object to be processed; the data processing unit 502 is specifically configured to:
searching index information matched with the identification from an index memory according to the identification corresponding to the object to be processed;
and according to the index information, determining the original storage position of the data of the object to be processed in the data storage, and processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed.
In an alternative embodiment, the data processing unit 502 is further configured to:
if the processing requirement indicates that the data of the object to be processed is deleted, deleting the data of the object to be processed from the original storage position, and deleting the index information from the index memory;
and if the processing requirement indicates that the data of the object to be processed is subjected to migration operation, migrating the data of the object to be processed from the original storage position to a target storage position included in the object information, and updating the index information in the index memory.
Based on the same inventive concept as the method embodiments, an embodiment of the present application further provides an electronic device. The electronic device may be used to perform the life cycle management process for an object. In this embodiment, the structure of the electronic device may be as shown in fig. 7, including a memory 701 and one or more processors 702.
A memory 701 for storing a computer program executed by the processor 702. The memory 701 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 701 may be a volatile memory, such as a random-access memory (RAM); the memory 701 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD), or any other medium which can be used to carry or store desired program code in the form of instructions or data structures and which can be accessed by a computer. The memory 701 may be a combination of the above memories.
The processor 702 may include one or more Central Processing Units (CPUs), or be a digital processing unit, etc. A processor 702 for implementing the life cycle management method of the above objects when calling the computer program stored in the memory 701.
The specific connection medium between the memory 701 and the processor 702 is not limited in the embodiments of the present application. In fig. 7, the memory 701 and the processor 702 are connected by a bus 703, the bus 703 is represented by a thick line in fig. 7, and the connection manner between other components is merely illustrative and not limited. The bus 703 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the life cycle management method of the object in the above embodiments.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A method for lifecycle management of an object, comprising:
for any processing queue, reading object information of an object to be processed from that processing queue based on a work thread allocated to that processing queue, and processing data of the object to be processed according to the read object information;
the processing queue comprises object information of at least one object to be processed, the object priority of the object to be processed in the processing queue matches the queue priority of the processing queue to which the object to be processed belongs, and the number of work threads of a processing queue with a higher queue priority is greater than that of a processing queue with a lower queue priority, or the start time of the work threads of a processing queue with a higher queue priority is earlier than that of the work threads of a processing queue with a lower queue priority.
2. The method of claim 1, wherein the object priority of the object to be processed is determined by:
and setting corresponding object priority for the object to be processed according to the system condition or the service requirement corresponding to the object to be processed.
3. The method of claim 2, wherein the setting a corresponding object priority for the object to be processed according to the system status or the service requirement corresponding to the object to be processed comprises:
if the system condition corresponding to the object to be processed is normal, or the service requirement corresponding to the object to be processed is immediate processing, setting a first object priority for the object to be processed;
if the system condition corresponding to the object to be processed fails or the service requirement corresponding to the object to be processed is processed later, setting a second object priority for the object to be processed; the first object priority is higher than the second object priority.
4. The method of claim 1, wherein before reading the object information of the object to be processed from the any one processing queue, the method further comprises:
determining a queue priority which is the same as the object priority according to the object priority of the object to be processed;
and placing the object information of the object to be processed in a processing queue corresponding to the queue priority.
5. The method according to claim 1, wherein the object information includes an identifier and a processing requirement corresponding to the object to be processed; the processing the data of the object to be processed according to the read object information of the object to be processed includes:
searching index information matched with the identification from an index memory according to the identification corresponding to the object to be processed;
and determining the original storage position of the data of the object to be processed in a data storage according to the index information, and processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed.
6. The method according to claim 5, wherein the processing the data of the object to be processed according to the processing requirement corresponding to the object to be processed comprises:
if the processing requirement indicates that the data of the object to be processed is deleted, deleting the data of the object to be processed from the original storage position, and deleting the index information from the index memory;
if the processing requirement indicates that the data of the object to be processed is subjected to the migration operation, migrating the data of the object to be processed from the original storage position to a target storage position included in the object information, and updating the index information in the index memory.
7. An apparatus for lifecycle management of objects, comprising:
an object information reading unit, configured to read, for any one processing queue, object information of an object to be processed from the any one processing queue based on a work thread allocated to the any one processing queue; the processing queue comprises at least one object information of an object to be processed, the object priority of the object to be processed in the processing queue is matched with the queue priority of the processing queue to which the object to be processed belongs, the number of the working threads of the processing queue with higher queue priority is greater than that of the processing queue with lower queue priority, or the starting time of the working threads of the processing queue with higher queue priority is earlier than that of the working threads of the processing queue with lower queue priority;
and the data processing unit is used for processing the data of the object to be processed according to the read object information of the object to be processed.
8. The apparatus of claim 7, wherein the apparatus further comprises a priority determination unit to:
and setting corresponding object priority for the object to be processed according to the system condition or the service requirement corresponding to the object to be processed.
9. An electronic device, comprising a processor and a memory, wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 6.
10. A computer-readable storage medium, characterized in that it comprises program code for causing an electronic device to carry out the steps of the method according to any one of claims 1 to 6, when said program code is run on said electronic device.
CN202111680650.1A 2021-12-27 2021-12-27 Object life cycle management method and device, electronic equipment and storage medium Pending CN114443248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111680650.1A CN114443248A (en) 2021-12-27 2021-12-27 Object life cycle management method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111680650.1A CN114443248A (en) 2021-12-27 2021-12-27 Object life cycle management method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114443248A true CN114443248A (en) 2022-05-06

Family

ID=81366245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111680650.1A Pending CN114443248A (en) 2021-12-27 2021-12-27 Object life cycle management method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114443248A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116737400A (en) * 2023-08-15 2023-09-12 中移(苏州)软件技术有限公司 Queue data processing method and device and related equipment
CN116737400B (en) * 2023-08-15 2023-11-03 中移(苏州)软件技术有限公司 Queue data processing method and device and related equipment

Similar Documents

Publication Publication Date Title
US11425194B1 (en) Dynamically modifying a cluster of computing nodes used for distributed execution of a program
US11340803B2 (en) Method for configuring resources, electronic device and computer program product
US8321558B1 (en) Dynamically monitoring and modifying distributed execution of programs
US8418181B1 (en) Managing program execution based on data storage location
US9280390B2 (en) Dynamic scaling of a cluster of computing nodes
US8533334B2 (en) Message binding processing technique
CN106371894B (en) Configuration method and device and data processing server
US20190052528A1 (en) Network function virtualization management orchestration apparatus, method
US9477460B2 (en) Non-transitory computer-readable storage medium for selective application of update programs dependent upon a load of a virtual machine and related apparatus and method
CN111324427B (en) Task scheduling method and device based on DSP
CN111176818B (en) Distributed prediction method, device, system, electronic equipment and storage medium
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
JP2005234637A (en) Method and device for managing computer resource and processing program
US20050278584A1 (en) Storage area management method and system
CN111143331A (en) Data migration method and device and computer storage medium
CN107977275B (en) Task processing method based on message queue and related equipment
CN113010265A (en) Pod scheduling method, scheduler, memory plug-in and system
CN114443248A (en) Object life cycle management method and device, electronic equipment and storage medium
CN112631994A (en) Data migration method and system
CN116594734A (en) Container migration method and device, storage medium and electronic equipment
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN114116158A (en) Task scheduling method and system based on SD-WAN system
US10884950B2 (en) Importance based page replacement
CN115268950A (en) Mirror image file importing method and device
CN115061813A (en) Cluster resource management method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination