CN117251383B - Software component detection method, device, equipment and storage medium based on cache - Google Patents


Info

Publication number: CN117251383B (application CN202311532684.5A)
Authority: CN (China)
Prior art keywords: cache; level cache; data; queue; target data
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN117251383A
Inventor: 万振华
Current and original assignee: Seczone Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Seczone Technology Co Ltd
Priority to CN202311532684.5A; publication of CN117251383A; application granted; publication of CN117251383B

Classifications

    • G06F11/3604: Software analysis for verifying properties of programs (under G06F11/36, preventing errors by testing or debugging software)
    • G06F12/0811: Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G06F16/221: Column-oriented storage; management thereof
    • G06F16/24552: Database cache management
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to the technical field of data caching and discloses a cache-based software component detection method, apparatus, device, and storage medium. The method comprises the following steps: constructing a three-level cache through a preset scripting language and a preset cache component; splitting preset software component service data to obtain service data queues; storing the service data queues into the three-level cache one by one; and querying the three-level cache for the target data queues corresponding to preset target software component data, then merging the data streams of the queried target data queues to obtain the target software component data. Implementing the method of the application effectively improves memory performance, accelerates data-stream processing, and improves overall interface response performance, thereby improving the efficiency of software component detection.

Description

Software component detection method, device, equipment and storage medium based on cache
Technical Field
The present disclosure relates to the field of data caching technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting software components based on caching.
Background
As business traffic grows, data volume increases continuously. When a large amount of data is stored, a data engine is used; for enterprises with massive data, a distributed cluster is generally adopted, and the concurrency of data access is improved by adding machines. However, to improve performance when massive data is accessed, performance when the data is stored must also be optimized, thereby improving the efficiency of data detection.
In practice, delivery-type client deployments often run on a single machine with low-to-mid configuration. Access to the underlying data is slow, detecting a file takes a long time, and memory exhaustion can even occur, so the efficiency of software component detection is low.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for detecting software components based on cache, which mainly aim to solve the problem of low efficiency in detecting the software components in the related technology.
In order to achieve the above object, the present application provides a cache-based software component detection method, including: constructing a three-level cache through a preset scripting language and a preset cache component; splitting preset software component service data to obtain service data queues; storing the service data queues into the three-level cache one by one; and querying the three-level cache for the target data queues corresponding to preset target software component data, then merging the data streams of the queried target data queues to obtain the target software component data.
In order to solve the above problem, the present application further provides a cache-based software component detection apparatus, including: a three-level cache construction module, configured to construct a three-level cache through a preset scripting language and a preset cache component; a data splitting module, configured to split preset software component service data to obtain service data queues; a service data queue storage module, configured to store the service data queues into the three-level cache one by one; and a data stream merging module, configured to query the three-level cache for the target data queues corresponding to preset target software component data and merge the data streams of the queried target data queues to obtain the target software component data.
In order to solve the above-mentioned problem, the present application further provides a device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, so that the at least one processor can perform the cache-based software component detection method described above.
In order to solve the above-mentioned problems, the present application further provides a storage medium having stored therein at least one computer program that is executed by a processor in the device to implement the above-mentioned cache-based software component detection method.
According to the embodiments of the application, the server configuration is read through an automated scripting language, and the configuration parameters of the columnar database are generated from it so that the columnar database achieves maximum performance. The read cache and write cache of the columnar database are enabled, and a high-performance local cache is introduced as the second-level cache, relieving database access pressure through a custom cache expiration policy and cache update policy. A remote dictionary service component is then introduced and combined with the local cache to form a three-level cache: hot-spot data with little fluctuation is cached locally, and complex services and batch retrieval services are split using a single-threaded multi-channel scheme so that CPU performance is used to improve data-processing capacity. Therefore, the cache-based software component detection method, apparatus, device, and storage medium of the present application can solve the problem of low efficiency in detecting software components.
Drawings
FIG. 1 is a flowchart of a method for detecting a software component based on a cache according to an embodiment of the present application;
FIG. 2 is a flow chart of a software component business data query according to an embodiment of the present application;
FIG. 3 is a detailed flowchart of a software component data query according to an embodiment of the present application;
FIG. 4 is a functional block diagram of a cache-based software component detection apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device for implementing a method for detecting a software component based on a cache according to an embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the related art, software component data is stored in MySQL, ClickHouse, or MongoDB, and when the data is detected, performance often drops sharply depending on the storage type. Enterprises with massive data adopt distributed clusters and add machines to increase data-access concurrency, which greatly increases budget cost; meanwhile, a single machine's performance is not fully exploited and is wasted, so software component data detection efficiency remains low. The embodiments of the application therefore provide a cache-based software component detection method.
Referring to fig. 1, a flowchart of a method for detecting a software component based on a cache according to an embodiment of the present application is shown. In this embodiment, the method for detecting software components based on cache includes the following steps:
s11, constructing three-level cache through a preset script language and a preset cache component.
In one practical application scenario, the performance of data engines commonly used by enterprises does not scale with rapid data growth: for example, the performance of MySQL (a relational database) drops sharply when millions of rows are stored, and the performance of ClickHouse (a columnar database management system) degrades when hundreds of millions of rows are stored. Enterprises with massive data generally use a distributed cluster and add machines to increase data-access concurrency, which greatly increases budget cost while each single machine's performance is not fully used and is wasted. The present application therefore improves single-machine data-access capability and interface performance, and relieves data-access pressure, based on caching technology.
In this embodiment of the present application, in the three-level cache, the first-level cache is the storage space of the database itself, the second-level cache is Caffeine (a high-performance local cache library), and the third-level cache is Redis (Remote Dictionary Server). Through the three-level cache, single-machine data-access capability can be improved, thereby improving SCA (Software Composition Analysis) detection efficiency.
In this embodiment of the present application, three levels of caching are constructed by a preset scripting language and a preset caching component, including: generating configuration parameters of a preset column database through a preset script language, and taking the configuration parameters and the column database after the preset caching strategy is applied as a first-level cache; extracting a local cache component in the cache component, configuring a cache policy in the local cache component, and taking the local cache component configured by the cache policy as a second-level cache; taking a remote dictionary service component in the cache component as a third-level cache; combining the first level cache, the second level cache and the third level cache into a three-level cache.
In detail, the configuration parameters of the columnar database (ClickHouse) are set from the existing server configuration via an automated scripting language (such as Python, Go, or Shell) so that the columnar database achieves its highest performance; the read_cache and write_cache of the columnar database are enabled, and the columnar database with both caches enabled, together with a custom cache policy, is used as the first-level cache of the three-level cache. The cache policy includes query caching: the columnar database supports caching query results and returning the cached result directly when the same query is executed again, so the query does not need to be re-executed; the policy also includes a cache invalidation policy and a cache eviction policy. When the columnar database is set as the first-level cache, its tuning parameters must be generated automatically from the server's capability so that it reaches its highest performance.
In the embodiment of the present application, the configuration parameters include the tuning configuration parameters and the cache-enabling command parameters of the columnar database; the database configuration is then optimized automatically based on these parameters to ensure the formation of the first-level cache.
In this embodiment of the present application, generating the configuration parameters of the preset columnar database through the preset scripting language includes: reading the configuration parameters of the server through the scripting language; generating the tuning parameters of the columnar database according to those configuration parameters; generating the cache-enabling parameters of the columnar database according to a preset cache command; and generating the configuration parameters of the columnar database from the tuning parameters and the cache-enabling parameters.
In detail, the server configuration parameters include the CPU core count, memory size, disk type, disk speed, and capacity. The server configuration is read with a scripting language (such as Python, Golang, or Shell), and the tuning parameters of the columnar database are generated automatically from the server's performance indicators; that is, script code maps the server configuration parameters to columnar-database tuning parameters based on an evaluation of the server's performance. The tuning parameters include: the maximum thread count, which controls how many query and task threads the columnar database may run simultaneously (increasing it improves concurrent-query throughput); the maximum memory usage, which limits how much memory each query or task may use (when a query or task exceeds it during execution, the columnar database spills the work to disk; setting this parameter reasonably avoids memory overflow and overuse of the disk, improving overall system performance); and even distribution of merge-tree reads, which controls the strategy the columnar database uses when reading distributed table data, making it read from different nodes in the cluster as evenly as possible to improve query performance and balance load. These parameters can be adjusted dynamically by modifying the columnar database's configuration file (such as clickhouse-server.xml) or with a SET statement.
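The mapping from server configuration to tuning parameters described above can be sketched as a small Python script. The setting names (max_threads, max_memory_usage, load_balancing) are real ClickHouse settings, but the sizing heuristics below are illustrative assumptions, not values taken from the patent:

```python
def generate_clickhouse_tuning(cpu_cores: int, mem_bytes: int) -> dict:
    """Map server configuration to ClickHouse tuning parameters.

    The heuristics (use all cores; cap per-query memory at half of RAM)
    are illustrative assumptions.
    """
    return {
        # Number of threads a single query may use.
        "max_threads": cpu_cores,
        # Per-query memory ceiling; queries exceeding it spill work to disk.
        "max_memory_usage": mem_bytes // 2,
        # Spread reads across replicas of a distributed table.
        "load_balancing": "round_robin",
    }

def render_set_settings(params: dict) -> str:
    """Render the parameters as a SET statement for clickhouse-client."""
    assignments = ", ".join(
        f"{k} = '{v}'" if isinstance(v, str) else f"{k} = {v}"
        for k, v in params.items()
    )
    return f"SET {assignments}"

if __name__ == "__main__":
    params = generate_clickhouse_tuning(cpu_cores=8, mem_bytes=16 * 2**30)
    print(render_set_settings(params))
```

The same dictionary could equally be serialized into clickhouse-server.xml; the SET form is shown because the description mentions dynamic adjustment.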
Specifically, after the ClickHouse parameters are optimized, the ClickHouse caching strategy is customized: the read_cache and write_cache parameters are set to appropriate values in the established ClickHouse configuration file or with a SET statement. The read_cache caches query data and the write_cache caches write operations; the cache-enabling parameters include both, so both the read cache and the write cache of ClickHouse are enabled.
Further, after the configured ClickHouse is used as the first-level cache, an efficient local cache component (Caffeine) is introduced, and the cache is updated and synchronized automatically through a custom cache invalidation policy and cache synchronization policy, forming a second-level cache that reduces direct hits on the database. The cache invalidation policy ensures data accuracy, because cached data may become stale over time; it includes, but is not limited to, time-based and dependency-based policies. The cache synchronization policy means that when a piece of data expires in the first-level cache, it can be reloaded from the database asynchronously or synchronously and the first-level cache updated; an appropriate policy, such as write-back, refresh, or a background-thread policy, can be chosen according to service requirements and access patterns. The local cache component configured with these cache policies is used as the second-level cache.
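Caffeine itself is a Java library; as a language-neutral illustration, the time-based invalidation policy with synchronous reload described above can be sketched in Python (class and parameter names are hypothetical, not from the patent):

```python
import time

class LocalCache:
    """Minimal second-level cache with time-based invalidation.

    A simplified stand-in for an expire-after-write policy; the reload
    callback models the synchronous refresh strategy (e.g. re-querying
    the first-level cache or the database on a miss).
    """

    def __init__(self, ttl_seconds: float, reload_fn):
        self._ttl = ttl_seconds
        self._reload = reload_fn
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]               # fresh hit: no database access
        value = self._reload(key)         # expired or missing: reload
        self._store[key] = (value, now + self._ttl)
        return value
```

A dependency-based policy would replace the timestamp check with a version check against the source; the structure stays the same.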
In this implementation of the application, using the remote dictionary service component of the cache component as the third-level cache includes: setting a listener in the remote dictionary service component, where the listener monitors the number of accesses and the number of changes of target data in the first-level cache and preheats the target data in the first-level cache accordingly; the remote dictionary service component with the listener set is then used as the third-level cache.
In detail, a remote dictionary service (Redis) component is introduced as the third-level cache, so that data in ClickHouse that fluctuates little and is accessed frequently is cached during project startup. A custom listener is defined in the remote dictionary service cache to monitor data fluctuation and Caffeine data changes; data in ClickHouse with little fluctuation and many accesses is preheated, i.e., loaded into the cache in advance through a preheating mechanism, and an optimized cache-synchronization strategy addresses the cache-transaction consistency problem. The first-level, second-level, and third-level caches are combined into the three-level cache, which reduces data accesses hitting the database directly and prevents I/O performance from degrading overall data response. Note that on the storage side the database is the first level, Caffeine the second, and the remote dictionary service the third; on the query side, Caffeine is treated as the first level, the remote dictionary service as the second, and the database as the third.
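The preheating decision the listener makes can be sketched as follows. The patent only states that hot, low-churn data is loaded in advance; the counters and thresholds here are illustrative assumptions:

```python
class PreheatListener:
    """Tracks access and change counts per key and selects hot,
    low-churn keys for preheating into the third-level cache.
    Threshold values are illustrative, not from the patent."""

    def __init__(self, min_accesses: int = 100, max_changes: int = 5):
        self.accesses = {}
        self.changes = {}
        self.min_accesses = min_accesses
        self.max_changes = max_changes

    def on_access(self, key):
        self.accesses[key] = self.accesses.get(key, 0) + 1

    def on_change(self, key):
        self.changes[key] = self.changes.get(key, 0) + 1

    def keys_to_preheat(self):
        """Keys read often but rarely changed are worth loading early."""
        return [k for k, n in self.accesses.items()
                if n >= self.min_accesses
                and self.changes.get(k, 0) <= self.max_changes]
```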
Further, the three-level cache improves data-response efficiency; for complex services and batched services, the data must be split and the split pieces stored separately in the three-level cache, improving data response time.
S12, splitting the preset software component service data to obtain service data queues.
In this embodiment, the service data queues are obtained by splitting the software component service data, so that the service data is divided into different data queues.
In this embodiment of the present application, splitting the preset software component service data to obtain the service data queues includes: splitting the preset software component service data according to preset service requirements to obtain the software component split data; and distributing the split data into preset target queues to obtain the service data queues.
In detail, the software component service data is split according to the detected service demand, multiple queues are created, and the split data is distributed evenly to the target queues. Each queue stores its share of the split service data through queueing technology, which includes, but is not limited to, Kafka (a high-throughput distributed messaging system) and RabbitMQ (an open-source message broker).
Further, after the software component service data is split, the pieces can be stored in different levels of the three-level cache. If all the data were stored in the same cache, a query might stop after missing in that cache; storing the split data across cache levels prevents this and improves data-access efficiency.
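The splitting and even distribution step can be sketched without a message broker; round-robin assignment stands in for Kafka/RabbitMQ partitioning, and the record shape (a "seq" field plus a payload) is an illustrative assumption:

```python
def shard_into_queues(records, num_queues: int):
    """Split business data and distribute it evenly across queues.

    Round-robin keeps the queues balanced; each record is tagged
    with its sequence number so the pieces can be merged back in
    order later (field names are illustrative).
    """
    queues = [[] for _ in range(num_queues)]
    for seq, record in enumerate(records):
        queues[seq % num_queues].append({"seq": seq, "payload": record})
    return queues
```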
S13, storing the service data queues into the three-level cache one by one.
In the embodiment of the application, the service data queues after splitting the service data of the software component are stored in the first-level cache, the second-level cache and the third-level cache in the three-level cache separately, so that the high efficiency of data access is ensured.
In this embodiment of the present application, storing the service data queues into the three-level cache one by one means grouping the queues by their queue sequence numbers: with a three-level cache, every three service data queues may form a group, and the queues are stored into the respective levels of the three-level cache in group order via the API each cache provides. After storage completes, it must be confirmed promptly that the data has been cached successfully, and the related state and metadata must be updated; this avoids problems such as duplicate caching or data loss. After one batch of data is cached, the next batch is taken from the service data queue for processing and caching. Storing the service data queues into the three-level cache in this way enables fast, efficient processing and management of the data and improves system performance and reliability; the caches can also provide data persistence and backup mechanisms, so that data is not lost due to system failures or other causes.
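The grouping rule above (every three queues form a group, one queue per cache level) can be sketched as a placement function; the dict-of-lists return shape is an illustrative choice:

```python
def assign_queues_to_levels(queues, num_levels: int = 3):
    """Spread service data queues across the cache levels: with a
    three-level cache, every num_levels consecutive queues form a
    group, and each member of a group lands in a different level
    (0 = first level, num_levels - 1 = last level)."""
    placement = {level: [] for level in range(num_levels)}
    for idx, queue in enumerate(queues):
        placement[idx % num_levels].append(queue)
    return placement
```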
Further, after all the software component service data is split, the pieces are stored in the respective levels of the three-level cache; when a data query is needed, the service data queues in all levels of the three-level cache are queried, which improves the efficiency of querying the software component service data.
S14, inquiring a target data queue corresponding to the preset target software component data in the three-level cache, and merging the data streams of the inquired target data queue to obtain the target software component data.
In the embodiment of the application, for the target software component data to be queried, the target data queue corresponding to the target SCA data is queried based on the three-level cache, so that omission can be prevented from occurring during data query.
In this embodiment of the present application, querying the three-level cache for the target data queue corresponding to preset target software component data includes: querying whether the target data queue corresponding to the preset target software component service data exists in the first-level cache; outputting the target data queue if it exists in the first-level cache; otherwise querying the second-level cache; if the target data queue exists in the second-level cache, outputting it, synchronizing it to a preset cache synchronization queue, and updating the three-level cache according to the target data queue in the cache synchronization queue; otherwise querying the third-level cache; if the target data queue exists in the third-level cache, outputting it; and if it does not exist in the third-level cache, querying the preset database for it, synchronizing it to the cache synchronization queue, and updating the three-level cache according to the target data queue in the cache synchronization queue.
In detail, when the software component service data query is performed over the three-level cache, the query can run against any level, so the data can be found completely and query efficiency improves. As shown in fig. 2, a flowchart of the software component service data query from the service-flow point of view, when querying data, Caffeine serves as the level-one cache, Redis as the level-two cache, and the ClickHouse database as the level-three cache. During SCA detection, service data is obtained from the SCA service center; using the queue sequence numbers of the split service data queues, the data is first queried in the first-level Caffeine cache. If the target data queue corresponding to the target SCA data exists there, the queried data queue is returned; if not, the second-level Redis cache is queried, and on a hit the queried data queue is returned and synchronized to the cache synchronization queue. If the data still does not exist, the third-level database cache is queried; on a hit the queried data queue is returned, and on a miss the database itself is queried. A custom listener is set in Redis; the data is obtained through the listener, and the first-level, second-level, and third-level caches are updated according to the cache policy.
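The query-side waterfall just described can be sketched with plain dictionaries standing in for the real cache components (Caffeine, Redis, ClickHouse, and the backing database); the sync-on-hit points follow the description above:

```python
def query_target_queue(key, caffeine, redis_cache, clickhouse, database, sync_queue):
    """Query-side waterfall: Caffeine as level one, Redis as level two,
    ClickHouse as level three, then the database itself.  Hits in Redis
    and in the database are appended to the cache synchronization queue
    so a listener can back-fill the three-level cache."""
    if key in caffeine:                       # level one: local cache
        return caffeine[key]
    if key in redis_cache:                    # level two: remote dictionary service
        sync_queue.append((key, redis_cache[key]))
        return redis_cache[key]
    if key in clickhouse:                     # level three: columnar database cache
        return clickhouse[key]
    value = database.get(key)                 # final fallback: the database
    if value is not None:
        sync_queue.append((key, value))
    return value
```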
Further, when the target data queue is queried in the Redis cache, the queried SCA data is synchronized to the cache synchronization queue, so that the problem of cache transaction consistency is solved.
In this embodiment of the present application, updating the third level cache according to the target data queue in the cache synchronization queue includes: monitoring a target data queue in a cache synchronous queue through a monitor in a third-level cache; and updating the target data queue into the three-level cache according to a preset cache strategy.
In detail, a custom listener is set in the Redis cache to monitor data changes, including Caffeine data changes, and data in ClickHouse that changes little and is accessed often is preheated. The listener thus monitors the target data queue in the cache synchronization queue and synchronizes it into the first-level, second-level, and third-level caches of the three-level cache based on the custom caching strategy of each cache.
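The listener's back-fill behaviour can be sketched as draining the synchronization queue into every cache level; each level is a plain dict here, and the per-level custom cache policies are omitted for brevity:

```python
def drain_sync_queue(sync_queue, levels):
    """Take each (key, value) pair off the cache synchronization queue
    and write it into every level of the three-level cache.  In the
    patent each level applies its own caching strategy; this sketch
    writes unconditionally."""
    while sync_queue:
        key, value = sync_queue.pop(0)
        for level in levels:
            level[key] = value
```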
Further, the target data queue obtained from the three-level cache holds split data, so the pieces must be merged to obtain the complete target SCA data; CPU and memory performance is thereby used to accelerate data-stream processing and improve data response time.
In this embodiment of the present application, performing data stream merging on the queried target data queue to obtain target software component data includes: creating a consumer group corresponding to the target data queue by using a preset stream instance; performing data logic processing on the data in the target data queue according to the consumer group; and splicing the target data queues after the data logic processing according to the preset queue sequence numbers to obtain the target software component data.
In detail, the Stream instance refers to a Redis Stream: multiple consumer groups are created on the Stream, each group is responsible for consuming data in the Stream, and a consumer ID is designated within each group, so that a consumer can read data from the Stream with the XREADGROUP command and process it accordingly. Different consumer groups can process the data according to their own needs, and a multithreaded or distributed processing framework (such as Flink) can process the data in parallel. Each consumer acquires data from its queue and performs the corresponding processing; according to the specific business requirements, consumers apply data-logic processing to the acquired data, including cleaning, computing, aggregation, and other operations. Following the ideas of Flink, the data can be processed in real time in a stream-processing fashion. After the data of each queue is processed, the results are merged and summarized using in-memory data structures (such as Map or List), and the final result is output to a target system or returned to the client, for example via a message queue, a database, or a file. Separately, a Stream in Java 8 is an abstract concept and a new style of collection operation: a sequence of elements generated from a source that supports data-processing operations, where the source can be an array, a file, a collection, or a function. A Stream is not a collection element and not a data structure; it does not hold data, and its main purpose is computation.
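The final merge by queue sequence number can be sketched as follows; it assumes each entry carries a "seq" field (as in the sharding step), and real deployments would consume the queues in parallel via consumer groups before this merge:

```python
def merge_target_queues(queues):
    """Merge the queried target data queues back into complete data by
    sequence number, restoring the original order of the split pieces."""
    entries = [entry for queue in queues for entry in queue]
    entries.sort(key=lambda e: e["seq"])      # restore original order
    return [e["payload"] for e in entries]
```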
Specifically, fig. 3 shows a detailed flowchart of the software component data query. First, a scripting language (Python, Go, or Shell) is used to automatically tune the ClickHouse parameters and start the three-level cache, which adopts a timed synchronous cache-hit strategy. The data is first queried in the Caffeine first-level cache, while the synchronization cache queue is monitored and its data is updated into the three-level cache through the customized cache strategy in Caffeine. If the data is not found in the Caffeine first-level cache, it is queried in the Redis second-level cache; the synchronization cache queue is again monitored and its data is updated into the three-level cache through the custom cache strategy in Redis. The data is then split using a Stream, and data stream merging is performed to obtain the complete SCA data.
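The tiered query flow of fig. 3 can be sketched in Python as follows. The dictionaries stand in for the Caffeine cache, the Redis cache, and the backing database, and the synchronization queue is drained inline rather than on a timer; all names and policies are illustrative simplifications, not the patented implementation.

```python
import queue

class ThreeLevelCache:
    """Look-aside flow: query the local (Caffeine-style) cache first, then the
    remote (Redis-style) cache, then the backing database. A hit below the top
    level is pushed onto a synchronization queue and written back into the
    cache levels."""

    def __init__(self, database):
        self.local = {}         # stand-in for the Caffeine cache
        self.remote = {}        # stand-in for the Redis cache
        self.database = database
        self.sync_queue = queue.Queue()

    def get(self, key):
        if key in self.local:
            return self.local[key]
        value = self.remote.get(key, self.database.get(key))
        if value is not None:
            self.sync_queue.put((key, value))
            self._drain()       # the patent drains on a timer; inline here
        return value

    def _drain(self):
        # Update every cache level from the synchronization queue.
        while not self.sync_queue.empty():
            k, v = self.sync_queue.get()
            self.local[k] = v
            self.remote[k] = v
```

The first lookup of a key falls through to the database and populates both cache levels via the synchronization queue; subsequent lookups are served locally.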
Fig. 4 is a functional block diagram of a software component detection apparatus based on cache according to an embodiment of the present application.
The cache-based software component detection apparatus 400 may be installed in a device. Depending on the implemented functions, the cache-based software component detection apparatus 400 may include a three-level cache construction module 401, a data splitting module 402, a service data queue storage module 403, and a data stream merging module 404. A module of the present application, which may also be referred to as a unit, is a series of computer program segments that are stored in a memory of the device, can be executed by a processor of the device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the three-level cache construction module 401 is configured to construct a three-level cache through a preset scripting language and a preset cache component;
the data splitting module 402 is configured to perform data splitting on preset software component service data to obtain a service data queue;
a service data queue storage module 403, configured to store the service data queues into the three-level cache one by one;
and the data stream merging module 404 is configured to query a target data queue corresponding to the preset target software component data in the three-level cache, and perform data stream merging on the queried target data queue to obtain the target software component data.
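The four modules above can be wired together as, illustratively, one object; the following Python sketch uses simplified placeholder bodies (dictionaries for the cache levels, round-robin splitting) that are assumptions for illustration, not the patented implementations.

```python
class CacheBasedDetector:
    """Illustrative wiring of modules 401-404."""

    def build_three_level_cache(self):                # module 401
        self.levels = [dict(), dict(), dict()]        # L1 / L2 / L3 stand-ins

    def split_service_data(self, records, n_queues):  # module 402
        # Round-robin split of component records into numbered service queues.
        queues = {i: [] for i in range(n_queues)}
        for i, record in enumerate(records):
            queues[i % n_queues].append(record)
        return queues

    def store_queues(self, queues):                   # module 403
        # Store each service data queue into every cache level, one by one.
        for seq, q in queues.items():
            for level in self.levels:
                level[seq] = list(q)

    def query_and_merge(self, seqs):                  # module 404
        # Query each level in order (first hit wins) and splice the queues
        # back together by sequence number.
        merged = []
        for seq in sorted(seqs):
            for level in self.levels:
                if seq in level:
                    merged.extend(level[seq])
                    break
        return merged
```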
In detail, each module in the cache-based software component detection apparatus 400 in the embodiment of the present application adopts the same technical means as the cache-based software component detection method described above with reference to figs. 1 to 3, and can produce the same technical effects, which are not repeated herein.
Fig. 5 is a schematic structural diagram of an electronic device for implementing a method for detecting a software component based on a cache according to an embodiment of the present application.
The electronic device may comprise a processor 501, a memory 502, a communication bus 503 and a communication interface 504, and may further comprise a computer program stored in the memory 502 and executable on the processor 501, such as a software component detection program based on a cache.
The processor 501 may in some embodiments be formed by an integrated circuit, for example a single packaged integrated circuit, or by a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, combinations of a graphics processor and various control chips, etc. The processor 501 is the control unit (Control Unit) of the device: it connects the various components of the entire device using various interfaces and lines, and executes the various functions of the device and processes data by running or executing programs or modules stored in the memory 502 (e.g., executing a cache-based software component detection program) and invoking data stored in the memory 502.
The memory 502 includes at least one type of storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, etc. In some embodiments, the memory 502 may be an internal storage unit of the electronic device, such as a hard disk of the device. In other embodiments, the memory 502 may be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the memory 502 may include both an internal storage unit and an external storage device of the electronic device. The memory 502 may be used not only for storing application software installed in the electronic device and various types of data, such as the code of the cache-based software component detection program, but also for temporarily storing data that has been output or is to be output.
The communication bus 503 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable connected communication between the memory 502 and the at least one processor 501 etc.
The communication interface 504 is used for communication between the device described above and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a Wi-Fi interface, a Bluetooth interface, etc.), typically used to establish a communication connection between the device and other devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, it may also be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device and for displaying a visual user interface.
Although only an electronic device with certain components is shown, it will be understood by those skilled in the art that the structure shown in the figure does not constitute a limitation on the electronic device, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the device may further include a power source (such as a battery) for powering the various components. Preferably, the power source may be logically connected to the at least one processor 501 via a power management device, so that functions such as charge management, discharge management, and power consumption management are performed through the power management device. The power source may also include one or more of a direct-current or alternating-current power supply, a recharging device, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like. The device may also include various sensors, a Bluetooth module, a Wi-Fi module, etc., which are not described in detail herein.
It should be understood that the embodiments are for illustrative purposes only and that the scope of the patent application is not limited to this configuration.
In particular, for the specific implementation of the above instructions by the processor 501, reference may be made to the description of the relevant steps in the embodiments corresponding to the drawings, which is not repeated herein.
Further, the modules/units integrated in the device, if implemented in the form of software functional units and sold or used as separate products, may be stored in a storage medium. The storage medium may be volatile or nonvolatile. For example, the computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).

The present application also provides a storage medium storing a computer program which, when executed by a processor of a device, can implement the cache-based software component detection method of any of the above embodiments. The storage medium may be volatile or nonvolatile. For example, the storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
In the several embodiments provided in this application, it should be understood that the disclosed apparatus, device, and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of modules is merely a logical function division, and other manners of division may be implemented in practice.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit can be realized in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The embodiments of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims can also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application.

Claims (9)

1. A method for detecting software components based on cache, the method comprising:
generating configuration parameters of a preset column database through a preset script language, and taking the configuration parameters and the column database after the application of a preset caching strategy as a first level cache;
extracting a local cache component in a preset cache component, configuring a cache policy in the local cache component, and taking the local cache component configured by the cache policy as a second-level cache;
taking a remote dictionary service component in the cache component as a third-level cache;
combining the first-level cache, the second-level cache and the third-level cache into a three-level cache; the first-level cache of the three-level cache is a storage space of the database, the second-level cache is a local cache, and the third-level cache is a remote dictionary service;
carrying out data distribution on preset software component service data to obtain a service data queue;
storing the service data queues into the three-level cache one by one;
inquiring whether a target data queue corresponding to the service data of the preset target software component exists in a first level cache in the three-level cache;
outputting the target data queue when the target data queue exists in the first-level cache;
querying, when the target data queue does not exist in the first-level cache, whether the target data queue exists in a second-level cache of the three-level cache;
when the target data queue exists in the second-level cache, outputting the target data queue, synchronizing the target data queue into a preset cache synchronizing queue, and updating the three-level cache according to the target data queue in the cache synchronizing queue;
inquiring, when the target data queue does not exist in the second-level cache, whether the target data queue exists in a third-level cache of the three-level cache;
outputting the target data queue when the target data queue exists in the third-level cache;
when the target data queue does not exist in the third-level cache, querying a preset database for the target data queue, synchronizing the target data queue into the cache synchronizing queue, and updating the three-level cache according to the target data queue in the cache synchronizing queue;
and carrying out data stream combination on the inquired target data queue to obtain the target software component data.
2. The method for detecting software components based on cache as recited in claim 1, wherein said generating configuration parameters of a preset columnar database by a preset scripting language comprises:
reading configuration parameters of a server through the script language;
generating tuning parameters of the column database according to the configuration parameters;
generating a cache starting parameter of the column database according to a preset cache command;
and generating configuration parameters of the column database according to the tuning parameters and the cache starting parameters.
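As an illustration of claim 2's parameter generation, the short Python sketch below derives tuning parameters from the server's reported resources and attaches a cache startup parameter. The ClickHouse setting names and the sizing heuristics are assumptions for illustration, not values specified by the patent.

```python
def generate_clickhouse_config(server, cache_command="SET use_query_cache = 1"):
    # Tuning parameters sized from the server's configuration parameters.
    # The knob names and divisors below are illustrative assumptions.
    tuning = {
        "max_threads": server["cpu_cores"],
        "max_memory_usage": server["memory_bytes"] // 2,
        "mark_cache_size": server["memory_bytes"] // 8,
    }
    # Combine tuning parameters with the cache startup parameter.
    return {"tuning": tuning, "cache_startup": cache_command}
```

Such a script would be run (per the embodiment) in Python, Go, or Shell before the three-level cache is started.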
3. The method for detecting software components based on cache as recited in claim 1, wherein said taking a remote dictionary service component in the cache component as a third-level cache comprises:

setting a listener in the remote dictionary service component, wherein the listener monitors the number of data accesses and the number of data changes of target data in the first-level cache, and preheats the target data in the first-level cache according to the number of data accesses and the number of data changes;

taking the remote dictionary service component with the set listener as the third-level cache.
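The listener of claim 3 can be illustrated with a minimal Python sketch: it counts accesses and changes per key and nominates frequently read, rarely changed keys for preheating. The threshold and the "rarely changed" rule are illustrative assumptions, not taken from the patent.

```python
from collections import Counter

class PreheatListener:
    # Counts per-key accesses and changes in the first-level cache and
    # selects hot, stable keys to preheat.

    def __init__(self, hot_threshold=3):
        self.accesses = Counter()
        self.changes = Counter()
        self.hot_threshold = hot_threshold

    def on_access(self, key):
        self.accesses[key] += 1

    def on_change(self, key):
        self.changes[key] += 1

    def keys_to_preheat(self):
        # Hot and stable: read at least hot_threshold times, and changed at
        # most half as often as it is read (an assumed stability rule).
        return [k for k, n in self.accesses.items()
                if n >= self.hot_threshold and self.changes[k] <= n // 2]
```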
4. The method for detecting software components based on cache as claimed in claim 1, wherein said carrying out data distribution on preset software component service data to obtain a service data queue comprises:
carrying out data splitting on preset software component service data according to preset service requirements to obtain software component splitting data;
and distributing the split data of the software component into a preset target queue to obtain a service data queue.
5. The method for detecting software components based on cache as recited in claim 1, wherein said updating the three-level cache according to the target data queue in the cache synchronizing queue comprises:
monitoring the target data queue in the cache synchronizing queue through a listener in the third-level cache;
and updating the target data queue into the three-level cache according to a preset cache strategy.
6. The method for detecting software components based on cache as claimed in claim 1, wherein said performing data stream merging on the queried target data queue to obtain the target software component data comprises:
creating a consumer group corresponding to the target data queue by using a preset stream instance;
performing data logic processing on the data in the target data queue according to the consumer group;
and splicing the target data queues after the data logic processing according to the preset queue sequence numbers to obtain the target software component data.
7. A cache-based software component detection apparatus, the apparatus comprising:
the three-level cache construction module is used for generating configuration parameters of a preset column database through a preset script language, and taking the configuration parameters and the column database after the preset cache policy is applied as a first-level cache; extracting a local cache component in a preset cache component, configuring a cache policy in the local cache component, and taking the local cache component configured by the cache policy as a second-level cache; taking a remote dictionary service component in the cache component as a third-level cache; combining the first-level cache, the second-level cache and the third-level cache into a three-level cache; the first-level cache of the three-level cache is a storage space of the database, the second-level cache is a local cache, and the third-level cache is a remote dictionary service;
the data distribution module is used for distributing data of preset software component service data to obtain a service data queue;
the service data queue storage module is used for storing the service data queues into the three-level cache one by one;
the data stream merging module is used for inquiring whether a target data queue corresponding to the preset target software component service data exists in a first-level cache of the three-level cache; outputting the target data queue when the target data queue exists in the first-level cache; querying, when the target data queue does not exist in the first-level cache, whether the target data queue exists in a second-level cache of the three-level cache; when the target data queue exists in the second-level cache, outputting the target data queue, synchronizing the target data queue into a preset cache synchronizing queue, and updating the three-level cache according to the target data queue in the cache synchronizing queue; inquiring, when the target data queue does not exist in the second-level cache, whether the target data queue exists in a third-level cache of the three-level cache; outputting the target data queue when the target data queue exists in the third-level cache; when the target data queue does not exist in the third-level cache, querying a preset database for the target data queue, synchronizing the target data queue into the cache synchronizing queue, and updating the three-level cache according to the target data queue in the cache synchronizing queue; and performing data stream merging on the queried target data queue to obtain the target software component data.
8. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the cache-based software component detection method of any one of claims 1 to 6.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the cache-based software component detection method according to any one of claims 1 to 6.
CN202311532684.5A 2023-11-17 2023-11-17 Software component detection method, device, equipment and storage medium based on cache Active CN117251383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311532684.5A CN117251383B (en) 2023-11-17 2023-11-17 Software component detection method, device, equipment and storage medium based on cache

Publications (2)

Publication Number Publication Date
CN117251383A CN117251383A (en) 2023-12-19
CN117251383B true CN117251383B (en) 2024-03-22

Family

ID=89126755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311532684.5A Active CN117251383B (en) 2023-11-17 2023-11-17 Software component detection method, device, equipment and storage medium based on cache

Country Status (1)

Country Link
CN (1) CN117251383B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783381A (en) * 2019-01-07 2019-05-21 中国银行股份有限公司 A kind of test data generating method, apparatus and system
CN110674432A (en) * 2019-09-09 2020-01-10 中国平安财产保险股份有限公司 Second-level caching method and device and computer readable storage medium
CN110753099A (en) * 2019-10-12 2020-02-04 平安健康保险股份有限公司 Distributed cache system and cache data updating method
WO2020233374A1 (en) * 2019-05-21 2020-11-26 深圳壹账通智能科技有限公司 Business platform cache strategy test method and apparatus
CN115114335A (en) * 2022-06-29 2022-09-27 上海汇付支付有限公司 Software application multi-level caching and flow filtering method
CN115729555A (en) * 2022-09-07 2023-03-03 深圳开源互联网安全技术有限公司 Software component analysis method, device, terminal device and storage medium

Also Published As

Publication number Publication date
CN117251383A (en) 2023-12-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant