CN117472930A - Storage method, device and equipment for high concurrency data - Google Patents

Storage method, device and equipment for high concurrency data

Info

Publication number
CN117472930A
Authority
CN
China
Prior art keywords
data
thread
distributed lock
lock
timeout
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311806406.4A
Other languages
Chinese (zh)
Inventor
钟波
蓝聪
周育玺
曾乙林
薛俊
李成富
郑建波
曹冰兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Dacheng Juntu Technology Co ltd
Original Assignee
Chengdu Dacheng Juntu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Dacheng Juntu Technology Co ltd filed Critical Chengdu Dacheng Juntu Technology Co ltd
Priority to CN202311806406.4A priority Critical patent/CN117472930A/en
Publication of CN117472930A publication Critical patent/CN117472930A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343Locking methods, e.g. distributed locking or locking implementation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, a device and equipment for storing high concurrency data, and relates to the field of data storage. The method includes: acquiring log data of a first APP as first data; establishing a first multithreaded connection handle for the first data; adding a first distributed lock to the first thread according to the first multithreaded connection handle and setting a first timeout time; after the first data is written into a local memory, unlocking the first distributed lock; judging the first thread's service state after the first timeout time is reached; and when the first thread's service state is a running state, extending the timeout time of the first distributed lock to a second timeout time. That is, the present application checks the thread's service state after the timeout, and then either unlocks according to that state or determines the time of the next state check.

Description

Storage method, device and equipment for high concurrency data
Technical Field
The present disclosure relates to the field of data storage, and in particular, to a method, an apparatus, and a device for storing high concurrency data.
Background
As the data processed by business applications grows, log records of multi-user operations become increasingly detailed and refined, data volume grows geometrically, and concurrent storage of operation data becomes more and more common. To avoid affecting application performance and to keep logging independent, log storage is typically provided as a standalone service shared by multiple applications, so a log storage service supporting multiple source databases and high-concurrency writes needs to be developed.
High-concurrency log storage is prone to deadlock. The prior art addresses this by giving the distributed lock a timeout: when the timeout is reached the lock is released automatically, which avoids deadlock. However, if the database operation takes longer than the timeout, the lock expires early and simply fails — that is, the lock is released while the thread is still running, so the distributed lock no longer protects anything.
Disclosure of Invention
The main purpose of the application is to provide a method, a device and equipment for storing high concurrency data, which aim to solve the technical problem that a distributed lock expires and is released while its thread is still running.
A method of storing high concurrency data, the method comprising the steps of:
acquiring first data from a first APP, wherein the first data are log data generated by the first APP;
establishing a first multithreading connection handle of the first data according to the first data;
blocking a corresponding first thread according to the first multithreaded connection handle, adding a first distributed lock to the first thread, and setting a first timeout time for the first distributed lock;
after the first thread writes the first data into a local memory, unlocking the first distributed lock;
judging the service state of the first thread after the first timeout time is reached;
when the first thread service state is a running state, extending the timeout time of the first distributed lock to a second timeout time;
optionally, before the step of establishing a first multithreaded connection handle of the first data according to the first data, the method further includes:
pushing the first data to a main data queue, the main data queue being a main ingress queue for all data.
Optionally, in the step of blocking the corresponding first thread according to the first multithreaded connection handle, adding a first distributed lock to the first thread, and setting a first timeout time for the first distributed lock, the lock is added as follows: the first thread identifier is obtained, and the first distributed lock is then added based on the Redis database according to the first thread identifier.
Optionally, in the step of writing the first data into a local memory under the first distributed lock and unlocking after writing, unlocking proceeds as follows: the value stored in the lock is compared, and the lock is released only if the stored value corresponds to the current thread.
Optionally, the connection handle includes a read connection handle and a write connection handle.
Optionally, the number of the read connection handle and the write connection handle is dynamically allocated according to the request amount of the read and the write.
Optionally, before establishing the first multithreaded connection handle with the first data according to the first data, a trust check is performed on the request information.
Optionally, in the step of querying the first thread service state according to the first timeout signal and, according to that state, either unlocking or extending the timeout of the first distributed lock to the second timeout time, the second timeout time is determined according to a big data model.
In yet another aspect, a storage device for high concurrency data includes:
the data receiving module is used for receiving log data generated by the first APP, and the generated log data is the first data;
the distributed lock module establishes a first multithreaded connection handle for the first data, blocks a corresponding first thread, adds a first distributed lock to the first thread, and sets a first timeout time for the first distributed lock;
the read-write module is used for unlocking the first distributed lock after the first thread writes the first data into the local memory within the first timeout time;
the timeout module judges the service state of the first thread after the first timeout time is reached;
when the first thread service state is a running state, the timeout module extends the timeout time of the first distributed lock to a second timeout time; when the first thread service state is any other state, it unlocks the first distributed lock.
In yet another aspect, an electronic device includes a memory having a computer program stored therein and a processor executing the computer program to implement the method of any of the above.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the method, the device and the equipment for storing the high concurrency data, the multithread connection with the concurrency log is used for storing the quick response log, the distributed lock is used for avoiding inaccuracy of data caused by resource competition, and the distributed lock is easy to deadlock under the condition of abnormal interruption of threads, so that the prior art can set timeout time when the distributed lock is added, and can automatically unlock when the timeout time is reached, but if the time for operating a database is longer than the timeout time, the lock is expired in advance, so that the lock is directly invalid, namely the problem that the lock is released in advance due to running of the threads, and the distributed lock is directly invalid is caused.
To solve the problem of the lock being released early, the lock is not unlocked immediately when the timeout is reached; instead, whether the thread is still operating normally is checked first, and the lock is either released or its timeout extended according to the thread's state. This solves the problem of a normally running thread being unlocked by mistake.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
FIG. 1 is a flow chart of a method for storing high concurrency data according to the present application;
FIG. 2 is a schematic diagram of a multi-handle connection and a trusted verification process performed before connection of a method for storing high concurrency data in the present application;
fig. 3 is a schematic diagram of a memory device for high concurrency data according to the present application.
Detailed Description
In order that those skilled in the art will better understand the present disclosure, a clear and complete description of the technical solutions of the embodiments of the present disclosure will be provided below in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
As shown in fig. 1, a method for storing high concurrency data may be performed by a database storing high concurrency data, including:
s110, acquiring first data from a first APP, wherein the first data are log data generated by the first APP;
Here, high concurrency means that the volume of accesses or data is relatively large at a given moment. The database receives simultaneous read or write requests from multiple APPs, and these constitute high concurrency data.
A log message is information that a computer system, device, or piece of software generates in response to some trigger. Log data is the information in a log message that tells you why the message was generated; for example, an APP will typically write a log entry when someone accesses a requested resource (a picture, file, etc.). If the accessed page requires authentication, the log message will contain the user name.
Specifically, a Pod is the smallest unit of application execution in Kubernetes (and a top-level resource), also called a container group. Since each Pod can hold multiple containers, these can be divided into core and non-core containers; mapping a mechanical sidecar onto the container resources in a Pod yields the popular expression "sidecar mode":
The core container, also called the Main container, is responsible for the main work. The non-core container, also called the Sidecar container, is responsible for auxiliary and extensibility operations.
In the log collection example, the Main container is the application container, and the Sidecar container is the FileBeat log collection component.
The container of the first APP and the Sidecar container are deployed on the same host. The first APP container is responsible for running the first APP itself, while the Sidecar container is specifically responsible for running the log collection agent. The Sidecar container connects to the first APP container to obtain the log data to be collected, i.e. the first data; the connection is realized through a shared file system. The log collection agent running in the Sidecar container monitors the log output of the first APP container and collects the log data. Different log formats may be parsed in format-specific ways.
Optionally, after the log data is collected, it is pushed to the main data queue. A queue is a special linear list that allows delete operations only at one end (the head) and insert operations only at the other end (the tail). (There is also the double-ended queue, a linear list that allows insertion and deletion at both ends; only the ordinary queue is discussed here.) Queues can be further divided into array-based queues and linked-list queues according to their implementation. For example, the data queue may use a distributed MQ (Message Queue), which supports distributed scaling, gives the framework high availability, and still performs well when processing large volumes of data.
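As a minimal illustration of the head-delete/tail-insert behavior and the immediate response described above, the following Python sketch models the main data queue with an in-memory deque (names such as MainDataQueue are hypothetical; a real deployment would use a distributed MQ):

```python
from collections import deque

class MainDataQueue:
    """FIFO main ingress queue: delete at the head, insert at the tail."""
    def __init__(self):
        self._items = deque()

    def push(self, log_entry):
        self._items.append(log_entry)  # enqueue at the tail
        return "ACK"                   # respond to the APP immediately

    def pop(self):
        return self._items.popleft()   # dequeue from the head

q = MainDataQueue()
assert q.push({"app": "first_app", "msg": "user login"}) == "ACK"
q.push({"app": "first_app", "msg": "page view"})
assert q.pop()["msg"] == "user login"  # FIFO order preserved
```

Returning "ACK" as soon as the entry is enqueued is what lets the first APP receive its response quickly, before the data is actually persisted.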
Optionally, pushing the high concurrency data to a main data queue and responding to a first APP that sends the high concurrency data. The main data queue is a main entrance queue of all data, and after the data is pushed to the main entrance queue, the result is returned immediately, so that the first APP can quickly receive the response result, and the response speed is improved.
S120, establishing a first multithreaded connection handle with the first data;
the handle is an address, and the program transmits data among different functions through the handle, so that the address can be used for transmitting values, and the memory can be saved. Handles can be categorized into context handles, connection handles, statement handles. Environment handle: is used to store the global context of the data. Such as: environmental status, status diagnostics, etc. The environment handle should be created before the connection handle is created. Connection handle: one context handle may establish multiple connection handles, namely 1: n. Each connection handle may be connected to a data source, and each connection handle may be connected to multiple statement handles. Thus, mutual access between the plurality of databases is achieved in this manner. Statement handle: the address of an SQL sentence is stored, and the result of the execution of the SQL sentence can be also included.
Specifically, when the server starts, a certain number of threads are started, namely the first multithreading; the number of threads can be specified at startup, and each thread corresponds to one queue.
Each request received by the server is put into a global read-write queue, and elements in the queue are distributed to the queue corresponding to each thread, and the work is executed in the main thread.
Each thread (including a main thread and a sub-thread) receives the request parameters and parses.
The main thread distributes tasks to the queue corresponding to each thread in a round-robin manner, thereby establishing a multithreaded connection handle for the first data.
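The dispatch described in the steps above can be sketched as follows — a simplified single-process Python model in which thread_queues and dispatch are illustrative names, not part of the original design:

```python
import itertools
import queue

NUM_THREADS = 4  # number of worker threads, specified at startup
thread_queues = [queue.Queue() for _ in range(NUM_THREADS)]
rr = itertools.cycle(range(NUM_THREADS))  # round-robin index generator

def dispatch(request):
    """Main thread assigns each request to the next per-thread queue."""
    idx = next(rr)
    thread_queues[idx].put(request)
    return idx

assigned = [dispatch(f"req-{i}") for i in range(8)]
assert assigned == [0, 1, 2, 3, 0, 1, 2, 3]  # even round-robin spread
assert thread_queues[0].qsize() == 2
```

In the real server, each worker thread would block on its own queue and parse the request parameters it receives, as the surrounding text describes.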
S130, adding a first distributed lock to the first thread and setting a first timeout time for the first distributed lock;
Here a lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads. Besides contention for traditional computing resources (e.g., CPU, RAM, I/O), in databases data itself is a resource shared by many users. Guaranteeing the consistency and effectiveness of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting concurrent-access performance. For a stand-alone system, this may be achieved by conventional locking primitives such as synchronized or ReentrantLock; for a distributed cluster, however, a simple local lock cannot solve the problem, so a distributed lock is required, and a third-party component or service such as Redis or ZooKeeper is usually introduced.
Specifically, the current Unix time is obtained in milliseconds. Attempts are then made in turn to acquire the lock, i.e. the first distributed lock, from N instances using the same key and random value. When setting the lock in Redis, the first thread should use a network connection and response timeout smaller than the lock's failure time, i.e. the first timeout. For example, if the lock's auto-release time is 10 seconds, the response timeout should be between 5 and 50 milliseconds. This prevents the first APP from waiting forever for a response when a Redis server is down; if a Redis instance does not respond within the specified time, the first thread should try the next Redis instance as soon as possible.
The first thread uses the current time minus the time to begin acquiring the lock to obtain the time to acquire the lock usage. A lock will be successful if and only if it is fetched from most (here 3 nodes) of the Redis nodes and used for less than the lock failure time.
If a lock is taken, the true valid time of the key is equal to the valid time minus the time it takes to acquire the lock.
If, for some reason, acquiring the lock fails (the lock was not obtained on at least N/2+1 Redis instances, or the acquisition time exceeded the valid time), the first thread should unlock on all of the Redis instances (even those on which it never locked successfully).
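The acquisition logic of S130 can be condensed into a Python sketch, with a plain dict standing in for each Redis instance and fake_set standing in for SET key value NX PX ttl (all names here are illustrative):

```python
import time

def try_acquire_redlock(instances, key, value, ttl_ms, set_fn):
    """Try the lock on every instance; succeed only when a majority
    (N/2 + 1) accepted it and acquisition took less than the TTL."""
    start = time.monotonic()
    acquired = sum(1 for inst in instances if set_fn(inst, key, value, ttl_ms))
    elapsed_ms = (time.monotonic() - start) * 1000
    quorum = len(instances) // 2 + 1
    validity_ms = ttl_ms - elapsed_ms  # true valid time of the key
    if acquired >= quorum and validity_ms > 0:
        return True, validity_ms
    return False, 0  # on failure the caller must unlock on ALL instances

# Five simulated instances; two already hold the key for another client.
stores = [{}, {}, {"k": "other"}, {}, {"k": "other"}]

def fake_set(store, key, value, ttl_ms):
    if key in store:  # SET ... NX: refuse when the key already exists
        return False
    store[key] = value
    return True

ok, validity = try_acquire_redlock(stores, "k", "tok-1", 10_000, fake_set)
assert ok  # locked on 3 of 5 instances: quorum reached
```

The validity_ms computation mirrors the rule above that the true valid time equals the TTL minus the time spent acquiring the lock.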
Optionally, before acquiring the lock, the first thread identifier is obtained, and the first distributed lock is then added based on the Redis database according to the first thread identifier; that is, the value of the lock is set from the identifier of the first thread.
S140, after the first thread writes the first data into the local memory, unlocking the first distributed lock;
specifically, after the first distributed lock is obtained, the first thread writes first data into the local memory, the written area should correspond to the first thread, after the first data is written, the value of the key is obtained first, whether the value of the key corresponds to the first thread identification is compared, and if the value of the key corresponds to the first thread identification, the unlocking is performed.
Alternatively, the unlock code may be placed into a script and the script executed using RedisTemplate, which ensures that unlocking is atomic.
Optionally, to prevent an individual file from growing without bound, Redis periodically splits the file as it is written, along two dimensions: file size and time. The default split thresholds are to start a new log file and index file when the log file reaches 128M, or every hour when the number of log entries exceeds 100,000.
S150-S160, judging the service state of the first thread after the timeout time is reached;
specifically, thread state may be obtained by calling thread. The states in a thread can be divided into five types: new (New state), runnable (Running state), blocked (Blocked state), dead (Dead state). And judging the service state of the first thread.
S180-S190, when the thread service state is a running state, extending the timeout time of the first distributed lock;
specifically, the service state of the first thread is determined, and when the service state of the first thread is Running, whether the key value of the lock is equal to the value corresponding to the first thread identifier is determined. If the key of the lock also exists and the value is also equal to the value corresponding to the first thread identifier, a Lua script is sent to the redis service instance to allow the redis service to extend the lock time.
Alternatively, the time for extending the lock may be fixed or determined from a big data model.
S170-S200, when the thread service state is any other state, unlocking the first distributed lock.
Specifically, when the service state of the first thread is judged to be any state other than Running, the value of the key is fetched first; if the key still exists and its value equals the value corresponding to the first thread identifier, unlocking is performed.
Alternatively, the unlock code may be placed into a script and the script executed using RedisTemplate, which ensures that unlocking is atomic.
Example procedure:
$token = rand(1, 100000);
function lock() {
    global $token;
    return Redis::set("my:lock", $token, "nx", "ex", 10);
}
function unlock() {
    global $token;
    $script = 'if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
    else
        return 0
    end';
    return Redis::eval($script, 1, "my:lock", $token);
}
if (lock()) {
    // do something
    unlock();
}
Here token is a random number: when locking, it is stored as the value of the Redis key; when unlocking, the stored value is fetched first and compared with the local token. If they match, the lock is the one set earlier by this client and may be deleted; otherwise this client's lock has already expired and the key was set by another client, so no operation should be performed on it.
Example 2
As shown in fig. 2, the method for storing high concurrency data further includes, based on embodiment 1:
s210, optionally, before establishing the first multithreaded connection handle with the first data, further includes performing a trust check on the request information.
Specifically, the database receives the connection from the first APP by listening to a TCP port or Unix socket, and after a connection is established, the database performs the following operations:
First, the first APP socket is set to non-blocking mode, and then a readable file event is created to listen for data transmission on this first APP socket.
Then, before the data of the first APP socket is sent, user information verification data is sent first. After the database receives it, the information is verified; if verification passes, the log information of the first APP is received and processed, and if it fails, the connection with the first APP is closed.
The specific verification process is as follows:
The client logs in by entering a user name and password, which are verified in the background; if verification fails, a login-failure prompt is returned. If verification succeeds, a token is generated, the username and token are bound in both directions (the token can be fetched from the username, or the username from the token) and stored in Redis, and at the same time the current timestamp is stored in Redis with token+username as the key. An expiration time is set for both.
Every request to an interface passes through the interceptor; if the interface carries the @AuthToken annotation, the Authorization field sent by the client is checked to obtain the token. Because the token and the username are bound in both directions, the username can be looked up in Redis using the received token: if it is found, the token is correct; otherwise it is not, and an authentication failure is returned.
The token's expiration time can be adjusted dynamically according to the user's activity. When the time elapsed since creation approaches the configured Redis expiration time, the token's expiration timestamp is reset, extending the expiration. If the user performs no operation (makes no request) for the whole Redis expiration period, the token expires in Redis.
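The sliding expiration described above can be sketched in Python (the TTL value and the 80% renewal threshold are assumptions for illustration; a dict stands in for Redis):

```python
TOKEN_TTL = 3600   # seconds; assumed expiration window
RENEW_AT = 0.8     # renew once 80% of the TTL has elapsed (assumption)

tokens = {}  # "token:username" -> creation timestamp (stands in for Redis)

def touch_token(key, now):
    """Accept a live token, renewing its timestamp when it nears expiry;
    reject it once the TTL has fully elapsed."""
    created = tokens.get(key)
    if created is None or now - created >= TOKEN_TTL:
        tokens.pop(key, None)
        return False                 # expired: force a new login
    if now - created >= TOKEN_TTL * RENEW_AT:
        tokens[key] = now            # sliding-window renewal
    return True

tokens["tok1:alice"] = 0
assert touch_token("tok1:alice", 100)   # fresh: accepted, not renewed
assert tokens["tok1:alice"] == 0
assert touch_token("tok1:alice", 3000)  # near expiry: timestamp reset
assert tokens["tok1:alice"] == 3000
assert not touch_token("tok1:alice", 6601)  # idle past the TTL: expired
```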
S220, optionally, the connection handle thereof includes a read connection handle and a write connection handle. Alternatively, the number of read connection handles and the number of write connection handles may be fixed values or dynamically allocated according to the request amount of reading and writing.
Specifically, the first APP sends a get key command; the socket receives data and becomes readable; the read connection handle listens for the readable event, the read event is pushed to the event queue, and the event dispatcher distributes it to the command request processor for execution.
After receiving the data, the command request processor parses it, executes the get command, queries the data corresponding to the key from memory, and associates an AE_WRITABLE write event with the response processor, which the I/O multiplexing program monitors. When the client is ready to receive data, the command request processor generates the AE_WRITABLE event; the write connection handle listens for the write event, which is pushed to the event queue, and the event distributor sends it to the command response processor for processing. The command response processor writes the data back to the socket and returns it to the first APP.
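A possible scheme for dynamically splitting a fixed pool of connection handles between reads and writes, as suggested above, can be sketched in Python (the pool size and minimum are illustrative assumptions):

```python
def allocate_handles(read_requests, write_requests, total=16, min_each=1):
    """Split a fixed handle pool between read and write connections in
    proportion to recent request volume, keeping at least one of each."""
    demand = read_requests + write_requests
    if demand == 0:
        return total // 2, total - total // 2  # idle: split evenly
    reads = max(min_each,
                min(total - min_each,
                    round(total * read_requests / demand)))
    return reads, total - reads

assert allocate_handles(900, 100) == (14, 2)   # read-heavy workload
assert allocate_handles(0, 0) == (8, 8)        # idle: even split
assert allocate_handles(1000, 0) == (15, 1)    # writes never starve
```

The min_each floor guarantees that neither direction is ever left without a handle, matching the fixed-or-dynamic allocation option stated in the text.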
Creating database connection handle example code:
Introducing dependencies
Generally, the two are used together:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>
yml configuration
In the yml file, the connection to the Redis library is configured; note the maximum number of connections here. Spring Data Redis configures Lettuce by default; to switch to a Jedis configuration, the Jedis dependency library must be introduced.
spring:
  redis:
    host: 192.168.1.180
    port: 6379
    password: 123456
    lettuce:
      pool:
        max-active: 8
        min-idle: 0
        max-wait: 100ms
Example 3
This embodiment provides a specific implementation based on embodiments 1 and 2. The storage system for high concurrency data comprises a first APP and a database deployed on the same server; the database supports real-time storage of the log data generated by the first and second APPs and can read that log data. The system further comprises a plurality of terminals each installed with at least one APP; a terminal may be a PC, a mobile phone, a pad, or other internet-of-things or user equipment.
Identity verification and log information collection:
the container of the first APP and the Sidecar container are deployed on the same host. The log collection agent running in the Sidecar container monitors log output of the first APP container and collects log data.
Before the data of the first APP socket is sent, user information verification data is sent first. After the database receives it, the information is verified; if verification passes, the log information of the first APP is received and processed, and if it fails, the connection with the first APP is closed.
Pushing the high concurrency data and the first APP identification to a main data queue, and responding to the first APP for sending the high concurrency data.
Multithreaded connection:
when the server is started, a certain number of threads are started, and each thread corresponds to one queue.
Each request received by the server is put into a global read-write queue, and elements in the queue are distributed to the queue corresponding to each thread, and the work is executed in the main thread.
Each thread (including a main thread and a sub-thread) receives the request parameters and parses.
The main thread distributes tasks to the queue corresponding to each thread in a round-robin manner, thereby establishing a multithreaded connection handle for the first data.
Locking:
Before acquiring the lock, the first thread identifier is obtained, and attempts are made in turn to acquire the lock, i.e. the first distributed lock, from the N instances using the same key and random value. If, for some reason, acquiring the lock fails (the lock was not obtained on at least N/2+1 Redis instances, or the acquisition time exceeded the valid time), the first thread should unlock on all of the Redis instances (even those on which it never locked successfully). After the lock is successfully acquired, its value should correspond to the first thread identifier.
And (3) finishing operation unlocking:
When the first thread completes its operation, the database is unlocked. Before unlocking, the value of the corresponding lock is compared with the first thread identification, and the unlock is performed only if they correspond.
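This compare-then-delete release can be sketched as follows; a plain dict stands in for the Redis instance, and in Redis itself the check and delete would be made atomic with a Lua script:

```python
def unlock(store, key, value):
    """Release the lock only if it still holds this thread's value,
    mirroring the usual Redis Lua script:
      if redis.call('get', KEYS[1]) == ARGV[1] then redis.call('del', KEYS[1]) end
    """
    if store.get(key) == value:
        del store[key]
        return True
    return False

store = {"first_data_lock": "thread-1-token"}
print(unlock(store, "first_data_lock", "other-token"))    # → False: not the owner
print(unlock(store, "first_data_lock", "thread-1-token")) # → True: owner releases
```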
Timeout extends timeout time or unlock:
The service state of the first thread is judged. When the service state of the first thread is Running, it is checked whether the key of the lock still exists and whether its value is equal to the value corresponding to the first thread identifier; if both hold, the lock time is extended. The extended timeout may be determined according to a big data model.
When the service state of the first thread is other than Running, the key's value is first acquired and compared with the value corresponding to the first thread identification; if they are equal, the lock is released.
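The timeout handling in the two paragraphs above can be sketched as one watchdog step; the state names and the dict-based lock store are illustrative assumptions:

```python
def watchdog_step(store, key, token, state, extend_ms, now_ms):
    """After the first timeout fires: extend the lock while the thread is still
    Running, otherwise unlock. Either action requires the lock to still hold
    this thread's token (the value tied to the first thread identification)."""
    entry = store.get(key)  # entry is (token, expiry_ms)
    if entry is None or entry[0] != token:
        return "not-owner"  # key gone or value no longer matches the thread id
    if state == "Running":
        store[key] = (token, now_ms + extend_ms)  # extend to the second timeout
        return "extended"
    del store[key]  # any other service state: release the lock
    return "unlocked"

store = {"first_data_lock": ("thread-1", 1_000)}
print(watchdog_step(store, "first_data_lock", "thread-1", "Running", 30_000, 1_000))
print(watchdog_step(store, "first_data_lock", "thread-1", "Finished", 30_000, 31_000))
# → extended, then unlocked
```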
Example 4
Referring to fig. 3, based on the same inventive concept, an embodiment of the present application further provides a storage device for high concurrency data, including:
the data receiving module is used for receiving log data generated by the first APP, and the generated log data is the first data;
the distributed lock module is used for establishing a first multithreaded connection handle for the first data, blocking the corresponding first thread, applying a first distributed lock to the first thread, and setting a first timeout time for the first distributed lock;
the read-write module is used for unlocking the first distributed lock after the first thread writes the first data into the local memory;
the timeout module judges the service state of the first thread after the first timeout time is reached;
the timeout module is further used for prolonging the timeout time of the first distributed lock to a second timeout time when the first thread service state is an operation state, and unlocking the first distributed lock when the first thread service state is another state.
Example 5
The present embodiment provides a computer device including a memory and a processor, the memory storing a computer program, the processor executing the computer program to implement any of the methods described above.
Example 6
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, and a processor executes the computer program to implement any one of the methods described above.
In some embodiments, the computer-readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or may be a device including one or any combination of the above memories. The computer may be any of a variety of computing devices, including smart terminals and servers.
In the foregoing embodiments of the present disclosure, the description of each embodiment has its own emphasis; for a portion not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary; the division of units may be a logical function division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed between components may be through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable non-volatile storage medium. Based on such understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a non-volatile storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present disclosure. The aforementioned non-volatile storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a preferred embodiment of the present disclosure and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present disclosure and are intended to be comprehended within the scope of the present disclosure.

Claims (10)

1. A method of storing high concurrency data, the method comprising the steps of:
acquiring first data from a first APP, wherein the first data are log data generated by the first APP;
establishing a first multithreading connection handle of the first data according to the first data;
blocking a corresponding first thread according to the first multithreaded connection handle, applying a first distributed lock to the first thread, and setting a first timeout time for the first distributed lock;
after the first thread writes the first data into a local memory, unlocking the first distributed lock;
judging the first thread service state after the first timeout time is reached;
and when the first thread service state is an operation state, prolonging the timeout time of the first distributed lock to be a second timeout time.
2. The method of claim 1, wherein prior to the step of establishing a first multi-threaded connection handle for the first data based on the first data, further comprising:
pushing the first data to a main data queue, the main data queue being a main ingress queue for all data.
3. The method according to claim 1, characterized in that: the step of blocking the corresponding first thread according to the first multithreaded connection handle, applying the first distributed lock to the first thread, and setting the first timeout time for the first distributed lock comprises:
acquiring a first thread identifier, and applying the first distributed lock based on a Redis database according to the first thread identifier.
4. A method according to claim 3, characterized in that: in the step of unlocking the first distributed lock after the first thread writes the first data into the local memory, the unlocking specifically comprises: comparing the value of the lock, and unlocking when the stored value corresponds to the first thread identification.
5. The method of claim 1, wherein the first multi-threaded connection handle comprises a read connection handle and a write connection handle.
6. The method of claim 5, wherein the number of read connection handles and the number of write connection handles are dynamically allocated according to a request amount of read and write.
7. The method of claim 1, further comprising performing a trust check on the requested information prior to the step of establishing a first multithreaded connection handle with the first data based on the first data.
8. The method of claim 1, wherein in the step of extending the timeout of the first distributed lock to a second timeout when the first thread service state is an operational state, the second timeout of the first distributed lock is determined according to a big data model.
9. A memory device for high concurrency data, comprising:
the data receiving module is used for receiving log data generated by the first APP, and the generated log data is the first data;
the distributed lock module establishes a first multithreaded connection handle for the first data, blocks the corresponding first thread, applies a first distributed lock to the first thread, and sets a first timeout time for the first distributed lock;
the read-write module is used for unlocking the first distributed lock after the first thread writes the first data into the local memory;
the timeout module judges the service state of the first thread after the first timeout time is reached;
the timeout module further prolongs the timeout time of the first distributed lock to a second timeout time when the first thread service state is an operation state, and unlocks the first distributed lock when the first thread service state is another state.
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, the processor executing the computer program to implement the method of any of claims 1-8.
CN202311806406.4A 2023-12-26 2023-12-26 Storage method, device and equipment for high concurrency data Pending CN117472930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311806406.4A CN117472930A (en) 2023-12-26 2023-12-26 Storage method, device and equipment for high concurrency data

Publications (1)

Publication Number Publication Date
CN117472930A true CN117472930A (en) 2024-01-30

Family

ID=89625992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311806406.4A Pending CN117472930A (en) 2023-12-26 2023-12-26 Storage method, device and equipment for high concurrency data

Country Status (1)

Country Link
CN (1) CN117472930A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782669A (en) * 2020-06-28 2020-10-16 百度在线网络技术(北京)有限公司 Method and device for realizing distributed lock and electronic equipment
CN112100146A (en) * 2020-09-21 2020-12-18 重庆紫光华山智安科技有限公司 Efficient erasure correction distributed storage writing method, system, medium and terminal
CN115543643A (en) * 2022-09-29 2022-12-30 北京自如信息科技有限公司 Distributed lock reentry execution method, device, equipment and readable storage medium
CN116107772A (en) * 2022-12-27 2023-05-12 中国邮政储蓄银行股份有限公司 Multithreading data processing method and device, processor and electronic equipment
CN116820790A (en) * 2023-07-04 2023-09-29 康键信息技术(深圳)有限公司 Delay processing method, device, equipment and medium for distributed lock

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination