CN114463162A - Image cache processing method and device, electronic equipment and storage medium

Image cache processing method and device, electronic equipment and storage medium

Info

Publication number
CN114463162A
CN114463162A
Authority
CN
China
Prior art keywords
cache
image
node
service request
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011130180.7A
Other languages
Chinese (zh)
Inventor
江萍
石岩
梁红伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN202011130180.7A
Publication of CN114463162A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the invention disclose an image cache processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: receiving at least one image caching service request issued by a front-end device, and sending the at least one image caching service request to an image cache processing thread; and performing task processing on the at least one image caching service request in a target cache space through the image cache processing thread. The target cache space is obtained by initializing a preset physical space with a preset image data cache structure according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part. With this scheme, the cache length basic unit used by the target cache space can be customized according to service requirements before caching, which ensures good universality and extensibility with respect to the length of cached data.

Description

Image cache processing method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to an image cache processing method and device, electronic equipment and a storage medium.
Background
In the field of security storage, many application scenarios require large-scale asynchronous concurrent processing of large data blocks in order to improve read/write performance. However, because data structures and service types differ, different cache policies and cache structures are usually required, which makes cache applications poorly reusable and can even make structural consistency difficult to guarantee and limit the utilization of the cache space. It is therefore important to ensure that images are cached effectively.
Disclosure of Invention
Embodiments of the present invention provide an image cache processing method and apparatus, an electronic device, and a storage medium, so as to make the image caching strategy universal and improve the utilization of the cache space.
In a first aspect, an embodiment of the present invention provides an image cache processing method, including:
receiving at least one image caching service request issued by front-end equipment, and sending the at least one image caching service request to an image caching processing thread;
performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
In a second aspect, an embodiment of the present invention further provides an image cache processing apparatus, including:
the service request receiving module is used for receiving at least one image cache service request issued by the front-end equipment and sending the at least one image cache service request to an image cache processing thread;
the service request processing module is used for performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the image caching processing method according to any one of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processing apparatus, implements the image caching processing method according to any one of the embodiments of the present invention.
An embodiment of the present application provides an image cache processing method. The target cache space is obtained by initializing a preset physical space with a preset image data cache structure according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part. After at least one image caching service request issued by the front-end device is received, the at least one image caching service request is sent to an image cache processing thread, and the image cache processing thread performs task processing on the at least one image caching service request in the target cache space.
With this technical solution, the cache length basic unit used by the target cache space can be customized according to service requirements before caching, which ensures good universality and extensibility with respect to the length of cached data. The cache space contains data parts of different types, and the lengths of all data types are unified into a fixed cache length basic unit, so that addressing and management during caching can be performed uniformly through the id of the fixed cache length basic unit. At the same time, the utilization of the cache space can be greatly improved, and the situation in which the space for some types of information is exhausted while a large amount of space for other types remains unused, caused by the different format lengths of different types of information, is avoided as far as possible.
The above summary is merely an overview of the technical solutions of the present invention. To make the technical means of the present invention more clearly understood and implementable in accordance with the content of this description, and to make the above and other objects, features and advantages of the present invention more apparent, the specific embodiments of the invention are described below.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of an image caching processing method provided in an embodiment of the present invention;
fig. 2 is a schematic overall structure diagram of an image data caching structure provided in an embodiment of the present invention;
fig. 3 is a schematic node structure diagram of a cache node according to an embodiment of the present invention;
fig. 4 is a block diagram of an image cache processing apparatus provided in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations (or steps) can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of an image cache processing method provided in an embodiment of the present invention. The technical solution of this embodiment of the application is applicable to caching the image data sent by front-end devices in a surveillance scenario. The method may be executed by an image cache processing apparatus, which may be implemented in software and/or hardware and integrated into any electronic device with a network communication function. As shown in fig. 1, the image cache processing method in this embodiment of the application may include the following steps:
s110, receiving at least one image cache service request issued by the front-end equipment, and sending the at least one image cache service request to an image cache processing thread.
And S120, performing task processing on the at least one image cache service request in the target cache space through the image cache processing thread.
The target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
In this embodiment, the front-end device acquires image data, the acquired image data is cached, and the cached image data is then flushed to the disk in batches for storage. This avoids the front-end device writing data directly to the disk, which would cause frequent disk reads and writes and degrade the read/write performance of the disk. It is therefore necessary to design an efficient image cache space for caching images.
In this embodiment, fig. 2 is a schematic diagram of the overall structure of an image data cache structure provided in an embodiment of the present invention. Referring to fig. 2, the image data cache is structurally divided into three parts: the cache structure overall part (head), the cache node description part (inode), and the cache node storage part (cache_group). The cache structure overall part (head) mainly stores the overall information of the entire image data cache structure; the cache node description part (inode) records the usage of each cache node in the cache node storage part; and the cache node storage part (cache_group) stores the specific information of each cache node.
In this embodiment, referring to fig. 2, the target cache space is obtained by initializing a preset physical space with the preset image data cache structure according to a preset cache length basic unit. The physical space may include physical memory or other caches. In the target cache space, the lengths of all types of data carried in the cache structure overall part (head), the cache node description part (inode), and the cache node storage part (cache_group) are unified into a fixed-length cache length basic unit (slice). Addressing and management can therefore be performed uniformly through the id of the cache length basic unit (slice), which makes addressing and management more convenient; at the same time, the situation in which the space for some types of information is exhausted while a large amount of space for other types remains unused because of the different format lengths of different types of information is avoided as far as possible, improving the utilization of the cache space.
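The slice-based addressing described here can be illustrated with a short sketch. This is an illustration under assumptions only: the names SLICE_SIZE, cache_layout and slice_addr, the field layout, and the 4096-byte slice size are hypothetical and do not come from the patent text, which only states that all data lengths are unified into a configurable cache length basic unit addressed by slice id.

```c
#include <stddef.h>
#include <stdint.h>

#define SLICE_SIZE 4096u            /* assumed cache length basic unit, configurable before caching */

struct cache_layout {
    uint8_t  *base;                 /* start of the preset physical space (memory or other cache) */
    uint32_t  total_slices;         /* total cache size expressed in slices                       */
    uint32_t  head_slices;          /* slices occupied by the cache structure overall part        */
    uint32_t  inode_slices;         /* slices occupied by the cache node description part         */
};

/* Every piece of data, whatever its type, is located through its slice id. */
static inline void *slice_addr(const struct cache_layout *c, uint32_t slice_id)
{
    if (slice_id >= c->total_slices)
        return NULL;
    return c->base + (size_t)slice_id * SLICE_SIZE;
}
```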
In this embodiment, when the image data of the front-end devices is cached, a set of standard image caching service request interfaces may be provided to the front-end devices; for example, the standard interfaces include a cache node creation request interface (open), a cache data write request interface (write), a cache node close request interface (close), and a cache data read request interface (read). Because a large number of front-end devices send different image caching service requests at the same time, the standard image caching service request interfaces are set to an asynchronous, non-blocking mode.
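As an illustration of what such an asynchronous, non-blocking interface set might look like, the following declarations sketch one possible shape. The parameter lists and the callback type are assumptions; the text only names the four interfaces (open, write, close, read) and states that they are asynchronous and non-blocking.

```c
#include <stddef.h>

/* Each call merely enqueues a request and returns immediately; the outcome is
 * reported later through a user-supplied callback. */
typedef void (*goback_fn)(int result, void *user_ctx);

int cache_open(const char *tag, const char *desc, size_t data_len,
               goback_fn cb, void *ctx);                   /* cache node creation request  */
int cache_write(int node_id, const void *buf, size_t len,
                goback_fn cb, void *ctx);                  /* cache data write request     */
int cache_close(int node_id, goback_fn cb, void *ctx);     /* cache node close request     */
int cache_read(int node_id, void *buf, size_t len,
               goback_fn cb, void *ctx);                   /* cache data read request      */
```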
In this embodiment, after the image caching service requests sent by the front-end devices are received, each image caching service request may be sent to the image cache processing thread. The image cache processing thread then has the corresponding cache node in the target cache space process the at least one image caching service request.
In yet another alternative of this embodiment, this embodiment may be combined with the alternatives of one or more of the embodiments described above. Sending the at least one image caching service request to the image cache processing thread may include the following steps A1-A2:
Step A1: through service request arbitration, task ordering is performed on each image caching service request issued by the front-end devices.
Step A2: each image caching service request is issued to the image cache processing thread in turn according to the task ordering result; the image caching service request includes at least one of the following: a cache node creation request, a cache data write request, a cache node close request, and a cache data read request.
In this embodiment, the main function of the service request arbitration is to order the received image caching service requests, so that the release and creation of cache nodes are dynamically balanced according to the free capacity of the cache structure. The image caching service requests include the cache node creation request (open_in), the cache data write request (write_in), the cache data read request (read_in), the cache node close request (close_in), and the free task request generated internally when the processing of a cache node has finished and the node is to be reclaimed.
In this embodiment, it should be noted that all internal service request tasks are non-blocking, and these tasks interact only with memory. For a cache node creation request (open_in), when the cache space does not have enough room to create the cache node, the request is re-ordered by the arbiter rather than discarded. For the other service request tasks, when processing fails, the task is discarded directly and a response error is returned or corresponding warning information is printed. After ordering by the arbiter, each image caching service request can be issued to the image cache processing thread in turn, and the non-blocking service request tasks are processed by the image cache processing thread.
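A minimal sketch of this arbitration behaviour is given below, under the assumption of a simple singly linked request queue. The helper functions cache_has_free_node and issue_to_cache_thread and the re-queue strategy are placeholders; the text only specifies that open_in requests are re-ordered rather than discarded when space is insufficient, while other failed tasks are dropped.

```c
enum req_type { OPEN_IN, WRITE_IN, READ_IN, CLOSE_IN, FREE_IN };

struct request {
    enum req_type   type;
    struct request *next;
};

extern int  cache_has_free_node(void);                 /* assumed capacity check        */
extern void issue_to_cache_thread(struct request *r);  /* assumed non-blocking hand-off */

void arbitrate(struct request **queue)
{
    struct request *req;
    while ((req = *queue) != NULL) {
        *queue = req->next;
        if (req->type == OPEN_IN && !cache_has_free_node()) {
            /* not enough space: re-queue the open_in request instead of discarding it */
            req->next = *queue;
            *queue = req;
            break;                   /* wait until a free task releases capacity */
        }
        issue_to_cache_thread(req);  /* tasks are non-blocking and touch memory only */
    }
}
```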
In yet another alternative of this embodiment, this embodiment may be combined with the alternatives of one or more of the embodiments described above. Performing task processing on the at least one image caching service request in the target cache space through the image cache processing thread may include the following steps B1-B2:
Step B1: if the image caching service request is a cache node creation request, a cache node creation interface is called through the image cache processing thread, and a cache node to which description information and tag information have been added is created under the cache node storage part of the target cache space.
Step B2: after the cache node is created, the cache node is mounted on the waiting-to-fill list head in the cache structure overall part of the target cache space, so that the state of the cache node changes to to-be-filled.
In this embodiment, the cache node creation request carries the description information and tag information of the image data to be cached. The cache node creation interface (create) is called with the parameter information and the data block size, a cache node is created under the cache node storage part of the target cache space, and the description information and tag information are added to the created cache node at the same time.
In this embodiment, the cache structure overall part (head) includes the management information and the structure information of the cache. The management information mainly consists of five list heads, which represent the different stage states of cache nodes: the waiting-to-fill list head, the waiting-to-operate list head, the waiting-to-callback list head, the waiting-to-reload list head, and the waiting-to-release list head. The structure information includes the check value of the cache node description part (inode), the structure header information, a verification magic signature, a version, the start position of the cache structure overall part (head) in the physical space (e.g., physical memory), the total cache size, the size of the cache length basic unit (slice), the number of cache nodes, and a reserved space.
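The head layout enumerated above might be represented roughly as follows. All field names, field widths, and the use of node indices instead of pointers are assumptions; only the list of fields comes from the description.

```c
#include <stdint.h>

struct list_head { uint32_t first_node; };   /* assumed: index of the first node in the list */

struct cache_head {
    /* management information: five list heads for the node life-cycle stages */
    struct list_head waiting_fill_head;
    struct list_head waiting_opt_head;
    struct list_head waiting_goback_head;
    struct list_head waiting_reload_head;
    struct list_head waiting_free_head;

    /* structure information */
    uint32_t inode_check;     /* check value of the cache node description part   */
    uint32_t head_info;       /* structure header information                     */
    uint32_t magic;           /* verification magic signature                     */
    uint32_t version;
    uint64_t base_offset;     /* start position of the head in the physical space */
    uint64_t total_size;      /* total cache size                                 */
    uint32_t slice_size;      /* size of the cache length basic unit (slice)      */
    uint32_t node_count;      /* number of cache nodes                            */
    uint8_t  reserved[64];    /* reserved space                                   */
};
```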
In this embodiment, after the creation of the cache node is completed, the cache node information is added to the waiting-to-fill list (Waiting_Fill_Head) in the cache structure overall part (head), the state of the cache node is updated to to-be-filled, and the node waits for image data to be written. The cache node creation request (open_in) does not perform any cache node lookup, which avoids unnecessary time-consuming operations in the thread. The callback function (goback) of the cache node can be called back to notify that the caching task has been delivered to its destination, so no read operation on the cache structure is required and time-consuming traversal is avoided.
In this embodiment, referring to fig. 2, optionally, the cache node description part (inode) describes the usage of the cache nodes in the cache node storage part: every 2 map bits represent the usage of one cache node in the storage part, where 00 indicates idle, 01 indicates in use, and 11 indicates not full. The effective length of the cache node description part (inode) is determined by the total cache capacity of the cache node storage part.
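A sketch of reading and updating this 2-bit-per-node usage map is shown below, assuming the inode section is a plain byte array that packs four entries per byte; the packing order is an assumption.

```c
#include <stdint.h>

#define NODE_IDLE     0x0u
#define NODE_IN_USE   0x1u
#define NODE_NOT_FULL 0x3u

static unsigned node_state_get(const uint8_t *inode_map, uint32_t node_id)
{
    return (inode_map[node_id / 4] >> ((node_id % 4) * 2)) & 0x3u;
}

static void node_state_set(uint8_t *inode_map, uint32_t node_id, unsigned state)
{
    unsigned shift = (node_id % 4) * 2;
    inode_map[node_id / 4] &= (uint8_t)~(0x3u << shift);          /* clear the 2-bit field */
    inode_map[node_id / 4] |= (uint8_t)((state & 0x3u) << shift); /* write the new state   */
}
```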
In yet another alternative of this embodiment, this embodiment may be combined with the alternatives of one or more of the embodiments described above. Performing task processing on the at least one image caching service request in the target cache space through the image cache processing thread may further include the following step:
after a cache node has been created, if the image caching service request includes a cache node write request, a cache node write interface is called through the image cache processing thread, and the image data transferred by the front-end device is written into the corresponding cache node under the cache node storage part of the target cache space.
In this embodiment, referring to fig. 2, during the data-filling process, the cache node write interface is called and the image data transferred by the front-end device is written into the corresponding cache node under the cache node storage part of the target cache space.
In yet another alternative of this embodiment, this embodiment may be combined with the alternatives of one or more of the embodiments described above. Performing task processing on the at least one image caching service request in the target cache space through the image cache processing thread may further include the following steps C1-C2:
Step C1: after the image data has been written into a cache node, if the image caching service request includes a cache node close request, a cache node close interface is called through the image cache processing thread, and the cache node under the cache node storage part of the target cache space is closed.
Step C2: the created cache node is mounted on the waiting-to-operate list head under the cache structure overall part of the target cache space, and the state of the cache node is changed to to-be-executed.
In this embodiment, after the front-end device finishes transferring data to the cache node in the cache node storage part of the target cache space, it may issue a corresponding cache node close request, so that whether the cache node has finished writing data is determined through the cache node close request. The image cache processing thread then closes the cache node under the cache node storage part of the target cache space by calling the cache node close interface.
In this embodiment, referring to fig. 2, after it is determined that the cache node is ready and a corresponding custom operation (opt) task needs to be executed, the ready cache node may be mounted on the waiting-to-operate list head (Waiting_Opt_Head) in the cache structure overall part of the target cache space, and the state of the cache node is updated to to-be-executed. For example, in a surveillance gate application, the written data needs further processing, such as extracting key information from the image data and uploading it to a database; the opt function is therefore customized, several to-be-executed caching tasks in the cache are processed by multiple threads, and the processing result is notified to the corresponding cache node by the callback function. After a caching task has been processed normally, its state is updated to to-be-released.
In yet another alternative of this embodiment, this embodiment may be combined with the alternatives of one or more of the embodiments described above. After the created cache node has been mounted on the waiting-to-operate list head under the cache structure overall part of the target cache space, the following steps D1-D3 may further be included:
Step D1: the cache node is taken from the waiting-to-operate list head of the cache structure overall part of the target cache space, and the custom operation function recorded in the description information under the cache node storage part of the target cache space is executed.
Step D2: the cache node is taken from the waiting-to-callback list head under the cache structure overall part of the target cache space, and the custom callback function recorded in the description information under the cache node storage part of the target cache space is executed.
Step D3: during the loading process, cache nodes whose execution has not completed are mounted on the waiting-to-reload list head under the cache structure overall part of the target cache space, and the custom reload function recorded in the description information under the cache node storage part of the target cache space is executed.
In this embodiment, referring to fig. 2, the cache node storage part (cache_group) is mainly divided into two parts: the header information (head) of the cache node storage part and the item array of the cache node storage part. The header information of the cache node storage part describes the length and usage of the cache_group; it specifically includes the header check, the total number of items (this value is not fixed because the last cache_group may be shorter than the standard length), the number of items in use, the inode, and a reserved field. This inode identifies the type and usage of each item with 2 bits: 00 indicates that the item is not in use, 01 indicates that the item holds the tag information (tag) of a cache node, 10 indicates that the item holds the description information (desc) of a cache node, and 11 indicates that the item holds the data information of a cache node.
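The cache_group header and its 2-bit item type map might look roughly like the following sketch; the field names, the item count, and the map size are assumptions based on the fields listed above.

```c
#include <stdint.h>

#define ITEMS_PER_GROUP 1024u                    /* assumed; the last cache_group may hold fewer */

enum item_type {
    ITEM_UNUSED = 0x0,                           /* 00: item not in use                   */
    ITEM_TAG    = 0x1,                           /* 01: tag information of a cache node   */
    ITEM_DESC   = 0x2,                           /* 10: description information           */
    ITEM_DATA   = 0x3                            /* 11: data information of a cache node  */
};

struct cache_group_head {
    uint32_t check;                              /* header check value                            */
    uint32_t item_total;                         /* items in this group (last group may be short) */
    uint32_t item_used;                          /* number of items currently in use              */
    uint8_t  item_type_map[ITEMS_PER_GROUP / 4]; /* 2 bits per item, as described above           */
    uint8_t  reserved[32];                       /* reserved field                                */
};
```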
In this embodiment, fig. 3 is a schematic diagram of the node structure of a cache node provided in an embodiment of the present invention. Referring to fig. 3, a cache node contains three kinds of information: tag information (tag), description information (desc), and data information (data). The tag information and description information are mandatory, while the data information is included according to the user's needs.
In this embodiment, referring to fig. 3, the tag information (tag) records the general information of the cache node, including hang_info (state mount information), the tag information check, tag_flag (identifying the item as tag information), status (identifying the state of the cache node, described later), the check value of the description information, the id of the starting description information item, the data information check value, the id list of the items where the data information is located, the execution result of the executed task, the id of the sub-cache header information, and a reserved space. For tasks with a large amount of valid data, the memid-list in one cache header is not sufficient to record all the data information, so a linked list is used to extend the total number of data information item ids that can be stored.
In this embodiment, referring to fig. 3, the description information (desc) describes the execution function of the task at each stage and the parameters required by the task, and can be divided into two parts: execution function information and execution parameter information. The execution function information includes an identifier (desc_flag, identifying the item as description information), a pointer to the next piece of description information, the names of three execution functions, the full path of the dynamic library in which the execution functions reside, and a reserved field. The three execution functions are opt (the function executed for the task), goback (the callback function executed after the task), and load (the reload function executed after the device restarts and the cache is reloaded). The execution parameter information includes identification information, a pointer to the next piece of identification information, and the parameter information itself. Writing the dynamic library path and the function names into the cache structure makes the configuration of caching tasks highly flexible: as long as the fixed interface type is satisfied, the specific behaviour of the callback functions is entirely defined by the service layer.
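Resolving the stored dynamic-library path and function names into callable pointers could be done with the POSIX dlopen/dlsym interface, as sketched below. The desc_info structure and its field sizes are assumptions; only the idea of storing a library path plus the opt, goback, and load function names comes from the text.

```c
#include <dlfcn.h>
#include <stdio.h>

typedef int (*task_fn)(void *params);

struct desc_info {
    char lib_path[256];     /* full path of the dynamic library holding the functions */
    char opt_name[64];      /* function executed for the task                         */
    char goback_name[64];   /* callback executed after the task                       */
    char load_name[64];     /* reload function used after a device restart            */
};

static int resolve_task_functions(const struct desc_info *d,
                                  task_fn *opt, task_fn *goback, task_fn *load)
{
    void *handle = dlopen(d->lib_path, RTLD_NOW);   /* a manager such as Lib_ACC_MNG could cache this handle */
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    *opt    = (task_fn)dlsym(handle, d->opt_name);
    *goback = (task_fn)dlsym(handle, d->goback_name);
    *load   = (task_fn)dlsym(handle, d->load_name);
    return (*opt != NULL && *goback != NULL && *load != NULL) ? 0 : -1;
}
```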
In this embodiment, optionally, a cache node may be taken from the waiting-to-operate list head (Waiting_Opt_Head) in the cache structure overall part of the target cache space, and an idle thread is selected to execute the task according to the custom operation function defined at creation time. The execution result field in the tag information of the cache node, under the cache node storage part of the target cache space, is updated according to the execution result, and the cache node is then mounted on the waiting-to-callback list head (Waiting_goback_Head) under the cache structure overall part of the target cache space. For example, the task dispatcher takes the cache node from Waiting_Opt_Head and selects a thread to execute the task; the task is executed by the opt function defined at creation time, which is a blocking call. The result field in the tag information of the cache node is updated according to the returned error code; the node is then added to the waiting-to-callback list head (Waiting_goback_Head), and the state of the cache node is updated to waiting-for-callback.
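A minimal sketch of one dispatch step is shown below. The list and node accessors are assumed helpers; only the overall flow (take a node from Waiting_Opt_Head, run the custom opt function, record the result in the tag information, and move the node to Waiting_goback_Head) comes from the description.

```c
struct cache_node;      /* opaque in this sketch */
struct op_list;         /* one of the state lists kept in the head */

extern struct cache_node *list_pop(struct op_list *lh);
extern void list_push(struct op_list *lh, struct cache_node *n);
extern int  run_opt(struct cache_node *n);                  /* blocking custom opt function        */
extern void set_result(struct cache_node *n, int result);   /* result field in the tag information */
extern void set_state_wait_goback(struct cache_node *n);

void dispatch_one(struct op_list *waiting_opt_head, struct op_list *waiting_goback_head)
{
    struct cache_node *n = list_pop(waiting_opt_head);
    if (n == NULL)
        return;                        /* nothing waiting to be executed */
    set_result(n, run_opt(n));         /* executed on an idle worker thread in practice */
    set_state_wait_goback(n);          /* state becomes waiting-for-callback            */
    list_push(waiting_goback_head, n); /* the callback stage will pick it up            */
}
```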
In this embodiment, optionally, during callback, the cache node may be taken from the waiting-to-callback list (Waiting_goback_Head) to execute the callback function. The callback function is the user-defined goback function supplied at creation time; it may perform different processing according to the result described above, or it may do nothing at all. After processing is finished, the state of the cache node is updated to to-be-released and the node is added to the waiting-to-release list head (Waiting_free_Head). Lib_ACC_MNG mainly stores the dynamically loaded user-defined callback interfaces of cache nodes, avoiding the overhead of repeatedly loading the dynamic library.
In this embodiment, the reloading process mainly consists of adding the cache node information whose execution has not completed to the waiting-to-reload list head (Waiting_Reload_Head) after the cache information has been loaded into memory from another non-volatile medium, and then processing it according to the custom reload method. Optionally, the released tasks and the other cache node creation requests may be ordered by the arbiter to ensure that the released space and the created space are balanced, and the cache nodes are reclaimed by a non-blocking thread.
In this embodiment, the loading process supports checking and reconstruction when data is inconsistent, so as to guarantee the consistency of the cache structure. Specifically: the header is checked, and if there is a problem, the header information is reformatted from the cache size and the slice size; the mg_info in the head is initialized, since this information must always be reconstructed; the inode information is checked, and if it is invalid, each cache node is located according to its identifier and verified, and the inode area is finally rebuilt; each cache node is verified, invalid cache nodes are deleted, and valid cache nodes are mounted on the different lists in the head according to their states; each service thread is started; and the initialization succeeds.
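The load-time check-and-rebuild sequence enumerated above can be summarized in the following sketch; every helper function is a placeholder for one of the checks listed, not an interface defined by the patent.

```c
#include <stdint.h>

extern int  head_is_valid(void);
extern void rebuild_head(uint64_t total_size, uint32_t slice_size);
extern void reset_mg_info(void);              /* management info is always reconstructed   */
extern int  inode_is_valid(void);
extern void rebuild_inode_from_nodes(void);   /* locate nodes by identifier, rebuild inode */
extern void verify_nodes_and_mount(void);     /* drop invalid nodes, mount valid ones      */
extern void start_service_threads(void);

int cache_load(uint64_t total_size, uint32_t slice_size)
{
    if (!head_is_valid())
        rebuild_head(total_size, slice_size); /* reformat the head from cache and slice size */
    reset_mg_info();
    if (!inode_is_valid())
        rebuild_inode_from_nodes();
    verify_nodes_and_mount();                 /* mount nodes on the head lists by state */
    start_service_threads();
    return 0;                                 /* initialization succeeded */
}
```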
In this embodiment, a caching task to be released enters the arbiter through the free interface and is ordered together with the requests from other front-end devices; the cache space is then released, and the data is flushed into the next-level storage space, such as a disk. If the front end issues a read request for the picture data before it has been flushed to the disk, the image data in the cache node can be read directly by calling the cache node read interface (read).
With the image cache processing scheme of this embodiment of the application, the cache length basic unit used by the target cache space can be customized according to service requirements before caching, which ensures good universality and extensibility with respect to the length of cached data.
Fig. 4 is a block diagram of an image cache processing apparatus provided in an embodiment of the present invention. The technical solution of this embodiment of the application is applicable to caching the image data sent by front-end devices in a surveillance scenario. The apparatus may be implemented in software and/or hardware and integrated into any electronic device with a network communication function. As shown in fig. 4, the image cache processing apparatus in this embodiment of the application may include the following: a service request receiving module 410 and a service request processing module 420. Wherein:
a service request receiving module 410, configured to receive at least one image cache service request issued by a front-end device, and send the at least one image cache service request to an image cache processing thread;
a service request processing module 420, configured to perform task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
On the basis of the above embodiment, optionally, the overall cache structure portion stores overall information of the entire image data cache structure; the cache node description part stores the use condition of each cache node; the cache node storage part stores specific information of each cache node, and the specific information of the cache node comprises tag information, description information and data information.
On the basis of the foregoing embodiment, optionally, the service request receiving module 410 includes:
task ordering is carried out on each image cache service request issued by the front-end equipment through service request arbitration;
sequentially issuing each image cache service request to an image cache processing thread according to the task sequencing result;
wherein the image caching service request comprises at least one of the following items: a cache node create request, a cache data write request, a cache node close request, and a cache data read request.
On the basis of the foregoing embodiment, optionally, the service request processing module 420 includes:
if the image cache service request is a cache node creation request, calling a cache node creation interface through an image cache processing thread, and creating a cache node added with description information and label information under a cache node storage part of a target cache space;
after a cache node is created, the cache node is mounted to a head of a waiting filling chain table under the overall part of the cache structure of the target cache space, so that the state of the cache node is changed into the state to be filled.
On the basis of the foregoing embodiment, optionally, the service request processing module 420 includes:
after a cache node is created, if the image cache service request comprises a cache node write-in request, calling a cache node write-in interface through an image cache processing thread, and writing image data transmitted by front-end equipment into a corresponding cache node under a cache node storage part of a target cache space.
On the basis of the foregoing embodiment, optionally, the service request processing module 420 includes:
after writing image data into a cache node, if the image cache service request comprises a cache node closing request, calling a cache node closing interface through an image cache processing thread, and closing the cache node under a cache node storage part of a target cache space;
and mounting the created cache node to a waiting operation chain table head of the overall part of the cache structure of the target cache space, and changing the state of the cache node into a state to be executed.
On the basis of the foregoing embodiment, optionally, after mounting the created cache node to the head of the waiting operation chain table under the overall part of the cache structure of the target cache space, the method further includes:
taking out the cache node from a waiting operation chain table head under the overall part of the cache structure of the target cache space, and executing a custom operation function of the description information record under the storage part of the cache node of the target cache space;
taking out the cache node from a waiting callback link table header under the overall part of the cache structure of the target cache space, and executing a custom callback function recorded by description information under the storage part of the cache node of the target cache space;
in the loading process, the cache nodes which are not completely executed are mounted to the head of the chain table waiting for reloading under the overall part of the cache structure of the target cache space, and the user-defined reloading function of the description information record under the storage part of the cache nodes of the target cache space is executed.
The image cache processing apparatus provided in the embodiment of the present application may execute the image cache processing method provided in any embodiment of the present application, and has corresponding functions and benefits for executing the image cache processing method.
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. As shown in fig. 5, the electronic device provided in the embodiment of the present invention includes: one or more processors 510 and storage 520; the processor 510 in the electronic device may be one or more, and fig. 5 illustrates one processor 510 as an example; storage 520 is used to store one or more programs; the one or more programs are executed by the one or more processors 510, so that the one or more processors 510 implement the image cache processing method according to any one of the embodiments of the present invention.
The electronic device may further include: an input device 530 and an output device 540.
The processor 510, the storage device 520, the input device 530 and the output device 540 in the electronic apparatus may be connected by a bus or other means, and fig. 5 illustrates an example of connection by a bus.
The storage device 520 in the electronic device is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image caching method provided in the embodiments of the present invention. The processor 510 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the storage device 520, namely, implements the image cache processing method in the above method embodiment.
The storage device 520 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the storage 520 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 520 may further include memory located remotely from the processor 510, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 540 may include a display device such as a display screen.
And, when the one or more programs included in the above electronic device are executed by the one or more processors 510, the programs perform the following operations:
receiving at least one image caching service request issued by front-end equipment, and sending the at least one image caching service request to an image caching processing thread;
performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
Of course, it can be understood by those skilled in the art that when one or more programs included in the electronic device are executed by the one or more processors 510, the programs may also perform related operations in the image caching processing method provided in any embodiment of the present invention.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, executes an image cache processing method, the method including:
receiving at least one image caching service request issued by front-end equipment, and sending the at least one image caching service request to an image caching processing thread;
performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
Optionally, the program may be further configured to perform an image caching processing method provided in any embodiment of the present invention when executed by a processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image cache processing method, comprising:
receiving at least one image caching service request issued by front-end equipment, and sending the at least one image caching service request to an image caching processing thread;
performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
2. The method according to claim 1, wherein the cache structure overall section stores overall information of the entire image data cache structure; the cache node description part stores the use condition of each cache node; the cache node storage part stores specific information of each cache node, and the specific information of the cache node comprises tag information, description information and data information.
3. The method of claim 1, wherein sending the at least one image cache service request to an image cache processing thread comprises:
task ordering is carried out on each image cache service request issued by the front-end equipment through service request arbitration;
sequentially issuing each image cache service request to an image cache processing thread according to the task sequencing result;
wherein the image caching service request comprises at least one of the following items: a cache node create request, a cache data write request, a cache node close request, and a cache data read request.
4. The method of claim 1, wherein the task processing of the at least one image cache service request in the target cache space by an image cache processing thread comprises:
if the image cache service request is a cache node creation request, calling a cache node creation interface through an image cache processing thread, and creating a cache node added with description information and label information under a cache node storage part of a target cache space;
after a cache node is created, the cache node is mounted to a head of a waiting filling chain table under the overall part of the cache structure of the target cache space, so that the state of the cache node is changed into the state to be filled.
5. The method of claim 1, wherein the task processing of the at least one image cache service request in the target cache space by an image cache processing thread comprises:
after a cache node is created, if the image cache service request comprises a cache node write-in request, calling a cache node write-in interface through an image cache processing thread, and writing image data transmitted by front-end equipment into a corresponding cache node under a cache node storage part of a target cache space.
6. The method of claim 1, wherein the task processing of the at least one image cache service request in the target cache space by an image cache processing thread comprises:
after writing image data into a cache node, if the image cache service request comprises a cache node closing request, calling a cache node closing interface through an image cache processing thread, and closing the cache node under a cache node storage part of a target cache space;
and mounting the created cache node to a waiting operation chain table head of the overall part of the cache structure of the target cache space, and changing the state of the cache node into a state to be executed.
7. The method of claim 6, further comprising, after mounting the created cache node to a head of a chain of pending operations under the overall portion of the cache structure of the target cache space:
taking out the cache node from a waiting operation chain table head under the overall part of the cache structure of the target cache space, and executing a custom operation function of the description information record under the storage part of the cache node of the target cache space;
taking out the cache node from a waiting callback link table header under the overall part of the cache structure of the target cache space, and executing a custom callback function recorded by description information under the storage part of the cache node of the target cache space;
in the loading process, the cache nodes which are not completely executed are mounted to the head of the chain table waiting for reloading under the overall part of the cache structure of the target cache space, and the user-defined reloading function of the description information record under the storage part of the cache nodes of the target cache space is executed.
8. An image cache processing apparatus, comprising:
the service request receiving module is used for receiving at least one image cache service request issued by the front-end equipment and sending the at least one image cache service request to an image cache processing thread;
the service request processing module is used for performing task processing on the at least one image cache service request in a target cache space through an image cache processing thread;
the target cache space adopts a preset image data cache structure to perform initialization determination on a preset physical space according to a preset cache length basic unit, and the preset image data cache structure comprises a cache structure overall part, a cache node description part and a cache node storage part.
9. An electronic device, comprising:
one or more processing devices;
storage means for storing one or more programs;
when executed by the one or more processing devices, cause the one or more processing devices to implement the image cache processing method of any of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processing apparatus, implements the image cache processing method of any one of claims 1 to 7.
CN202011130180.7A 2020-10-21 2020-10-21 Image cache processing method and device, electronic equipment and storage medium Pending CN114463162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011130180.7A CN114463162A (en) 2020-10-21 2020-10-21 Image cache processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114463162A true CN114463162A (en) 2022-05-10

Family

ID=81405047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011130180.7A Pending CN114463162A (en) 2020-10-21 2020-10-21 Image cache processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114463162A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117539636A (en) * 2023-12-06 2024-02-09 摩尔线程智能科技(北京)有限责任公司 Memory management method and device for bus module, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination