CN109670027B - Image query, cache and retention method and system - Google Patents


Info

Publication number
CN109670027B
CN109670027B (application CN201811607526.0A)
Authority
CN
China
Prior art keywords: cache, service, image data, information, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811607526.0A
Other languages: Chinese (zh)
Other versions: CN109670027A (en)
Inventor
唐静芝
陶建林
顾强
黄育华
张立强
刘小栋
徐占海
孙启栓
轩怀亮
何文睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Rural Commercial Bank Co ltd
Original Assignee
Shanghai Rural Commercial Bank Co ltd
Application filed by Shanghai Rural Commercial Bank Co ltd
Priority to CN201811607526.0A
Publication of CN109670027A
Application granted
Publication of CN109670027B
Legal status: Active

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image query, cache and retention method and system, relating to the field of bank image retention. The image retention method comprises the following steps: obtaining a cache configuration policy corresponding to each item of organization service information; and if the current time falls within the uploading time interval of a cache configuration policy, uploading the image data in the local cache that corresponds to the organization service information of that policy to a preset database. According to the invention, data uploaded to the server by a client is first cached and then uploaded to the preset database during idle periods, so that read-write separation of the data is achieved, the transaction system is optimized, and the response speed is improved.

Description

Image query, cache and retention method and system
Technical Field
The invention relates to the field of bank image retention, in particular to an image query, cache and retention method and system.
Background
In recent years, with the continuous development of banking business, business data has grown rapidly. During transactions, unstructured data consisting mainly of audio recordings, video recordings and pictures is frequently retrieved and stored for purposes such as identity authentication. In the traditional retention mode, this unstructured data and the structured data (such as serial numbers and file names) are both kept in SAN or NAS storage, and reading and writing against that storage take place simultaneously in order to complete the transaction.
Unstructured data is usually much larger than structured data. As the stock of data keeps growing, the structural limitations of the traditional retention mode make the response speed for unstructured data under highly concurrent access low, and because reads and writes are performed at the same time they interfere with each other, which further reduces the access speed of the unstructured data and therefore the response speed of transactions.
Disclosure of Invention
The invention aims to provide an image query, cache and retention method and system that achieve efficient access to unstructured data under highly concurrent data access, improve transaction response speed, and improve the customer experience.
The technical scheme provided by the invention is as follows:
An image retention method, comprising: obtaining a cache configuration policy corresponding to each item of organization service information; and if the current time falls within the uploading time interval of a cache configuration policy, uploading the image data in the local cache that corresponds to the organization service information of that cache configuration policy to a preset database.
In this technical solution, big data technology is adopted to support highly concurrent data access; the cached image data is uploaded to the preset database at the specified time, so that read-write interference is avoided and the transaction system is optimized.
Further, uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to a preset database specifically includes: uploading the structured data in the image data to a preset first database; and uploading the unstructured data in the image data to the preset first database or a preset second database according to a preset screening criterion.
In this technical solution, different kinds of unstructured data are stored in different preset databases, which ensures a high transmission speed.
Further, the method further comprises: if the current time is beyond the uploading time interval of a cache configuration policy, stopping uploading the image data corresponding to the organization service information of that cache configuration policy.
In this technical solution, data is uploaded to the preset database only at the designated time, which avoids affecting business processing efficiency during normal working hours.
Further, the method further comprises: receiving batch information assembly messages sent by each organization; judging, according to the service type in each batch information assembly message, whether a corresponding cache service is found; when the corresponding cache service is found, caching the image data in the corresponding batch information assembly message locally; and when the corresponding cache service is not found, uploading the image data in the corresponding batch information assembly message to the preset database.
In this technical solution, the image data corresponding to the specified service types is cached locally, which avoids reading and writing large amounts of data at the same time and achieves read-write separation.
Further, the method further comprises: receiving an information query message sent by an organization; judging, according to the service type in the information query message, whether a corresponding cache service is found; when the corresponding cache service is not found, querying the corresponding image data from the preset database according to the batch information and/or service type in the information query message; and when the corresponding cache service is found, querying the image data corresponding to the batch information and/or service type from the local cache.
In this technical solution, image data whose service type has the cache service enabled is first looked up in the local cache and only then in the preset database, which improves query efficiency.
The invention also provides an image caching method, which comprises the following steps: receiving batch information assembly messages sent by each organization; judging, according to the service type in each batch information assembly message, whether a corresponding cache service is found; when the corresponding cache service is found, caching the image data in the corresponding batch information assembly message locally; and when the corresponding cache service is not found, uploading the image data in the corresponding batch information assembly message to the preset database.
In this technical solution, the image data corresponding to the specified service types is cached locally, which avoids reading and writing large amounts of data at the same time and achieves read-write separation.
The invention also provides an image query method, which comprises the following steps: receiving an information query message sent by an organization; judging, according to the service type in the information query message, whether a corresponding cache service is found; when the corresponding cache service is not found, querying the corresponding image data from the preset database according to the batch information and/or service type in the information query message; and when the corresponding cache service is found, querying the image data corresponding to the batch information and/or service type from the local cache.
In this technical solution, the image data of organization service information whose cache service is enabled is first looked up in the local cache and only then in the preset database, which improves query efficiency.
The invention also provides an image retention system, comprising: a server cluster. The server cluster includes: a policy acquisition module, used for acquiring a cache configuration policy corresponding to each item of organization service information; and a data uploading module, used for uploading the image data in the local cache that corresponds to the organization service information of a cache configuration policy to a storage database if the current time falls within the uploading time interval of that cache configuration policy.
In this technical solution, the cached image data is uploaded to the preset database at the designated time, so that read-write separation is achieved and the transaction system is optimized.
Further, the system further comprises a storage database. The storage database includes: a first storage module, used for storing the structured data in the image data, together with the unstructured data assigned to it according to a preset screening criterion; and a second storage module, used for storing the unstructured data in the image data according to the preset screening criterion.
Further, the server cluster further comprises: a message receiving module, used for receiving the batch information assembly messages sent by each organization; a judging module, used for judging, according to the service type in a batch information assembly message, whether a corresponding cache service is found; a cache module, used for caching the image data in the corresponding batch information assembly message locally when the corresponding cache service is found; and the data uploading module is further used for uploading the image data in the corresponding batch information assembly message to the storage database when the corresponding cache service is not found.
Compared with the prior art, the image query, cache and retention method and system of the invention have the following beneficial effects:
the data uploaded to the server by the client is first cached and then uploaded to the preset database during idle periods, so that read-write separation of the data is achieved, the transaction system is optimized, and the response speed is improved.
Drawings
The above features, technical features, advantages and implementations of the image query, cache and retention method and system are further described below, in a clearly understandable manner, through a description of preferred embodiments in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of an embodiment of an image retention method of the present invention;
FIG. 2 is a flowchart of an embodiment of an image caching method according to the present invention;
FIG. 3 is a flowchart of an embodiment of an image query method according to the present invention;
FIG. 4 is a schematic diagram of an embodiment of an image retention system of the present invention;
FIG. 5 is a schematic structural diagram of one embodiment of the present invention in which unstructured data and structured data are stored separately;
FIG. 6 is a schematic structural diagram of one embodiment of the invention in which unstructured data and structured data are stored in Hbase.
The reference numbers illustrate:
100: server cluster; 110: policy acquisition module; 120: data uploading module; 130: message receiving module; 140: service query module; 150: judging module; 160: cache module; 170: information query module; 200: storage database; 210: first storage module; 220: second storage module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components having the same structure or function are in some drawings only schematically illustrated, or only one of them is labeled. In this document, "one" does not only mean "only one" but can also mean "more than one".
The invention realizes image retention, caching and query based on big data technology, and has the advantages of high throughput, high response speed and low cost.
Fig. 1 shows an embodiment of an image retention method of the present invention, which includes:
S101, the server cluster acquires the cache configuration policy corresponding to each item of organization service information (the organization service information comprises an organization number and a service type).
S102, the server cluster cyclically traverses each cache configuration policy; if the current time falls within the uploading time interval of a cache configuration policy, the image data in the local cache that corresponds to the organization service information of that cache configuration policy is uploaded to a preset database (the preset database is a distributed database).
S103, if the current time is beyond the uploading time interval of a cache configuration policy, uploading of the image data corresponding to the organization service information of that cache configuration policy is stopped.
Specifically, the organization number is the unique identifier of an organization (e.g., a bank branch) and is used for identity authentication of that organization. The service type refers to the specific business handled, for example: personal loans, business cash, personal cash, and so on. The service types supported by each organization may be the same or different; for example, card issuing and account-information inquiry may be handled on a self-service basis.
The image data includes structured data and unstructured data, the structured data including: organization number, business type, batch information, transaction time, etc., the unstructured data includes: picture files, video files, etc. of the transaction.
The cache configuration policy corresponding to each organization number and service type may differ. For example, the cache configuration policy of an organization with relatively heavy traffic may be: the image data of all service types belonging to that organization is cached locally for 3 days and then uploaded between 10 pm and 8 am; the cache configuration policy of an organization with relatively light traffic (essentially no customers after 3 pm) may be: the image data of all service types belonging to that organization is cached locally for 7 days and then uploaded between 3 pm and 8 am.
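For illustration only, such a cache configuration policy could be modeled as a small value object; this is a minimal sketch and not the patented implementation, and the class and field names (CacheConfigPolicy, orgNo, serviceType, retentionDays, uploadStart, uploadEnd) are hypothetical.

```java
import java.time.LocalTime;

// Hypothetical value object for one cache configuration policy:
// (organization number, service type) -> local retention period and upload window.
public final class CacheConfigPolicy {
    private final String orgNo;          // organization number, e.g. "0001"
    private final String serviceType;    // service type, e.g. "PERSONAL_LOAN"
    private final int retentionDays;     // how long image data stays in the local cache
    private final LocalTime uploadStart; // start of the uploading time interval, e.g. 22:00
    private final LocalTime uploadEnd;   // end of the uploading time interval, e.g. 08:00

    public CacheConfigPolicy(String orgNo, String serviceType, int retentionDays,
                             LocalTime uploadStart, LocalTime uploadEnd) {
        this.orgNo = orgNo;
        this.serviceType = serviceType;
        this.retentionDays = retentionDays;
        this.uploadStart = uploadStart;
        this.uploadEnd = uploadEnd;
    }

    // True if 'now' falls inside the upload window; the window may wrap past midnight (e.g. 22:00-08:00).
    public boolean inUploadWindow(LocalTime now) {
        if (uploadStart.isBefore(uploadEnd)) {
            return !now.isBefore(uploadStart) && now.isBefore(uploadEnd);
        }
        return !now.isBefore(uploadStart) || now.isBefore(uploadEnd);
    }

    public String orgNo()       { return orgNo; }
    public String serviceType() { return serviceType; }
    public int retentionDays()  { return retentionDays; }
}
```

Under these assumptions, the first policy in the example above could be expressed as new CacheConfigPolicy("0001", "ALL", 3, LocalTime.of(22, 0), LocalTime.of(8, 0)).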
The server cluster cyclically traverses all cache configuration policies and decides, according to the current time, whether image data needs to be uploaded to the preset database. If the current time is not within an uploading time interval, no image data is uploaded, regardless of whether uploading has finished, so that normal transactions at each organization are not affected.
For example, organization A is a bank branch whose business hours are 9:00-16:30, Monday to Saturday. During this period organization A has many customers and queries information from, and uploads information to, the server cluster frequently. If the server cluster were to write the image data uploaded by organization A into the preset database immediately, reads and writes would occur at the same time, which would to a certain degree slow the retrieval and storage of image data during transactions and degrade the experience of organization A's users. The cache configuration policy corresponding to organization A's organization number on the server cluster can therefore specify that cached data is uploaded between 7 pm and 8 am, while at other times the image data uploaded by organization A is first cached locally on the server cluster. Of course, each service type of organization A may be given a different cache configuration policy to meet different usage requirements.
Thus, cache configuration policies on the server cluster correspond to organization service information according to different usage requirements and actual conditions: one server cluster serves multiple organizations, one organization has multiple service types, and therefore multiple cache configuration policies exist.
The server cluster cyclically traverses each cache configuration policy. If the current time falls within the uploading time interval of one cache configuration policy, the server cluster uploads the image data corresponding to the organization service information of that policy; if the current time falls within the uploading time intervals of several cache configuration policies, multiple threads are invoked to upload the image data corresponding to the organization service information of each of those policies. If the current time is within an uploading time interval but there is no image data to upload, the thread sleeps until there is image data to upload and the current time is within the uploading time interval of some cache configuration policy.
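A minimal sketch of this traversal loop, assuming the hypothetical CacheConfigPolicy class from the previous sketch; the thread-pool size, the sleep interval and the uploadCachedBatches placeholder are assumptions, not details taken from the patent.

```java
import java.time.LocalTime;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical scheduler: cyclically traverses all cache configuration policies and
// uploads cached image data only while the current time is inside a policy's upload window.
public class UploadScheduler {
    private final List<CacheConfigPolicy> policies;                        // one per organization/service type
    private final ExecutorService pool = Executors.newFixedThreadPool(4);  // pool size is arbitrary here

    public UploadScheduler(List<CacheConfigPolicy> policies) {
        this.policies = policies;
    }

    public void runForever() throws InterruptedException {
        while (true) {
            LocalTime now = LocalTime.now();
            for (CacheConfigPolicy p : policies) {
                if (p.inUploadWindow(now)) {
                    // One worker task per policy whose upload window is currently open.
                    pool.submit(() -> uploadCachedBatches(p));
                }
                // Outside the window nothing is uploaded, even if the local cache is not empty.
            }
            TimeUnit.MINUTES.sleep(1); // idle until the next traversal
        }
    }

    private void uploadCachedBatches(CacheConfigPolicy p) {
        // Placeholder: read the cached image data for (p.orgNo(), p.serviceType()) and push it to
        // the preset database, stopping as soon as the upload window closes.
    }
}
```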
Preferably, uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to the preset database specifically includes: acquiring the batch information of the organization service information corresponding to the cache configuration policy, and uploading the image data corresponding to that batch information to the preset database.
Specifically, the batch information includes a batch serial number, which is the unique identifier of each piece of data, together with customer details such as name and gender.
Optionally, batch information can be obtained in batch, and batch uploading can be realized during uploading. For example: 10 batches of information are acquired at a time, and image data corresponding to the 10 batches of information are uploaded at the same time.
It should be noted that the unstructured data in one piece of image data may include several pictures and video files at the same time. For example, when a customer opens a card, the unstructured data corresponding to the batch information of that transaction includes front and back pictures of the identity card, a video file of the customer's face recognition process, and so on.
A control strategy is applied while the image data in the local cache that corresponds to the organization service information of the cache configuration policy is uploaded to the preset database; the control strategy includes any one or more of: flow control, data compression, priority control and resumable transfer.
Flow control means that the upload rate of the image data of each service type is limited, with the server adjusting it dynamically or statically according to actual usage.
Data compression means that image compression is applied when one or more file types are migrated to the preset database; the compression is configured on the server.
Priority control means that, when several service types on the server need to be uploaded at the same time, they are uploaded in priority order; the priorities are configured by an engineer.
Resumable transfer means that when several image files are being transferred and a transfer fails and must be retried, only the file whose upload failed last time is transferred again, which improves transfer efficiency.
Applying these control strategies improves the efficiency with which the server cluster uploads data, and data with higher priority can be migrated first, which protects the safety of important data.
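Priority control and flow control, two of the control strategies listed above, might be combined roughly as follows; this is a simplified sketch with invented names (UploadTask, ThrottledUploader), and data compression and resumable transfer are omitted.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical upload task: one cached image batch waiting to be migrated.
class UploadTask {
    final String batchNo;  // batch serial number
    final long sizeBytes;  // total size of the image files, used for flow control
    final int priority;    // configured per service type by an engineer

    UploadTask(String batchNo, long sizeBytes, int priority) {
        this.batchNo = batchNo;
        this.sizeBytes = sizeBytes;
        this.priority = priority;
    }
}

// Sketch of priority control plus a crude form of flow control (a bytes-per-second budget).
class ThrottledUploader {
    private final PriorityBlockingQueue<UploadTask> queue = new PriorityBlockingQueue<>(
            64, Comparator.comparingInt((UploadTask t) -> t.priority).reversed());
    private final long bytesPerSecondBudget;

    ThrottledUploader(long bytesPerSecondBudget) {
        this.bytesPerSecondBudget = bytesPerSecondBudget;
    }

    void submit(UploadTask task) {
        queue.put(task);
    }

    void drain() throws InterruptedException {
        while (!queue.isEmpty()) {
            UploadTask task = queue.take();   // highest-priority task is uploaded first
            upload(task);
            // Flow control: pause long enough that the average rate stays within the budget.
            long pauseMs = task.sizeBytes * 1000 / Math.max(1, bytesPerSecondBudget);
            Thread.sleep(pauseMs);
        }
    }

    private void upload(UploadTask task) {
        // Placeholder for the actual transfer; compression and resumable retry are omitted.
    }
}
```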
Preferably, uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to the preset database specifically includes:
uploading the structured data in the image data to a preset first database;
and uploading the unstructured data in the image data to the preset first database or a preset second database according to a preset screening criterion.
The preset screening criteria include any one or more of: configuration parameters such as service type, image type, format, size, etc.
Specifically, the preset first database is a distributed columnar database such as HBASE or Hive (a data storage mode specific to big data technology that supports highly concurrent data access); the preset second database is HDFS (the Hadoop distributed file system), which provides high-throughput data access and is very suitable for applications on large-scale data sets. Image files with larger data volumes are stored in HDFS: the file transfer speed is higher at write time, and the retrieval and response speed is higher when a user queries from a client, which gives the user a good experience.
Structured data is small and can be stored directly in HBASE. Small data such as structured data is not stored in HDFS because the minimum storage block unit is large, and storing small files in it would cause a great deal of redundancy and waste and hence low efficiency. Unstructured data, on the other hand, is selectively stored according to the preset screening criterion, which improves the utilization of each database.
Examples of the actual use of each preset screening criterion are as follows:
Service type: some service types generally produce video files (for example, recordings of actions performed by the customer) that are very large, so whether to store the files in HDFS can be decided directly from the service type.
Image type: if the image type of a piece of unstructured data is video, it can be stored in HDFS; if it is a picture, the decision is made by the subsequent format and size criteria.
Format: some formats indicate high-definition pictures and some indicate video files, and different formats imply different data sizes, so data can be screened by format and stored in the corresponding preset database.
Size: a threshold is defined; data larger than the threshold is stored in the preset second database, and data smaller than or equal to it is stored in the preset first database.
The preset screening criteria are set by an engineer according to the actual situation.
As shown in fig. 5, the structured data is stored in Hbase, and the unstructured data is stored in HDFS; as shown in fig. 6, both the structured data and the unstructured data are stored in Hbase.
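The screening criteria could be expressed as a simple routing rule; the sketch below is an assumption for illustration (the StorageRouter name, the video-format list and the size threshold are invented), not the configuration used by the patented system.

```java
import java.util.Set;

// Hypothetical routing rule for image data: video-heavy or oversized unstructured files go to
// the preset second database (HDFS); everything else goes to the preset first database (HBase).
public final class StorageRouter {
    public enum Target { FIRST_DB_HBASE, SECOND_DB_HDFS }

    private final long sizeLimitBytes;                 // engineer-configured size threshold
    private final Set<String> videoHeavyServiceTypes;  // service types known to produce video files

    public StorageRouter(long sizeLimitBytes, Set<String> videoHeavyServiceTypes) {
        this.sizeLimitBytes = sizeLimitBytes;
        this.videoHeavyServiceTypes = videoHeavyServiceTypes;
    }

    public Target route(String serviceType, String imageType, String format, long sizeBytes) {
        if (videoHeavyServiceTypes.contains(serviceType)) return Target.SECOND_DB_HDFS; // service type
        if ("video".equalsIgnoreCase(imageType))          return Target.SECOND_DB_HDFS; // image type
        if ("mp4".equalsIgnoreCase(format) || "avi".equalsIgnoreCase(format))
                                                          return Target.SECOND_DB_HDFS; // format
        if (sizeBytes > sizeLimitBytes)                   return Target.SECOND_DB_HDFS; // size
        return Target.FIRST_DB_HBASE;                                                   // small pictures
    }
}
```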
Optionally, after the image data is uploaded to the preset database, the corresponding cache state of that image data in the local cache is modified.
Specifically, the data in the local cache is managed through a cache data list, which records the cache state of each piece of data; as required, the states include: cached, migrated (or uploaded), and so on.
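A cache data list of this kind might be kept as a small in-memory map from batch serial number to cache state; the sketch below (CacheDataList, with states CACHED and UPLOADED) is hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache data list: tracks the cache state of each batch so that queries and the
// upload scheduler can tell whether an item still lives in the local cache.
public class CacheDataList {
    public enum State { CACHED, UPLOADED }

    private final Map<String, State> states = new ConcurrentHashMap<>(); // key: batch serial number

    public void markCached(String batchNo)   { states.put(batchNo, State.CACHED); }
    public void markUploaded(String batchNo) { states.put(batchNo, State.UPLOADED); }

    // True if the batch is still held locally, i.e. not yet migrated to the preset database.
    public boolean isInLocalCache(String batchNo) {
        return states.get(batchNo) == State.CACHED;
    }
}
```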
In this embodiment, image data is stored using big data technology, highly concurrent data access is supported, efficient storage of unstructured data is achieved, and a foundation is laid for fast reading. In addition, the server cluster uploads the locally cached image data at idle times according to the cache configuration policies, which avoids the problem of read-write interference.
Fig. 2 shows an embodiment of the image caching method of the present invention, which includes:
S201, the server cluster receives batch information assembly messages sent by the organizations;
S203, the server cluster judges, according to the service type in each batch information assembly message, whether a corresponding cache service is found (i.e., whether the cache service is enabled);
S204, when the corresponding cache service is found, the server cluster caches the image data in the corresponding batch information assembly message locally (a cache directory is configured in the cache service, and the image data is cached to that cache directory);
S205, when the corresponding cache service is not found, the server cluster uploads the image data in the corresponding batch information assembly message to the preset database.
Specifically, an engineer configures a corresponding cache service for each service type on the server cluster according to actual requirements (on/off and, if on, which cache directory to use); of course, the cache service may instead be configured according to each organization's service information.
A batch information assembly message contains key service information such as the batch information, service type, organization number, file size and image files.
Optionally, between S201 and S203, the method further comprises: S202, the server cluster queries the state of the corresponding organization according to each organization's service information (i.e., the service type and organization number); if the state is not found or the organization's state is abnormal, abnormality information is returned to the organization and the corresponding batch information assembly message is discarded; when the state is normal, S203 is executed.
Querying by organization number whether the state of the service type under the corresponding organization is normal amounts to judging whether the organization has the authority to execute that service type; if the state is normal, the cache service is then checked in order to perform the caching or uploading operation.
If it is determined from the organization number and/or service type that the cache service is enabled, the image data in the batch information assembly message is cached to the corresponding local cache directory and uploaded to the preset database once the subsequent uploading time interval is reached.
If it is determined from the organization number and/or service type that the cache service is not enabled, the image data in the batch information assembly message is uploaded directly to the preset database.
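Steps S203-S205 could be sketched as follows, assuming a hypothetical mapping from service type to cache directory; the handler name and method signatures are invented for illustration.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical handler for a batch information assembly message: if a cache service is configured
// for the message's service type, the image data goes to that service's cache directory (S204);
// otherwise it is uploaded to the preset database immediately (S205).
public class BatchMessageHandler {
    private final Map<String, String> cacheDirByServiceType; // service type -> cache directory (absent = no cache service)

    public BatchMessageHandler(Map<String, String> cacheDirByServiceType) {
        this.cacheDirByServiceType = cacheDirByServiceType;
    }

    public void handle(String serviceType, String batchNo, byte[] imageData) {
        Optional<String> cacheDir = Optional.ofNullable(cacheDirByServiceType.get(serviceType));
        if (cacheDir.isPresent()) {
            writeToLocalCache(cacheDir.get(), batchNo, imageData); // S204: cache locally
        } else {
            uploadToPresetDatabase(batchNo, imageData);            // S205: upload directly
        }
    }

    private void writeToLocalCache(String dir, String batchNo, byte[] data) { /* placeholder */ }
    private void uploadToPresetDatabase(String batchNo, byte[] data)        { /* placeholder */ }
}
```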
Preferably, the structured data in the image data is uploaded to the preset first database, and the unstructured data in the image data is uploaded to the preset first database or the preset second database according to the preset screening criterion.
The preset screening criteria include any one or more of the configuration parameters service type, image type, format and size, the same as in the image retention method above, and are not described again here.
In this embodiment, after receiving the image data sent by an organization, the server cluster caches it locally according to the cache service configuration and uploads it to the preset database during a later idle period, so that reads and writes of the image business are separated, the transaction system is optimized, and the response speed is improved.
Fig. 3 shows an embodiment of the image query method of the present invention, which includes:
S301, the server cluster receives an information query message sent by an organization.
S303, the server cluster judges, according to the service type in the information query message, whether a corresponding cache service is found (i.e., whether the cache service is enabled); of course, the cache service may also be configured according to each item of organization service information, in which case whether a corresponding cache service exists is determined from the organization service information.
S304, when the corresponding cache service is not found, the server cluster queries the corresponding image data from the preset database according to the batch information and/or service type in the information query message.
S305, when the corresponding cache service is found, the server cluster queries the image data corresponding to the batch information and/or service type from the local cache.
S306, when the image data corresponding to the batch information and/or service type does not exist in the local cache, that image data is queried from the preset database.
Specifically, the information query message includes: batch information, organization number, type of service, etc.
Optionally, between S301 and S303, the method further comprises: S302, querying the state of the corresponding organization according to the organization service information (i.e., the organization number and service type) in the information query message; if the state is not found or the organization's state is abnormal, returning abnormality information to the organization and discarding the information query message; when the state is normal, S303 is executed.
Specifically, querying by organization number whether the state of the service type under the corresponding organization is normal amounts to judging whether the organization has the authority to execute that service type; if the state is normal, the cache service is then checked in order to perform the caching or query operation.
If it is determined from the organization number and/or service type that the cache service is enabled, the local cache can be checked first and the relevant structured or unstructured data retrieved, and the preset database is checked only if the data is not in the local cache; if the cache service is not enabled for the organization number and/or service type (in other embodiments the query may also be made only by batch serial number and organization number), the data can only be queried from the preset database.
Cross-organization data query can be achieved by querying by service type and/or batch information.
Optionally, when the local cache is queried, the state of each piece of data can be looked up in the cache data list to determine whether the data is held in the local cache, which improves query efficiency.
In this embodiment, if image data that needs to be retrieved during a transaction is stored locally, the query is much faster than a query against the preset database, which improves the transaction response speed to a certain degree. Moreover, the whole system adopts big data technology and supports highly concurrent data access, so the response speed and processing efficiency during transactions are higher, and the customer experience is greatly improved.
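The query path of S303-S306 might look roughly like the sketch below, which reuses the hypothetical CacheDataList from the earlier sketch; the class name and the placeholder read methods are assumptions.

```java
import java.util.Optional;

// Hypothetical query path for S303-S306: look in the local cache only when a cache service is
// enabled for the service type, and fall back to the preset database on a cache miss.
public class ImageQueryService {
    private final CacheDataList cacheDataList;   // see the earlier CacheDataList sketch
    private final boolean cacheServiceEnabled;   // configured per service type / organization

    public ImageQueryService(CacheDataList cacheDataList, boolean cacheServiceEnabled) {
        this.cacheDataList = cacheDataList;
        this.cacheServiceEnabled = cacheServiceEnabled;
    }

    public byte[] query(String batchNo) {
        if (cacheServiceEnabled && cacheDataList.isInLocalCache(batchNo)) {
            Optional<byte[]> hit = readFromLocalCache(batchNo);    // S305: local cache first
            if (hit.isPresent()) {
                return hit.get();
            }
        }
        return readFromPresetDatabase(batchNo);                    // S304 / S306: preset database
    }

    private Optional<byte[]> readFromLocalCache(String batchNo) { return Optional.empty(); } // placeholder
    private byte[] readFromPresetDatabase(String batchNo)       { return new byte[0]; }      // placeholder
}
```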
In another embodiment of the image retention method of the present invention, the method includes:
the server cluster receives batch information assembly messages sent by the organizations;
the server cluster queries the state of the corresponding organization according to each organization's service information (i.e., the service type and organization number); if the state is not found or the organization's state is abnormal, abnormality information is returned to the organization and the batch information assembly message is discarded;
when the state is normal, the server cluster judges, according to the service type in the corresponding batch information assembly message, whether a corresponding cache service is found (i.e., whether the cache service is enabled);
when the corresponding cache service is found, the server cluster caches the image data in the corresponding batch information assembly message locally (a cache directory is configured in the cache service, and the image data is cached to that cache directory);
and when the corresponding cache service is not found, the server cluster uploads the image data in the corresponding batch information assembly message to the preset database.
The server cluster receives the information query messages sent by the organizations;
the server cluster queries the state of the corresponding organization according to the organization service information in each information query message; if the state is not found or the organization's state is abnormal, abnormality information is returned to the organization and the corresponding information query message is discarded;
when the state is normal, the server cluster judges, according to the service type in each corresponding information query message, whether a corresponding cache service is found (i.e., whether the cache service is enabled);
when the corresponding cache service is not found, the server cluster queries the corresponding image data from the preset database according to the batch information and/or service type in the information query message;
when the corresponding cache service is found, the server cluster queries the image data corresponding to the batch information and/or service type from the local cache;
and when the image data corresponding to the batch information and/or service type does not exist in the local cache, that image data is queried from the preset database.
The server cluster acquires the cache configuration policy corresponding to each item of organization service information (the organization service information comprises an organization number and a service type).
The server cluster cyclically traverses each cache configuration policy; if the current time falls within the uploading time interval of a cache configuration policy, the server cluster uploads the image data in the local cache that corresponds to the organization service information of that cache configuration policy to the preset database;
and if the current time is beyond the uploading time interval of a cache configuration policy, the server cluster stops uploading the image data corresponding to the organization service information of that cache configuration policy.
Preferably, uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to the preset database specifically includes: acquiring the batch information of the organization service information corresponding to the cache configuration policy, and uploading the image data corresponding to that batch information to the preset database.
Optionally, a control strategy is applied while the image data in the local cache that corresponds to the organization service information of the cache configuration policy is uploaded to the preset database; the control strategy includes any one or more of: flow control, data compression, priority control and resumable transfer.
Preferably, uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to the preset database specifically includes: uploading the structured data in the image data to the preset first database; and uploading the unstructured data in the image data to the preset first database or the preset second database according to the preset screening criterion. The preset screening criteria include any one or more of the configuration parameters service type, image type, format and size.
Optionally, after the image data is uploaded to the preset database, the corresponding cache state of that image data in the local cache is modified.
The parts of this embodiment that are the same as the above embodiments are implemented in the same way; for details, refer to the above embodiments, which are not repeated here.
In this embodiment, under a big data distributed system, the data uploaded by an organization to the server cluster is cached first and then uploaded to the preset database during idle periods, so that read-write separation of the data is achieved and the transaction system is optimized. According to different requirements, different cache configuration policies are configured, and the cache service is enabled or disabled, for each organization and its service types, which lays a foundation for better data transmission. During a query, if the cache service is enabled, the local cache is queried first, so that massive data writes and queries do not interfere with each other and the response speed is improved.
Fig. 4 shows an embodiment of an image retention system of the invention, comprising: a server cluster 100.
The server cluster 100 includes:
a policy acquisition module 110, configured to acquire the cache configuration policy corresponding to each item of organization service information (including an organization number and a service type);
and a data uploading module 120, configured to cyclically traverse each cache configuration policy and, if the current time falls within the uploading time interval of a cache configuration policy, upload the image data in the local cache that corresponds to the organization service information of that policy to a storage database (i.e., the preset database of the foregoing method embodiments; the preset database is a distributed database), and, if the current time is beyond the uploading time interval of a cache configuration policy, stop uploading the image data corresponding to the organization service information of that policy.
Specifically, the organization number is the unique identifier of an organization (e.g., a bank branch) and is used for identity authentication of that organization. The service type refers to the specific business handled, for example: personal loans, business cash, personal cash, and so on. The service types supported by each organization may be the same or different; for example, card issuing and account-information inquiry may be handled on a self-service basis.
The image data includes structured data and unstructured data, the structured data including: organization number, business type, batch information, transaction time, etc., the unstructured data includes: picture files, video files, etc. of the transaction.
The cache configuration policies corresponding to the organization numbers and service types may differ. The server cluster cyclically traverses all cache configuration policies and decides, according to the current time, whether image data needs to be uploaded to the preset database; if the current time is not within an uploading time interval, no image data is uploaded, regardless of whether uploading has finished, so that the normal experience of every transaction at the organizations is not affected.
For a specific example, refer to the example in the image retention method above, which is not repeated here.
According to different usage requirements and actual conditions, cache configuration policies on the server cluster correspond to organization service information: one server cluster serves multiple organizations, one organization has multiple service types, and multiple cache configuration policies may exist.
The server cluster cyclically traverses each cache configuration policy. If the current time falls within the uploading time interval of one cache configuration policy, the server cluster uploads the image data corresponding to the organization service information of that policy; if the current time falls within the uploading time intervals of several cache configuration policies, multiple threads are invoked to upload the image data corresponding to the organization service information of each of those policies. If the current time is within an uploading time interval but there is no image data to upload, the thread sleeps until there is image data to upload and the current time is within the uploading time interval of some cache configuration policy.
Preferably, the uploading the image data corresponding to the organization service information corresponding to the cache configuration policy in the local cache to the storage database by the data uploading module 120 specifically includes:
the data uploading module 120 acquires batch information of the organization service information corresponding to the cache configuration policy, and uploads image data corresponding to the batch information to the storage database.
Specifically, the batch information includes a batch serial number, which is the unique identifier of each piece of data, together with customer details such as name and gender.
Optionally, batch information can be obtained in batch, and batch uploading can be realized during uploading. For example: 10 batches of information are acquired at a time, and image data corresponding to the 10 batches of information are uploaded at the same time.
It should be noted that the unstructured data in one piece of image data may include several pictures and video files at the same time. For example, when a customer opens a card, the unstructured data corresponding to the batch information of that transaction includes front and back pictures of the identity card, a video file of the customer's face recognition process, and so on.
A control strategy is applied while the image data in the local cache that corresponds to the organization service information of the cache configuration policy is uploaded to the preset database; the control strategy includes any one or more of: flow control, data compression, priority control and resumable transfer. For a detailed explanation, refer to the embodiment of the image retention method, which is not repeated here.
Applying these control strategies improves the efficiency with which the server uploads data, and data with higher priority can be migrated first, which protects the safety of important data.
Preferably, the image retention system further comprises a storage database 200. The storage database includes:
a first storage module 210, configured to store the structured data in the image data, together with the unstructured data assigned to it according to a preset screening criterion;
and a second storage module 220, configured to store the unstructured data in the image data according to the preset screening criterion.
The preset screening criteria include any one or more of: configuration parameters such as service type, image type, format, size, etc.
Specifically, the preset first database is a distributed columnar database such as HBASE or Hive (a data storage mode specific to big data technology that supports highly concurrent data access); the preset second database is HDFS, which provides high-throughput data access and is very suitable for applications on large-scale data sets. Image files with larger data volumes are stored in HDFS: the file transfer speed is higher at write time, and the retrieval and response speed is higher when a user queries from a client, which gives the user a good experience.
Structured data is small and can be stored directly in HBASE. Small data such as structured data is not stored in HDFS because the minimum storage block unit is large, and storing small files in it would cause a great deal of redundancy and waste and hence low efficiency. Unstructured data, on the other hand, is selectively stored according to the preset screening criterion, which improves the utilization of each database; the preset screening criteria are set by an engineer according to the actual situation. For examples of the actual use of each preset screening criterion, refer to the embodiment of the image retention method, which is not repeated here.
Optionally, after the image data is uploaded to the storage database, the corresponding state of that image data in the local cache is modified.
Specifically, the data in the local cache is managed through a cache data list, which records the cache state of each piece of data; as required, the states include: cached, migrated (or uploaded), and so on.
Optionally, the server cluster 100 further includes:
The message receiving module 130 is configured to receive the batch information assembly messages sent by each organization.
The service query module 140 is configured to query the state of an organization according to the organization service information in each batch information assembly message and, if the state of the organization is not found or is abnormal, to return abnormality information to the organization and discard the batch information assembly message.
The judging module 150 is configured to judge, when the state is normal, whether a corresponding cache service is found (i.e., whether the cache service is enabled) according to the service type in the batch information assembly message.
The cache module 160 is configured to cache the image data in the corresponding batch information assembly message locally when the corresponding cache service is found (a cache directory is configured in the cache service, and the image data is cached to that cache directory).
The data uploading module 120 is further configured to upload the image data in the corresponding batch information assembly message to the storage database when the corresponding cache service is not found.
Specifically, an engineer configures a corresponding cache service for each service type on the server according to actual requirements (on/off and, if on, which cache directory to use); of course, the cache service may instead be configured according to each organization's service information.
A batch information assembly message contains key service information such as the batch information, service type, organization number, file size and image files.
Querying by organization number whether the state of the service type under the corresponding organization is normal amounts to judging whether the organization has the authority to execute that service type; if the state is normal, the cache service is then checked in order to perform the caching or uploading operation.
If it is determined from the organization number and/or service type that the cache service is enabled, the image data in the batch information assembly message is cached to the corresponding local cache directory and uploaded to the preset database once the subsequent uploading time interval is reached.
If it is determined from the organization number and/or service type that the cache service is not enabled, the image data in the batch information assembly message is uploaded directly to the preset database.
Optionally, the message receiving module 130 is further configured to receive the information query messages sent by each organization;
the service query module 140 is configured to query the state of the corresponding organization according to the organization service information in each information query message and, if the state of the organization is not found or is abnormal, to return abnormality information to the organization and discard the information query message;
the judging module 150 is configured to judge, when the state is normal, whether a corresponding cache service is found (i.e., whether the cache service is enabled) according to the service type and/or organization number in the information query message.
The server cluster further comprises: an information query module 170, configured to query the corresponding image data from the storage database according to the batch information and/or service type in the information query message when the corresponding cache service is not found; to query the image data corresponding to the batch information and/or service type from the local cache when the corresponding cache service is found; and to query the image data corresponding to the batch information and/or service type from the storage database when it does not exist in the local cache.
Specifically, the information query message includes: batch information, organization number, type of service, etc.
Specifically, querying by organization number whether the state of the service type under the corresponding organization is normal amounts to judging whether the organization has the authority to execute that service type; if the state is normal, the cache service is then checked in order to perform the caching or query operation.
If it is determined from the organization number and/or service type that the cache service is enabled, the local cache can be checked first and the relevant structured or unstructured data retrieved, and the preset database is checked only if the data is not in the local cache; if the cache service is not enabled for the organization number and/or service type (in other embodiments the query may also be made only by batch serial number and organization number), the data can only be queried from the preset database.
Cross-organization data query can be achieved by querying by service type and/or batch information.
Optionally, when the local cache is queried, the state of each piece of data can be looked up in the cache data list to determine whether the data is held in the local cache, which improves query efficiency.
In this embodiment, under a big data distributed system, the data uploaded by each organization to the server cluster is cached first and then uploaded to the preset database during idle periods, so that read-write separation of the data is achieved and the transaction system is optimized. According to different requirements, different cache configuration policies are configured, and the cache service is enabled or disabled, for each organization and its service types, which lays a foundation for better data transmission. During a query, if the cache service is enabled, the local cache is queried first, so that massive data writes and queries do not interfere with each other and the response speed is improved.
In practical use, the image retention system is implemented on a big data platform to achieve efficient storage and extraction of the unstructured part of the image data. The big data image platform has a three-layer architecture. The bottom layer is the data storage layer, which can automatically select different storage models (i.e., HDFS or HBase; by contrast, the existing approach uses a single SAN in which all unstructured and structured data is stored) according to different parameters and services, and stores and manages index data and image files in a unified way, thereby providing unified service support to the upper-layer applications.
The middle layer is the application service layer, which includes a configuration management module, a cache module, an API module and so on; the cache module can, according to its configuration, achieve read-write separation of business data for different services and at different times under big data conditions.
The top layer is the service access layer, managed by a unified access platform module; each peripheral application can choose a different connection mode, such as control loading, page embedding or direct API connection, according to its own service requirements and technical conditions, to complete service functions such as scanned entry, deletion, updating, query, display and annotation.
The big data platform of the invention integrates data and services, changes the original image data access mode, and simplifies the access process for peripheral business systems and its difficulty. Under the new architecture, the creation, maintenance and management of the storage model are handed over to the data storage layer, and a universal image storage model is adopted: structured data such as the image batch serial number, service type and transaction time is stored uniformly in distributed columnar databases with an SQL-style interface such as HBASE and HIVE, while unstructured data such as image files can, based on performance test results, be automatically routed to HBASE or HDFS (the Hadoop distributed file system, which requires two levels of access and is suitable for large files) according to configuration parameters such as service type, image type, format and size, providing a transparent service to the outside.
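For orientation only, writing one image batch with the public HBase and HDFS client APIs might look roughly like this; the table name "image_batch", the column family "d" and the HDFS path layout are invented, and this is not the storage model mandated by the patent.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical write of one image batch (HBase 1.x/2.x client API): structured fields go into an
// HBase row, a large image file goes to an HDFS path. Table, column family and path are invented.
public class ImageBatchWriter {

    public void write(String batchNo, String orgNo, String serviceType,
                      byte[] largeImageFile) throws IOException {
        Configuration conf = HBaseConfiguration.create();

        // 1. Structured data -> preset first database (HBase), row key = batch serial number.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("image_batch"))) {
            Put put = new Put(Bytes.toBytes(batchNo));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("org_no"), Bytes.toBytes(orgNo));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("service_type"), Bytes.toBytes(serviceType));
            table.put(put);
        }

        // 2. Large unstructured file -> preset second database (HDFS).
        FileSystem fs = FileSystem.get(conf);
        try (FSDataOutputStream out = fs.create(new Path("/images/" + orgNo + "/" + batchNo + ".bin"))) {
            out.write(largeImageFile);
        }
    }
}
```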
Taking the related business of the intelligent teller machine (as a client under an organization) as an example, the common flow of the transaction is as follows:
1. customer application
The customer inserts a debit card or a financial service card, selects the business to be transacted after verifying the password, and checks the transacted content.
2. Identity authentication
The customer presents the original identity card, the system automatically performs online verification and stores the image, and the customer provides an electronic signature to confirm the business.
3. Auditing service
The system pushes the certificate image, the verification image, the on-site frontal photo and the type of business being handled to the lobby manager's handheld PAD; the lobby manager verifies and confirms them, and the transaction is submitted after the confirmation succeeds. (The authorization mode can be set according to the risk level of the business, i.e., no authorization, PAD authorization, or on-site audit authorization.)
4. Result return
After the transaction succeeds, the system returns the result to the intelligent teller machine and prints the corresponding receipt.
5. Follow-up service
If the customer needs to transact other counter services subsequently, the number calling function can be selected to wait in line.
The identity authentication link relates to image uploading transaction, the auditing link relates to image inquiry transaction, and the specific flow is as follows:
identity authentication: when a client transacts business, an identity card is automatically read, networking verification is carried out, a certificate image is reserved, a scene client front photo is shot, face recognition is carried out by an intelligent teller machine for comparison, and a comparison result, transacted business types, the certificate image, the verification image, the scene front photo and other image data are uploaded to an image reserving system.
And (4) auditing service: the intelligent teller machine pushes the identity authentication related information to a handheld PAD (namely, another client) of a lobby manager, the PAD acquires related image pictures from an image retention system, different auditing standards are executed by combining system identification degree indexes, if the system identification degree is lower than a certain proportion, the lobby manager can carry out on-site auditing fingerprint confirmation, and if the system identification degree is higher than the certain proportion, the on-site auditing can be carried out on the PAD.
In addition, the on-site photos need to be retained only when a customer transacts business for the first time; they then serve as a reference for identity authentication in later transactions, which only need to query the existing image retention data.
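This "retain once, query afterwards" behaviour could look like the following sketch, an in-memory stand-in for the retention system with hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical in-memory stand-in for "retain the scene photo once, query it afterwards".
public class ScenePhotoStore {

    private final Map<String, byte[]> retainedByCustomerId = new HashMap<>();

    // Returns the photo retained at the first transaction; retains the newly captured one if absent.
    byte[] getOrRetain(String customerId, byte[] capturedPhoto) {
        return retainedByCustomerId.computeIfAbsent(customerId, id -> capturedPhoto);
    }

    // Subsequent transactions only query the existing retention data.
    Optional<byte[]> queryExisting(String customerId) {
        return Optional.ofNullable(retainedByCustomerId.get(customerId));
    }
}
```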
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (6)

1. An image retention method, comprising:
obtaining a cache configuration policy corresponding to each organization's service information;
if the current time is within an upload time interval in a cache configuration policy, uploading the image data in the local cache that corresponds to the organization service information of that cache configuration policy to a preset database;
wherein uploading the image data in the local cache that corresponds to the organization service information of the cache configuration policy to the preset database specifically comprises:
uploading the structured data in the image data to a preset first database;
uploading the unstructured data in the image data to the preset first database or a preset second database according to a preset screening standard;
the method further comprising:
receiving batch information assembly messages sent by each organization;
determining, according to the service type in each batch information assembly message, whether a corresponding cache service is found;
when the corresponding cache service is found, caching the image data in the corresponding batch information assembly message locally;
and when the corresponding cache service is not found, uploading the image data in the corresponding batch information assembly message to the preset database.
2. The image retention method of claim 1, further comprising:
if the current time exceeds the upload time interval in a cache configuration policy, stopping the upload of the image data corresponding to the organization service information of that cache configuration policy.
3. The image retention method of claim 1, further comprising:
receiving an information query message sent by an organization;
determining, according to the service type in the information query message, whether a corresponding cache service is found;
when the corresponding cache service is not found, querying the corresponding image data from the preset database according to the batch information and/or service type in the information query message;
and when the corresponding cache service is found, querying the image data corresponding to the batch information and/or service type from the local cache.
4. An image caching method based on the image retention method according to any one of claims 1 to 3, comprising:
receiving batch information assembly messages sent by each organization;
determining, according to the service type in each batch information assembly message, whether a corresponding cache service is found;
when the corresponding cache service is found, caching the image data in the corresponding batch information assembly message locally;
and when the corresponding cache service is not found, uploading the image data in the corresponding batch information assembly message to the preset database.
5. An image query method based on the image retention method according to any one of claims 1 to 3, comprising:
receiving an information query message sent by an organization;
determining, according to the service type in the information query message, whether a corresponding cache service is found;
when the corresponding cache service is not found, querying the corresponding image data from the preset database according to the batch information and/or service type in the information query message;
and when the corresponding cache service is found, querying the image data corresponding to the batch information and/or service type from the local cache.
6. An image retention system, comprising: a server cluster;
the server cluster includes:
the policy acquisition module, used for obtaining a cache configuration policy corresponding to each organization's service information;
the data upload module, used for uploading the image data in the local cache that corresponds to the organization service information of a cache configuration policy to the storage database if the current time is within the upload time interval in that cache configuration policy;
the storage database comprises:
the first storage module, used for storing the structured data in the image data, and for storing the unstructured data in the image data according to a preset screening standard;
the second storage module, used for storing the unstructured data in the image data according to the preset screening standard;
the server cluster further comprises:
the message receiving module, used for receiving batch information assembly messages sent by each organization;
the judging module, used for determining, according to the service type in a batch information assembly message, whether a corresponding cache service is found;
the cache module, used for caching the image data in the corresponding batch information assembly message locally when the corresponding cache service is found;
and the data upload module is further used for uploading the image data in the corresponding batch information assembly message to the storage database when the corresponding cache service is not found.
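Read together, claims 1 and 2 describe a policy-driven flush of the local cache. A minimal sketch of that behaviour, with hypothetical class and method names (the patent does not specify an implementation), might look like this:

```java
import java.time.LocalTime;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the cache-then-upload behaviour recited in claims 1 and 2:
// image data cached per organization is flushed to the preset database only while the
// current time lies inside the upload interval of that organization's cache policy.
public class CacheUploadScheduler {

    record CachePolicy(String organizationServiceInfo, LocalTime uploadStart, LocalTime uploadEnd) {
        boolean isInUploadWindow(LocalTime now) {
            return !now.isBefore(uploadStart) && !now.isAfter(uploadEnd);
        }
    }

    interface PresetDatabase {
        void upload(String organizationServiceInfo, List<byte[]> imageData);
    }

    private final Map<String, List<byte[]>> localCache;   // organization service info -> cached images
    private final PresetDatabase database;

    CacheUploadScheduler(Map<String, List<byte[]>> localCache, PresetDatabase database) {
        this.localCache = localCache;
        this.database = database;
    }

    // One scheduling pass: upload within the window, skip (i.e. stop uploading) outside it.
    void run(List<CachePolicy> policies, LocalTime now) {
        for (CachePolicy policy : policies) {
            if (!policy.isInUploadWindow(now)) {
                continue;   // claim 2: outside the interval, uploading stops
            }
            List<byte[]> cached = localCache.remove(policy.organizationServiceInfo());
            if (cached != null && !cached.isEmpty()) {
                database.upload(policy.organizationServiceInfo(), cached);
            }
        }
    }
}
```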
CN201811607526.0A 2018-12-27 2018-12-27 Image query, cache and retention method and system Active CN109670027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811607526.0A CN109670027B (en) 2018-12-27 2018-12-27 Image query, cache and retention method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811607526.0A CN109670027B (en) 2018-12-27 2018-12-27 Image query, cache and retention method and system

Publications (2)

Publication Number Publication Date
CN109670027A CN109670027A (en) 2019-04-23
CN109670027B true CN109670027B (en) 2021-05-11

Family

ID=66147330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811607526.0A Active CN109670027B (en) 2018-12-27 2018-12-27 Image query, cache and retention method and system

Country Status (1)

Country Link
CN (1) CN109670027B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113726903B (en) * 2021-09-03 2022-09-20 中国银行股份有限公司 Data uploading method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270161B (en) * 2011-06-09 2013-03-20 华中科技大学 Methods for storing, reading and recovering erasure code-based multistage fault-tolerant data
CN104850509B (en) * 2015-04-27 2017-12-12 交通银行股份有限公司 A kind of operating method and system of banking business data memory cache
CN106095796A (en) * 2016-05-30 2016-11-09 中国邮政储蓄银行股份有限公司 Distributed data storage method, Apparatus and system
CN108319542B (en) * 2017-01-17 2022-10-28 百度在线网络技术(北京)有限公司 Information processing method, device and system
CN108053313A (en) * 2018-01-02 2018-05-18 中国工商银行股份有限公司 Cross-border data processing method of opening an account, apparatus and system

Also Published As

Publication number Publication date
CN109670027A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
KR102026225B1 (en) Apparatus for managing data using block chain and method thereof
US8572023B2 (en) Data services framework workflow processing
US9870268B2 (en) Virtual computing instance migration
JP5976258B1 (en) Light installer
CN113254466B (en) Data processing method and device, electronic equipment and storage medium
CN110032571A (en) Business flow processing method, apparatus, storage medium and calculating equipment
CN106991035A (en) A kind of Host Supervision System based on micro services framework
US20100121828A1 (en) Resource constraint aware network file system
US11010267B2 (en) Method and system for automatic maintenance of standby databases for non-logged workloads
WO2014143904A1 (en) Method and system for integrated color storage management
JP2019506643A (en) Controlled transfer of shared content
US11627122B2 (en) Inter-system linking method and node
KR102475435B1 (en) Apparatus for managing data using block chain and method thereof
CN106874145A (en) A kind of asynchronous data backup method based on message queue
CN106933868A (en) A kind of method and data server for adjusting data fragmentation distribution
CN106469087A (en) Metadata output intent, client and meta data server
CN113094434A (en) Database synchronization method, system, device, electronic equipment and medium
US20150020167A1 (en) System and method for managing files
CN109670027B (en) Image query, cache and retention method and system
CN109241712B (en) Method and device for accessing file system
US8498622B2 (en) Data processing system with synchronization policy
US11886439B1 (en) Asynchronous change data capture for direct external transmission
US8453166B2 (en) Data services framework visibility component
CN112565064A (en) Service processing method, device, equipment and medium based on remote multimedia
US11899651B2 (en) Automatic reclamation of storage space for database systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant