CN112596710A - Front-end system - Google Patents
- Publication number: CN112596710A
- Application number: CN202011522114.4A
- Authority
- CN
- China
- Prior art keywords
- docking
- application
- service
- data access
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering › G06F8/20—Software design
- G06F8/00—Arrangements for software engineering › G06F8/40—Transformation of program code › G06F8/41—Compilation
- G06F9/00—Arrangements for program control › G06F9/06—Arrangements using stored programs › G06F9/44—Arrangements for executing specific programs › G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
Abstract
The embodiment of the invention discloses a front-end system applied to the docking of a data access system with a core data system, comprising a docking image, a docking service platform, and a cloud native infrastructure. The system thereby realizes the docking of different data access systems with core data systems, along with the isolated deployment and independent operation of application services, improving the working efficiency of the front-end system and reducing its research, development, operation, and maintenance costs.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a front-end system.
Background
Different data processing systems often have different access modes, and interfacing between them is typically realized through a front-end system.
It is therefore of practical significance to improve the working efficiency of the front-end system and reduce its research, development, operation, and maintenance costs while ensuring that different data processing systems dock normally.
Disclosure of Invention
In view of this, embodiments of the present invention provide a front-end system that improves working efficiency and reduces research, development, operation, and maintenance costs while realizing normal docking of different data processing systems.
An embodiment of the invention provides a front-end system applied to the docking of a data access system with a core data system, the system comprising:
a docking image for docking at least one data access system and running, in response to a received docking application request, the corresponding docking application, wherein the docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service process;
a docking service platform for docking at least one core data system and encapsulating at least one application service, executing the corresponding application service when the docking application reaches the corresponding application service node; and
a cloud native infrastructure for supporting and running the application source code corresponding to the docking application and the application services, thereby realizing the docking application.
Further, the docking image includes:
a service server for receiving the docking application request, synchronizing the product configuration available to the docking application, and forwarding the application service request to the docking service platform; and
a non-service server for monitoring the execution of the docking application request.
Further, the non-service server includes:
a naming service layer for generating a globally unique name for the docking application;
an access service layer for performing flow control and full-link tracking on the docking application request before the application service request is forwarded to the docking service platform; and
a log service layer for storing logs generated by the docking application in batches.
Further, the core data system includes a plurality of different business service subsystems, and the docking service platform comprises:
a product configuration layer for providing product configuration information available to the docking application and broadcasting it to the corresponding docking application, wherein the product configuration information comprises a current channel identifier, product information, and sales information; and
at least one business docking layer, each docking a corresponding business service subsystem and submitting the application service request to that subsystem.
Further, the docking service platform further comprises:
a channel application management layer for registering, listing and delisting, distributing keys for, and scaling the capacity of the data access system;
a log collecting layer for storing the logs generated by the data access system; and
an application compiling layer for triggering compilation in response to changes in the application source code.
Further, the system comprises:
a channel docking image for docking at least one data access system and accessing the data access system to the docking image.
Further, the channel docking image comprises:
a protocol adaptation layer for accessing the data access system and converting the protocol type of the data access system into a protocol type adapted to the corresponding core data system; and
a function adaptation layer for docking the data access system with the application services in the docking service platform.
Further, the channel docking image further comprises:
an encryption and decryption adaptation layer for encrypting and decrypting the content generated during docking.
Further, the system comprises:
an internet access layer connected between the data access system and the channel docking image for realizing the docking of the data access system with the channel docking image.
Further, the cloud native infrastructure comprises:
a container management subsystem;
- an image repository;
a source code management subsystem; and
an automated compilation subsystem.
The technical scheme of the embodiment of the invention docks at least one data access system and at least one core data system through the docking image and the docking service platform respectively, with the docking image receiving the docking application request and running the corresponding docking application, thereby realizing the docking of different data access systems with core data systems. Secondly, at least one application service is encapsulated by the docking service platform, and the corresponding application service is executed when the docking application reaches the corresponding application service node, realizing the isolated deployment and independent operation of application services, improving resource utilization, and benefiting the working efficiency of the front-end system. Moreover, the application source code corresponding to the docking application is supported and run on the cloud native infrastructure, realizing the processing of the docking application and reducing the development, operation, and maintenance costs of the front-end system.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the operation of a front-end system according to an embodiment of the present invention;
FIG. 2 is an overall block diagram of a front-end system of an embodiment of the present invention;
FIG. 3 is an overall architecture diagram of a front-end system of an embodiment of the present invention;
FIG. 4 is a flowchart of the access process of a data access system according to an embodiment of the present invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The presence of a front-end system facilitates the docking of different data processing systems with the core data system. It is therefore of practical significance to improve the working efficiency of the front-end system and reduce its research, development, operation, and maintenance costs while ensuring normal docking of different data processing systems with the core data system.
In this embodiment, the core data system of an insurance company is docked with different internet sales channels.
The internet sales channel is a new sales mode that relies on the internet, communication technology, and digital interactive media, and is significant for reducing enterprise costs and improving competitiveness. However, because existing internet channels have varied access methods, a front-end system is required to isolate the core system during the docking, operation, and maintenance between an enterprise and an internet sales channel. Meanwhile, different internet sales channels carry different specific service requirements depending on product type, and bursts of incremental traffic can arise from a channel party's promotional activities (such as the Double Eleven shopping festival). Therefore, for an insurance company to dock, manage, operate, and maintain internet sales channels, a front-end system is needed to isolate the internet sales channels from the insurance core system. Based on this, embodiments of the invention provide a front-end system that improves the working efficiency of the front-end system and reduces its research, development, operation, and maintenance costs while realizing the docking of the data access systems of internet sales channels with the core data system of the insurance company.
FIG. 1 is a schematic diagram of the operation of a front-end system according to an embodiment of the present invention. As shown in FIG. 1, the front-end system 200 of the present embodiment is applied to the docking of different data access systems 100 with different core data systems 300. The front-end system 200 includes a docking image 1, a docking service platform 2, and a cloud native infrastructure 3. The core data system 300 includes a plurality of different business service subsystems. It should be understood that, in this embodiment, the data access systems correspond to the data systems of different internet sales channels, and the core data system corresponds to the core data system of an insurance company. This nomenclature is used in the following description.
As shown in FIG. 1, the docking image 1 of the present embodiment docks at least one data access system 100 and is configured to run the corresponding docking application in response to receiving a docking application request. The docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service process. The docking service platform 2 docks at least one core data system 300 and encapsulates at least one application service, configured to execute the corresponding application service when the docking application reaches the corresponding application service node. The cloud native infrastructure 3 is configured to support and run the application source code corresponding to the docking application and the application services, thereby realizing the docking application. In this way, the docking of different data access systems with core data systems is realized through the docking image and the docking service platform; at least one application service is encapsulated by the docking service platform, and the corresponding application service is executed when the docking application reaches its service node, realizing the isolated deployment and independent operation of application services, improving resource utilization, and benefiting the working efficiency of the front-end system. Moreover, running the application source code on the cloud native infrastructure realizes the processing of the docking application and reduces the development, operation, and maintenance costs of the front-end system.
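The division of labour described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all class, field, and service names (DockingImage, DockingServicePlatform, "underwriting", and so on) are hypothetical stand-ins:

```python
# Sketch: a docking application request bundles application-service requests;
# the docking image runs the application, and the docking service platform
# executes each encapsulated service. Names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AppServiceRequest:
    service_name: str          # e.g. "underwriting", "policy"
    payload: dict

@dataclass
class DockingApplicationRequest:
    channel_id: str            # identifies the originating data access system
    service_requests: list = field(default_factory=list)

class DockingServicePlatform:
    """Encapsulates application services; each maps to one business flow."""
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def execute(self, request: AppServiceRequest):
        return self._services[request.service_name](request.payload)

class DockingImage:
    """Runs the docking application: forwards each service request in order."""
    def __init__(self, platform: DockingServicePlatform):
        self.platform = platform

    def run(self, app_request: DockingApplicationRequest):
        return [self.platform.execute(r) for r in app_request.service_requests]

platform = DockingServicePlatform()
platform.register("underwriting", lambda p: {"accepted": p["amount"] <= 100000})
image = DockingImage(platform)
result = image.run(DockingApplicationRequest(
    channel_id="CH001",
    service_requests=[AppServiceRequest("underwriting", {"amount": 5000})],
))
```

Each registered handler corresponds to one isolated application service, which is what lets services be deployed and scaled independently.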
FIG. 2 is an overall block diagram of the front-end system of an embodiment of the present invention. As shown in FIG. 2, to facilitate docking with the data access system, the front-end system 200 of the present embodiment further includes a channel docking image 4. The channel docking image 4 docks at least one data access system, accesses the data access system to the docking image 1, converts the data information corresponding to the docking application request into a type adapted to the front-end system, and forwards the converted docking application request to the docking image. The channel docking image thus keeps the docking of different data access systems with the core data system smooth, improving the performance of the front-end system and the efficiency of docking the data access system with the core data system.
Further, as shown in FIG. 2, the front-end system 200 of the present embodiment also includes an internet access layer 5. The internet access layer 5 is connected between the data access system and the channel docking image 4 and realizes the connection between them.
The overall architecture of the front-end system of the present embodiment will be described in detail with reference to the accompanying drawings.
FIG. 3 is an overall architecture diagram of a front-end system of an embodiment of the present invention. As shown in FIG. 3, the docking image 1 of the present embodiment includes a service server 11 and a non-service server 12. The service server 11 receives the docking application request, synchronizes the product configuration available to the docking application, and forwards the application service request to the docking service platform 2. The non-service server 12 monitors the execution of the docking application request, including whether processing of the request has started and which application service node a given application request has reached.
To further improve the performance of the front-end system, the non-service server 12 of this embodiment includes a naming service layer 121, an access service layer 122, and a log service layer 123. The naming service layer 121 generates a globally unique name for the docking application. The access service layer 122 performs flow control and full-link tracking on the application service request before forwarding it to the docking service platform. The log service layer 123 saves logs generated by the docking application in batches.
Optionally, the naming service layer 121 generates, at run time, a globally unique name for the image corresponding to the current application request. The generated name includes the channel identifier of the data access system from which the docking application originates, attribute information of the application services included in the docking application, and a location identifier of the current execution container. Because different pieces of information change at different frequencies, this embodiment uses fixed codes for information types that change infrequently, reducing the resources consumed during name generation.
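The naming scheme above can be sketched as follows; the code tables and separator are hypothetical, illustrating only the idea of fixed codes for low-change-frequency fields combined with a variable container identifier:

```python
# Sketch of the naming-service idea: channel identifier + service attribute
# are rarely-changing fields encoded with short fixed codes (hypothetical
# tables below); the container location identifier is the variable part.
CHANNEL_CODES = {"internet-mall": "IM", "partner-api": "PA"}   # rarely change
SERVICE_CODES = {"underwriting": "UW", "policy": "PL"}         # rarely change

def global_name(channel: str, service: str, container_id: str) -> str:
    """Globally unique name: fixed codes plus the container identifier."""
    return f"{CHANNEL_CODES[channel]}-{SERVICE_CODES[service]}-{container_id}"

name = global_name("internet-mall", "underwriting", "c7f3a1")
```

Looking up a short fixed code is cheaper than re-encoding the full channel and service descriptions on every request, which is the resource saving the embodiment describes.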
Optionally, the access service layer 122 of the present embodiment performs flow control on the docking application request, including rate limiting, circuit breaking, and gray-release traffic splitting. Meanwhile, when receiving a docking application request, the access service layer 122 executes the following logic in a preset order and configuration: if an encryption/decryption service exists on the designated port, it calls that port to encrypt or decrypt; if a protocol conversion service exists on the designated port, it calls that port to perform protocol conversion; and it attaches a full-link trace identifier (traceid) to the docking application request and routes it to the service docking layer corresponding to the given application service in the docking service platform. It should be understood that because the access service layer and the service server reside in the same docking image, none of the above involves cross-machine access, which effectively reduces the performance loss of repeated system calls and further improves the working efficiency of the front-end system.
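The ordered access-layer logic above can be sketched as a small pipeline. The step names and flags are hypothetical; the real layer would call the designated ports rather than append strings:

```python
# Sketch of the access-layer pipeline: optional decrypt and protocol-convert
# steps run in a preset order, then a full-link traceid is attached and the
# request is routed to the matching service docking layer.
import uuid

def access_layer(request: dict, needs_decrypt=False, needs_convert=False) -> dict:
    steps = []
    if needs_decrypt:
        steps.append("decrypt")           # would call the designated crypto port
    if needs_convert:
        steps.append("protocol-convert")  # would call the designated conversion port
    request = dict(request)
    request["traceid"] = uuid.uuid4().hex      # full-link tracking identifier
    request["pipeline"] = steps + ["route-to-service-docking-layer"]
    return request

out = access_layer({"service": "policy"}, needs_convert=True)
```

Because all steps run inside one docking image, each "call" is local, which matches the embodiment's point about avoiding cross-machine access.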
In this embodiment, the log service layer 123 stores the logs generated by the docking application in batches by acting as a file-change monitor: when new content appears in a log file, the changed content is read and delivered to the docking service platform, identified by the current container name and the log file name. The docking application therefore does not need to track the full log content in real time; it only needs to log normally according to its logging configuration, further improving the working efficiency of the front-end system.
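A hedged sketch of this file-change monitoring, with the watching simplified to offset polling (a real deployment might use inotify or a sidecar agent); the container name is a hypothetical example:

```python
# Sketch: watch a log file for appended content and deliver only the new
# bytes, tagged with "<container name>:<log file name>" as the identifier.
import os
import tempfile

class LogWatcher:
    def __init__(self, path: str, container: str):
        self.path, self.container, self.offset = path, container, 0

    def poll(self):
        """Return (identifier, new_content) if the file grew, else None."""
        size = os.path.getsize(self.path)
        if size <= self.offset:
            return None
        with open(self.path, "r") as f:
            f.seek(self.offset)
            new = f.read()
        self.offset = size
        return (f"{self.container}:{os.path.basename(self.path)}", new)

# usage: the application just writes its log file normally
tmp = tempfile.NamedTemporaryFile("w", suffix=".log", delete=False)
tmp.write("line1\n"); tmp.flush()
watcher = LogWatcher(tmp.name, "chan-app-1")
first = watcher.poll()                 # picks up the existing content
tmp.write("line2\n"); tmp.flush()
second = watcher.poll()                # picks up only the appended content
tmp.close()
os.unlink(tmp.name)
```

The application never interacts with the watcher, matching the embodiment's point that it only needs to log normally.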
As shown in fig. 3, the docking service platform 2 of this embodiment includes a product configuration layer 21 and at least one service docking layer, where each service docking layer respectively docks a corresponding service subsystem and submits an application service request to the corresponding service subsystem.
Optionally, the product configuration layer 21 is configured to provide the product configuration information available to the docking application and broadcast it to the corresponding docking application; the product configuration information includes the current channel identifier, product information, and sales information. The current channel identifier uniquely identifies each data access system within the front-end system. The product information includes the product identifier, product attribute configuration information, and product rule information. The product identifier is the core data system's unique identifier for the current product. The product attribute configuration information includes rate tables, fixed parameters, and the like. The product rule information includes the rule requirements of the product's application services, such as verification rules for the insured person's identity and uniqueness. The sales information is the sales-unit information that the data access system sets for the product according to its business needs.
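The configuration payload and its broadcast can be sketched as follows; all field names and the callback-based broadcast are illustrative assumptions, not the patented wire format:

```python
# Sketch: the product-configuration payload (channel identifier, product
# information, sales information) broadcast to registered docking apps.
from dataclasses import dataclass, field

@dataclass
class ProductConfig:
    channel_id: str                                  # unique per data access system
    product_id: str                                  # core system's product identifier
    attributes: dict = field(default_factory=dict)   # rate table, fixed parameters
    rules: dict = field(default_factory=dict)        # e.g. insured-person checks
    sales_unit: str = ""                             # channel's sales-unit setting

class ProductConfigLayer:
    def __init__(self):
        self._subscribers = []           # registered docking applications

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def broadcast(self, config: ProductConfig):
        for cb in self._subscribers:
            cb(config)

layer = ProductConfigLayer()
received = []
layer.subscribe(received.append)
layer.broadcast(ProductConfig("CH001", "P-42", attributes={"rate": 0.015}))
```

Broadcasting pushes configuration changes to every docking application at once, so each channel does not have to poll the platform.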
Optionally, the application services provided by this embodiment may include a policy service, an underwriting service, an insurance application service, a surrender service, and an image service. The service docking layers include a first service docking layer 22, a second service docking layer 23, and a third service docking layer 24. The first service docking layer 22 docks the insurance business subsystem in the core data system and submits the requests corresponding to the underwriting, insurance application, and surrender services to it. The second service docking layer 23 docks the image service subsystem in the core data system and submits the corresponding requests. The third service docking layer 24 docks the policy service subsystem in the core data system and submits the corresponding requests.
Furthermore, because different application services have different performance indicators and docking manners, the service docking layer of this embodiment is also configured to: smooth peaks in docking application requests from the data access system through a queue mechanism (peak clipping and valley filling), ensuring stable traffic to the core data system; provide a consistent calling style that encapsulates the heterogeneity of the different application service flows in the core data system; and record the response information received for each application request, storing it as a business flow record.
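The queue-based peak clipping can be sketched as follows. The drain rate and handler are hypothetical parameters; a production version would drain on a timer and persist the queue:

```python
# Sketch of peak clipping: bursts from the channel are buffered in a queue,
# and the core system is called at a bounded rate by draining a fixed batch
# per tick, keeping traffic to the core data system stable.
from collections import deque

class PeakClipper:
    def __init__(self, drain_per_tick: int):
        self.queue = deque()
        self.drain_per_tick = drain_per_tick

    def enqueue(self, request):
        self.queue.append(request)

    def tick(self, core_handler):
        """Forward at most drain_per_tick buffered requests to the core."""
        sent = []
        for _ in range(min(self.drain_per_tick, len(self.queue))):
            sent.append(core_handler(self.queue.popleft()))
        return sent

clipper = PeakClipper(drain_per_tick=2)
for i in range(5):                        # a burst of five requests arrives
    clipper.enqueue(i)
first_tick = clipper.tick(lambda r: r * 10)   # only two reach the core per tick
```

The remaining requests stay buffered and are forwarded on later ticks, filling the "valleys" between bursts.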
Further, as shown in FIG. 3, the docking service platform 2 of the present embodiment also includes a channel application management layer 25, a log aggregation layer 26, and an application compiling layer 27. The channel application management layer 25 performs registration, listing and delisting, key distribution, and capacity scaling for the different data access systems. The log aggregation layer 26 stores the logs generated by the data access systems. The application compiling layer 27 triggers compilation in response to changes in the application source code.
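The compile-on-change behaviour of the application compiling layer can be sketched with a content hash; the build invocation is a placeholder, since the real layer would hand off to the automated compilation subsystem:

```python
# Sketch: trigger a build only when the source content actually changed,
# detected here by comparing SHA-256 hashes of the source bytes.
import hashlib

class CompileTrigger:
    def __init__(self):
        self._last_hash = None
        self.builds = 0

    def on_source(self, source: bytes) -> bool:
        """Return True (and count a build) only on a genuine change."""
        h = hashlib.sha256(source).hexdigest()
        if h == self._last_hash:
            return False
        self._last_hash = h
        self.builds += 1          # stand-in for invoking the build pipeline
        return True

trigger = CompileTrigger()
a = trigger.on_source(b"source v1")
b = trigger.on_source(b"source v1")   # unchanged: no build triggered
c = trigger.on_source(b"source v2")
```

In practice the change signal would more likely come from a source-repository webhook than from re-hashing, but the idempotence property is the same.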
Specifically, the channel application management layer 25 of the present embodiment also interacts with the cloud native infrastructure to implement the following functions: establishing a channel identifier; establishing a docking application identifier; managing the source code address information of the docking application (including the storage address, execution node address, labels, and the like of the docking application's source code), which is used for automatic compilation when the docking application is released; and allocating the required capacity resources to the docking application in standard units. This facilitates effective management of docking applications in the front-end system.
For docking applications of the same data access system, at least two container instances are started through the channel application management layer, and the instances are scaled out or in according to the traffic of that data access system. This isolates the operation of different containers and the application services they run, further improving the efficiency with which the front-end system completes docking applications.
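The scaling rule above can be sketched as a replica calculation; the per-instance capacity threshold is a hypothetical parameter, and a real deployment would delegate this to the container management subsystem:

```python
# Sketch: keep at least two container instances per channel and scale the
# count with observed traffic (requests per second / per-instance capacity).
import math

def desired_replicas(requests_per_sec: float,
                     per_instance_capacity: float = 50.0,
                     minimum: int = 2) -> int:
    return max(minimum, math.ceil(requests_per_sec / per_instance_capacity))

low = desired_replicas(10)     # a quiet channel still gets two instances
high = desired_replicas(480)   # burst traffic scales out
```

The floor of two instances preserves isolation and availability even for idle channels, which matches the embodiment's requirement of starting at least two containers.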
In this embodiment, because all docking applications are stateless services, the log aggregation layer 26 receives the changed log content delivered by the log service layer, archives the logs generated by the docking applications uniformly, and provides a log query function. This makes it convenient to query and trace the running logs of docking applications later, further improving the performance of the front-end system.
Further, as shown in fig. 3, the cloud native infrastructure 3 of the present embodiment includes a container management subsystem 31, a mirror repository 32, a source code management subsystem 33, and an automatic compilation subsystem 34.
Optionally, the container management subsystem 31 of the present embodiment is used to autonomously manage an operable Docker cluster; an open-source container orchestration system (e.g., Kubernetes) or a commercial container cluster service (e.g., Alibaba Cloud Container Service) may also be used. The image repository 32 stores multiple image files centrally and may be built with Docker's own facilities or with an open-source or commercial solution. The source code management subsystem 33 may employ a distributed version control system such as Git, or Subversion. The automated compilation subsystem 34 may be implemented with the open-source project Jenkins. Building the cloud native infrastructure through the cooperation of these subsystems to support and run the docking applications reduces the development and usage costs of the front-end system.
Further, the channel docking image 4 of the present embodiment includes a protocol adaptation layer 41 and a function adaptation layer 42, and is compiled using the docking image as a base to complete the implementation related to a specific service. The language and framework used to implement the channel docking image are not restricted, but must satisfy the following conventions: a Dockerfile for building the image exists under the source code project directory, and the agreed service is provided on the agreed port. Meanwhile, a fixed service interaction protocol should be chosen for protocol adaptation wherever possible; optionally, the service interaction protocol may be RESTful, gRPC, or Dubbo.
As shown in FIG. 3, the channel docking image 4 of the present embodiment further includes an encryption and decryption adaptation layer 43, which encrypts and decrypts the content generated during docking. Specifically, while a docking application runs, the content it generates is encrypted, preventing data loss or leakage and improving data security. When related docking data needs to be retrieved later, the encryption and decryption adaptation layer decrypts it, making the data convenient to inspect, call, and trace, and increasing its usable value.
Further, the internet access layer 5 of this embodiment employs a reverse proxy server such as Nginx or Apache HTTPD. Docking application requests from different data access systems are thereby accepted by the internet access layer and connected to the corresponding channel docking instance, realizing the different docking applications.
FIG. 4 is a flowchart of the access process of a data access system according to an embodiment of the present invention. As shown in FIG. 4, the data access system of this embodiment performs the following steps when accessing the front-end system:
at step S100, the data access system is registered.
In step S200, the docking application is registered.
In step S300, a source code repository is opened to perform product configuration.
Optionally, step S300 of the present embodiment includes steps S310 to S330.
In step S310, the source code repository is opened.
In step S320, product attribute configuration is performed.
In step S330, product additional function coordination is performed.
In step S400, the application is tested.
In step S500, the application is published.
Optionally, the present embodiment further includes step S510 and step S520 after step S500 is executed.
In step S510, application traffic management is performed.
In step S520, application version management is performed.
Thus, in the present embodiment, the docking of a new data access system with the core data system is realized by executing steps S100 to S500. Executing step S510 and step S520 additionally manages application traffic and versions, further improving the service performance of the front-end system and expanding its range of application.
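The access flow of steps S100 to S520 can be sketched as a simple onboarding pipeline. The class and function names below are hypothetical, and the step labels paraphrase the flowchart of fig. 4.

```python
from dataclasses import dataclass, field

@dataclass
class DockingOnboarding:
    """Tracks one data access system through steps S100 to S520 (names hypothetical)."""
    system: str
    steps: list = field(default_factory=list)

    def run(self, step: str, label: str):
        # In a real front-end system each step would call the docking
        # service platform; here we only record the step sequence.
        self.steps.append((step, label))

def onboard(system: str) -> DockingOnboarding:
    o = DockingOnboarding(system)
    o.run("S100", "register data access system")
    o.run("S200", "register docking application")
    o.run("S310", "open source code repository")        # S300: product configuration
    o.run("S320", "configure product attributes")
    o.run("S330", "coordinate additional product functions")
    o.run("S400", "test application")
    o.run("S500", "publish application")
    # Optional post-publication management (S510/S520):
    o.run("S510", "manage application traffic")
    o.run("S520", "manage application versions")
    return o
```

Running `onboard("demo-system")` yields the nine-step sequence S100 through S520 in flowchart order.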
The technical solution of the embodiment of the present invention docks at least one data access system with at least one core data system through the docking image, which receives docking application requests and runs the corresponding docking applications, thereby realizing the docking of different data access systems with core data systems. Secondly, at least one application service is encapsulated by the docking service platform, and when a docking application executes to different application service nodes, the corresponding application service is executed, so that application services are isolated and run independently, which improves resource utilization and the working efficiency of the front-end system. Moreover, the application source code corresponding to the docking application is supported and run on the cloud-native infrastructure, realizing the processing of the docking application and reducing the development, operation and maintenance cost of the front-end system.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A front-end system for interfacing a data access system with a core data system, the system comprising:
a docking image, configured to dock at least one data access system and to run a corresponding docking application in response to a received docking application request, wherein the docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service process;
a docking service platform, configured to dock at least one core data system, encapsulate at least one application service, and execute the corresponding application service in response to the docking application executing to different application service nodes; and
a cloud-native infrastructure, configured to support and run application source code corresponding to the docking application and the application service, so as to realize the docking application.
2. The front-end system of claim 1, wherein the docking image comprises:
a service server, configured to receive the docking application request, synchronize the available product configuration of the docking application, and forward the application service request to the docking service platform; and
a non-service server, configured to monitor the execution process of the docking application request.
3. The front-end system of claim 2, wherein the non-service server comprises:
a naming service layer, configured to generate a globally unique name for the docking application;
an access service layer, configured to perform flow control and full-link tracing on the docking application request before the application service request is forwarded to the docking service platform; and
a log service layer, configured to store logs generated by the docking application in batches.
4. The front-end system of claim 1, wherein the core data system comprises a plurality of different business service subsystems;
the docking service platform comprises:
a product configuration layer, configured to provide product configuration information available to the docking application and broadcast the product configuration information to the corresponding docking application, wherein the product configuration information comprises a current channel identifier, product information and sales information; and
a plurality of business docking layers, each docked with a corresponding business service subsystem and submitting the application service request to the corresponding business service subsystem.
5. The front-end system of claim 4, wherein the docking service platform further comprises:
a channel application management layer, configured to perform registration, listing and delisting, key distribution, and capacity reduction and expansion for the data access system;
a log collecting layer, configured to store logs generated by the data access system; and
an application compiling layer, configured to trigger compilation in response to a change of the application source code.
6. The front-end system of claim 1, further comprising:
a channel docking image, configured to dock at least one data access system and access the data access system to the docking image.
7. The front-end system of claim 6, wherein the channel docking image comprises:
a protocol adaptation layer, configured to access the data access system and convert the protocol type of the data access system into a protocol type adapted to the corresponding core data system; and
a function adaptation layer, configured to dock the data access system with the application services in the docking service platform.
8. The front-end system of claim 7, wherein the channel docking image further comprises:
an encryption and decryption adaptation layer, configured to encrypt and decrypt content generated by the docking.
9. The front-end system of claim 6, further comprising:
an internet access layer, connected between the data access system and the channel docking image, and configured to dock the data access system with the channel docking image.
10. The front-end system of claim 1, wherein the cloud-native infrastructure comprises:
a container management subsystem;
an image repository;
a source code management subsystem; and
an automated compilation subsystem.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011522114.4A CN112596710B (en) | 2020-12-21 | 2020-12-21 | Front-end system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112596710A true CN112596710A (en) | 2021-04-02 |
CN112596710B CN112596710B (en) | 2024-05-14 |
Family
ID=75199904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011522114.4A Active CN112596710B (en) | 2020-12-21 | 2020-12-21 | Front-end system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112596710B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6092083A (en) * | 1997-02-26 | 2000-07-18 | Siebel Systems, Inc. | Database management system which synchronizes an enterprise server and a workgroup user client using a docking agent |
US20040006537A1 (en) * | 2002-03-04 | 2004-01-08 | First Data Corporation | Method and system for processing credit card related transactions |
CN1681260A (en) * | 2004-06-30 | 2005-10-12 | 中国银行股份有限公司 | Processing system between enterprise and bank service abutting joint |
CN101458808A (en) * | 2008-12-31 | 2009-06-17 | 中国建设银行股份有限公司 | Bank management system, server cluster and correlation method |
CN101877158A (en) * | 2010-03-23 | 2010-11-03 | 苏州德融嘉信信用管理技术有限公司 | Front service platform of bank and operation processing method thereof |
US20130091557A1 (en) * | 2011-10-11 | 2013-04-11 | Wheel Innovationz, Inc. | System and method for providing cloud-based cross-platform application stores for mobile computing devices |
CN104035775A (en) * | 2014-06-12 | 2014-09-10 | 华夏银行股份有限公司 | Comprehensive front-end system of bank |
CN106295377A (en) * | 2016-08-24 | 2017-01-04 | 成都万联传感网络技术有限公司 | A kind of medical treatment endowment data secure exchange agent apparatus and construction method thereof |
CN109583867A (en) * | 2018-11-30 | 2019-04-05 | 银联商务股份有限公司 | Channel docking system and method |
CN110189229A (en) * | 2018-05-16 | 2019-08-30 | 杜鹏飞 | Insure core business system in internet |
WO2020006903A1 (en) * | 2018-07-02 | 2020-01-09 | 平安科技(深圳)有限公司 | Financial data interaction method, apparatus computer device and storage medium |
CN110908658A (en) * | 2019-11-15 | 2020-03-24 | 国网电子商务有限公司 | Micro-service and micro-application system, data processing method and device |
US20200236009A1 (en) * | 2019-01-18 | 2020-07-23 | Fidelity Information Services, LLC | Systems and methods for rapid booting and deploying of an enterprise system in a cloud environment |
CN111563832A (en) * | 2020-04-28 | 2020-08-21 | 智慧徐州建设投资发展有限公司 | Cloud-based multi-citizen service fusion platform |
US20200379794A1 (en) * | 2017-05-02 | 2020-12-03 | Namu Tech Co., Ltd. | Method for containerizing application on cloud platform |
Non-Patent Citations (2)
Title |
---|
YU MEI; ZENG LIANG: "Application Examples of EAI Technology in the Banking Industry", China Financial Computer, no. 08, pages 213 - 217 *
SUN ZHENG: "Practice in Building an Operation and Maintenance Support Platform for the Treasury Information Processing System", Financial Technology Time, no. 12 *
Also Published As
Publication number | Publication date |
---|---|
CN112596710B (en) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240204978A1 (en) | Interface for digital operator platform including response caching | |
US10116507B2 (en) | Method of and system for managing computing resources | |
US10552448B2 (en) | Systems and methods for event driven object management and distribution among multiple client applications | |
CN107508795B (en) | Cross-container cluster access processing device and method | |
CN110083650B (en) | Metadata self-discovery-based automatic generation method for data query interface | |
US20110246526A1 (en) | Service level agreement based storage access | |
CN104838620A (en) | Event management in telecommunications networks | |
CN100385973C (en) | Business information processing system and method | |
CN113839977A (en) | Message pushing method and device, computer equipment and storage medium | |
US20140214956A1 (en) | Method and apparatus for managing sessions of different websites | |
CN112511591A (en) | Method, device, equipment and medium for realizing hospital interface data interaction | |
KR20210043865A (en) | NGSI-LD API Wrapping Method | |
CN113301079B (en) | Data acquisition method, system, computing device and storage medium | |
CN111698675A (en) | Data processing method, device and computer readable storage medium | |
CN114020444B (en) | Calling system and method for resource service application in enterprise digital middle station | |
CN111740945A (en) | Data processing method and device | |
CN114448686A (en) | Cross-network communication device and method based on micro-service | |
CN112596710B (en) | Front-end system | |
CN111371621A (en) | Data exchange method and device based on hybrid cloud and computer readable medium | |
CN111008209A (en) | Data account checking method, device and system, storage medium and electronic device | |
CN114866416A (en) | Multi-cluster unified management system and deployment method | |
CN104363286A (en) | Workflow template-driven CDN content distribution method and system | |
CN106559454B (en) | Resource access method, device and system | |
CN112187916A (en) | Cross-system data synchronization method and device | |
CN113741912A (en) | Model management system, method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||