CN112596710B - Front-end system - Google Patents

Front-end system Download PDF

Info

Publication number
CN112596710B
CN112596710B (application CN202011522114.4A)
Authority
CN
China
Prior art keywords
docking
application
service
data access
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011522114.4A
Other languages
Chinese (zh)
Other versions
CN112596710A (en)
Inventor
李嘉
吴晓征
徐可
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Vanadium Titanium Intelligent Technology Co ltd
Original Assignee
Shanghai Vanadium Titanium Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Vanadium Titanium Intelligent Technology Co ltd filed Critical Shanghai Vanadium Titanium Intelligent Technology Co ltd
Priority to CN202011522114.4A priority Critical patent/CN112596710B/en
Publication of CN112596710A publication Critical patent/CN112596710A/en
Application granted granted Critical
Publication of CN112596710B publication Critical patent/CN112596710B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/20Software design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiment of the invention discloses a front-end system applied to the docking of a data access system with a core data system. The system comprises a docking image, a docking service platform and a cloud native infrastructure. A docking application request is received through the docking image and the corresponding docking application is run; the docking service platform executes the corresponding application service whenever the docking application reaches a different application service node; and the cloud native infrastructure supports and runs the application source code corresponding to the docking application and the application services. The system thereby docks different data access systems with core data systems while keeping application services isolated and independently operated, which helps to improve the working efficiency of the front-end system and to reduce its development, operation and maintenance costs.

Description

Front-end system
Technical Field
The invention relates to the technical field of computers, in particular to a front-end system.
Background
Different data processing systems often have different access modes, and the docking of different data processing systems is usually achieved through a front-end system.
It is therefore desirable to improve the working efficiency of the front-end system and to reduce its development, operation and maintenance costs while ensuring normal docking of the different data processing systems.
Disclosure of Invention
In view of the above, embodiments of the invention aim to provide a front-end system that helps to improve working efficiency and to reduce development, operation and maintenance costs while achieving normal docking of different data processing systems.
An embodiment of the invention provides a front-end system applied to the docking of a data access system with a core data system, the system comprising:
a docking image, configured to dock with at least one data access system and to run a corresponding docking application in response to receiving a docking application request, wherein the docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service flow;
a docking service platform, configured to dock with at least one core data system, encapsulate at least one application service, and execute the corresponding application service in response to the docking application reaching different application service nodes; and
a cloud native infrastructure, configured to support and run the application source code corresponding to the docking application and the application services, so as to implement the docking application.
Further, the docking image includes:
a business server, configured to receive the docking application request, synchronize the product configuration available to the docking application, and forward the application service request to the docking service platform; and
a non-business server, configured to monitor the execution process of the docking application request.
Further, the non-business server includes:
a naming service layer, configured to generate a globally unique name for the docking application;
an access service layer, configured to perform flow control and full-link tracking on the docking application request before the application service request is forwarded to the docking service platform; and
a log service layer, configured to store, in batches, the logs generated by the docking application.
Further, the core data system includes a plurality of different business service subsystems;
The docking service platform comprises:
a product configuration layer, configured to provide the product configuration information available to the docking application and broadcast the product configuration information to the corresponding docking application, wherein the product configuration information includes a current channel identifier, product information and sales information; and
at least one business docking layer, each business docking layer docking with a corresponding business service subsystem and submitting application service requests to that business service subsystem.
Further, the docking service platform further includes:
a channel application management layer, configured to register, list and delist, distribute keys to, and expand the capacity of, the data access system;
a log aggregation layer, configured to store the logs generated by the data access system; and
an application compiling layer, configured to trigger compilation in response to a change of the application source code.
Further, the system further comprises:
a channel docking image, configured to dock with at least one data access system and to access the data access system into the docking image.
Further, the channel docking image includes:
a protocol adaptation layer, configured to access the data access system and convert the protocol type of the data access system into a protocol type adapted to the corresponding core data system; and
a function adaptation layer, configured to dock the data access system with the application services in the docking service platform.
Further, the channel docking image further includes:
an encryption and decryption adaptation layer, configured to encrypt and decrypt the content generated by the docking.
Further, the system further comprises:
an internet access layer, connected between the data access system and the channel docking image and configured to implement the docking of the data access system with the channel docking image.
Further, the cloud native infrastructure includes:
a container management subsystem;
an image repository;
a source code management subsystem; and
an automated compiling subsystem.
According to the technical scheme of the embodiments, at least one data access system is docked through the docking image and at least one core data system is docked through the docking service platform; a docking application request is received through the docking image and the corresponding docking application is run, so that different data access systems are docked with core data systems. Secondly, at least one application service is encapsulated by the docking service platform, and the corresponding application service is executed when the docking application reaches different application service nodes, so that application services are isolated and run independently, the resource utilization rate is improved, and the working efficiency of the front-end system is improved. Furthermore, the application source code corresponding to the docking application is supported and run on the cloud native infrastructure, so that the docking application is processed and the development, operation and maintenance costs of the front-end system can be reduced.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the operation of a front-end system of an embodiment of the present invention;
FIG. 2 is an overall frame diagram of a front-end system of an embodiment of the present invention;
FIG. 3 is an overall architecture diagram of a front end system of an embodiment of the present invention;
Fig. 4 is a flowchart of a data access system according to an embodiment of the present invention.
Detailed Description
The present invention is described below on the basis of embodiments, but it is not limited to these embodiments. In the following detailed description, certain specific details are set forth; the invention can nevertheless be fully understood by those skilled in the art without these details. Well-known methods, procedures, flows, components and circuits are not described in detail so as not to obscure the essence of the invention.
Moreover, those of ordinary skill in the art will appreciate that the drawings are provided herein for illustrative purposes and that the drawings are not necessarily drawn to scale.
Unless the context clearly requires otherwise, the words "comprise," "comprising," and the like in the description are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, it is the meaning of "including but not limited to".
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
A front-end system facilitates the docking of different data processing systems with a core data system. It is therefore desirable to improve the working efficiency of the front-end system and to reduce its development, operation and maintenance costs while ensuring normal docking of the different data processing systems with the core data system.
In this embodiment, the docking of an insurance company's core data system with different internet sales channels is taken as an example for description.
Internet sales channels are a new sales mode that relies on the internet, communication technology and digital interactive media, and they are important for reducing enterprise costs and improving competitiveness. However, because existing internet channels have a wide variety of access manners, a front-end system is required to isolate the core system when an enterprise docks with and operates internet sales channels. Meanwhile, different internet sales channels have different specific business requirements for different product types, and channel-side marketing activities (such as the Double Eleven shopping festival) can generate sudden bursts of traffic. For an insurance company to dock with, manage and maintain internet sales channels, a front-end system is therefore required to isolate the internet sales channels from the insurance core system. Based on this, the embodiment of the invention provides a front-end system that docks the data access systems of internet sales channels with the core data system of an insurance company, while helping to improve the working efficiency of the front-end system and to reduce its development, operation and maintenance costs.
FIG. 1 is a schematic diagram illustrating the operation of a front-end system according to an embodiment of the present invention. As shown in fig. 1, the front-end system 200 of this embodiment is applied to the docking of different data access systems 100 with different core data systems 300. The front-end system 200 includes a docking image 1, a docking service platform 2 and a cloud native infrastructure 3. The core data system 300 includes a plurality of different business service subsystems. It should be understood that in this embodiment the data access systems correspond to the data systems of different internet sales channels, and the core data system corresponds to the core data system of an insurance company; the description below follows this naming.
As shown in fig. 1, the docking image 1 of this embodiment docks with at least one data access system 100 and is configured to run a corresponding docking application in response to receiving a docking application request. The docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service flow. The docking service platform 2 docks with at least one core data system 300, encapsulates at least one application service, and is configured to execute the corresponding application service in response to the docking application reaching different application service nodes. The cloud native infrastructure 3 is configured to support and run the application source code corresponding to the docking application and the application services, thereby implementing the docking application. In this way, different data access systems are docked with core data systems through the docking image and the docking service platform; at least one application service is encapsulated by the docking service platform, and the corresponding application service is executed when the docking application reaches different application service nodes, so that application services are isolated and run independently, the resource utilization rate is improved, and the working efficiency of the front-end system is improved. Furthermore, the application source code corresponding to the docking application is supported and run on the cloud native infrastructure, so that the docking application is processed and the development, operation and maintenance costs of the front-end system can be reduced.
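As an illustration only (not part of the claimed system), the nesting of a docking application request and its application service requests described above could be modeled as follows; all field and service names (channel_id, underwriting, policy and so on) are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ApplicationServiceRequest:
    """One application service request; each maps to a single application service."""
    service_type: str                     # e.g. "underwriting", "policy" (illustrative names)
    payload: dict = field(default_factory=dict)


@dataclass
class DockingApplicationRequest:
    """A docking application request carries one or more application service requests."""
    channel_id: str                       # identifier of the originating data access system
    service_requests: List[ApplicationServiceRequest] = field(default_factory=list)


# Example: a request from one sales channel asking for two business service flows.
request = DockingApplicationRequest(
    channel_id="channel-001",
    service_requests=[
        ApplicationServiceRequest("underwriting", {"insured_age": 35}),
        ApplicationServiceRequest("policy", {"product_id": "P-100"}),
    ],
)
print(len(request.service_requests))      # each entry triggers its own application service
```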
Fig. 2 is an overall framework diagram of a front-end system of an embodiment of the present invention. As shown in fig. 2, to facilitate docking with the data access systems, the front-end system 200 of this embodiment further includes a channel docking image 4. The channel docking image 4 is used to dock with at least one data access system, access the data access system into the docking image 1, convert the data information corresponding to the docking application request into a type suitable for the front-end system, and forward the converted docking application request to the docking image. The channel docking image therefore ensures smooth docking of different data access systems with the core data system, which helps to improve the service performance of the front-end system and further improves the docking efficiency between data access systems and core data systems.
Further, as shown in fig. 2, the front-end system 200 of this embodiment further includes an internet access layer 5. The internet access layer 5 is connected between the data access systems and the channel docking image 4 and is used to connect the data access systems with the channel docking image.
The overall structural diagram of the front-end system in this embodiment will be described in detail with reference to the accompanying drawings.
Fig. 3 is an overall architecture diagram of a front-end system of an embodiment of the present invention. As shown in fig. 3, the docking image 1 of this embodiment includes a business server 11 and a non-business server 12. The business server 11 receives the docking application request, synchronizes the product configuration available to the docking application, and forwards the application service request to the docking service platform 2. The non-business server 12 monitors the execution of the docking application request, including whether the docking application request has begun to be processed and whether the corresponding application request has been executed to a specific application service node.
To further improve the usability of the front-end system, the non-business server 12 of this embodiment includes a naming service layer 121, an access service layer 122 and a log service layer 123. The naming service layer 121 generates a globally unique name for the docking application; the access service layer 122 performs flow control and full-link tracking on the docking application request before the application service request is forwarded to the docking service platform; and the log service layer 123 stores the logs generated by the docking application in batches.
Optionally, at runtime the naming service layer 121 generates a globally unique name for the image instance corresponding to the current application request, where the generated name includes channel identification information of the data access system from which the docking application originates, attribute information of the application services contained in the docking application, and an identifier of the current execution container. Meanwhile, because different pieces of information change with different frequencies, this embodiment uses a fixed encoding for the information types that change infrequently, according to the actual usage situation, so as to reduce the resources occupied during name generation.
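A minimal sketch of how such a globally unique name could be assembled, assuming three illustrative components (channel identifier, service attribute, container identifier) and a short fixed code table for the slowly changing channel field; the code table, separators and suffix scheme are assumptions, not the patented naming scheme.

```python
import uuid

# Hypothetical fixed codes for slowly changing channel identifiers, so the full
# channel name does not have to be embedded in every generated name.
CHANNEL_CODES = {"channel-001": "C1", "channel-002": "C2"}


def generate_instance_name(channel_id: str, service_attr: str, container_id: str) -> str:
    """Build a globally unique name from channel, service attribute and container parts."""
    channel_code = CHANNEL_CODES.get(channel_id, channel_id)   # fall back to the raw id
    # A random suffix guarantees global uniqueness even if the other parts collide.
    suffix = uuid.uuid4().hex[:8]
    return f"{channel_code}-{service_attr}-{container_id}-{suffix}"


print(generate_instance_name("channel-001", "underwriting", "ctr42"))
# e.g. C1-underwriting-ctr42-9f3a1c2e
```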
Optionally, the access service layer 122 of this embodiment performs flow control on the docking application request, including rate limiting, circuit breaking and gray-release (canary) splitting. Meanwhile, the access service layer 122 of this embodiment is further configured, upon receiving a docking application request, to execute the following logic in a preset order and configuration: in response to an encryption/decryption service existing on the designated port, initiate a call to the designated port and perform encryption and decryption; in response to a protocol conversion service existing on the designated port, initiate a call to the designated port and perform protocol conversion; and add a full-link tracking identifier (trace id) to the docking application request and route it to the business docking layers corresponding to the different application services in the docking service platform. It should be understood that, because the access service layer and the business server are in the same docking image, the above accesses do not involve cross-machine calls, which effectively reduces the performance loss caused by repeated system calls and further improves the working efficiency of the front-end system.
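The sketch below illustrates, under stated assumptions, the general shape of such an access pipeline: a simple sliding-window rate limiter stands in for the flow-control step, a trace id is attached, and the request is routed to a business docking layer. The limiter, route table and return values are invented for the example and are not the claimed implementation.

```python
import time
import uuid
from collections import deque


class SimpleRateLimiter:
    """Sliding-window limiter standing in for the flow-control (rate limiting) step."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False


def handle_docking_request(request: dict, limiter: SimpleRateLimiter, routes: dict) -> dict:
    """Apply flow control, attach a trace id, then route to a business docking layer."""
    if not limiter.allow():
        return {"status": "rejected", "reason": "rate limited"}
    request["trace_id"] = uuid.uuid4().hex             # full-link tracking identifier
    target = routes.get(request.get("service_type"))   # pick the business docking layer
    if target is None:
        return {"status": "rejected", "reason": "unknown service"}
    return {"status": "routed", "target": target, "trace_id": request["trace_id"]}


routes = {"underwriting": "business-docking-layer-1", "policy": "business-docking-layer-3"}
limiter = SimpleRateLimiter(max_requests=100, window_seconds=1.0)
print(handle_docking_request({"service_type": "policy"}, limiter, routes))
```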
In this embodiment, the log service layer 123 stores the logs generated by the docking application in batches and acts as a monitor of file changes: when new content appears in a log file, it reads the changed content of the log file and delivers that content to the docking service platform, using the name of the current container and the name of the log file as the identifier. The docking application therefore does not need to track the full log content in real time and only needs to write logs normally in the configured manner, which further improves the working efficiency of the front-end system.
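As a rough sketch of the idea (polling, paths, message shape and the bounded loop are all assumptions for illustration), a file-change monitor could read only the appended portion of a log file and ship it tagged with the container and file name:

```python
import os
import time
from typing import Callable, Tuple


def tail_new_content(log_path: str, offset: int) -> Tuple[str, int]:
    """Read only the content appended to the log file since the last known offset."""
    size = os.path.getsize(log_path)
    if size <= offset:
        return "", offset
    with open(log_path, "r", encoding="utf-8") as f:
        f.seek(offset)
        return f.read(), size


def ship_changes(container_name: str, log_path: str, deliver: Callable[[dict], None],
                 poll_seconds: float = 1.0, max_polls: int = 5) -> None:
    """Poll the file and deliver changed content tagged with container and log file name."""
    offset = 0
    for _ in range(max_polls):            # bounded here only so the sketch terminates
        changed, offset = tail_new_content(log_path, offset)
        if changed:
            # The (container name, file name) pair identifies the batch on the platform side.
            deliver({"container": container_name,
                     "file": os.path.basename(log_path),
                     "content": changed})
        time.sleep(poll_seconds)


# Usage (illustrative): ship_changes("ctr42", "/tmp/app.log", print)
```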
As shown in fig. 3, the docking service platform 2 of this embodiment includes a product configuration layer 21 and at least one business docking layer, where each business docking layer docks with a corresponding business service subsystem and submits application service requests to that business service subsystem.
Optionally, the product configuration layer 21 is configured to provide the product configuration information available to the docking application and to broadcast that information to the corresponding docking application, where the product configuration information includes a current channel identifier, product information and sales information. The current channel identifier uniquely identifies each data access system within the front-end system. The product information includes a product identification, product attribute configuration information and product rule information. The product identification uniquely identifies the current product in the core data system. The product attribute configuration information includes a tariff table, fixed parameters and the like. The product rule information includes the rule requirements of the product for different application services, such as check rules for the age, identity and uniqueness of the insured person. The sales information is the attribution sales unit information set for the product by the data access system according to its business requirements.
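Purely for illustration, the product configuration information above could be represented with structures like the following, together with one example rule check (an age check against hypothetical underwriting limits); every field name and limit is an assumption for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ProductInfo:
    product_id: str                                              # core system's product identification
    attributes: Dict[str, object] = field(default_factory=dict)  # e.g. tariff table, fixed parameters
    rules: Dict[str, dict] = field(default_factory=dict)         # per-service rule requirements


@dataclass
class ProductConfiguration:
    channel_id: str                                              # current channel identifier
    products: List[ProductInfo] = field(default_factory=list)
    sales_unit: str = ""                                         # attribution sales unit set by the channel


def check_insured_age(product: ProductInfo, age: int) -> bool:
    """Example rule check: validate the insured person's age against the product rules."""
    rule = product.rules.get("underwriting", {})
    return rule.get("min_age", 0) <= age <= rule.get("max_age", 120)


product = ProductInfo(
    product_id="P-100",
    attributes={"tariff_table": {"base_rate": 0.002}},
    rules={"underwriting": {"min_age": 18, "max_age": 65}},
)
config = ProductConfiguration(channel_id="channel-001", products=[product], sales_unit="unit-A")
print(check_insured_age(product, 35))   # True
print(check_insured_age(product, 70))   # False
```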
Optionally, the application services that may be provided in this embodiment include a policy service, an underwriting service, an insurance application service, a surrender (refund) service and an image service. The business docking layers include a first business docking layer 22, a second business docking layer 23 and a third business docking layer 24. The first business docking layer 22 docks with the insurance service subsystem in the core data system and submits the requests corresponding to the underwriting service, the insurance application service and the surrender service to the insurance service subsystem. The second business docking layer 23 docks with the image service subsystem in the core data system and submits the corresponding requests to the image service subsystem. The third business docking layer 24 docks with the policy service subsystem in the core data system and submits the corresponding requests to the policy service subsystem.
Furthermore, because different application services have different performance indexes and docking modes, the business docking layer of this embodiment is further configured to: perform peak clipping and valley filling on docking application requests from the data access systems through a queue mechanism, so as to ensure stable traffic towards the core data system; provide a consistent calling style that encapsulates the heterogeneity of the different application service flows in the core data system; and record the reply information received for application requests and store it as business flow records.
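A minimal sketch of the queue-based peak clipping idea, assuming an in-memory bounded queue and a fixed drain rate; the queue size, rate and the core-system call are placeholders, not the claimed mechanism.

```python
import queue
import threading
import time


def core_system_submit(request: dict) -> None:
    """Placeholder for the call into a business service subsystem of the core data system."""
    print("submitted to core system:", request)


def drain_worker(buffer: "queue.Queue[dict]", max_per_second: int) -> None:
    """Forward buffered requests at a bounded rate so the core system sees stable traffic."""
    interval = 1.0 / max_per_second
    while True:
        request = buffer.get()
        if request is None:              # sentinel to stop the worker
            break
        core_system_submit(request)
        buffer.task_done()
        time.sleep(interval)             # peak clipping: never exceed the agreed rate


buffer: "queue.Queue[dict]" = queue.Queue(maxsize=1000)   # valley filling: bursts wait here
worker = threading.Thread(target=drain_worker, args=(buffer, 5), daemon=True)
worker.start()

for i in range(3):                       # a small burst from the data access system
    buffer.put({"service": "underwriting", "seq": i})
buffer.join()                            # wait until the burst has been forwarded
buffer.put(None)
```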
Further, as shown in fig. 3, the docking service platform 2 of this embodiment further includes a channel application management layer 25, a log aggregation layer 26 and an application compiling layer 27. The channel application management layer 25 registers, lists and delists, distributes keys to, and expands capacity for the different data access systems. The log aggregation layer 26 stores the logs generated by the data access systems. The application compiling layer 27 triggers compilation in response to changes in the application source code.
Specifically, the channel application management layer 25 of this embodiment is further configured to interact with the cloud native infrastructure and to implement functions including: establishing a channel identifier, establishing a docking application identifier, managing the source code address information of the docking application (including the storage address of the application source code, the execution node address, labels and the like), using this source code address information for automatic compilation when the docking application is released, and allocating the required capacity resources to the docking application in standard units. This facilitates and improves the effective management of docking applications within the front-end system.
For the docking applications of the same data access system, at least two container instances are started through the channel application management layer, and the container instances are scaled out or in according to the traffic of the data access system. This achieves isolation between running containers and isolation between application services, and further improves the efficiency of the front-end system when running docking applications.
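As one hypothetical way to express the scaling rule above (the capacity figures and the cap are invented for the example; the patent only requires a floor of two instances and traffic-driven scaling):

```python
import math


def target_replicas(requests_per_second: float, capacity_per_instance: float,
                    min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Scale container instances with channel traffic, never dropping below two instances."""
    needed = math.ceil(requests_per_second / capacity_per_instance)
    return max(min_replicas, min(max_replicas, needed))


print(target_replicas(30, 50))    # 2  -> floor of two instances per docking application
print(target_replicas(500, 50))   # 10 -> scaled out during a traffic burst (e.g. Double Eleven)
print(target_replicas(5000, 50))  # 20 -> capped by the configured maximum
```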
In this embodiment, because the docking application is a stateless service, the log aggregation layer 26 receives the changed log-file content delivered by the log service layer, uniformly archives the logs generated by the docking application, and provides a log query function. This facilitates the subsequent query and tracing of the running logs of the docking application and further improves the usability of the front-end system.
Further, as shown in fig. 3, the cloud native infrastructure 3 of this embodiment includes a container management subsystem 31, an image repository 32, a source code management subsystem 33 and an automated compiling subsystem 34.
Optionally, the container management subsystem 31 of this embodiment may be an autonomously managed Docker cluster, an open-source container orchestration system (such as Kubernetes), or a commercial container cluster service (such as Alibaba Cloud Container Service). The image repository 32, which centrally stores the various image files, may be created with Docker's own functionality or may use an open-source or commercial solution. The source code management subsystem 33 may employ a distributed version control system (Git) or an open-source centralized version control system (Subversion). The automated compiling subsystem 34 may be implemented with the open-source automation server Jenkins. The cloud native infrastructure is thus built through the cooperation of these subsystems and supports and runs the docking applications, which reduces the development and usage costs of the front-end system.
Further, the channel docking image 4 of this embodiment includes a protocol adaptation layer 41 and a function adaptation layer 42, and is built with the docking image as its base image so as to complete the implementation of the specific business. The language and framework used to implement the source code and functions of the channel docking image are not limited, provided the following conventions are met: a Dockerfile for building the image exists under the source code project directory, and the agreed service is provided on the agreed port. Meanwhile, a fixed service interaction protocol is preferred when performing protocol adaptation; optionally, the service interaction protocol may be RESTful, gRPC or Dubbo.
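As a rough illustration of the adaptation idea only (not the patented implementation), the sketch below maps hypothetical channel-specific field layouts onto a fixed internal request shape that the docking service platform could then submit over its chosen interaction protocol; all field names and mappings are invented.

```python
# Hypothetical field mappings for two sales channels; real mappings would live in
# the channel docking image built from the docking base image.
FIELD_MAPS = {
    "channel-001": {"custName": "applicant_name", "prodCode": "product_id", "amt": "premium"},
    "channel-002": {"name": "applicant_name", "productId": "product_id", "premiumFen": "premium"},
}


def adapt_to_internal(channel_id: str, channel_payload: dict) -> dict:
    """Convert a channel-specific payload into the fixed internal request format."""
    mapping = FIELD_MAPS[channel_id]
    internal = {mapping[k]: v for k, v in channel_payload.items() if k in mapping}
    internal["channel_id"] = channel_id      # the platform always needs the channel identifier
    return internal


print(adapt_to_internal("channel-001",
                        {"custName": "Zhang San", "prodCode": "P-100", "amt": 1280}))
# {'applicant_name': 'Zhang San', 'product_id': 'P-100', 'premium': 1280, 'channel_id': 'channel-001'}
```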
As shown in fig. 3, the channel docking image 4 of this embodiment further includes an encryption and decryption adaptation layer 43. The content generated by the docking is encrypted and decrypted by the encryption and decryption adaptation layer 43. Specifically, during the implementation of the docking application, the content generated by the docking application is encrypted to prevent data loss or leakage and to improve data security. When the related data of the docking application needs to be accessed later, it is decrypted through the encryption and decryption adaptation layer, which makes it convenient to check, call and trace the data and improves the usage value of the data.
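The patent does not name a cipher or library; purely as one possible illustration, the third-party Python package cryptography and its Fernet recipe (AES in CBC mode with an HMAC) can encrypt content produced by a docking application and decrypt it later for tracing. The key handling below is a placeholder; in practice the key would come from the key distribution function of the channel application management layer.

```python
# Illustration only: cipher, library and key handling are assumptions, not the patented scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder: a real key would be distributed and stored securely
adapter = Fernet(key)

docking_output = b'{"policy_no": "PN-0001", "premium": 1280}'
ciphertext = adapter.encrypt(docking_output)   # stored or transmitted form
plaintext = adapter.decrypt(ciphertext)        # recovered when the data is traced later

assert plaintext == docking_output
print(ciphertext[:16], b"...")
```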
Further, the internet access layer 5 of this embodiment employs a reverse proxy server, such as Nginx or Apache HTTPD. Docking application requests from different data access systems are thus received through the internet access layer and routed to the corresponding channel docking instance, thereby implementing the different docking applications.
Fig. 4 is a flowchart of a data access system according to an embodiment of the present invention. As shown in fig. 4, the data access system of the present embodiment performs the following steps when accessing the front-end system:
In step S100, the data access system is registered.
In step S200, a docking application is registered.
In step S300, a source code warehouse is opened to perform product configuration.
Optionally, step S300 of the present embodiment includes step S310 to step S330.
In step S310, a source code repository is opened.
In step S320, product attribute configuration is performed.
In step S330, product additional function matching is performed.
In step S400, the application is tested.
In step S500, an application is published.
Optionally, the present embodiment further includes step S510 and step S520 after step S500 is performed.
In step S510, traffic management is applied.
In step S520, version management is applied.
Thus, in this embodiment, a new data access system is docked with the core data system by executing steps S100-S500. By further executing steps S510 and S520 to manage application traffic and versions, the usability of the front-end system is improved and its range of application is enlarged.
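A minimal sketch of the access flow above (steps S100-S500), with the helper functions reduced to stubs that only record what happened; every function name, argument and return value here is an assumption for illustration, and real implementations would call the channel application management layer and the cloud native infrastructure.

```python
def access_front_end_system(channel_name: str, app_name: str) -> dict:
    """Walk a new data access system through steps S100-S500 described above."""
    state = {}
    state["channel_id"] = register_data_access_system(channel_name)                 # S100
    state["app_id"] = register_docking_application(state["channel_id"], app_name)   # S200
    state["repo"] = open_source_code_repository(state["app_id"])                    # S300/S310
    configure_product_attributes(state["repo"], {"tariff_table": {}})               # S320
    configure_additional_functions(state["repo"], ["image-upload"])                 # S330
    state["test_passed"] = test_application(state["app_id"])                        # S400
    if state["test_passed"]:
        publish_application(state["app_id"])                                        # S500
    return state


# Stubs standing in for the real registration, configuration, test and publish steps.
def register_data_access_system(name): return f"channel::{name}"
def register_docking_application(channel_id, name): return f"{channel_id}::{name}"
def open_source_code_repository(app_id): return {"app": app_id, "files": {}}
def configure_product_attributes(repo, attrs): repo["files"]["product.cfg"] = attrs
def configure_additional_functions(repo, funcs): repo["files"]["functions.cfg"] = funcs
def test_application(app_id): return True
def publish_application(app_id): print("published", app_id)


print(access_front_end_system("new-sales-channel", "docking-app-1"))
```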
According to the technical scheme of the embodiments, at least one data access system is docked through the docking image and at least one core data system is docked through the docking service platform; a docking application request is received through the docking image and the corresponding docking application is run, so that different data access systems are docked with core data systems. Secondly, at least one application service is encapsulated by the docking service platform, and the corresponding application service is executed when the docking application reaches different application service nodes, so that application services are isolated and run independently, the resource utilization rate is improved, and the working efficiency of the front-end system is improved. Furthermore, the application source code corresponding to the docking application is supported and run on the cloud native infrastructure, so that the docking application is processed and the development, operation and maintenance costs of the front-end system can be reduced.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (9)

1. A front-end system for docking a data access system with a core data system, the system comprising:
a docking image, configured to dock with at least one data access system and to run a corresponding docking application in response to receiving a docking application request, wherein the docking application request comprises at least one application service request, each application service request corresponds to one application service, and each application service corresponds to a different business service flow;
a docking service platform, configured to dock with at least one core data system, encapsulate at least one application service, and execute the corresponding application service in response to the docking application reaching different application service nodes; and
a cloud native infrastructure, configured to support and run the application source code corresponding to the docking application and the application services, so as to implement the docking application;
wherein the core data system comprises a plurality of different business service subsystems, and the docking service platform comprises:
a product configuration layer, configured to provide the product configuration information available to the docking application and broadcast the product configuration information to the corresponding docking application, wherein the product configuration information includes a current channel identifier, product information and sales information; and
at least one business docking layer, each business docking layer docking with a corresponding business service subsystem and submitting application service requests to that business service subsystem.
2. The front-end system of claim 1, wherein the docking image comprises:
a business server, configured to receive the docking application request, synchronize the product configuration available to the docking application, and forward the application service request to the docking service platform; and
a non-business server, configured to monitor the execution process of the docking application request.
3. The front-end system of claim 2, wherein the non-business server comprises:
a naming service layer, configured to generate a globally unique name for the docking application;
an access service layer, configured to perform flow control and full-link tracking on the docking application request before the application service request is forwarded to the docking service platform; and
a log service layer, configured to store, in batches, the logs generated by the docking application.
4. The front-end system of claim 1, wherein the docking service platform further comprises:
a channel application management layer, configured to register, list and delist, distribute keys to, and expand the capacity of, the data access system;
a log aggregation layer, configured to store the logs generated by the data access system; and
an application compiling layer, configured to trigger compilation in response to a change of the application source code.
5. The front-end system of claim 1, wherein the system further comprises:
a channel docking image, configured to dock with at least one data access system and to access the data access system into the docking image.
6. The front-end system of claim 5, wherein the channel docking image comprises:
a protocol adaptation layer, configured to access the data access system and convert the protocol type of the data access system into a protocol type adapted to the corresponding core data system; and
a function adaptation layer, configured to dock the data access system with the application services in the docking service platform.
7. The front-end system of claim 6, wherein the channel docking image further comprises:
an encryption and decryption adaptation layer, configured to encrypt and decrypt the content generated by the docking.
8. The front-end system of claim 5, wherein the system further comprises:
an internet access layer, connected between the data access system and the channel docking image and configured to implement the docking of the data access system with the channel docking image.
9. The front-end system of claim 1, wherein the cloud native infrastructure comprises:
a container management subsystem;
an image repository;
a source code management subsystem; and
an automated compiling subsystem.
CN202011522114.4A 2020-12-21 2020-12-21 Front-end system Active CN112596710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011522114.4A CN112596710B (en) 2020-12-21 2020-12-21 Front-end system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011522114.4A CN112596710B (en) 2020-12-21 2020-12-21 Front-end system

Publications (2)

Publication Number Publication Date
CN112596710A CN112596710A (en) 2021-04-02
CN112596710B (en) 2024-05-14

Family

ID=75199904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011522114.4A Active CN112596710B (en) 2020-12-21 2020-12-21 Front-end system

Country Status (1)

Country Link
CN (1) CN112596710B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6092083A (en) * 1997-02-26 2000-07-18 Siebel Systems, Inc. Database management system which synchronizes an enterprise server and a workgroup user client using a docking agent
CN1681260A (en) * 2004-06-30 2005-10-12 中国银行股份有限公司 Processing system between enterprise and bank service abutting joint
CN101458808A (en) * 2008-12-31 2009-06-17 中国建设银行股份有限公司 Bank management system, server cluster and correlation method
CN101877158A (en) * 2010-03-23 2010-11-03 苏州德融嘉信信用管理技术有限公司 Front service platform of bank and operation processing method thereof
CN104035775A (en) * 2014-06-12 2014-09-10 华夏银行股份有限公司 Comprehensive front-end system of bank
CN106295377A (en) * 2016-08-24 2017-01-04 成都万联传感网络技术有限公司 A kind of medical treatment endowment data secure exchange agent apparatus and construction method thereof
CN109583867A (en) * 2018-11-30 2019-04-05 银联商务股份有限公司 Channel docking system and method
CN110189229A (en) * 2018-05-16 2019-08-30 杜鹏飞 Insure core business system in internet
WO2020006903A1 (en) * 2018-07-02 2020-01-09 平安科技(深圳)有限公司 Financial data interaction method, apparatus computer device and storage medium
CN110908658A (en) * 2019-11-15 2020-03-24 国网电子商务有限公司 Micro-service and micro-application system, data processing method and device
CN111563832A (en) * 2020-04-28 2020-08-21 智慧徐州建设投资发展有限公司 Cloud-based multi-citizen service fusion platform

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1639723A (en) * 2002-03-04 2005-07-13 第一数据公司 Method and system for processing credit card related transactions
US20130091557A1 (en) * 2011-10-11 2013-04-11 Wheel Innovationz, Inc. System and method for providing cloud-based cross-platform application stores for mobile computing devices
KR101807806B1 (en) * 2017-05-02 2017-12-11 나무기술 주식회사 Application containerization method on cloud platform
US11018956B2 (en) * 2019-01-18 2021-05-25 Fidelity Information Services, Llc Systems and methods for rapid booting and deploying of an enterprise system in a cloud environment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6092083A (en) * 1997-02-26 2000-07-18 Siebel Systems, Inc. Database management system which synchronizes an enterprise server and a workgroup user client using a docking agent
CN1681260A (en) * 2004-06-30 2005-10-12 中国银行股份有限公司 Processing system between enterprise and bank service abutting joint
CN101458808A (en) * 2008-12-31 2009-06-17 中国建设银行股份有限公司 Bank management system, server cluster and correlation method
CN101877158A (en) * 2010-03-23 2010-11-03 苏州德融嘉信信用管理技术有限公司 Front service platform of bank and operation processing method thereof
CN104035775A (en) * 2014-06-12 2014-09-10 华夏银行股份有限公司 Comprehensive front-end system of bank
CN106295377A (en) * 2016-08-24 2017-01-04 成都万联传感网络技术有限公司 A kind of medical treatment endowment data secure exchange agent apparatus and construction method thereof
CN110189229A (en) * 2018-05-16 2019-08-30 杜鹏飞 Insure core business system in internet
WO2020006903A1 (en) * 2018-07-02 2020-01-09 平安科技(深圳)有限公司 Financial data interaction method, apparatus computer device and storage medium
CN109583867A (en) * 2018-11-30 2019-04-05 银联商务股份有限公司 Channel docking system and method
CN110908658A (en) * 2019-11-15 2020-03-24 国网电子商务有限公司 Micro-service and micro-application system, data processing method and device
CN111563832A (en) * 2020-04-28 2020-08-21 智慧徐州建设投资发展有限公司 Cloud-based multi-citizen service fusion platform

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Application of EAI technology in the banking industry; Yu Mei; Zeng Liang; China Financial Computer (08); full text *
Practice in building an operation and maintenance support platform for the treasury information processing system; Sun Zheng; Financial Technology Time (12); full text *
Wang Yuan et al., Internet Finance, University of Electronic Science and Technology of China Press, 2020, pp. 213-217. *

Also Published As

Publication number Publication date
CN112596710A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN109976667B (en) Mirror image management method, device and system
US8914469B2 (en) Negotiating agreements within a cloud computing environment
US20170048102A1 (en) Method of and system for managing computing resources
CN109189841B (en) Multi-data source access method and system
Zimmermann Architectural decisions as reusable design assets
CN100547545C (en) The method and system that is used for the application fractionation of network edge calculating
JPH0594344A (en) Method for efficient document form conversion in data processing system
CN113992769B (en) Industrial Internet information exchange method
US9591079B2 (en) Method and apparatus for managing sessions of different websites
CN111212142A (en) Service processing method, integrated open docking platform and computer storage medium
CN110769018A (en) Message pushing method and device
US20210389976A1 (en) Techniques to facilitate a migration process to cloud storage
CN111612504A (en) Information sending method and device for task completion user and electronic equipment
WO2022083293A1 (en) Managing task flow in edge computing environment
CN112596710B (en) Front-end system
CN111666166B (en) Service providing method, device, equipment and storage medium
KR100377189B1 (en) System and method for data exchange between workflow system and applications
CN104363286A (en) Workflow template-driven CDN content distribution method and system
CN115349117B (en) Multi-level cache grid system for multi-tenant, serverless environments
CN114866416A (en) Multi-cluster unified management system and deployment method
CN113805850A (en) Artificial intelligence management system based on multiple deep learning and machine learning frameworks
CN113923257A (en) Container group instance termination and creation method, device, electronic equipment and storage medium
CN114584962A (en) Data migration method, system, server and storage medium
CN112187916A (en) Cross-system data synchronization method and device
CN113691465A (en) Data transmission method, intelligent network card, computing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant