WO2021215864A1 - API gateway accelerator system and method - Google Patents


Info

Publication number
WO2021215864A1
Authority
WO
WIPO (PCT)
Prior art keywords
api
service
cache
module
server
Prior art date
Application number
PCT/KR2021/005138
Other languages
French (fr)
Korean (ko)
Inventor
송영관
Original Assignee
주식회사 모비젠
Application filed by 주식회사 모비젠
Publication of WO2021215864A1 publication Critical patent/WO2021215864A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/66: Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22: Parsing or analysis of headers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/541: Interprogram communication via adapters, e.g. between incompatible applications

Definitions

  • The present invention relates to an API gateway accelerator system and method, and more particularly, to an API gateway accelerator system and method for providing a service that utilizes web caching technology under an API gateway server environment that unifies the endpoints of all API servers.
  • API gateway technology that provides access control, service routing, traffic control, and the like is applied to today's widespread API service providing systems, so that a number of API endpoints are unified into a single service.
  • As a result, when there are many external requests, the response speed of an API service with a complex processing structure slows down and the load on the back-end service rises, leaving no choice but second-best measures such as infrastructure expansion or traffic restriction.
  • the technical problem to be solved by the present invention is to provide a system and method for improving response speed and reducing service load for an API service having a complex processing structure in an API gateway environment.
  • A system according to an embodiment of the present invention for solving the above technical problem is an API gateway accelerator system for handling the processing procedures in an API management portal service to configure a service infrastructure of an API server and a cache server, and includes: a registration module for registering API specifications while checking whether a cache service is available; a configuration module for configuring environment information for running the cache server; a distribution module for distributing execution information for executing a service based on the environment information; and a statistical analysis module for processing a response message for an API request received by the API gateway, checking whether it was served from the cache.
  • a method according to another embodiment of the present invention for solving the technical problem may be an API gateway web caching service providing method implemented by the API gateway accelerator system.
  • FIG. 1 is a conceptual diagram of a system to which an API gateway accelerator system and method according to an embodiment of the present invention is applied.
  • FIG. 2 is a diagram illustrating a flow chart of a service in terms of an API user and an API provider according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a flow chart of a service in terms of an API user and an API provider according to another embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a service configuration in terms of API users and API providers according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating an API gateway, a cache server, and an API server configuration module according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating the overall operation of the API gateway accelerator system and method according to an embodiment of the present invention.
  • A system according to an embodiment of the present invention for solving the above technical problem is an API gateway accelerator system for handling the processing procedures in an API management portal service to configure a service infrastructure of an API server and a cache server, and includes: a registration module for registering API specifications while checking whether a cache service is available; a configuration module for configuring environment information for running the cache server; a distribution module for distributing execution information for executing a service based on the environment information; and a statistical analysis module for processing a response message for an API request received by the API gateway, checking whether it was served from the cache.
  • In the system, the registration module may selectively perform configuration of an API service infrastructure having a complex processing structure or a general API service infrastructure.
  • In the system, the configuration module is configured automatically based on the registration information that results from the registration module, without complex configuration input for the API cache infrastructure, and the distribution module distributes the execution information based on that registration information.
  • In the system, the statistical analysis module may perform the cache statistics processing for the API cache service executed on the API cache infrastructure configured by the configuration module and the distribution module.
  • a method according to another embodiment of the present invention for solving the technical problem may be an API gateway web caching service providing method implemented by the API gateway accelerator system.
  • Identification symbols (e.g., A, B, C, etc.) for the steps are used for convenience of description and do not define the order of the steps; unless a specific order is clearly stated in context, the steps may occur in an order different from the one specified. That is, the steps may occur in the specified order, may be performed substantially simultaneously, or may be performed in the reverse order.
  • FIG. 1 is a conceptual diagram of a system to which an API gateway accelerator system and method according to an embodiment of the present invention is applied.
  • Referring to FIG. 1, the API gateway accelerator system 10 includes an API gateway 100, a cache server 111, an API server 113, and an API management portal 120.
  • An API configuration with a complex processing procedure (hereinafter '(A)') can be configured by pairing an API server with a cache server, while an API configuration without a complex process (hereinafter '(B)') can be configured with an API server alone.
  • The server group formed by combining (A) and (B) constitutes the API gateway back-end service.
  • The API gateway 100 represents all API servers 115 and provides API users with a unified API endpoint (a URI path exposed to the outside), while the cache server 200 stores and manages the response messages of API servers that have complex processing so as to minimize the response time of the API gateway.
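As a rough, non-authoritative sketch of the unified-endpoint idea, the gateway can be modeled as a routing table from externally exposed URI paths to internal back-ends. All paths and host names below are invented for illustration and do not appear in the patent.

```python
# Sketch of a unified API endpoint: the gateway exposes a single URI
# namespace and maps each exposed path to an internal back-end.
# Paths and host names are hypothetical.

ROUTES = {
    "/api/v1/report": "http://cache-server/report",  # configuration (A): cache server in front
    "/api/v1/ping": "http://api-server/ping",        # configuration (B): API server only
}

def resolve_backend(path: str):
    """Return the internal back-end URL for an exposed URI path, or None."""
    return ROUTES.get(path)
```

An API user only ever sees the `/api/v1/...` paths; whether a cache server sits in front of a given API server is invisible from the outside.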
  • The API management portal 120 easily and effectively supports server creation, deletion, monitoring, and the like, for configuring and managing servers for the combination of (A) and (B).
  • The API management portal can support infrastructure such as OpenStack, VMware, AWS (Amazon Web Services), and GCP (Google Cloud Platform), as well as deployment models such as public cloud and hybrid cloud, so that the API server configuration can be provided according to the service type desired by the user.
  • FIG. 2 is a diagram illustrating a flow chart of a service in terms of an API user and an API provider according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing a flow chart of a service in terms of an API user and an API provider according to another embodiment of the present invention.
  • The API user side may include an API request reception and verification step (S210), an API request delivery step (S230), an API analysis step (S250), and an API response message return step (S270).
  • The API request reception and verification step (S210) provides a message verification function that combines the URI information and HTTP header information of the API request message received by the API gateway 100, and passes any message that fails verification to an error step.
  • The API request delivery step (S230) provides a service routing function that can deliver the API request message either to an API configuration registered by the API provider that has a complex processing procedure (A) or to one that does not (B).
  • The API analysis step (S250) analyzes the HTTP header information of the API response message received by the API gateway 100 and updates the cache statistics information of the corresponding API service according to whether it was served from the cache, so that it can later be used as cache usage statistics in the API management portal 120.
  • The API response message return step (S270) directly delivers the processing result of the steps from API request reception and verification through API analysis to the API user, in accordance with the response message standard.
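The analysis step can be sketched as reading a cache-status header from the response message and keeping per-service counters for the portal statistics. The header name `X-Cache-Status` is an assumption for illustration; the patent only says that HTTP header information carries the hit/miss/expire status.

```python
from collections import defaultdict

# (service, status) -> count; later shown as cache usage statistics.
cache_stats = defaultdict(int)

def analyze_response(service: str, headers: dict) -> str:
    """Sketch of the API analysis step (S250): read the cache status
    (hit / miss / expire) from a response header and update statistics.
    The header name is hypothetical."""
    status = headers.get("X-Cache-Status", "miss").lower()
    cache_stats[(service, status)] += 1
    return status
```

The gateway would run this on every response before returning it to the API user, so the portal's statistics stay current without any extra round trip.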
  • Through the API management portal 500 service, the API provider side may include an API specification registration step (S310), an API configuration step (S330), and a server distribution step (S350).
  • The API specification registration step (S310) receives essential inputs for the API service from the API provider, such as the API classification system (the basic structure in which API information is exposed), data source, distribution cycle, revision date, description, creation date, and access rights, and provides a function to generate basic information.
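As an illustrative sketch of the registration step, registration amounts to checking that the provider supplied every essential input before basic information is generated. The field names below are paraphrased from the list above, not taken verbatim from the patent.

```python
# Essential inputs for API specification registration (S310).
# Field names are illustrative paraphrases of the description.
REQUIRED_FIELDS = {
    "classification", "data_source", "distribution_cycle",
    "revision_date", "description", "creation_date", "access_right",
}

def register_api_spec(spec: dict) -> dict:
    """Check essential inputs and return the generated basic information.
    The cache_enabled flag records whether a cache service was requested."""
    missing = REQUIRED_FIELDS - spec.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return {**spec, "cache_enabled": spec.get("cache_enabled", False)}
```
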
  • The API configuration step (S330) provides a function of automatically performing server environment configuration for at least one of the API configuration (A) or the API configuration (B) referenced in FIG. 1.
  • For the API configuration of (A), a function is provided to select either a directory-based configuration or an extension-based configuration within the URI.
  • An executable image is created and distributed based on the environment information automatically configured in the API configuration step (S330) and is added to the service routing information of the API gateway 100.
  • The executable image may be distributed in the service type desired by the user (a Docker image or an executable compressed file).
  • FIG. 4 is a diagram illustrating a service configuration in terms of API users and API providers according to an embodiment of the present invention.
  • In terms of the API user, the service includes an API gateway 100, a cache server 111, and an API server 113.
  • Although the API gateway is basically configured as a single unit, it may be provided in a High Availability (HA) configuration that binds several units into one.
  • a cache server and an API server can also be provided in an HA configuration that bundles multiple units.
  • a configuration in which the cache server 111 and the API server 113 are bundled in one server configuration 110 may be provided. This configuration is performed through the distribution module 125 of the API management portal 120 and depends on the type of service desired by the user.
  • The API management portal 120 includes a registration module 121, a configuration module 123, a distribution module 125, and a statistics module 127, and all modules are provided together with UI/UX components.
  • the registration module 121 includes a function for registering API basic information and cache configuration.
  • the configuration module 123 provides a function of automatically generating settings for configuring the API and cache server with reference to the API basic information and whether the cache is configured.
  • the distribution module 125 provides a function of distributing an API service based on the automatically generated server configuration information.
  • The configuration module shows the generated configuration information to the API provider for confirmation, and may also provide a correction function.
  • The distribution module 125 may provide a function to check the creation/distribution performance log for a server and a function to access the created server.
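Taken together, the portal modules above can be sketched as a small pipeline that derives cache-server settings from the registration info with no extra provider input, then packages them for deployment. Every field name and default below is hypothetical; this is a sketch of the idea, not the patented implementation.

```python
def build_cache_config(registration: dict):
    """Configuration module sketch: derive cache-server settings
    automatically from the registration info (no extra provider input)."""
    if not registration.get("cache_enabled", False):
        return None  # configuration (B): API server only, no cache server
    return {
        "service": registration["name"],
        "cache_key": registration.get("cache_key", "uri"),  # directory- or extension-based
        "ttl_seconds": registration.get("ttl_seconds", 300),
    }

def build_deployment(registration: dict) -> dict:
    """Distribution module sketch: package the derived settings into the
    service type the provider asked for (e.g. a container image)."""
    return {
        "image_type": registration.get("image_type", "docker"),
        "cache": build_cache_config(registration),
        "routes": [f"/api/{registration['name']}"],
    }
```

The point of the design is that the provider supplies only the registration info; everything the cache server needs is generated from it.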
  • FIG. 5 is a diagram illustrating an API gateway, a cache server, and an API server configuration module according to an embodiment of the present invention.
  • The API gateway filtering service system 120 includes a request control module 121, a traffic control module 123, a service routing module 125, and a statistical analysis module 127.
  • The API request message sent by the API user is received by the API gateway, which records the request message in a log through the request control module 121, performs validation, and passes the message to the error stage when validation fails.
  • The traffic control module 123 checks the threshold values of the traffic control settings set by the API provider, such as the number of API executions per day and per hour; when a threshold is exceeded, the request passes to the error stage, and otherwise the service routing module 125 delivers it to the back-end service distributed by the API provider.
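A minimal sketch of such threshold checking follows; the window sizes and counter layout are illustrative assumptions, not details from the patent.

```python
import time

class TrafficControl:
    """Sketch of the traffic control module: reject a request once the
    provider-set per-hour or per-day execution limits are exceeded."""

    def __init__(self, per_hour: int, per_day: int):
        self.limits = {"hour": (per_hour, 3600), "day": (per_day, 86400)}
        self.counters = {}  # (window, bucket) -> count

    def allow(self, now=None) -> bool:
        """Return True and count the request, or False (error stage)."""
        now = time.time() if now is None else now
        keys = []
        for window, (limit, span) in self.limits.items():
            key = (window, int(now // span))
            if self.counters.get(key, 0) >= limit:
                return False  # threshold exceeded: pass to the error stage
            keys.append(key)
        for key in keys:
            self.counters[key] = self.counters.get(key, 0) + 1
        return True
```
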
  • The cache server setting service 210 includes a cache policy storage module 211 and a policy application module 212, and records all performance logs in a designated path.
  • The cache policy storage module 211 is created by the API provider through the distribution module 125 of the API management portal 120 described above.
  • The cache server is started through the policy application module 212.
  • The cache policy contents may differ for each service, in a manner tailored to the API provider's service.
  • FIG. 6 is a diagram illustrating the overall operation of the API gateway accelerator system and method according to an embodiment of the present invention.
  • the diagram includes a cache server configuration.
  • The API provider creates a cache server and an API server group within the allowed infrastructure configuration through the API management portal 120 and registers an endpoint for the new API service in the API gateway (S610).
  • The API user sends an API request message to the API gateway based on the exposed API information, and the API gateway performs the next step if the message passes validation and stays within the traffic control threshold (S615).
  • When there is no message in the cache, the cache server makes a request to the API server (S620 and S625), receives a response message, and stores it in storage (S630).
  • The cache server performs API delivery by writing the configured validity period and the cache hit, miss, or expire status in the header of the response message (S635).
  • the API gateway analyzes the header information of the response message, records cache statistics, and then delivers the API response message to the API user (S640).
  • When another API user requests the same API (S655) after a user has already received the API response message (S650), the API gateway performs the same step as S615, determines whether the stored API response object is still valid, and sets the cache status to hit or expire accordingly. When the validity period has been exceeded, steps S660 and S680, which are the same as S630 and S640, are performed; otherwise, the API delivery (S675), cache statistics (S680), and API response (S685) steps are performed sequentially.
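The cache-side behavior of this flow can be sketched as a TTL check: serve the stored response while it is valid (hit), refresh from the API server when the validity period has passed (expire), and fetch and store when nothing is cached yet (miss). The class and function names, and the `fetch` callable standing in for the API server request, are assumptions for illustration.

```python
import time

class CacheServer:
    """TTL-based cache sketch for the flow in FIG. 6. `fetch` stands in
    for the request to the real API server (S620/S625)."""

    def __init__(self, fetch, ttl_seconds: float):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.store = {}  # key -> (body, stored_at)

    def get(self, key: str, now=None):
        """Return (body, status) where status is hit / miss / expire."""
        now = time.time() if now is None else now
        if key in self.store:
            body, stored_at = self.store[key]
            if now - stored_at < self.ttl:
                return body, "hit"            # serve from storage (S675)
            status = "expire"                 # validity exceeded: refresh
        else:
            status = "miss"                   # nothing cached yet (S620)
        body = self.fetch(key)                # request to the API server
        self.store[key] = (body, now)         # store the response (S630)
        return body, status
```

Only the first request and refreshes after expiry ever reach the API server; every other request for the same message is answered from storage, which is the load and latency benefit the patent describes.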
  • The system according to an embodiment of the present invention can minimize the API response delay time by utilizing the web cache function for API calls that undergo complex processing, and can minimize the system load by minimizing calls inside the API system.
  • According to an embodiment of the present invention, even a novice API provider can easily apply the web caching function to the system using only the provided on-screen settings, without any web-caching-related coding.
  • the embodiment according to the present invention described above may be implemented in the form of a computer program that can be executed through various components on a computer, and such a computer program may be recorded in a computer-readable medium.
  • The medium includes hardware devices specially configured to store and execute program instructions, such as magnetic media (hard disks, floppy disks, and magnetic tape), optical recording media (CD-ROM and DVD), magneto-optical media such as floptical disks, ROM, RAM, and flash memory.
  • the computer program may be specially designed and configured for the present invention, or may be known and used by those skilled in the computer software field.
  • Examples of the computer program may include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • The connections or connecting members of the lines between the components shown in the drawings exemplarily represent functional connections and/or physical or circuit connections; in an actual device, they may be represented as various functional, physical, or circuit connections that are replaceable or additional.
  • Unless a component is specifically described with terms such as "essential" or "important", it may not be a necessary component for the application of the present invention.

Abstract

The present invention relates to an API caching system in which a service is provided under the environment of an API gateway server that unifies the endpoints of all API servers. Provided is a REST-based accelerator system comprising: an API gateway that provides an endpoint for receiving an API user's request message, determines whether the request can be served from the cache, and transmits the request to a uniform resource identifier (URI) registered by an API provider; a cache server that determines whether to cache by analyzing the request message, obtains a response message by requesting it from an API server, and stores it for a storage period so that it can be reused when the same message is requested; and the API server created by the API provider.

Description

API Gateway Accelerator System and Method
The present invention relates to an API gateway accelerator system and method, and more particularly, to an API gateway accelerator system and method for providing a service that utilizes web caching technology under an API gateway server environment that unifies the endpoints of all API servers.
API gateway technology that provides access control, service routing, traffic control, and the like is applied to today's widespread API service providing systems, so that a number of API endpoints are unified into a single service.
As a result, when there are many external requests, the response speed of an API service with a complex processing structure slows down and the load on the back-end service rises, leaving no choice but second-best measures such as infrastructure expansion or traffic restriction.
The technical problem to be solved by the present invention is to provide a system and method that can improve response speed and reduce service load for an API service having a complex processing structure in an API gateway environment.
A system according to an embodiment of the present invention for solving the above technical problem is an API gateway accelerator system for handling the processing procedures in an API management portal service to configure a service infrastructure of an API server and a cache server, and includes: a registration module for registering API specifications while checking whether a cache service is available; a configuration module for configuring environment information for running the cache server; a distribution module for distributing execution information for executing a service based on the environment information; and a statistical analysis module for processing a response message for an API request received by the API gateway, checking whether it was served from the cache.
A method according to another embodiment of the present invention for solving the technical problem may be an API gateway web caching service providing method implemented by the API gateway accelerator system.
When making an API call that undergoes complex processing according to an embodiment of the present invention, the web cache function can be utilized to minimize the API response delay time, and calls inside the API system can be minimized so as to minimize the system load.
According to an embodiment of the present invention, even a novice API provider can easily apply the web caching function to the system using only the provided on-screen settings, without any web-caching-related coding.
FIG. 1 is a conceptual diagram of a system to which an API gateway accelerator system and method according to an embodiment of the present invention is applied.
FIG. 2 is a diagram illustrating a flow chart of a service from the perspectives of an API user and an API provider according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a flow chart of a service from the perspectives of an API user and an API provider according to another embodiment of the present invention.
FIG. 4 is a diagram illustrating a service configuration from the perspectives of an API user and an API provider according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an API gateway, a cache server, and API server configuration modules according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating a flow explaining the overall operation of the API gateway accelerator system and method according to an embodiment of the present invention.
A system according to an embodiment of the present invention for solving the above technical problem is an API gateway accelerator system for handling the processing procedures in an API management portal service to configure a service infrastructure of an API server and a cache server, and includes: a registration module for registering API specifications while checking whether a cache service is available; a configuration module for configuring environment information for running the cache server; a distribution module for distributing execution information for executing a service based on the environment information; and a statistical analysis module for processing a response message for an API request received by the API gateway, checking whether it was served from the cache.
In the system, the registration module may selectively perform configuration of an API service infrastructure having a complex processing structure or a general API service infrastructure.
In the system, the configuration module is configured automatically based on the registration information that results from the registration module, without complex configuration input for the API cache infrastructure, and the distribution module distributes the execution information based on that registration information.
In the system, the statistical analysis module may perform the cache statistics processing for the API cache service executed on the API cache infrastructure configured by the configuration module and the distribution module.
A method according to another embodiment of the present invention for solving the technical problem may be an API gateway web caching service providing method implemented by the API gateway accelerator system.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The above objects, features, and effects of the present invention will be understood through the embodiments associated with the drawings. However, the present invention is not limited to the embodiments described herein, and may be applied and modified in various forms. Rather, the embodiments described below are provided to clarify the technical idea disclosed by the present invention and to convey that technical idea sufficiently to those of ordinary skill in the art to which the present invention pertains. Accordingly, the scope of the present invention should not be construed as being limited to the embodiments described below. Meanwhile, the same reference numbers in the following embodiments and drawings indicate the same components.
Identification symbols (e.g., A, B, C, etc.) for the steps are used for convenience of description and do not define the order of the steps; unless a specific order is clearly stated in context, the steps may occur in an order different from the one specified. That is, the steps may occur in the specified order, may be performed substantially simultaneously, or may be performed in the reverse order.
Unless otherwise defined, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. Terms defined in commonly used dictionaries should be interpreted as having meanings consistent with the context of the related art, and should not be interpreted as having ideal or excessively formal meanings unless explicitly defined in the present application.
FIG. 1 is a conceptual diagram of a system to which an API gateway accelerator system and method according to an embodiment of the present invention is applied.
Referring to FIG. 1, the API gateway accelerator system 10 includes an API gateway 100, a cache server 111, an API server 113, and an API management portal 120. An API configuration that involves complex processing (hereinafter, '(A)') can be built by pairing an API server with a cache server, while an API configuration without complex processing (hereinafter, '(B)') can be built from an API server alone. The server group formed by combining (A) and (B) constitutes the back-end services of the API gateway.
The API gateway 100 provides API users with a unified API endpoint (a URI path exposed to the outside) on behalf of all API servers 113, and the cache server 111 stores and manages the response messages of API servers with complex processing so as to minimize the response time of the API gateway.
In configuration (A), which involves complex processing, the cache server 111 and the API server 113 store the API server's response message in the cache server when an API request is made; on subsequent requests for the same API, the cache server answers with the stored message instead of forwarding the request to the API server, reducing server load and improving response time.
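The cache-first behaviour of configuration (A) described above can be pictured with the following minimal sketch. It is an illustration only, not the patented implementation: the `backend_call` stand-in and the 60-second validity period are assumptions made for the example.

```python
import time

CACHE_TTL = 60  # assumed validity period in seconds

cache = {}  # request key -> (stored_at, response message)

def backend_call(key):
    # Stand-in for the API server's complex processing.
    return "response for " + key

def handle_request(key, now=None):
    """Answer from the cache when a valid entry exists; otherwise
    forward to the API server and store its response for later use."""
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is not None and now - entry[0] < CACHE_TTL:
        return entry[1], "HIT"
    response = backend_call(key)
    cache[key] = (now, response)
    return response, "MISS" if entry is None else "EXPIRED"
```

The first request for a key pays the full back-end cost; repeat requests within the validity period are served from the store, which is the source of the load and latency reduction claimed above.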
The API server 113 of configuration (B), which involves no complex processing, handles the response to each API request directly.
The API management portal 120 supports server creation, deletion, monitoring, and the like easily and effectively for configuring and managing the servers of combinations (A) and (B).
In addition, the API management portal 120 can support deployment models such as on-premise, private cloud, public cloud, and hybrid cloud across various infrastructure environments such as OpenStack, VMware, AWS (Amazon Web Services), and GCP (Google Cloud Platform), and can therefore provide an API server configuration according to the type of service the user wants.
FIG. 2 is a flow chart of the service from the perspectives of the API user and the API provider according to an embodiment of the present invention.
FIG. 3 is a flow chart of the service from the perspectives of the API user and the API provider according to another embodiment of the present invention.
Referring to FIG. 2, the API user side may include an API request reception and verification step (S210), an API request delivery step (S230), an API analysis step (S250), and an API response message return step (S270).
The API request reception and verification step (S210) verifies the API request message received by the API gateway 100 by combining its URI information and HTTP header information, and passes any message that fails verification to an error step.
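A verification check that combines URI and HTTP header information, as in step S210, might look like the sketch below. The registered endpoint list and the required header names are assumptions introduced for illustration; the patent does not specify them.

```python
REGISTERED_ENDPOINTS = {"/api/v1/weather", "/api/v1/traffic"}  # assumed routing table
REQUIRED_HEADERS = {"Authorization", "Accept"}                 # assumed header policy

def verify_request(uri, headers):
    """Return (ok, reason); a failing message would be passed to the error step."""
    path = uri.split("?", 1)[0]
    if path not in REGISTERED_ENDPOINTS:
        return False, "unknown endpoint"
    missing = REQUIRED_HEADERS - set(headers)
    if missing:
        return False, "missing headers: " + ", ".join(sorted(missing))
    return True, "ok"
```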
The API request delivery step (S230) provides a service routing function that delivers the API request message to either (A) an API configuration with complex processing or (B) an API configuration without complex processing, as registered by the API provider.
The API analysis step (S250) analyzes the HTTP header information of the API response message received by the API gateway 100 and, depending on whether the cache service was used, updates the cache statistics of the corresponding API service so that they can later be used as cache usage statistics in the API management portal 120.
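The statistics update in step S250 can be pictured as reading a cache tag from the response header and bumping a per-service counter. The header name `X-Cache-Status` is an assumption borrowed from common web caches, not taken from the patent.

```python
from collections import defaultdict

# Per-service counters later shown as usage statistics in the management portal.
cache_stats = defaultdict(lambda: {"HIT": 0, "MISS": 0, "EXPIRED": 0})

def record_cache_stats(service, response_headers):
    """Parse the cache tag from the response header and update counters."""
    status = response_headers.get("X-Cache-Status")  # assumed header name
    if status in ("HIT", "MISS", "EXPIRED"):
        cache_stats[service][status] += 1
    return cache_stats[service]
```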
The API response message return step (S270) delivers the processing result of the preceding steps, from API request reception and verification (S210) through API analysis (S250), directly to the API user in accordance with the response message specification.
Next, referring to FIG. 3, the API provider side may include an API specification registration step (S310), an API configuration step (S330), and a server deployment step (S350), all performed through the API management portal 120 service.
The API specification registration step (S310) receives from the API provider the required inputs, such as the API classification scheme (the basic structure through which API information is exposed), data source, distribution cycle, revision date, description, creation date, and access rights, and generates the basic information for the API service.
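The required registration inputs listed above can be modeled as a simple record. The field names are direct translations of the patent's list; the empty-field validation is an illustrative sketch, not the portal's actual behaviour.

```python
from dataclasses import dataclass, asdict

@dataclass
class ApiSpecification:
    classification: str     # API classification scheme
    data_source: str
    distribution_cycle: str
    revision_date: str
    description: str
    creation_date: str
    access_rights: str

def register_spec(spec):
    """Reject a registration when any required field is left empty."""
    missing = [k for k, v in asdict(spec).items() if not v]
    if missing:
        raise ValueError("missing required fields: " + ", ".join(missing))
    return asdict(spec)
```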
The API configuration step (S330) automatically performs at least one of the server environment configurations for configuration (A) or configuration (B) referenced in FIG. 1.
In addition, for configuration (A), it provides a choice between a directory-based configuration and a configuration based on the file extension within the URI.
The server deployment step (S350) creates and deploys an executable image based on the environment information automatically generated in the API configuration step (S330) and adds it to the service routing information of the API gateway 100. The executable image can also be deployed in the service form the user wants (a Docker image or an executable archive).
FIG. 4 is a diagram illustrating the service configuration from the perspectives of the API user and the API provider according to an embodiment of the present invention.
Referring to FIG. 4, the API user side includes the API gateway 100, the cache server 111, and the API server 113. The API gateway is basically configured as a single unit, but it may also be provided in a high-availability (HA) configuration that groups several units into one. The cache server and the API server may likewise be provided in HA configurations of multiple units.
A configuration that bundles the cache server 111 and the API server 113 within a single server configuration 110 can also be provided. Such a configuration is performed through the deployment module 125 of the API management portal 120 and follows the type of service the user wants.
Referring to FIG. 4, on the API provider side the API management portal 120 includes a registration module 121, a configuration module 123, a deployment module 125, and a statistics module 127, and every module carries its own UI/UX components. The registration module 121 registers the basic API information and whether a cache is to be configured. The configuration module 123 automatically generates the settings for configuring the API and cache servers by referring to the basic API information and the cache configuration flag. The deployment module 125 deploys the API service based on the automatically generated server configuration information. The configuration module 123 also presents the generated configuration information to the API provider for confirmation and may provide a function for editing it. In addition, the deployment module 125 may provide functions for checking the creation/deployment logs of a server and for accessing the created server.
FIG. 5 is a diagram illustrating the API gateway, cache server, and API server configuration modules according to an embodiment of the present invention.
Referring to FIG. 5, the API gateway filtering service system 120 includes a request control module 121, a traffic control module 123, a service routing module 125, and a statistics analysis module 127.
The API request message sent by the API user is received by the API gateway. The request control module 121 logs the request message and performs validation; if validation fails, the flow passes to an error step. On success, the traffic control module 123 compares the request against the thresholds of the traffic control settings configured by the API provider, such as the number of API calls permitted per day and per hour; if a threshold is exceeded, the flow passes to an error step. Below the thresholds, the service routing module 125 routes the request to the API cache server or API server deployed by the API provider and receives the response message, after which the statistics analysis module 127 analyzes the HTTP header, parses the cache-related tag information, and updates the cache statistics. If an error occurs at any intermediate point, the flow passes to an error step.
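The traffic control comparison described above — per-day and per-hour call counts against provider-set thresholds — can be sketched as follows. The limit values and the bucketing by hour/day strings are assumptions for the example.

```python
from collections import defaultdict

LIMITS = {"per_hour": 100, "per_day": 1000}  # assumed provider-set thresholds

counters = defaultdict(int)  # (api_key, granularity, bucket) -> running count

def allow_request(api_key, hour_bucket, day_bucket):
    """Compare running counts against the thresholds; an over-limit
    request would be passed to the error step instead of being routed."""
    hk = (api_key, "h", hour_bucket)
    dk = (api_key, "d", day_bucket)
    if counters[hk] >= LIMITS["per_hour"] or counters[dk] >= LIMITS["per_day"]:
        return False
    counters[hk] += 1
    counters[dk] += 1
    return True
```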
Referring to FIG. 5, the cache server setting service 210 includes a cache policy storage module 211 and a policy application module 212, and records all execution logs to a designated path. The cache policy storage module 211, created by the API provider through the deployment module 125 of the API management portal 120 of FIG. 4, applies the corresponding policy to the cache server's configuration file, creating the configuration file and the response message store, and starts the cache server through the policy application module 212. The contents may differ per service so as to suit the API provider's service.
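Applying a stored cache policy to a configuration file, as the cache policy storage module does, amounts to substituting policy values into a template. The file format and keys below are illustrative assumptions, not the actual configuration of the patented system.

```python
CONFIG_TEMPLATE = """# generated cache server configuration (illustrative)
cache_dir = {cache_dir}
ttl_seconds = {ttl_seconds}
max_object_kb = {max_object_kb}
"""

def render_cache_config(policy):
    """Merge a per-service policy over defaults and render the config file text."""
    defaults = {"cache_dir": "/var/cache/api", "ttl_seconds": 60, "max_object_kb": 512}
    merged = {**defaults, **policy}
    return CONFIG_TEMPLATE.format(**merged)
```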
FIG. 6 is a diagram illustrating the overall operation flow of the API gateway accelerator system and method according to an embodiment of the present invention.
More specifically, FIG. 6 illustrates the overall operation flow of the API gateway accelerator system and method according to an embodiment of the present invention for a configuration that includes a cache server.
Referring to FIG. 6, the API provider creates a cache server and API server group within the permitted infrastructure configuration through the API management portal 120 and registers the endpoint for the new API service with the API gateway (S610).
Referring to FIG. 6, the API user sends an API request message to the API gateway based on the exposed API information, and the API gateway performs the next step if the message passes validation and is within the thresholds set by traffic control (S615).
When there is no message in the cache, the cache server forwards the request to the API server (S620 and S625), receives the response message, and stores it in the store (S630).
The cache server writes the configured validity period and the caching Hit, Miss, or Expire information into the header of the response message and performs API delivery (S635).
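Step S635 — recording the validity period and the cache outcome in the response headers before delivery — can be sketched like this. The `X-Cache-Status`, `Cache-Control`, and `Age` header names are assumptions borrowed from common web caches; the patent does not name the headers it uses.

```python
def tag_response(headers, status, ttl, age):
    """Record the configured validity period and the Hit/Miss/Expire
    outcome in the response headers before delivery to the gateway."""
    tagged = dict(headers)                      # do not mutate the caller's headers
    tagged["X-Cache-Status"] = status           # assumed name: HIT / MISS / EXPIRED
    tagged["Cache-Control"] = "max-age=%d" % ttl
    tagged["Age"] = str(age)                    # seconds the object has sat in the cache
    return tagged
```

The gateway's statistics analysis step then only needs to read these headers back, rather than query the cache server.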
The API gateway then analyzes the header information of the response message, records the cache statistics, and delivers the API response message to the API user (S640).
After a user has received the API response message (S650), when another API user makes the same API request (S655), the API gateway performs step S615 in the same way, judges whether the stored API request object is still valid, and sets the result to cache Hit or Expire accordingly. If the validity period has been exceeded, it performs steps S660 and S680, which are the same as steps S630 and S640; otherwise, it performs the API delivery (S675), cache statistics (S680), and API response (S685) steps in sequence.
The system according to an embodiment of the present invention can minimize API response latency by using a web cache for API calls that involve complex processing, and can minimize system load by minimizing calls inside the API system.
According to an embodiment of the present invention, even a novice API provider can easily apply web caching to the system using only the on-screen settings provided, without writing any web caching code.
The embodiments of the present invention described above may be implemented in the form of a computer program executable through various components on a computer, and such a computer program may be recorded on a computer-readable medium. The medium may include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
The computer program may be specially designed and constructed for the present invention, or may be known to and usable by those skilled in the computer software field. Examples of computer programs include not only machine code such as that produced by a compiler, but also high-level language code executable by a computer using an interpreter or the like.
The specific implementations described in the present invention are embodiments and do not limit the scope of the present invention in any way. For brevity of the specification, descriptions of conventional electronic components, control systems, software, and other functional aspects of the systems may be omitted. Furthermore, the connecting lines or members between the components shown in the drawings illustrate functional connections and/or physical or circuit connections by way of example; in an actual device they may be represented by replaceable or additional functional, physical, or circuit connections. In addition, unless specifically described with terms such as "essential" or "important," a component may not be necessary for the application of the present invention.
In the specification of the present invention (particularly in the claims), the use of the term "the" and similar referential terms may correspond to both the singular and the plural. Where a range is described in the present invention, it includes inventions to which the individual values within the range are applied (unless stated to the contrary), as if each individual value constituting the range were set out in the detailed description. Finally, unless the order of the steps constituting a method according to the present invention is explicitly stated or stated to the contrary, the steps may be performed in any suitable order; the present invention is not necessarily limited to the stated order of the steps. The use of all examples or exemplary terms (e.g., "etc.") is merely to describe the present invention in detail, and the scope of the present invention is not limited by those examples or terms unless limited by the claims. In addition, those skilled in the art will appreciate that various modifications, combinations, and changes may be made according to design conditions and factors within the scope of the appended claims or their equivalents.

Claims (4)

  1. An API gateway accelerator system for handling the processing procedures within an API management portal service for configuring the service infrastructure of an API server and a cache server, the system comprising:
    a registration module that checks whether a cache service can be used and registers the API specification;
    a configuration module that configures environment information for running the cache server;
    a deployment module that distributes, based on the environment information, the execution information for running a service; and
    a statistics analysis module that, as the procedure for analyzing the response message to an API request received by the API gateway, checks and processes whether the cache service was used.
  2. The API gateway accelerator system of claim 1, wherein
    the registration module is capable of selectively performing either an API service infrastructure configuration having a complex processing structure or a general API service infrastructure configuration.
  3. The API gateway accelerator system of claim 1, wherein
    the configuration module is configured automatically, based on the registration information output by the registration module, without complex configuration input for the API cache infrastructure, and
    the deployment module distributes the execution information based on the registration information.
  4. The API gateway accelerator system of claim 1, wherein
    the statistics analysis module performs the cache statistics processing handled by the API cache service that runs on the API cache infrastructure configuration built by the configuration module and the deployment module.
PCT/KR2021/005138 2020-04-23 2021-04-23 Api gateway accelerator system and method WO2021215864A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0049213 2020-04-23
KR1020200049213A KR20210130989A (en) 2020-04-23 2020-04-23 api gateway accelerator system and methods

Publications (1)

Publication Number Publication Date
WO2021215864A1 true WO2021215864A1 (en) 2021-10-28

Family

ID=78269494

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/005138 WO2021215864A1 (en) 2020-04-23 2021-04-23 Api gateway accelerator system and method

Country Status (2)

Country Link
KR (1) KR20210130989A (en)
WO (1) WO2021215864A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114900448A (en) * 2022-05-30 2022-08-12 上海亿通国际股份有限公司 Micro-service gateway flow management method and device and electronic equipment

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN114785637A (en) * 2022-03-15 2022-07-22 浪潮云信息技术股份公司 Implementation method and system for caching response data by API gateway
KR102619580B1 (en) * 2023-05-09 2024-01-02 쿠팡 주식회사 Api gateway and operation method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080209451A1 (en) * 2007-01-29 2008-08-28 Mashery, Inc. Methods for analyzing, limiting, and enhancing access to an internet API, web service, and data
KR20150042067A (en) * 2013-10-10 2015-04-20 에스케이텔레콤 주식회사 Method for API of CDN service and apparatus therefor
KR20150137542A (en) * 2014-05-30 2015-12-09 삼성에스디에스 주식회사 Distributed api proxy system and apapparatus and method for managing traffic in such system
US20160267153A1 (en) * 2013-10-30 2016-09-15 Hewlett Packard Enterprise Development Lp Application programmable interface (api) discovery
KR20170062244A (en) * 2015-11-27 2017-06-07 주식회사 비디 Api managing apparatus



Also Published As

Publication number Publication date
KR20210130989A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2021215864A1 (en) Api gateway accelerator system and method
CN103329113B (en) Configuration is accelerated and custom object and relevant method for proxy server and the Dynamic Website of hierarchical cache
CN108810006A (en) resource access method, device, equipment and storage medium
WO2019198885A1 (en) Decentralized service platform using multiple blockchain-based service nodes
WO2013169059A1 (en) System and method for monitoring web service
CN110113188B (en) Cross-subdomain communication operation and maintenance method, total operation and maintenance server and medium
CN108259425A (en) The determining method, apparatus and server of query-attack
CN107172176B (en) APP method for connecting network, equipment and configuration server based on configuration management
US20180124048A1 (en) Data transmission method, authentication method, and server
CN111953770B (en) Route forwarding method and device, route equipment and readable storage medium
CN112261172A (en) Service addressing access method, device, system, equipment and medium
WO2021060957A1 (en) Method and device for performing asynchronous operations in a communication system
US20110131288A1 (en) Load-Balancing In Replication Engine of Directory Server
CN111130936A (en) Method and device for testing load balancing algorithm
CN108076092A (en) Web server resources balance method and device
CN105610639A (en) Total log grabbing method and device
CN112395141B (en) Data page management method and device, electronic equipment and storage medium
CN113259386A (en) Malicious request intercepting method and device and computer equipment
CN112134833B (en) Virtual-real fused stream deception defense method
CN115913583A (en) Business data access method, device and equipment and computer storage medium
CN112860398A (en) Data processing method, device, equipment and medium based on rule engine
CN115085969B (en) Mimicry architecture based on Vpp bottom framework and arbitration method
WO2023120813A1 (en) Method and device for serverless computing through mutual monitoring between network edges
CN112217882B (en) Distributed gateway system for service opening
CN116028225A (en) Transaction request packet processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21792419

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21792419

Country of ref document: EP

Kind code of ref document: A1