CN116401014A - Service release method, device, storage medium and server - Google Patents

Service release method, device, storage medium and server

Info

Publication number
CN116401014A
CN116401014A (application CN202310374745.3A)
Authority
CN
China
Prior art keywords
service
preheating
annotation
interface
protocol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310374745.3A
Other languages
Chinese (zh)
Inventor
秦复祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yishi Huolala Technology Co Ltd
Original Assignee
Shenzhen Yishi Huolala Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yishi Huolala Technology Co Ltd filed Critical Shenzhen Yishi Huolala Technology Co Ltd
Priority to CN202310374745.3A
Publication of CN116401014A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45591 Monitoring or debugging support
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Embodiments of the present application disclose a service publishing method and apparatus, a storage medium, and a server. The method includes the following steps: when a service starts, detecting whether it carries a warm-up annotation; if the warm-up annotation is carried, rewriting the service health check policy to adjust when the service state is exposed; exposing the service state after all required warm-up interfaces have been called; and publishing the service if the service state is healthy. By rewriting the health check policy, the scheme achieves smooth publishing of the service.

Description

Service release method, device, storage medium and server
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a service publishing method, device, storage medium, and server.
Background
A Kubernetes cluster is a set of nodes that run containerized applications. A containerized application packages the application together with its dependencies and the services it needs.
When a Java application in a Kubernetes cluster goes online, it usually starts with reduced performance: because of just-in-time (JIT) compilation, the first request triggers the first loading of the related classes, which takes time, affects the latency of the first calls, and causes spikes in CPU and memory usage. Kubernetes schedules Pods according to their resource requests, which are often given small initial values, while a JVM (Java Virtual Machine) application that has just started needs more resources. As a result, the Pod is restarted repeatedly because the service health check interface blocks, and the repeated check failures reduce micro-service publishing efficiency. In a micro-service scenario, both service providers and service consumers encounter this problem.
Disclosure of Invention
Embodiments of the present application provide a service publishing method, apparatus, storage medium, and server, which enable smooth publishing of services.
In a first aspect, an embodiment of the present application provides a service publishing method, including:
when a service starts, detecting whether a warm-up annotation is carried;
if the warm-up annotation is carried, rewriting the service health check policy to adjust when the service state is exposed;
exposing the service state after all required warm-up interfaces have been called;
and publishing the service if the service state is healthy.
In a second aspect, an embodiment of the present application provides a service publishing apparatus, including:
a first detection unit, configured to detect, when a service starts, whether a warm-up annotation is carried;
an adjusting unit, configured to rewrite the service health check policy if the warm-up annotation is carried, so as to adjust when the service state is exposed;
a first processing unit, configured to expose the service state after all required warm-up interfaces have been called;
and a first publishing unit, configured to publish the service if the service state is healthy.
In an embodiment, the service publishing apparatus further includes:
an obtaining unit, configured to obtain the interface request protocol from the warm-up annotation after the service health check policy is rewritten and before the service state is exposed;
and a second processing unit, configured to assemble a request for the interface request protocol and invoke it.
In an embodiment, the service publishing apparatus further includes:
a determining unit, configured to determine, after the warm-up annotation is detected and before the service health check policy is rewritten, whether the warm-up annotation is marked as an SOA call;
wherein the first processing unit is configured to rewrite the service health check policy when the determining unit determines that it is not.
In an embodiment, the service publishing apparatus further includes:
a third processing unit, configured to rewrite the preprocessor if the warm-up annotation is marked as an SOA call;
an assembling unit, configured to assemble the parameters of the interface to be called according to the method marked with the warm-up annotation;
a calling unit, configured to call the target interface to complete the interface warm-up;
and a second publishing unit, configured to publish the service.
In an embodiment, the service publishing apparatus further includes:
a second detection unit, configured to detect, after the service is published, whether the service startup time is within a preset duration;
when the second detection unit determines that it is, the next target service continues to be published;
when the second detection unit determines that it is not, the service is republished.
In an embodiment, the interface request protocol is an http protocol, a json-rpc protocol, or a grpc protocol.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the service publishing method described above.
In a fourth aspect, an embodiment of the present application further provides a server, including a processor and a memory, where the processor is electrically connected to the memory, the memory is configured to store instructions and data, and the processor is configured to perform the service publishing method described above.
In the embodiments of the present application, when a service starts, whether it carries a warm-up annotation is detected; if the warm-up annotation is carried, the service health check policy is rewritten to adjust when the service state is exposed; the service state is exposed after all required warm-up interfaces have been called; and the service is published if the service state is healthy. By rewriting the health check policy, the scheme exposes the service as healthy only after the interfaces have been warmed up, thereby achieving smooth publishing of the service.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a service publishing method according to an embodiment of the present application.
Fig. 2 is another flow chart of a service publishing method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a service publishing apparatus according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
When a Java application in a Kubernetes cluster goes online, it usually starts with reduced performance: because of JIT (just-in-time) compilation and similar mechanisms, the first request triggers the first loading of the related classes, which takes time, affects the latency of the first calls, and causes spikes in CPU and memory usage. Kubernetes schedules Pods according to their resource requests, which are often given small initial values, while a JVM (Java Virtual Machine) application that has just started needs more resources. This causes the Pod to be restarted repeatedly because the service health check interface blocks, and the repeated check failures reduce micro-service publishing efficiency. In a micro-service scenario, both service providers and service consumers encounter this problem.
In the related art, the throughput handled by each Pod can be reduced by increasing the number of Pods, but this approach increases cost. Alternatively, the CPU and memory requests and limits can be enlarged so that the application has more resources to work with, but this approach cannot guarantee the stability of the service health check. Or the probe check time can be increased so that the Pod is not restarted before the JVM warm-up completes, but this approach cannot guarantee that a problem in the service is discovered immediately. It can be seen that it is difficult to ensure smooth publishing of services with the above approaches.
On this basis, the embodiments of the present application provide a service publishing method, apparatus, storage medium, and server, which can improve interface warm-up efficiency and allow micro-services to be published smoothly. Each of these is described in detail below.
In one embodiment, a service publishing method is provided and applied to a server. Referring to fig. 1, the flow of the service publishing method may be as follows:
101. When the service starts, detect whether a warm-up annotation is carried.
In this embodiment, an annotation (Annotation) is a mechanism provided by Java for associating metadata with program elements. An annotation is essentially an interface: a program can obtain the annotation object of a specified program element through reflection and then read the metadata contained in that annotation object.
In Kubernetes, annotations can be used to attach arbitrary non-identifying metadata to objects. A warm-up annotation, i.e. an annotation used for warm-up, can instruct the system to detect the software and hardware environment, configure the software correctly, and load the necessary related files.
In practice, the warm-up parameters can be configured on a method through the warm-up annotation @WarmUp, declaring how many times warm-up is required and the duration of each warm-up round. Using a certain number of warm-up iterations improves the accuracy of the result. That is, in one embodiment, when the service starts, it may be detected whether there is a method annotated with @WarmUp.
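As an illustration, a minimal sketch of such an annotation is given below. Only the annotation name @WarmUp comes from the description; the attribute names (times, durationMillis, soaCall, url, protocol) are assumptions introduced for this sketch and are not the patent's actual definition.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Hypothetical warm-up annotation sketch. The attributes below are
 * illustrative assumptions, not the patent's actual definition.
 */
@Retention(RetentionPolicy.RUNTIME) // must be visible via reflection at runtime
@Target(ElementType.METHOD)         // applied to the methods that need warm-up
public @interface WarmUp {
    int times() default 1;            // how many times the method should be warmed up
    long durationMillis() default 0;  // duration of each warm-up round
    boolean soaCall() default false;  // whether the annotated method is an SOA call
    String url() default "";          // service address used to warm up SOA interfaces
    String protocol() default "http"; // externally exposed protocol: http, json-rpc, grpc
}

At publishing time, methods carrying this annotation would be discovered by reflection, as the following steps describe.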
102. If the warm-up annotation is carried, rewrite the service health check policy to adjust when the service state is exposed.
Specifically, for a micro-service that carries the warm-up annotation, the health check policy is modified so that the service can decide when to expose its health to Kubernetes. In some embodiments, the service health check may be implemented with the probe mechanism in Kubernetes, for example from two dimensions: the readiness check (readiness) and the liveness check (liveness).
A probe is a periodic diagnostic performed by the kubelet on a container; the kubelet calls a handler implemented by the container to carry out the diagnosis. The liveness and readiness probes are collectively referred to as health checks. These container probes are small processes that run periodically and return a result (success, failure, or unknown) reflecting the state of the container in Kubernetes. Based on the result, Kubernetes decides how to handle each container so as to ensure high availability of cluster services and continuity of traffic. The relevant probe mechanisms are described as follows:
The Kubernetes liveness probe detects whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is then handled according to its restart policy. If the container does not provide a liveness probe, the default state is success.
The Kubernetes readiness probe detects whether the container is ready to receive service requests; only when the probe succeeds is the service exposed externally. If the readiness probe fails, the endpoint controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The readiness state before the initial delay defaults to failure. If the container does not provide a readiness probe, the default state is success. In practice, the readiness probe is used to determine whether the program in the container is alive; if the health condition is not satisfied, the Pod's IP address is automatically removed from the Service endpoint list.
In one embodiment, to address the problem that the liveness and readiness probes cannot reliably judge whether a slow-starting or complex program has finished starting, the Kubernetes startup probe (startupProbe) may be used to detect whether the application in the container has started. If the container provides a startup probe, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is restarted according to its restart policy. If the container does not provide a startup probe, the default state is success.
In some embodiments, after it is detected that the warm-up annotation is carried and before the service health check policy is rewritten, the following operations may further be included:
determining whether the warm-up annotation is marked as an SOA call;
if not, rewriting the service health check policy.
SOA (Service-Oriented Architecture) is a coarse-grained, loosely coupled service architecture in which services communicate through simple, precisely defined interfaces without involving underlying programming interfaces or communication models, so that the different functional units of an application (called services) are connected through well-defined interfaces and contracts between the services.
Specifically, before the service health check policy is rewritten, because the focus of the present application is the warm-up and smooth publishing of micro-services in containers, a judgment condition can be added to exclude SOA calls, so that the time at which the service state is exposed is adjusted more accurately.
In a specific implementation, if the warm-up annotation is marked as an SOA call, the annotated method is an interface, and its implementation needs to be generated automatically. A concrete method implementation can therefore be simulated based on the service address in the annotation and the parameters of the method. The request URL (Uniform Resource Locator) is obtained from the annotation, and an http request can be initiated to the interface with the OkHttp framework to complete the initialization of the network request connection pool, so that connections in the pool can be used directly to speed up access when the method is later called, and calling the interface completes the warm-up. That is, in an embodiment, if the warm-up annotation is marked as an SOA call, the following operations may further be included (a minimal sketch of this flow is given after the list):
rewriting the preprocessor;
assembling the parameters of the interface to be called according to the method marked with the warm-up annotation;
calling the target interface to complete the interface warm-up;
publishing the service.
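The following sketch illustrates this SOA warm-up flow, reusing the hypothetical @WarmUp attributes sketched earlier; it uses the OkHttp client to initialize the connection pool. It is an illustration under those assumptions, not the patent's actual implementation.

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

/**
 * Illustrative warm-up of an SOA interface: build a request from the
 * service address declared in the (hypothetical) @WarmUp annotation and
 * fire it so that the HTTP connection pool and the target interface are
 * initialized before the service is published.
 */
public class SoaWarmUpRunner {

    // Shared client: its connection pool is reused by later real calls.
    private final OkHttpClient client = new OkHttpClient();

    public void warmUp(WarmUp annotation) {
        if (!annotation.soaCall()) {
            return; // non-SOA methods are executed directly elsewhere
        }
        // Assemble the request from the URL declared in the annotation.
        Request request = new Request.Builder()
                .url(annotation.url())
                .get()
                .build();
        for (int i = 0; i < annotation.times(); i++) {
            try (Response response = client.newCall(request).execute()) {
                // Response body ignored: the call itself performs the warm-up.
            } catch (Exception e) {
                // Warm-up is best-effort; failures must not block start-up.
            }
        }
    }
}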
103. Expose the service state after all required warm-up interfaces have been called.
Specifically, if the warm-up annotation is not marked as an SOA call, the method is a directly exposed interface. The externally exposed request protocol of such an interface may therefore be http, json-rpc, grpc, and so on. In one embodiment, after the service health check policy is rewritten and before the service state is exposed, the method further includes:
obtaining the interface request protocol from the warm-up annotation;
assembling a request for the interface request protocol and invoking it.
Specifically, a template-method pattern can be used to assemble the request for each protocol: for an http request, the OkHttp framework is used to send it; for the json-rpc protocol, the payload is wrapped into the corresponding protocol format as required and then sent with the OkHttp framework; for the grpc protocol, the request parameters need to be converted into a protocol buffer byte stream. The health check itself can be implemented through the warm-up annotated methods, specifically by rewriting the health indicator class.
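A minimal sketch of this template-method idea follows. The class and method names are assumptions for illustration, the json-rpc envelope follows the public JSON-RPC 2.0 convention, and the grpc variant is omitted because the protocol buffer conversion depends on the generated message classes.

import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

/**
 * Template-method sketch: the warm-up flow is fixed, subclasses only
 * decide how the request is assembled for their protocol.
 */
abstract class ProtocolWarmUpTemplate {

    protected static final OkHttpClient CLIENT = new OkHttpClient();

    /** Template method: assemble the request, then send it. */
    public final void warmUp(String url, String methodName, String paramsJson) throws Exception {
        Request request = assemble(url, methodName, paramsJson);
        try (Response response = CLIENT.newCall(request).execute()) {
            // Result ignored: the call itself performs the warm-up.
        }
    }

    /** Each protocol decides how to assemble its own request. */
    protected abstract Request assemble(String url, String methodName, String paramsJson);
}

/** Plain http: send the parameters as a JSON body. */
class HttpWarmUp extends ProtocolWarmUpTemplate {
    @Override
    protected Request assemble(String url, String methodName, String paramsJson) {
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), paramsJson);
        return new Request.Builder().url(url).post(body).build();
    }
}

/** json-rpc: wrap the call in a JSON-RPC 2.0 envelope before sending. */
class JsonRpcWarmUp extends ProtocolWarmUpTemplate {
    @Override
    protected Request assemble(String url, String methodName, String paramsJson) {
        String envelope = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"" + methodName
                + "\",\"params\":" + paramsJson + "}";
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), envelope);
        return new Request.Builder().url(url).post(body).build();
    }
}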
After all interfaces that need warm-up have been called, the service state is exposed as healthy.
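Assuming the application exposes its health through a Spring Boot Actuator health indicator (an assumption; the description only speaks of rewriting the health indicator class), the idea of reporting a healthy state only after warm-up completes can be sketched as follows.

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

/**
 * Sketch of a rewritten health indicator: the health check only reports
 * UP after every interface that needs warm-up has been called.
 */
@Component
public class WarmUpHealthIndicator implements HealthIndicator {

    // Flipped to true once all warm-up methods/interfaces have been invoked.
    private final AtomicBoolean warmUpFinished = new AtomicBoolean(false);

    /** Called by the warm-up runner when the last warm-up interface returns. */
    public void markWarmUpFinished() {
        warmUpFinished.set(true);
    }

    @Override
    public Health health() {
        // Before warm-up completes, report DOWN so Kubernetes keeps the Pod
        // out of the Service endpoints; afterwards report UP to publish it.
        return warmUpFinished.get()
                ? Health.up().withDetail("warmUp", "finished").build()
                : Health.down().withDetail("warmUp", "in progress").build();
    }
}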
104. If the service state is healthy, publish the service.
Specifically, if the service state is healthy, the service is published normally. In one embodiment, in order to improve program stability, the Kubernetes canary publishing approach may be used; if a problem is found, the release can be rolled back to the previous version in time. That is, after the service is published, the following operations may further be included:
detecting whether the service startup time is within a preset duration;
if yes, continuing to publish the next target service;
if not, republishing the service.
Specifically, when there are many service nodes, the nodes can be published one by one: publish one node, observe whether it starts normally, and if so, publish the other nodes.
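The node-by-node rollout with the startup-time check can be sketched as below; the ServiceNode abstraction, the publish and republish operations, and the threshold are purely illustrative assumptions.

import java.time.Duration;
import java.util.List;

/**
 * Illustrative canary-style rollout: publish one node, check that it
 * starts within the preset duration, then continue or republish.
 */
public class CanaryPublisher {

    private final Duration startupThreshold; // preset duration for a healthy start

    public CanaryPublisher(Duration startupThreshold) {
        this.startupThreshold = startupThreshold;
    }

    public void rollout(List<ServiceNode> nodes) {
        for (ServiceNode node : nodes) {
            Duration startupTime = node.publishAndMeasureStartup();
            if (startupTime.compareTo(startupThreshold) <= 0) {
                // Started within the preset duration: move on to the next node.
                continue;
            }
            // Startup took too long: republish this node before proceeding.
            node.republish();
        }
    }

    /** Hypothetical node abstraction used only for this sketch. */
    public interface ServiceNode {
        Duration publishAndMeasureStartup();
        void republish();
    }
}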
In addition, if the test environment confirms that the warm-up logic has no problem, the service can be published in full directly, thereby achieving smooth publishing of the service.
As can be seen from the above, in the service publishing method provided by this embodiment, when the service starts, whether it carries a warm-up annotation is detected; if the warm-up annotation is carried, the service health check policy is rewritten to adjust when the service state is exposed; the service state is exposed after all required warm-up interfaces have been called; and the service is published if the service state is healthy. By rewriting the health check policy, the scheme exposes the service as healthy only after the interfaces have been warmed up, which improves interface warm-up efficiency and achieves smooth publishing of the service.
In yet another embodiment of the present application, referring to fig. 2, a method for warm-up and smooth publishing of micro-services in containers is further provided to solve the problem that services cannot be published smoothly. The method comprises the following steps:
When the service starts, check whether there is a method annotated with @WarmUp. Application startup is detected by listening for the application-ready signal (ApplicationReadyEvent); the application is then mainly scanned for methods that carry the @WarmUp annotation and are non-SOA calls. If a method carries the @WarmUp annotation and is a non-SOA call, the method is executed directly, so that the JIT compiler optimizations place its code in the code cache (CodeCache). If middleware (MySQL, Redis, etc.) is used in the project, this call also completes the initialization of the middleware connection pools and speeds up subsequent executions of the method. If the annotation is marked as an SOA call, the method is an interface whose implementation must be generated automatically; a concrete method implementation (equivalent to mock data) can be generated from the service address in the annotation and the parameters of the method. The request URL is obtained from the annotation, and an http request can be initiated to the interface with an http framework to complete the initialization of the network request connection pool, so that connections in the pool can be used directly to speed up access when the method is called, and calling the interface completes the warm-up. (A sketch of such a startup listener is given below.)
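Assuming a Spring-based application (an assumption consistent with the ApplicationReadyEvent mentioned above) and the hypothetical @WarmUp annotation sketched earlier, the startup listener that scans for @WarmUp methods and executes the non-SOA ones directly could look roughly like this.

import java.lang.reflect.Method;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

/**
 * Sketch of the warm-up trigger: once the application is ready, scan the
 * beans for methods carrying the (hypothetical) @WarmUp annotation and
 * invoke the non-SOA ones directly so that JIT-compiled code and
 * middleware connection pools are initialized before publishing.
 */
@Component
public class WarmUpListener implements ApplicationListener<ApplicationReadyEvent> {

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        ApplicationContext context = event.getApplicationContext();
        for (String beanName : context.getBeanDefinitionNames()) {
            try {
                Object bean = context.getBean(beanName);
                for (Method method : bean.getClass().getMethods()) {
                    WarmUp warmUp = method.getAnnotation(WarmUp.class);
                    if (warmUp == null || warmUp.soaCall()) {
                        continue; // SOA calls are warmed up via the request path
                    }
                    for (int i = 0; i < warmUp.times(); i++) {
                        method.invoke(bean); // assumes a no-argument warm-up method
                    }
                }
            } catch (Exception e) {
                // Warm-up is best-effort; failures must not abort start-up.
            }
        }
    }
}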
If the annotation is not marked as an SOA call, the method is a directly exposed interface. The externally exposed protocol of such an interface may be http, json-rpc, or grpc. For an http request, the OkHttp framework is used to send it; for the json-rpc protocol, the payload is wrapped into the corresponding protocol format as required and then sent with the OkHttp framework; for the grpc protocol, the request parameters need to be converted into a protocol buffer byte stream, otherwise the traffic cannot enter the service. The health check itself can be implemented through the warm-up annotated methods, specifically by rewriting the health indicator class. The health method mainly detects whether the previously monitored operations have been executed; if so, the service is healthy and the health check will find the service healthy, otherwise an unhealthy state is returned.
After the service state is determined to be healthy, the canary publishing mode can be used for a more stable release: however many nodes the service has, they are published one by one. Observe whether one node starts normally, and if so, publish the other nodes. In addition, if the test environment confirms that the warm-up logic has no problem, the service can be published in full directly, thereby achieving smooth publishing of the service.
It can be seen that, in the embodiments of the present application, by rewriting the health check policy, the service exposes its health only after the interfaces have been warmed up, so that the service is published smoothly and application stability is improved. In addition, service providers and consumers support the http, json-rpc, and grpc protocols through annotations, and warm-up calls to the interfaces are completed automatically, which avoids repeated development and reduces the cost of deploying additional middleware.
In yet another embodiment of the present application, a service publishing apparatus is further provided. The service publishing apparatus may be integrated in a server in the form of software or hardware. As shown in fig. 3, the service publishing apparatus 300 may include: a first detection unit 301, an adjusting unit 302, a first processing unit 303, and a first publishing unit 304, wherein:
the first detection unit 301 is configured to detect, when a service starts, whether a warm-up annotation is carried;
the adjusting unit 302 is configured to rewrite the service health check policy if the warm-up annotation is carried, so as to adjust when the service state is exposed;
the first processing unit 303 is configured to expose the service state after all required warm-up interfaces have been called;
the first publishing unit 304 is configured to publish the service if the service state is healthy.
In an embodiment, the service publishing apparatus 300 may further include:
an obtaining unit, configured to obtain the interface request protocol from the warm-up annotation after the service health check policy is rewritten and before the service state is exposed;
and a second processing unit, configured to assemble a request for the interface request protocol and invoke it.
In an embodiment, the service publishing apparatus 300 may further include:
a determining unit, configured to determine, after the warm-up annotation is detected and before the service health check policy is rewritten, whether the warm-up annotation is marked as an SOA call;
wherein the first processing unit is configured to rewrite the service health check policy when the determining unit determines that it is not.
In an embodiment, the service publishing apparatus 300 may further include:
a third processing unit, configured to rewrite the preprocessor if the warm-up annotation is marked as an SOA call;
an assembling unit, configured to assemble the parameters of the interface to be called according to the method marked with the warm-up annotation;
a calling unit, configured to call the target interface to complete the interface warm-up;
and a second publishing unit, configured to publish the service.
In an embodiment, the service publishing apparatus 300 may further include:
a second detection unit, configured to detect, after the service is published, whether the service startup time is within a preset duration;
when the second detection unit determines that it is, the next target service continues to be published;
when the second detection unit determines that it is not, the service is republished.
In an embodiment, the interface request protocol is an http protocol, a json-rpc protocol, or a grpc protocol.
As can be seen from the above, the service publishing apparatus provided in the embodiment of the present application detects, when a service starts, whether it carries a warm-up annotation; if the warm-up annotation is carried, rewrites the service health check policy to adjust when the service state is exposed; exposes the service state after all required warm-up interfaces have been called; and publishes the service if the service state is healthy. By rewriting the health check policy, the scheme exposes the service as healthy only after the interfaces have been warmed up, which improves interface warm-up efficiency and achieves smooth publishing of the service.
In yet another embodiment of the present application, a server is also provided. As shown in fig. 4, the server 400 includes a processor 401 and a memory 402.
The processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the server 400. It connects the various parts of the entire server through various interfaces and lines, and performs the functions of the server and processes data by running or loading the applications stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the server as a whole.
In this embodiment, the processor 401 in the server 400 loads the instructions corresponding to the processes of one or more applications into the memory 402 according to the following steps, and the processor 401 runs the applications stored in the memory 402, thereby implementing various functions:
when the service starts, detecting whether a warm-up annotation is carried;
if the warm-up annotation is carried, rewriting the service health check policy to adjust when the service state is exposed;
exposing the service state after all required warm-up interfaces have been called;
and publishing the service if the service state is healthy.
In an embodiment, after rewriting the service health check policy, the processor 401 may further perform the following operations:
obtaining the interface request protocol from the warm-up annotation;
and assembling a request for the interface request protocol and invoking it.
In an embodiment, after detecting that the warm-up annotation is carried and before rewriting the service health check policy, the processor 401 may further perform the following operations:
determining whether the warm-up annotation is marked as an SOA call;
if not, rewriting the service health check policy.
In an embodiment, if the warm-up annotation is marked as an SOA call, the processor 401 may perform the following operations:
rewriting the preprocessor;
assembling the parameters of the interface to be called according to the method marked with the warm-up annotation;
calling the target interface to complete the interface warm-up;
and publishing the service.
In an embodiment, after the service is published, the processor 401 may perform the following operations:
detecting whether the service startup time is within a preset duration;
if yes, continuing to publish the next target service;
if not, republishing the service.
In an embodiment, the interface request protocol is an http protocol, a json-rpc protocol, or a grpc protocol.
The memory 402 may be used to store applications and data. The applications stored in the memory 402 contain instructions executable by the processor, and may constitute various functional modules. The processor 401 runs the applications stored in the memory 402 to execute various functional applications and perform information processing.
In some embodiments, as shown in fig. 5, the server 400 further comprises: a display 403, a control circuit 404, a radio frequency circuit 405, an input unit 406 and a power supply 407. The processor 401 is electrically connected to the display 403, the control circuit 404, the radio frequency circuit 405, the input unit 406, and the power supply 407, respectively.
The display 403 may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the server, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 404 is electrically connected to the display screen 403, and is used for controlling the display screen 403 to display information.
The radio frequency circuit 405 is configured to transmit and receive radio frequency signals, so as to establish wireless communication with electronic devices or other servers and to exchange signals with them.
The input unit 406 may be used to receive entered numbers, character information, or user characteristic information (e.g., fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The power supply 407 is used to power the various components of the server 400. In some embodiments, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 5, the server 400 may further include a speaker, a bluetooth module, a display screen, etc., which will not be described herein.
As can be seen from the above, the server provided in the embodiment of the present application exposes the service as healthy only after the interfaces have been warmed up, by rewriting the health check policy, which improves interface warm-up efficiency and achieves smooth publishing of the service.
In some embodiments, a computer-readable storage medium is further provided, in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to perform any of the service publishing methods described above.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing the related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and the like.
The service publishing method, apparatus, storage medium, and server provided by the embodiments of the present application are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope in light of the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A service publishing method, comprising:
when a service starts, detecting whether a warm-up annotation is carried;
if the warm-up annotation is carried, rewriting the service health check policy to adjust when the service state is exposed;
exposing the service state after all required warm-up interfaces have been called;
and publishing the service if the service state is healthy.
2. The service publishing method according to claim 1, further comprising, after rewriting the service health check policy and before exposing the service state:
obtaining the interface request protocol from the warm-up annotation;
and assembling a request for the interface request protocol and invoking it.
3. The service publishing method according to claim 1, further comprising, after detecting that the warm-up annotation is carried and before rewriting the service health check policy:
determining whether the warm-up annotation is marked as an SOA call;
if not, rewriting the service health check policy.
4. The service publishing method according to claim 3, further comprising:
if the warm-up annotation is marked as an SOA call, rewriting the preprocessor;
assembling the parameters of the interface to be called according to the method marked with the warm-up annotation;
calling the target interface to complete the interface warm-up;
and publishing the service.
5. The service publishing method according to claim 1, further comprising, after publishing the service:
detecting whether the service startup time is within a preset duration;
if yes, continuing to publish the next target service;
if not, republishing the service.
6. The service publishing method according to any one of claims 1 to 5, wherein the interface request protocol is the http protocol, the json-rpc protocol, or the grpc protocol.
7. A service publishing apparatus, comprising:
a first detection unit, configured to detect, when a service starts, whether a warm-up annotation is carried;
an adjusting unit, configured to rewrite the service health check policy if the warm-up annotation is carried, so as to adjust when the service state is exposed;
a first processing unit, configured to expose the service state after all required warm-up interfaces have been called;
and a first publishing unit, configured to publish the service if the service state is healthy.
8. The service publishing apparatus according to claim 7, further comprising:
an obtaining unit, configured to obtain the interface request protocol from the warm-up annotation after the service health check policy is rewritten and before the service state is exposed;
and a second processing unit, configured to assemble a request for the interface request protocol and invoke it.
9. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the service publishing method according to any one of claims 1 to 6.
10. A server, comprising a processor and a memory, wherein the processor is electrically connected to the memory, the memory is configured to store instructions and data, and the processor is configured to perform the service publishing method according to any one of claims 1 to 6.
CN202310374745.3A 2023-04-10 2023-04-10 Service release method, device, storage medium and server Pending CN116401014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310374745.3A CN116401014A (en) 2023-04-10 2023-04-10 Service release method, device, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310374745.3A CN116401014A (en) 2023-04-10 2023-04-10 Service release method, device, storage medium and server

Publications (1)

Publication Number Publication Date
CN116401014A true CN116401014A (en) 2023-07-07

Family

ID=87019549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310374745.3A Pending CN116401014A (en) 2023-04-10 2023-04-10 Service release method, device, storage medium and server

Country Status (1)

Country Link
CN (1) CN116401014A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980421A (en) * 2023-09-25 2023-10-31 厦门她趣信息技术有限公司 Method, device and equipment for processing tangential flow CPU resource surge under blue-green deployment
CN116980421B (en) * 2023-09-25 2023-12-15 厦门她趣信息技术有限公司 Method, device and equipment for processing tangential flow CPU resource surge under blue-green deployment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination