Disclosure of Invention
Accordingly, the present application is directed to a method, an apparatus, and a device for handling the surge of CPU resources at traffic switching under blue-green deployment. They solve the problem that, under a limited resource configuration, the initial release of traffic in a blue-green deployment causes the CPU usage rate to surge, and the resulting resource contention causes service errors. At the moment of traffic switching, under the same configuration, CPU occupation can be reduced by 40%, improving the availability of the system. In the preheating stage before switching, the scheme also prevents a connection storm, caused by an unreasonable number of preheated connections, from affecting the stability of the upstream service.
According to one aspect of the application, a method for handling the CPU resource surge at traffic switching under blue-green deployment is provided, comprising the following steps:
acquiring the number of pods to be deployed, the total throughput of the upstream service on which the deployed service depends, and the P50 quantile value of the response time of that upstream service;
creating a static management class, and setting the timeout of the connection pool and the per-domain connection limit; calculating a theoretical connection number from the pod count, the total throughput, and the P50 quantile value of the response time; obtaining in real time the connection count of the service in normal operation and taking the larger of that count and the theoretical connection number as a first connection number;
comparing the per-domain connection limit with the first connection number and taking the smaller value as a second connection number, then asynchronously and concurrently issuing the second connection number of calls at a preset interval to preheat the connection pool;
after traffic switching, turning off the preheating and directing the traffic into the new pod set.
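The selection of the first and second connection numbers described in the steps above can be sketched as follows (a minimal illustration; the class and method names are ours, not identifiers from the application):

```java
// Sketch of the connection-number selection described above (hypothetical names).
public final class ConnectionSelection {

    // First connection number: the larger of the theoretical value and the
    // connection count observed while the service runs normally.
    public static int firstConnectionNumber(int theoretical, int observed) {
        return Math.max(theoretical, observed);
    }

    // Second connection number: capped by the per-domain connection limit,
    // which acts as the fallback ceiling for preheating.
    public static int secondConnectionNumber(int perDomainLimit, int first) {
        return Math.min(perDomainLimit, first);
    }

    public static void main(String[] args) {
        int first = firstConnectionNumber(3, 5);          // observed 5 > theoretical 3
        int second = secondConnectionNumber(10, first);   // limit 10 > first 5
        System.out.println(first + " " + second);
    }
}
```

The max step guards against underestimating the theoretical need; the min step is the fallback cap that keeps preheating from flooding the upstream service.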
In this technical scheme, the creation of connections is moved forward in time: a reasonable number of connections is preheated in the connection pool, so the resource contention is staggered in time. This ensures that requests do not fail when a large volume of traffic is switched under limited resources, and, because the number of preheated connections is controlled, the preheating does not trigger a connection storm toward the upstream service and does not affect its stability.
In some embodiments, the number of pods to be deployed is obtained as follows:
creating a first configuration file, and filling in a command in the first configuration file to obtain the number of pods to be deployed;
mapping the first configuration file into a second configuration file, where the second configuration file is the configuration file of the pods to be deployed;
and modifying the second configuration file based on the first configuration file, adding an environment variable carrying the pod count to the configuration file, and updating the second configuration file after the modification.
In the above technical solution, the purpose of this arrangement is to obtain the number of pods of the service deployment directly while the steps run, which improves efficiency. Specifically, through the association between the first and second configuration files, the pod count of the service deployment can be read directly from the second configuration file and written into an environment variable at run time.
In some embodiments, a static management class is created and the timeout of the connection pool and the per-domain connection limit are set, specifically:
creating a static management class, where the static class manages the configuration of the connection pool;
setting the timeout of the connection pool configuration to not less than 10 minutes; and,
setting the per-domain connection limit to not less than 10.
In this technical scheme, a static management class is implemented and the timeout of the connection pool configuration is modified to a value of not less than 10 minutes, which reduces the probability that connections are reclaimed before traffic switching. The per-domain connection limit of the connection pool is modified to not less than 10, keeping it consistent with the maximum number of connections to be preheated in the preceding step, so that enough connections are available under a large traffic impact. It should be noted that this value of 10 is a fallback: the number of preheated connections is capped at 10 and cannot exceed it.
In some embodiments, the theoretical connection number is calculated from the pod count, the total throughput, and the P50 quantile value of the response time, specifically:
the throughput of a single pod is calculated as:
Q = S / (60 × C)
where Q is the average throughput (qps) of a single pod, S is the total throughput (qpm), and C is the number of pods;
the throughput A to be processed within 1 second and the theoretical per-connection throughput upper limit B are calculated as:
A = Q × (1 + T / 1000)
B = 1000 / T
where T is the P50 quantile value of the response time in milliseconds;
and the theoretical connection number D is calculated from A and B as:
D = ⌈A / B⌉.
in the above technical solution, according to statistics, it is known that the tcp connection time is not more than about 50ms each time. During a window of 50ms there is no connection available for the first time of metering, and the request is squeezed. The request for extrusion is treated as normal based on 1s of energy. The theoretical number of connections is thus calculated in this embodiment in 1 second as a window to ensure the rationality of the subsequent pre-heat connections.
In some embodiments, the connection count of the service in normal operation is obtained in real time and the larger of that count and the theoretical connection number is taken as the first connection number, specifically:
establishing a connection pool management class to manage the connection pool, setting up a timed acquisition task through that class, and executing, once every first preset interval, the task of obtaining the connection count of the service in normal operation;
and comparing each obtained connection count with the theoretical connection number and taking the larger value as the first connection number.
In the above technical solution, the connection pool is managed by this class. A timed task is started in the system and executed once every first preset interval, obtaining the number of currently used connections through the connection pool management class. When the method runs, this value is compared with the calculated theoretical connection number and the larger one is taken, adding a further safety margin and keeping the subsequent preheating reasonable.
In some embodiments, the per-domain connection limit is compared with the first connection number to obtain the smaller value as the second connection number, and the second connection number of calls is issued asynchronously and concurrently at a preset interval to preheat the connection pool, specifically:
monitoring the traffic-switching process and judging whether business traffic has entered during it;
if not, comparing the per-domain connection limit with the first connection number to obtain the smaller value as the second connection number, setting up a timed calling task, and, once every second preset interval, asynchronously and concurrently issuing the second connection number of calls to preheat the connection pool;
if yes, stopping the timed asynchronous concurrent calls and no longer preheating the connection pool.
In the above technical solution, taking the smaller of the per-domain connection limit and the first connection number as the second connection number is a further fallback: it prevents an unexpected connection storm from affecting service stability. Meanwhile, the asynchronous calls form a timed task executed once every second preset interval. A switch is placed in the timed task: when business traffic enters the service, the preheating switch is closed and preheating stops, avoiding invalid accesses. This handles the fact that the moment of traffic switching is not controllable, ensuring that the preheated connections remain effective. The switch confines preheating to the period before traffic switching, so it neither interferes with the service's normal traffic handling nor creates redundant connections through preheating.
According to another aspect of the present application, an apparatus for handling the CPU resource surge at traffic switching under blue-green deployment is provided, including: an acquisition module, a connection pool management module, a preheating module, and a traffic-switching module, connected in sequence;
the acquisition module is used to acquire the number of pods to be deployed, the total throughput of the upstream service on which the deployed service depends, and the P50 quantile value of the response time of that upstream service;
the connection pool management module is used to create a static management class and set the timeout of the connection pool and the per-domain connection limit; to calculate the theoretical connection number from the pod count, the total throughput, and the P50 quantile value of the response time; and to obtain in real time the connection count of the service in normal operation and take the larger of that count and the theoretical connection number as the first connection number;
the preheating module is used to compare the per-domain connection limit with the first connection number to obtain the smaller value as the second connection number, and to issue the second connection number of calls asynchronously and concurrently at a preset interval to preheat the connection pool;
and the traffic-switching module is used to turn off preheating after traffic switching and direct the traffic into the new pod set.
In the above technical solution, to better apply the method, the different steps are built into different modules connected in sequence, so the method can be used more efficiently. The principle and effect of each step have been described above and are not repeated here.
According to still another aspect of the present application, there is provided a device for handling the CPU resource surge at traffic switching under blue-green deployment, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for handling the CPU resource surge at traffic switching under blue-green deployment as described above.
In the above technical solution, for better running of the method, the method is stored in a memory and executed by a processor. The principle and effect of each step have been described above and are not repeated here.
According to a further aspect of the present application, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the method for handling the CPU resource surge at traffic switching under blue-green deployment as described above.
In the above technical solution, for better operation and use, the method is stored in a computer-readable storage medium and implemented by a processor. The principle and effect of each step have been described above and are not repeated here.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is specifically noted that the following examples are only for illustrating the present application, but do not limit the scope of the present application. Likewise, the following examples are only some, but not all, of the examples of the present application, and all other examples, which a person of ordinary skill in the art would obtain without making any inventive effort, are within the scope of the present application.
The application provides a method, an apparatus, and a device for handling the surge of CPU resources at traffic switching under blue-green deployment, which solve the problem that, under a limited resource configuration, the initial release of traffic in a blue-green deployment causes the CPU usage rate to surge, and the resulting resource contention causes service errors. At the moment of traffic switching, under the same configuration, CPU occupation can be reduced by 40%, improving the availability of the system. In the preheating stage before switching, the scheme also prevents a connection storm, caused by an unreasonable number of preheated connections, from affecting the stability of the upstream service.
First embodiment
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of the method for handling the CPU resource surge at traffic switching under blue-green deployment of the present application. It should be noted that, if substantially the same results are obtained, the method of the present application is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the method comprises the following steps:
S101: acquiring the number of pods to be deployed, the total throughput of the upstream service on which the deployed service depends, and the P50 quantile value of the response time of that upstream service;
in this embodiment, the blue-green deployment is to directly deploy a new version without stopping the old version in the deployment process, and after the new version is operated, switch the traffic to the new version entirely. And if the verification of the new version is passed, deleting the old version, otherwise, switching the flow to the old version again. In the process, the new version and the old version are deployed simultaneously, and the use condition of resources needs to be considered. The main improvement of the present application is that the improvement is performed before the traffic switching, so the preamble step of the blue-green deployment can refer to the prior art, and the present embodiment is not limited.
In this embodiment, a pod is the smallest manageable unit in a Kubernetes cluster: a container, or a combination of containers, that shares a network namespace and storage volumes and may share an IP address. Pods are the building blocks for scalable, highly available applications.
In this embodiment, the total throughput of the upstream service on which the deployed service depends means, for example: the service to be deployed is service B, which depends on service A; the number of connections preheated by service B is related to the qps of requests that service A receives from service B.
In this embodiment, the P50 quantile value of the response time of the upstream service is its median. If 100 requests are sorted by response time in ascending order, the value at position 50 is the P50 value. A P50 response time of 200 ms means that half of the requests respond within 200 ms and half take longer. This P50 value is needed in this embodiment to calculate the theoretically required number of connections.
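As an illustration of the P50 definition above, a minimal sketch of how a P50 value can be computed from recorded response times (the embodiment obtains this value from monitoring data; the code and its names are ours):

```java
import java.util.Arrays;

// Minimal P50 (median-position) computation over recorded response times.
public final class P50 {
    // Returns the value at the 50th-percentile position of the sorted samples,
    // i.e. position ceil(0.5 * n) in 1-based terms, as in "position 50 of 100".
    public static long p50(long[] responseTimesMs) {
        long[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(sorted.length * 0.5) - 1;
        return sorted[idx];
    }

    public static void main(String[] args) {
        long[] samples = {120, 80, 200, 150, 90};
        // sorted: 80 90 120 150 200 -> position 3 of 5 is 120
        System.out.println(p50(samples));
    }
}
```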
In this embodiment, the number of pods to be deployed is obtained as follows: creating a first configuration file, and filling in a command in the first configuration file to obtain the number of pods to be deployed; mapping the first configuration file into a second configuration file, where the second configuration file is the configuration file of the pods to be deployed; and modifying the second configuration file based on the first configuration file, adding an environment variable carrying the pod count, and updating the second configuration file after the modification.
In this embodiment, the purpose of this arrangement is to obtain the number of pods of the service deployment directly while the steps run, which improves efficiency. Specifically, through the association between the first and second configuration files, the pod count can be read directly from the second configuration file and written into an environment variable at run time.
S102: creating a static management class, and setting the timeout of the connection pool and the per-domain connection limit; calculating the theoretical connection number from the pod count, the total throughput, and the P50 quantile value of the response time; obtaining in real time the connection count of the service in normal operation and taking the larger of that count and the theoretical connection number as the first connection number;
in this embodiment, a static management class (httpmentulils) is used to obtain a connection pool management class corresponding to the domain name. The corresponding httpcalient initiation request is obtained by httpcalientntils. The static class maintains a mapping Map of the one domain name and httpllient. The internal implementation of the getHttpCLient method is obtained from the Map. At initialization, the connection to be preheated is registered in httpmentulils. The scheme is adopted to ensure high cohesion and low coupling of codes and has good expansibility.
In this embodiment, a static management class is created, and the timeout of the connection pool and the per-domain connection limit are set, specifically:
creating a static management class, where the static class manages the configuration of the connection pool;
setting the timeout of the connection pool configuration to not less than 10 minutes; and,
setting the per-domain connection limit to not less than 10.
In this embodiment, a static management class is implemented, and the timeout of the connection pool configuration is modified to a value of not less than 10 minutes, reducing the probability that connections are reclaimed before traffic switching. The per-domain connection limit of the connection pool is modified to not less than 10, keeping it consistent with the maximum number of connections to be preheated in the preceding step, so that enough connections are available under a large traffic impact.
In this embodiment, the theoretical connection number is calculated from the pod count, the total throughput, and the P50 quantile value of the response time, specifically:
the throughput of a single pod is calculated as:
Q = S / (60 × C)
where Q is the average throughput (qps) of a single pod, S is the total throughput (qpm), and C is the number of pods;
the throughput A to be processed within 1 second and the theoretical per-connection throughput upper limit B are calculated as:
A = Q × (1 + T / 1000)
B = 1000 / T
where T is the P50 quantile value of the response time in milliseconds;
and the theoretical connection number D is calculated from A and B as:
D = ⌈A / B⌉.
in this embodiment, the throughput of a single pod is the uptake qps of a single pod.
In this embodiment, it is known from statistics that the tcp connection time is not more than about 50ms each time. During a window of 50ms there is no connection available for the first time of metering, and the request is squeezed. The request for extrusion is treated as normal based on 1s of energy. The theoretical number of connections is thus calculated in this embodiment in 1 second as a window to ensure the rationality of the subsequent pre-heat connections.
In this embodiment, the connection count of the service in normal operation is obtained in real time, and the larger of that count and the theoretical connection number is taken as the first connection number, specifically:
establishing a connection pool management class to manage the connection pool, setting up a timed acquisition task through that class, and executing, once every first preset interval, the task of obtaining the connection count of the service in normal operation;
and comparing each obtained connection count with the theoretical connection number and taking the larger value as the first connection number.
In this embodiment, the connection pool is managed by this class. A timed task is started in the system and executed once every first preset interval, obtaining the number of currently used connections through the connection pool management class. When the method runs, this value is compared with the calculated theoretical connection number and the larger one is taken, adding a further safety margin and keeping the subsequent preheating reasonable.
S103: comparing the per-domain connection limit with the first connection number to obtain the smaller value as the second connection number, and asynchronously and concurrently issuing the second connection number of calls at a preset interval to preheat the connection pool;
In this embodiment, the per-domain connection limit is compared with the first connection number to obtain the smaller value as the second connection number, and the second connection number of calls is issued asynchronously and concurrently at a preset interval to preheat the connection pool, specifically:
monitoring the traffic-switching process and judging whether business traffic has entered during it;
if not, comparing the per-domain connection limit with the first connection number to obtain the smaller value as the second connection number, setting up a timed calling task, and, once every second preset interval, asynchronously and concurrently issuing the second connection number of calls to preheat the connection pool;
if yes, stopping the timed asynchronous concurrent calls and no longer preheating the connection pool.
In this embodiment, taking the smaller of the per-domain connection limit and the first connection number as the second connection number is a further fallback: it prevents an unexpected connection storm from affecting service stability. Meanwhile, the asynchronous calls form a timed task executed once every second preset interval. A switch is placed in the timed task: when business traffic enters the service, the preheating switch is closed and preheating stops, avoiding invalid accesses. This handles the fact that the moment of traffic switching is not controllable, ensuring that the preheated connections remain effective. The switch confines preheating to the period before traffic switching, so it neither interferes with the service's normal traffic handling nor creates redundant connections through preheating.
S104: after traffic switching, turning off the preheating and directing the traffic into the new pod set.
The first embodiment is further explained below with a specific case. Referring to fig. 2, the method specifically includes the following steps:
step one: a configuration file of ConfigMap is created. Specific flash commands are filled in the configuration to obtain the number of pod. Key configuration data, pod_count.sh: |# | bin/flash
POD_NUM=$(kubectl get pods -l app.kubernetes.io/name=j78 -o json | jq -r `.items | length`)
echo “POD_NUM=$POD_NUM”>/etc/pod_env/pod_num_env
In the above shell command, j78 is the service label to be queried. The purpose of this step is to obtain the pod count with a command.
Step two: map the yaml configuration file created in step one into the volumes section of the Pod configuration file and mount it under the container's volumeMounts. Modify the Pod's yaml configuration file and add POD_NUM-related configuration under spec: containers: env. The key added contents are as follows:
spec: containers: env: - name: POD_NUM;
spec: containers: env: valueFrom: configMapKeyRef: name: pod-count-configMap, where pod-count-configMap is replaced according to the actual situation and must match the name created in the yaml file of step one;
spec: containers: env: valueFrom: configMapKeyRef: key: pod_num_env, where pod_num_env must match the redirection target of the shell command in step one;
spec: volumes: configMap: name: pod-count-configMap, where pod-count-configMap matches the name in the yaml file of step one;
spec: volumeMounts: mountPath: /etc/pod_env, where /etc/pod_env matches the shell command script of step one.
Update the Pod file configuration; during Pod startup, the pod count corresponding to the service deployment is obtained and written into the environment variable POD_NUM.
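On the service side, the injected variable can then be read back; the parsePodNum helper and its fallback value of 1 below are illustrative assumptions, not part of the embodiment:

```java
// Reads the pod count injected in step two via the POD_NUM environment
// variable. Falls back to 1 when the variable is absent or malformed
// (hypothetical defensive default).
public final class PodCount {
    public static int parsePodNum(String raw) {
        if (raw == null) return 1;
        try {
            int n = Integer.parseInt(raw.trim());
            return n > 0 ? n : 1;
        } catch (NumberFormatException e) {
            return 1;
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePodNum(System.getenv("POD_NUM")));
    }
}
```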
Step three: query ClickHouse to obtain the qpm of the upstream service on which the service to be deployed depends [for example, service a calls service b: query the qpm of requests from service a to service b], convert the qpm into an average qps, and calculate the average qps of a single pod from the pod count obtained earlier. At the same time, obtain the P50 quantile value of the response time of the depended-on service. According to statistics, each TCP connection setup takes no more than about 50 ms. During that 50 ms window no connection is yet available for the first requests, so requests queue up; the queued requests are assumed to be processed normally within 1 second.
The variables are set as follows:
S: qpm of requests from service a to service b, for example: 1000 qpm
C: number of pods of service b, for example: 20
Q: qps of a single pod of service b, for example: 50 qps
T: P50 response time of service b, in ms, for example: 50 ms
The calculation process is as follows:
Q = S / 60 / C [average data excludes sporadic factors and is relatively reliable]
Requests to be processed within 1 s: A = (50/1000 + 1) × Q = 52.5 qps
Theoretical acceptable qps of a single connection of service b: B = 1000 ms / 50 ms = 20 qps
Theoretical number of connections required: D = 52.5 / 20 = 2.625, rounded up to 3 connections.
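The calculation of step three can be written out as a short program (a sketch following the formulas above; the class and method names are illustrative):

```java
// Theoretical connection count, following the formulas of step three:
//   A = Q * (1 + T/1000)  requests to absorb within 1 s, including those
//                         queued while no connection is yet available
//   B = 1000 / T          qps one connection can theoretically serve
//   D = ceil(A / B)       connections needed
public final class TheoreticalConnections {
    public static int compute(double qpsPerPod, double p50Ms) {
        double a = qpsPerPod * (1 + p50Ms / 1000.0);
        double b = 1000.0 / p50Ms;
        return (int) Math.ceil(a / b);
    }

    public static void main(String[] args) {
        // Q = 50 qps, T = 50 ms  ->  A = 52.5, B = 20, D = ceil(2.625) = 3
        System.out.println(compute(50, 50));
    }
}
```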
Step four: obtain the number of connections currently used by the service in normal operation and take the larger of it and the result calculated in step three. A custom PoolingHttpClient class is encapsulated with a PoolingHttpClientConnectionManager attribute, and the connection pool is managed through this class. A timed task is started in the system and executed every 5 s, obtaining the number of currently used connections through the connection pool management class's getStatus() method and writing it into Redis. At service startup, the number of in-use connections is read from Redis. Comparing this value with the calculation result of step three and taking the larger one adds a further safety margin and keeps the subsequent preheating reasonable.
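The timed sampling of step four can be sketched as follows; an IntSupplier and an in-memory AtomicInteger stand in for the pool manager's getStatus() call and the Redis write, and the names are ours:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

// Sketch of step four: periodically sample the in-use connection count
// (the embodiment reads it from the pool manager and stores it in Redis;
// here an IntSupplier and an AtomicInteger stand in for both), then take
// the larger of the observation and the theoretical value.
public final class ConnectionSampler {
    private final AtomicInteger lastObserved = new AtomicInteger(0);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // One sample: record the current in-use connection count.
    public void sampleOnce(IntSupplier inUseConnections) {
        lastObserved.set(inUseConnections.getAsInt());
    }

    // Schedule sampling every periodSeconds, as the 5 s timed task does.
    public void start(IntSupplier inUseConnections, long periodSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> sampleOnce(inUseConnections), 0, periodSeconds, TimeUnit.SECONDS);
    }

    // First connection number: the larger of observed and theoretical values.
    public int firstConnectionNumber(int theoretical) {
        return Math.max(lastObserved.get(), theoretical);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```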
In this embodiment, ApplicationListener<ApplicationReadyEvent> is a Spring Boot hook for listening to service startup, used to specify operations to be performed after the service starts.
Redis: distributed cache service.
PoolingHttpClientConnectionManager: pooling connection manager.
idleTimeout: the maximum idle time of a connection; a connection unused beyond this threshold is reclaimed.
PoolingHttpClient: custom tool class.
Step five: implement a static class HttpClientUtils and, before creating the PoolingHttpClient instance, modify the idleTimeout of the HttpClient to 10 minutes, reducing the probability that connections are reclaimed before traffic switching. Modify the connection pool configuration of the HttpClient, setting the per-domain connection limit to 10, consistent with the maximum number of connections to be preheated in the preceding step, so that enough connections are available under a high-traffic impact.
Step six: inherit ApplicationListener<ApplicationReadyEvent> and asynchronously and concurrently invoke the service's request n times [consistent with the value obtained in step four] to preheat connections. The value of n is obtained by comparing the preheat connection number with 10 [the per-domain connection limit] and taking the smaller value, a further fallback that prevents an unexpected connection storm from affecting service stability. Meanwhile, the asynchronous call is a timed task, executed every 4 minutes. A switch is placed in the timed task: when business traffic enters the service, the preheating switch is closed and preheating stops, avoiding invalid accesses. This handles the fact that the moment of traffic switching is not controllable, ensuring that the preheated connections remain effective. The switch confines preheating to the period before traffic switching, so it neither interferes with the service's normal traffic handling nor creates redundant connections through preheating.
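The timed, switch-guarded preheating of step six can be sketched as follows; the Runnable stands in for the real warm-up request through the pool, and the names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of step six: every few minutes fire n asynchronous concurrent
// warm-up calls, guarded by a switch that incoming business traffic closes.
public final class Preheater {
    private final AtomicBoolean enabled = new AtomicBoolean(true);
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // One preheat round: n concurrent calls, skipped once the switch is closed.
    public void runOnce(int n, Runnable warmUpCall) {
        if (!enabled.get()) return;
        for (int i = 0; i < n; i++) {
            workers.submit(warmUpCall);
        }
    }

    // Schedule a round every periodMinutes, as the timed task of step six does.
    public void start(int n, Runnable warmUpCall, long periodMinutes) {
        timer.scheduleAtFixedRate(
                () -> runOnce(n, warmUpCall), 0, periodMinutes, TimeUnit.MINUTES);
    }

    // Called when business traffic enters the service: close the switch.
    public void trafficArrived() {
        enabled.set(false);
    }

    // Drain outstanding warm-up calls (used here mainly for testing).
    public void awaitQuiescence() {
        workers.shutdown();
        try {
            workers.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void shutdown() {
        timer.shutdownNow();
        workers.shutdownNow();
    }
}
```

The AtomicBoolean is the "switch" of the text: once traffic arrives, subsequent timer rounds become no-ops, so preheating cannot create redundant connections during normal traffic handling.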
Step seven: switch the traffic and direct it into the new pod set.
Referring to fig. 3 and fig. 4: fig. 3 is a first test diagram of a specific case of an embodiment of the method for handling the CPU resource surge at traffic switching under blue-green deployment of the present application; fig. 4 is a second such test diagram. The figures show that the resource contention of the service's initial startup and that of the traffic switching are reasonably staggered, achieving the effect of reducing CPU occupation during traffic switching.
The application solves the problem that, when a blue-green deployment scheme is used under a limited requested-CPU configuration on a K8S containerized scheduling platform, the pod CPU resource usage surges at the moment of traffic switching and short-lived system anomalies occur. By reasonably preheating the connection pool used by the system, the CPU-heavy operations of establishing connections and of handling the instantaneous large traffic are staggered in time. Meanwhile, because the number of connections to establish is calculated from actual needs, the scheme avoids the situation in which, with many downstream pods, unreasonable connection preheating during downstream service deployment causes a connection storm and affects the stability of the upstream service. Service can therefore be provided normally at the moment of traffic switching despite limited CPU resources. The application addresses the situation in which, under limited resources, the initial release of traffic in a blue-green deployment causes the CPU usage rate to surge and resource contention leads to service errors. At the moment of traffic switching, under the same configuration, CPU occupation can be reduced by 40%, improving the availability of the system. In the preheating stage before switching, an unreasonable number of preheated connections is also prevented from triggering a connection storm that affects upstream service stability.
The application brings the connection-establishment operation forward and preheats the connection pool with a reasonable number of connections, thereby staggering the resource competition in time. As a result, no requests fail when a large volume of traffic is cut over under limited resources; and because the preheating count is bounded, the warm-up itself does not create a connection storm that occupies resources on the upstream service and affects its stability.
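The connection-count derivation described above and in the first embodiment can be sketched as follows. This is a minimal illustrative Python sketch, not code from the application (the patent prescribes no language, and the function name and parameters are hypothetical); it assumes the theoretical connection number follows Little's law, i.e. total throughput times P50 response time spread evenly across the pods to be deployed:

```python
import math

def warmup_connections(pods, total_qps, p50_rt_sec, live_conns, per_domain_limit):
    """Derive the number of connections to preheat (illustrative sketch)."""
    # Theoretical connection number via Little's law: concurrent requests =
    # throughput x P50 response time, divided across the pods to be deployed.
    theoretical = math.ceil(total_qps * p50_rt_sec / pods)
    # First connection number: the larger of the theoretical value and the
    # connection count observed in real time while the service runs normally.
    first = max(theoretical, live_conns)
    # Second connection number: capped by the single-domain-name connection
    # limit so that preheating cannot trigger a connection storm upstream.
    second = min(first, per_domain_limit)
    return second

# e.g. 4 pods, 2000 req/s, 40 ms P50, 15 live connections, per-domain limit 50
print(warmup_connections(4, 2000, 0.04, 15, 50))  # -> 20
```

The final `min` against the per-domain-name limit is what bounds the warm-up and prevents the connection storm discussed above.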
Second embodiment
An apparatus for processing a tangential-flow CPU resource surge in a blue-green deployment, comprising: an acquisition module, a connection pool management module, a preheating module, and a tangential-flow module, connected in sequence;
the acquisition module is used for acquiring the number of pods to be deployed, the total throughput of the upstream service on which the deployed service depends, and the P50 quantile value of the response time of that upstream service;
the connection pool management module is used for creating a static management class and setting the timeout of the connection pool and the number of single-domain-name connections; it calculates a theoretical connection number from the pod count, the total throughput, and the P50 quantile value of the response time, obtains in real time the connection count of the service in normal operation, and takes the larger of the two values as a first connection number;
the preheating module is used for taking the smaller of the single-domain-name connection number and the first connection number as a second connection number, and for periodically issuing the second connection number of synchronous calls at a preset time to preheat the connection pool;
and the tangential-flow module is used for turning off preheating after the tangential flow and directing the traffic into the new pod set.
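The cooperation of the modules above can be sketched as a hypothetical Python skeleton. All class and method names here are illustrative, not taken from the application, and a real implementation would replace the `sync_call` placeholder with a lightweight request (e.g. a health check) that forces the pool to actually open a connection:

```python
import threading

class ConnectionPoolManager:
    """Static management class: holds the pool timeout and per-domain limit."""
    timeout_sec = 30
    per_domain_limit = 50
    _warm = set()  # stands in for connections kept alive in the pool

    @classmethod
    def sync_call(cls, i):
        # Placeholder for a synchronous request that makes the pool
        # establish and cache connection i ahead of the traffic switch.
        cls._warm.add(i)

class BlueGreenCutover:
    def __init__(self, second_conn_count, interval_sec=1.0):
        self.second = second_conn_count      # second connection number
        self.interval = interval_sec         # preset preheating period
        self._preheating = threading.Event()

    def preheat_once(self):
        # Preheating module: issue `second` synchronous calls so the pool
        # already holds the connections before traffic is switched.
        self._preheating.set()
        for i in range(self.second):
            ConnectionPoolManager.sync_call(i)

    def cut_flow(self, new_pods):
        # Tangential-flow module: stop preheating and direct traffic to the
        # new pod set; since the connections already exist, connection setup
        # no longer competes with request handling for CPU at this instant.
        self._preheating.clear()
        return {"traffic_to": new_pods,
                "warm_conns": len(ConnectionPoolManager._warm)}
```

In this sketch, establishing connections (`preheat_once`) and serving the cutover traffic (`cut_flow`) are separate steps, which is the time-staggering of CPU cost that the embodiment describes.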
In this embodiment, in order to better apply the method described in the first embodiment, its steps are mapped in order to the modules above, which are connected in series so that the method can be used more efficiently. The principle and effect of each step have been described above and are not repeated here.
Third embodiment
A device for processing a tangential-flow CPU resource surge in a blue-green deployment, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for processing a tangential-flow CPU resource surge in a blue-green deployment as described above.
In this embodiment, to better run the method described in the first embodiment, the method is stored in the memory and executed by the processor. The principle and effect of each step have been described above and are not repeated here.
Fourth embodiment
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for processing a tangential-flow CPU resource surge in a blue-green deployment as described above.
In this embodiment, for better operation and use of the method described in the first embodiment, the method is stored in a computer-readable storage medium and implemented by a processor. The principle and effect of each step have been described above and are not repeated here.
The foregoing describes only some embodiments of the present application and is not intended to limit its scope; all equivalent devices or equivalent processes made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the scope of protection of the present application.