CN117032991B - Gray scale publishing method, device and system - Google Patents


Info

Publication number
CN117032991B
Authority
CN
China
Prior art keywords
gray
current user
resource request
end resource
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311286247.XA
Other languages
Chinese (zh)
Other versions
CN117032991A (en)
Inventor
李珊珊
陈健
韩亮亮
周嘉琦
王伦
刘雨轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank Of Ningbo Co ltd
Original Assignee
Bank Of Ningbo Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank Of Ningbo Co ltd filed Critical Bank Of Ningbo Co ltd
Priority to CN202311286247.XA priority Critical patent/CN117032991B/en
Publication of CN117032991A publication Critical patent/CN117032991A/en
Application granted granted Critical
Publication of CN117032991B publication Critical patent/CN117032991B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The disclosure provides a gray scale publishing method, device and system, and relates to the field of computer technology. The method comprises the following steps: a first server obtains a front-end resource request corresponding to a current user issued by a client, and a second server obtains a back-end resource request corresponding to the current user issued by the client; when the front-end resource request includes a first gray field, the gray front-end resource corresponding to the front-end resource request is invoked based on a first path, wherein the first gray field indicates that the current user is a gray user, and the first path is the storage path of the gray front-end resource; and when the back-end resource request includes a second gray field, the gray back-end resource corresponding to the back-end resource request is enabled based on a first node, wherein the second gray field indicates that the current user is a gray user, and the first node is used to enable the gray back-end resource.

Description

Gray scale publishing method, device and system
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a gray level publishing method, device and system.
Background
Gray release is a release mode that allows a smooth transition: one part of the users use the new product features while another part continue to use the old product features, and if user feedback on the new features is good, all users are migrated to the new features.
In the prior art, gray release is mainly realized by configuring a load-balancing strategy in nginx or by using a gray release tool. However, these approaches are costly, cannot guarantee the uniqueness and accuracy of the user classification identifier across the whole product link, and cannot dynamically control the traffic of the gray customer group.
Disclosure of Invention
The disclosure provides a gray level publishing method, device and system, which at least solve the technical problems existing in the prior art.
According to a first aspect of the present disclosure, there is provided a gray scale publishing method, the method comprising: a first server obtains a front-end resource request corresponding to a current user issued by a client, and a second server obtains a back-end resource request corresponding to the current user issued by the client; when the front-end resource request includes a first gray field, the gray front-end resource corresponding to the front-end resource request is invoked based on a first path, wherein the first gray field indicates that the current user is a gray user, and the first path is the storage path of the gray front-end resource; and when the back-end resource request includes a second gray field, the gray back-end resource corresponding to the back-end resource request is enabled based on a first node, wherein the second gray field indicates that the current user is a gray user, and the first node is used to enable the gray back-end resource.
In an embodiment, before the first server obtains the front-end resource request corresponding to the current user issued by the client and the second server obtains the back-end resource request, the method further includes: when the user account of the current user is not in the initial login state, the client obtains the classification identifier of the current user based on a gray service interface; and when the classification identifier is a gray classification identifier, the client adds a first gray field to the front-end resource request corresponding to the current user and adds a second gray field to the back-end resource request corresponding to the current user.
In an embodiment, obtaining the classification identifier of the current user through the gray service interface includes: obtaining an initial classification identifier of the current user based on the gray service interface; if the gray service interface returns the initial classification identifier within a target time, determining the initial classification identifier as the classification identifier of the current user; and if the gray service interface does not return the initial classification identifier within the target time, determining a cached classification identifier as the classification identifier of the current user.
In an embodiment, before the gray service interface obtains the classification identifier of the current user, the method further includes: the gray micro server calculates the hit ratio of the current user based on the user account and an offset of the current user; and the gray micro server adds a gray classification identifier to the current user when the hit ratio of the current user meets a target threshold and/or the user account of the current user belongs to a target customer group. The hit ratio of the current user is calculated based on the following formula: P = (A + B) mod 100, wherein P is the hit ratio of the current user, A is the last two digits of the user account, and B is the offset.
In an embodiment, invoking the gray front-end resource corresponding to the front-end resource request based on the first path includes: the first server establishes the first path and stores the gray front-end resource in the first path; and when the front-end resource request includes the first gray field, the first server invokes the gray front-end resource corresponding to the front-end resource request from the first path.
In an embodiment, the second server has a monolithic architecture, and enabling the gray back-end resource corresponding to the back-end resource request based on the first node includes: the second server mounts a gray resource pool on a load balancer; and when the back-end resource request includes the second gray field, the second server distributes the back-end resource request to the gray resource pool, so that the gray resource pool enables the gray back-end resource based on the back-end resource request, the gray resource pool being the first node on the load balancer.
In an embodiment, the second server has a micro-service architecture, and enabling the gray back-end resource corresponding to the back-end resource request based on the first node includes: the second server establishes a gray cluster based on a standard configuration; and when the back-end resource request includes the second gray field, the second server distributes the back-end resource request to the gray cluster, so that the gray cluster enables the gray back-end resource based on the back-end resource request, the gray cluster being the first node in the second server.
According to a second aspect of the present disclosure, there is provided a gray scale publishing system, comprising: a client, a first server, a second server, and a gray micro server; the gray micro server is used to add a classification identifier for the current user; the client is used to add a first gray field to the front-end resource request corresponding to the current user and a second gray field to the back-end resource request corresponding to the current user when the classification identifier is a gray classification identifier; the first server is used to invoke the gray front-end resource based on the front-end resource request; and the second server is used to invoke the gray back-end resource based on the back-end resource request.
According to a third aspect of the present disclosure, there is provided a gray scale publishing apparatus, comprising: a first acquisition module, used to acquire a front-end resource request corresponding to a current user issued by the client; and an invoking module, used to invoke the gray front-end resource corresponding to the front-end resource request based on a first path when the front-end resource request includes a first gray field, wherein the first gray field indicates that the current user is a gray user, and the first path is the storage path of the gray front-end resource.
According to a fourth aspect of the present disclosure, there is provided a gray scale publishing apparatus, comprising: a second acquisition module, used to acquire a back-end resource request corresponding to the current user issued by the client; and an enabling module, used to enable the gray back-end resource corresponding to the back-end resource request based on a first node when the back-end resource request includes a second gray field, wherein the second gray field indicates that the current user is a gray user, and the first node is used to enable the gray back-end resource.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
According to the gray scale publishing method, device and system of the present disclosure, the front-end resource request and the back-end resource request corresponding to the current user are obtained respectively; when the front-end resource request includes the first gray field, the gray front-end resource corresponding to the front-end resource request is invoked based on the first path; and when the back-end resource request includes the second gray field, the gray back-end resource corresponding to the back-end resource request is enabled based on the first node. In this way, the gray field is passed through the full link of the first server and the second server, which guarantees the uniqueness and accuracy of the user classification identifier. In addition, gray users are hit based on the user account of the current user, the offset, and the customer group to which the user account belongs, so that the traffic of the gray customer group can be dynamically controlled.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a first flowchart of a gray scale publishing method according to an embodiment of the present disclosure;
Fig. 2 shows a second flowchart of a gray scale publishing method according to an embodiment of the present disclosure;
Fig. 3 shows a third flowchart of a gray scale publishing method according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a gray scale publishing system according to an embodiment of the present disclosure;
Fig. 5 shows a first schematic structural diagram of a gray scale publishing device according to an embodiment of the present disclosure;
Fig. 6 shows a second schematic structural diagram of a gray scale publishing device according to an embodiment of the present disclosure;
Fig. 7 shows a schematic diagram of the composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Fig. 1 shows a first flowchart of a gray scale publishing method according to an embodiment of the present disclosure; as shown in fig. 1, the gray scale publishing method includes:
in step S101, the first server obtains a front-end resource request corresponding to a current user issued by the client, and the second server obtains a back-end resource request corresponding to the current user issued by the client.
In the disclosure, a gray level release method is applied to a gray level release system, wherein the gray level release system comprises a client, a first service end and a second service end, the client is used for receiving a service request of a current user, and adding a gray level field to the service request of the current user under the condition that a classification identifier of the current user is a gray level classification identifier, and transmitting the service request added with the gray level field to the first service end and the second service end, wherein the gray level classification identifier represents that the current user is a gray level user; the first server is used for calling front-end resources based on front-end resource requests in the service requests; the second server is used for calling the back-end resource based on the back-end resource request in the service request. Specifically, the first server needs to obtain a front-end resource request corresponding to a current user issued by the client, and the second server needs to obtain a back-end resource request corresponding to the current user issued by the client.
In step S102, when the front-end resource request includes the first gray-scale field, the first server invokes the gray-scale front-end resource corresponding to the front-end resource request based on the first path.
In the present disclosure, if the front-end resource request includes the first gray field, this indicates that the current user is a gray user; in this case, the gray front-end resource corresponding to the front-end resource request is invoked based on the first path, where the first path is the storage path of the gray front-end resource. If the front-end resource request does not include the first gray field, the current user is not a gray user, and the non-gray front-end resource is invoked through the normal resource-invoking path.
In step S103, if the back-end resource request includes the second gray level field, the second server starts the gray level back-end resource corresponding to the back-end resource request based on the first node.
In the present disclosure, if the back-end resource request includes the second gray field, this indicates that the current user is a gray user; in this case, the gray back-end resource corresponding to the back-end resource request is enabled based on the first node, where the first node is used to enable the gray back-end resource. If the back-end resource request does not include the second gray field, the current user is not a gray user, and the non-gray back-end resource is invoked through the normal resource-mounting node.
According to the gray scale publishing method of the present disclosure, the front-end resource request and the back-end resource request corresponding to the current user are obtained respectively; when the front-end resource request includes the first gray field, the gray front-end resource corresponding to the front-end resource request is invoked based on the first path; and when the back-end resource request includes the second gray field, the gray back-end resource corresponding to the back-end resource request is enabled based on the first node. In this way, the gray field is passed through the full link of the first server and the second server, which guarantees the uniqueness and accuracy of the user classification identifier, improves the accuracy and stability of gray release, and further improves user experience.
Fig. 2 shows a second flowchart of a gray scale publishing method according to an embodiment of the present disclosure; as shown in fig. 2, the gray scale publishing method includes:
in step S201, the client obtains the classification identifier of the current user based on the gray service interface when the user account of the current user is not in the initial login state.
In the present disclosure, the gray scale publishing system further comprises a gray micro server, which is connected to the client through the gray service interface and is used to judge whether the current user is a gray user and to add a classification identifier for the current user. During login of the current user, the client detects whether the user account of the current user is logging in for the first time; if not, the classification identifier of the current user is obtained from the gray micro server based on the gray service interface; if the user account of the current user is logging in for the first time, the classification identifier of the current user defaults to a non-gray classification identifier.
In one embodiment, obtaining the classification identifier of the current user based on the gray service interface in step S201 includes: obtaining an initial classification identifier of the current user based on the gray service interface; if the gray service interface returns the initial classification identifier within the target time, determining the initial classification identifier as the classification identifier of the current user; and if the gray service interface does not return the initial classification identifier within the target time, determining the cached classification identifier as the classification identifier of the current user. Specifically, the client obtains the classification identifier from the gray service interface through an asynchronous call. First, the target time is determined from the time taken for the current user to enter the second-level page and the time taken for the gray service interface to return the classification identifier; for example, the target time may be 1.3 s. If the gray service interface returns the classification identifier within 1.3 s, the returned identifier is used; otherwise, the last cached classification identifier is used.
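The timed call with cache fallback described above can be sketched as follows; this is a minimal Python illustration, and the function name, cache argument, and service stub are assumptions rather than parts of the patent:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def fetch_classification(call_gray_service, cached="nogray", timeout_s=1.3):
    """Return the classification identifier, falling back to the cached one
    when the gray service interface does not answer within timeout_s."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(call_gray_service)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        return cached  # service too slow: reuse the last cached identifier
    finally:
        pool.shutdown(wait=False)

# A service stub that answers too slowly triggers the fallback.
def slow_service():
    time.sleep(0.2)
    return "gray"
```

A real client would additionally refresh the cache with each successful response, as described in the embodiment below.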
In step S202, the client adds a first gray level field in the front-end resource request corresponding to the current user and adds a second gray level field in the back-end resource request corresponding to the current user when the classification identifier is the gray level classification identifier.
In the present disclosure, if the classification identifier is a gray classification identifier, that is, the current user is a gray user, a first gray field is added to the User-Agent (UA) of the front-end resource request corresponding to the current user (for example, the first gray field may be nbcbgrayv1.0), and a second gray field is added to the request header of the back-end resource request corresponding to the current user (for example, the second gray field may be X-GRAYFLAG=gray or X-GRAYFLAG=nogray).
In an embodiment, after receiving the classification identifier of the current user, the client needs to judge whether its gray state has changed. If the gray state has not changed, the identifier fields are added directly to the front-end resource request and the back-end resource request based on the classification identifier of the current user: if the current user has the gray classification identifier, gray fields are added to both requests; if the current user has the non-gray classification identifier, non-gray fields are added to both requests. If the gray state of the client has changed, the client browser cache needs to be cleared and the cached classification identifier updated to the classification identifier returned by the gray service interface, after which the identifier fields are added to the front-end and back-end resource requests based on the classification identifier of the current user.
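For illustration, the field-adding step can be sketched as the following Python function; the field values are the ones given in the text, while the function shape and argument names are assumptions:

```python
def tag_requests(classification, user_agent, headers):
    """Add the gray fields to an outgoing request pair: the first gray
    field in the User-Agent of the front-end request, the second gray
    field in the header of the back-end request."""
    if classification == "gray":
        user_agent = user_agent + " nbcbgrayv1.0"  # first gray field, in the UA
        headers["X-GRAYFLAG"] = "gray"             # second gray field, in the header
    else:
        headers["X-GRAYFLAG"] = "nogray"           # non-gray field
    return user_agent, headers
```

Carrying the flag in both the UA and the header is what lets the first and second servers make consistent routing decisions for the same user.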
In an embodiment, to facilitate troubleshooting, tracking points may be embedded for client behavior. For example, the client may record the classification identifier of the current user, the time taken by the classification-identifier request, the time the classification identifier was last cached, and the first and second gray fields, so that the classification identifier and the gray fields of the current user can be checked visually during troubleshooting.
In step S203, the first server obtains a front-end resource request corresponding to the current user issued by the client, and the second server obtains a back-end resource request corresponding to the current user issued by the client.
In step S204, if the front-end resource request includes the first gray-scale field, the first server invokes the gray-scale front-end resource corresponding to the front-end resource request based on the first path.
In step S205, if the back-end resource request includes the second gray level field, the second server starts the gray level back-end resource corresponding to the back-end resource request based on the first node.
The implementation process of step S203 to step S205 is similar to that of step S101 to step S103, and will not be repeated here.
Fig. 3 shows a third flowchart of a gray scale publishing method according to an embodiment of the present disclosure; as shown in fig. 3, the gray scale publishing method includes:
Step S301, the gray scale micro server calculates hit proportion of the current user based on the user account and the offset of the current user.
In the present disclosure, the gray micro server calculates the hit ratio of the current user based on the last two digits of the user account of the current user and an offset, where the offset ensures the randomness of gray-user hits across different gray releases; for example, the offset may be 25. Specifically, the hit ratio of the current user may be calculated based on the following formula:
P = (A + B) mod 100
wherein P is the hit ratio of the current user, A is the last two digits of the user account, and B is the offset.
In step S302, the gray micro server adds a gray classification identifier to the current user when the hit ratio of the current user meets the target threshold and/or the user account of the current user belongs to the target customer group.
In the present disclosure, the target threshold is a preset hit-ratio threshold, for example 20; the hit ratio of the current user meets the target threshold when it is smaller than the target threshold. For example, if the last two digits of the user account are 90, the offset is 25, and the target threshold is 20, then the hit ratio of the current user is (90 + 25) mod 100 = 15; since 15 is smaller than the target threshold 20, the current user is a gray user, and the gray classification identifier is added to the current user.
In an embodiment, if the hit ratio of the current user does not meet the target threshold, it is judged whether the user account of the current user belongs to the target customer group. Specifically, if the user account of the current user hits a configured customer group and the customer-group number is consistent with the target customer-group number, the current user is considered a gray user and the gray classification identifier is added, where the target customer group is a customer group on the gray-configuration white list, for example a member customer group. If the hit ratio of the current user does not meet the target threshold and the user account of the current user does not belong to the target customer group, the current user is a non-gray user, and a non-gray classification identifier is added for the current user.
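Putting the formula and the white-list check together, the gray hit decision can be sketched as follows; the function and its arguments are illustrative, with only the formula, the example threshold, and the white-list rule taken from the text:

```python
def is_gray_user(account, offset, threshold, whitelist_groups=(), user_group=None):
    """Gray hit decision: P = (A + B) mod 100, where A is the last two
    digits of the user account and B is the offset; the user hits gray
    when P is below the threshold, or when the user's customer group is
    on the gray white list."""
    hit_ratio = (int(account[-2:]) + offset) % 100
    if hit_ratio < threshold:
        return True
    return user_group is not None and user_group in whitelist_groups
```

Raising the threshold widens the gray population, and changing the offset reshuffles which accounts hit, which is how the gray customer-group traffic is controlled dynamically.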
In step S303, the client obtains the classification identifier of the current user based on the gray service interface when the user account of the current user is not in the initial login state.
In step S304, the client adds a first gray level field in the front-end resource request corresponding to the current user and adds a second gray level field in the back-end resource request corresponding to the current user when the classification identifier is the gray level classification identifier.
In step S305, the first server obtains a front-end resource request corresponding to the current user issued by the client, and the second server obtains a back-end resource request corresponding to the current user issued by the client.
In step S306, if the front-end resource request includes the first gray-scale field, the first server invokes the gray-scale front-end resource corresponding to the front-end resource request based on the first path.
In step S307, if the back-end resource request includes the second gray level field, the second server starts the gray level back-end resource corresponding to the back-end resource request based on the first node.
The implementation process of step S303 to step S307 is similar to that of step S201 to step S205, and will not be repeated here.
The gray scale publishing method of the present disclosure hits gray users based on the user account of the current user, the offset, and the customer group to which the user account belongs, so that the traffic of the gray customer group can be dynamically controlled.
In another embodiment, invoking the gray front-end resource corresponding to the front-end resource request based on the first path in step S102 includes:
the first service end establishes a first path and stores gray front-end resources into the first path; and the first server calls the gray level front-end resource corresponding to the front-end resource request in the first path under the condition that the front-end resource request comprises the first gray level field.
Specifically, the first server may create a first path, mobilebankgray, in the WEB server for storing the gray front-end resources of the current product, where the current product is the product currently being released, including software, web pages, and the like. Then, if the User-Agent of the front-end resource request includes the first gray field, the first server switches the front-end resource request path from the second path, mobilebank, to the first path, mobilebankgray, so as to invoke the gray front-end resource according to the front-end resource request; if the User-Agent does not include the first gray field, the non-gray front-end resource is invoked directly from the second path, mobilebank, which is the normal first-server resource-invoking path.
Specifically, the first server may load the mod_rewrite module in the IHS (IBM HTTP Server) configuration file, rewrite the configuration of the second path, and then add a new .htaccess file under the first-server resource directory of the current product. After this file is placed under the first-server root directory of the current product, the additional configuration options in the file are read whenever a front-end resource request accesses the directory.
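The effect of the rewrite can be sketched in Python as follows; the path and field constants come from the text, while the function itself is an illustrative stand-in for the mod_rewrite rule:

```python
GRAY_FIELD = "nbcbgrayv1.0"     # first gray field carried in the User-Agent
GRAY_PATH = "/mobilebankgray"   # first path: gray front-end resources
NORMAL_PATH = "/mobilebank"     # second path: normal front-end resources

def resolve_frontend_path(user_agent):
    """Serve requests whose UA carries the first gray field from the gray
    path, and all other requests from the normal path."""
    return GRAY_PATH if GRAY_FIELD in user_agent else NORMAL_PATH
```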
In another embodiment, the second server is a monolithic architecture, and the enabling of the gray scale back-end resource corresponding to the back-end resource request based on the first node in step S103 includes:
the second server side mounts a gray scale resource pool on the load balancer; and the second service end distributes the back-end resource request to the gray-scale resource pool under the condition that the back-end resource request comprises a second gray-scale field, so that the gray-scale resource pool enables gray-scale back-end resources based on the back-end resource request, and the gray-scale resource pool is a first node in the load balancer.
Specifically, for a monolithic second-server architecture, gray scale release is implemented by F5-based distribution. First, the second server mounts two resource pools on an F5 load balancer: a gray scale resource pool and a non-gray-scale resource pool, where the gray scale resource pool can enable the gray scale version of the current product and the non-gray-scale resource pool can enable the non-gray-scale version. If the second gray scale field in the request header of the back-end resource request is X-GRAYFLAG=gray, i.e., the current user is a gray scale user, the load balancer distributes the back-end resource request to the gray scale resource pool, which enables the gray scale back-end resource based on the request; that is, the first node is located in the gray scale resource pool. If the second gray scale field is X-GRAYFLAG=nogray, i.e., the current user is a non-gray-scale user, the back-end resource request is distributed to the non-gray-scale resource pool, which enables the non-gray-scale back-end resource based on the request.
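The F5-style dispatch described above reduces to a header check. A minimal sketch, assuming the pool names (which the text does not fix) and treating any request whose X-GRAYFLAG header is not "gray" as non-gray:

```python
def dispatch_backend_request(headers: dict) -> str:
    """Pick the resource pool for a back-end request, as the load
    balancer described in the text would.

    X-GRAYFLAG is the second gray scale field; the pool names are
    illustrative assumptions.
    """
    if headers.get("X-GRAYFLAG") == "gray":
        return "gray_pool"       # first node: enables gray scale back-end resources
    return "non_gray_pool"       # enables non-gray-scale back-end resources
```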
In another embodiment, the second service end is a micro service architecture, and enabling the gray-scale backend resource corresponding to the backend resource request based on the first node in step S103 includes:
the second service end establishes a gray scale cluster based on standard configuration; and under the condition that the back-end resource request comprises a second gray level field, the second service end distributes the back-end resource request to the gray level cluster, so that the gray level cluster enables the gray level back-end resource based on the back-end resource request, and the gray level cluster is a first node in the second service end.
Specifically, for a micro-service second-server architecture, gray scale release is implemented by combining a load-balancing tool with a configuration management center; different clusters can be differentiated and hit by changing the configuration in the configuration management center and the server files. First, the second server creates a gray scale cluster in the configuration management center and changes its configuration to the standard configuration, so that the gray scale cluster can enable the gray scale version of the current product. If the second gray scale field in the back-end resource request is X-GRAYFLAG=gray, i.e., the current user is a gray scale user, the back-end resource request is routed to the gray scale cluster; the gray scale cluster is the first node and enables the gray scale back-end resource based on the request. If the second gray scale field is X-GRAYFLAG=nogray, i.e., the current user is a non-gray-scale user, the back-end resource request is routed to the non-gray-scale cluster, which enables the non-gray-scale back-end resource based on the request.
Specifically, for the gray scale cluster to take effect, a new /opt/settings directory needs to be created on the server where the current product is deployed, a file server.properties is created in that directory, and the corresponding gray scale environment identifier idc=gray is written into the file. The change takes effect after the current product is restarted, and the version of each micro-service can be checked and verified through the Spring Boot Admin console.
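The environment-identifier step can be sketched as follows, following the property-file layout described above (the settings directory is passed in as a parameter so the sketch can run without write access to /opt; in production it would be /opt/settings):

```python
from pathlib import Path

def write_env_marker(settings_dir: Path, env: str = "gray") -> Path:
    """Write the gray scale environment identifier file
    (server.properties with an idc= entry) into the given directory."""
    settings_dir.mkdir(parents=True, exist_ok=True)
    marker = settings_dir / "server.properties"
    marker.write_text(f"idc={env}\n", encoding="utf-8")
    return marker

def read_env_marker(marker: Path) -> str:
    """Return the value of the idc key, defaulting to a non-gray
    environment when the key is absent."""
    for line in marker.read_text(encoding="utf-8").splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "idc":
            return value.strip()
    return "default"
```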
Fig. 4 is a schematic structural diagram of a gray scale distribution system according to an embodiment of the present disclosure, and as shown in fig. 4, a gray scale distribution system includes: the system comprises a client, a first service end, a second service end and a gray micro service end;
the gray micro server is used for adding classification identification for the current user;
the client is used for adding a first gray level field in a front-end resource request corresponding to the current user and adding a second gray level field in a back-end resource request corresponding to the current user under the condition that the classification identifier is a gray level classification identifier;
the first server is used for calling gray level front-end resources based on the front-end resource request;
the second server is used for calling the gray back-end resource based on the back-end resource request.
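The client-side tagging responsibility in the system above can be sketched as follows. Only the X-GRAYFLAG back-end field is fixed by the text; the UA marker, the literal classification value "gray", and the function name are illustrative assumptions.

```python
def tag_requests(classification: str,
                 front_headers: dict, back_headers: dict) -> None:
    """When the current user's classification identifier is the gray
    scale one, add the first gray scale field to the front-end request
    (assumed to ride in the User-Agent) and the second gray scale field
    to the back-end request header."""
    if classification == "gray":
        ua = front_headers.get("User-Agent", "")
        front_headers["User-Agent"] = (ua + " grayflag=gray").strip()
        back_headers["X-GRAYFLAG"] = "gray"
```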
Fig. 5 shows a schematic structural diagram of a gray scale distribution device according to an embodiment of the present disclosure, where the gray scale distribution device is applied to a first service end of a gray scale distribution system, as shown in fig. 5, and the gray scale distribution device includes:
The first obtaining module 10 is configured to obtain a front-end resource request corresponding to a current user issued by a client; the calling module 11 is configured to call, when the front-end resource request includes a first gray field, a gray front-end resource corresponding to the front-end resource request based on a first path, where the first gray field indicates that the current user is a gray user, and the first path is a storage path of the gray front-end resource.
In an embodiment, the calling module 11 is further configured to: establishing a first path, and storing gray front-end resources into the first path; and when the front-end resource request comprises the first gray scale field, calling the gray scale front-end resource corresponding to the front-end resource request in the first path.
Fig. 6 shows a second schematic structural diagram of a gray scale distribution device according to an embodiment of the present disclosure, where the gray scale distribution device is applied to a second service end of a gray scale distribution system, as shown in fig. 6, and the gray scale distribution device includes: the second obtaining module 20 is configured to obtain a back-end resource request corresponding to a current user issued by the client; the enabling module 21 is configured to enable, based on the first node, the gray-scale backend resource corresponding to the backend resource request if the backend resource request includes a second gray-scale field, where the second gray-scale field characterizes the current user as a gray-scale user, and the first node is configured to enable the gray-scale backend resource.
In an embodiment, the second service end is a single-body architecture, and the enabling module 21 is further configured to: mounting a gray scale resource pool on a load balancer; and distributing the back-end resource request to a gray-scale resource pool under the condition that the back-end resource request comprises the second gray-scale field, so that the gray-scale resource pool enables gray-scale back-end resources based on the back-end resource request, and the gray-scale resource pool is a first node in the load balancer.
In an embodiment, the second service end is a micro service architecture, and the enabling module 21 is further configured to: establishing a gray scale cluster based on the standard configuration; and distributing the back-end resource request to the gray scale cluster under the condition that the back-end resource request comprises the second gray scale field, so that the gray scale cluster enables the gray scale back-end resource based on the back-end resource request, and the gray scale cluster is a first node in the second service end.
In another embodiment, a gray scale distribution device is applied to a client of a gray scale distribution system, the gray scale distribution device comprising:
the third acquisition module is used for acquiring the classification identification of the current user based on the gray service interface under the condition that the user account of the current user is not in the initial login state; and the field adding module is used for adding a first gray level field in the front-end resource request corresponding to the current user and adding a second gray level field in the back-end resource request corresponding to the current user under the condition that the classification identifier is the gray level classification identifier.
In an embodiment, the third obtaining module is further configured to: acquiring an initial classification identifier of a current user based on a gray service interface; if the gray service interface returns an initial classification identifier in the target time, determining the initial classification identifier as the classification identifier of the current user; and if the gray service interface does not return the initial classification identifier within the target time, determining the cache classification identifier as the classification identifier of the current user.
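The timeout-with-cache-fallback behaviour of the third obtaining module can be sketched as follows; the callable standing in for the gray service interface and the concrete timeout value are assumptions.

```python
def classify_current_user(fetch_from_gray_service, cached_id: str,
                          timeout_s: float = 1.0) -> str:
    """Return the classification identifier for the current user.

    fetch_from_gray_service stands in for the gray service interface:
    it returns the initial classification identifier, or raises
    TimeoutError when it does not answer within the target time."""
    try:
        # interface answered within the target time: use its identifier
        return fetch_from_gray_service(timeout=timeout_s)
    except TimeoutError:
        # no answer within the target time: fall back to the cached
        # classification identifier
        return cached_id
```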
In another embodiment, a gray scale distribution device is applied to a gray scale micro server of a gray scale distribution system, and the gray scale distribution device includes:
the computing module is used for computing the hit proportion of the current user based on the user account number and the offset of the current user; the identification adding module is used for adding gray classification identification for the current user when the hit proportion of the current user meets a target threshold value and/or the user account of the current user belongs to a target guest group;
in an embodiment, the computing module is further configured to: calculate the hit proportion of the current user based on a formula [not reproduced in this text] whose variables are the hit proportion of the current user, the last two digits of the user account, and the offset.
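A minimal Python sketch of this hit computation. Because the source does not reproduce the formula itself, the sketch assumes hit proportion = (last two digits of the account + offset) mod 100, compared against the target threshold; both the formula and the comparison direction are assumptions, chosen only because they match the stated inputs.

```python
def hit_proportion(account: str, offset: int) -> int:
    """Hit proportion of the current user.

    ASSUMPTION: the patent's formula image is not reproduced; this
    assumes (n + offset) mod 100 with n the last two digits of the
    user account, which fits the stated inputs but is unconfirmed."""
    n = int(account[-2:])          # last two digits of the user account
    return (n + offset) % 100

def is_gray_user(account: str, offset: int, threshold: int) -> bool:
    """Assumed hit rule: the user is a gray scale user when the hit
    proportion falls below the target threshold, so the threshold
    equals the rollout percentage."""
    return hit_proportion(account, offset) < threshold
```

Under this assumption, raising the threshold or shifting the offset changes which accounts hit the gray scale release, which is how the guest-group traffic would be dynamically controlled.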
According to an embodiment of the disclosure, the disclosure further provides an electronic device and a readable storage medium, where the electronic device may be a first service side, a second service side, a client side or a gray micro service side in the embodiment of the disclosure.
Fig. 7 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine-learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the methods and processes described above, for example a gray scale distribution method. For example, in some embodiments, the gray scale distribution method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the gray scale distribution method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the gray scale distribution method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto. Any changes or substitutions that can readily occur to a person skilled in the art within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (8)

1. A gray scale distribution method, characterized in that the method comprises:
the gray scale micro server calculates the hit proportion of the current user based on the user account number and the offset of the current user;
the gray micro server adds gray classification identification to the current user under the condition that the hit proportion of the current user meets a target threshold value and/or the user account of the current user belongs to a target guest group;
the hit proportion of the current user is calculated based on a formula [not reproduced in this text] whose variables are the hit proportion of the current user, the last two digits of the user account, and the offset;
the client acquires a classification identifier of the current user based on the gray service interface under the condition that the user account of the current user is not in the initial login state;
under the condition that the classification mark is a gray scale classification mark, the client adds a first gray scale field in a front-end resource request corresponding to the current user, and adds a second gray scale field in a back-end resource request corresponding to the current user;
the method comprises the steps that a first server side obtains a front-end resource request corresponding to a current user issued by a client side, and a second server side obtains a rear-end resource request corresponding to the current user issued by the client side;
the method comprises the steps that under the condition that a first gray scale field is included in a front-end resource request, gray scale front-end resources corresponding to the front-end resource request are called on the basis of a first path, the first gray scale field represents that a current user is a gray scale user, and the first path is a storage path of the gray scale front-end resources;
And under the condition that the back-end resource request comprises a second gray level field, enabling the gray level back-end resource corresponding to the back-end resource request based on a first node, wherein the second gray level field represents that the current user is a gray level user, and the first node is used for enabling the gray level back-end resource.
2. The method of claim 1, wherein the gray-scale based service interface obtains a classification identifier of a current user, comprising:
acquiring an initial classification identifier of a current user based on a gray service interface;
if the gray service interface returns the initial classification identifier in the target time, determining the initial classification identifier as the classification identifier of the current user;
and if the gray service interface does not return the initial classification identifier within the target time, determining the cache classification identifier as the classification identifier of the current user.
3. The method according to any one of claims 1 to 2, wherein the invoking the gray-scale front-end resource corresponding to the front-end resource request based on the first path comprises:
the first service end establishes the first path and stores the gray level front-end resource into the first path;
And the first service end calls the gray level front-end resource corresponding to the front-end resource request in the first path under the condition that the front-end resource request comprises a first gray level field.
4. The method according to any one of claims 1 to 2, wherein the second service side is a monolithic architecture, and the enabling of the grayscale backend resource corresponding to the backend resource request based on the first node includes:
the second server side mounts a gray scale resource pool on the load balancer;
and the second service end distributes the back-end resource request to the gray-scale resource pool under the condition that the back-end resource request comprises a second gray-scale field, so that the gray-scale resource pool enables gray-scale back-end resources based on the back-end resource request, and the gray-scale resource pool is a first node in the load balancer.
5. The method according to any one of claims 1 to 2, wherein the second service side is a micro-service architecture, and the enabling of the grayscale backend resource corresponding to the backend resource request based on the first node comprises:
the second service end establishes a gray scale cluster based on standard configuration;
and the second service end distributes the back-end resource request to the gray scale cluster under the condition that the back-end resource request comprises a second gray scale field, so that the gray scale cluster enables gray scale back-end resources based on the back-end resource request, and the gray scale cluster is a first node in the second service end.
6. A gray scale distribution system, comprising:
the gray micro server is used for adding classification identification for the current user;
the client is used for adding a first gray level field in a front-end resource request corresponding to the current user and adding a second gray level field in a back-end resource request corresponding to the current user under the condition that the classification identifier is a gray level classification identifier;
the first server is used for calling gray level front-end resources based on the front-end resource request;
the second server is used for calling gray back-end resources based on the back-end resource request;
the gray micro server is further used for calculating hit proportion of the current user based on the user account number and the offset of the current user, and adding gray classification identification for the current user when the hit proportion of the current user meets a target threshold value and/or the user account number of the current user belongs to a target guest group;
the hit proportion of the current user is calculated based on a formula [not reproduced in this text] whose variables are the hit proportion of the current user, the last two digits of the user account, and the offset;
The client is further used for acquiring a classification identifier of the current user based on the gray service interface under the condition that the user account of the current user is not in the first login state, and adding a first gray field in a front-end resource request corresponding to the current user and adding a second gray field in a back-end resource request corresponding to the current user under the condition that the classification identifier is a gray classification identifier;
the first server is further configured to obtain a front-end resource request corresponding to a current user issued by the client, and call a gray-scale front-end resource corresponding to the front-end resource request based on a first path when the front-end resource request includes a first gray-scale field, where the first gray-scale field represents that the current user is a gray-scale user, and the first path is a storage path of the gray-scale front-end resource;
the second server is further configured to obtain a back-end resource request corresponding to a current user issued by the client, and enable a gray-scale back-end resource corresponding to the back-end resource request based on a first node when the back-end resource request includes a second gray-scale field, where the second gray-scale field characterizes the current user as a gray-scale user, and the first node is configured to enable the gray-scale back-end resource.
7. A gray scale distribution device, characterized in that the device comprises:
the first acquisition module is used for acquiring a front-end resource request corresponding to a current user issued by the client;
the calling module is used for calling the gray level front-end resource corresponding to the front-end resource request based on a first path under the condition that the front-end resource request comprises a first gray level field, wherein the first gray level field represents that the current user is a gray level user, and the first path is a storage path of the gray level front-end resource;
before the front-end resource request corresponding to the current user issued by the client is acquired, the method further comprises the following steps:
the gray scale micro server calculates the hit proportion of the current user based on the user account number and the offset of the current user;
the gray micro server adds gray classification identification to the current user under the condition that the hit proportion of the current user meets a target threshold value and/or the user account of the current user belongs to a target guest group;
the hit proportion of the current user is calculated based on a formula [not reproduced in this text] whose variables are the hit proportion of the current user, the last two digits of the user account, and the offset;
the client acquires a classification identifier of the current user based on the gray service interface under the condition that the user account of the current user is not in the initial login state;
And under the condition that the classification identifier is a gray scale classification identifier, the client adds a first gray scale field in a front-end resource request corresponding to the current user, and adds a second gray scale field in a back-end resource request corresponding to the current user.
8. A gray scale distribution device, characterized in that the device comprises:
the second acquisition module is used for acquiring a back-end resource request corresponding to the current user issued by the client;
the starting module is used for starting the gray level back-end resource corresponding to the back-end resource request based on a first node under the condition that the back-end resource request comprises a second gray level field, wherein the second gray level field represents that the current user is a gray level user, and the first node is used for starting the gray level back-end resource;
before obtaining the back-end resource request corresponding to the current user issued by the client, the method further comprises the following steps:
the gray scale micro server calculates the hit proportion of the current user based on the user account number and the offset of the current user;
the gray micro server adds gray classification identification to the current user under the condition that the hit proportion of the current user meets a target threshold value and/or the user account of the current user belongs to a target guest group;
The hit proportion of the current user is calculated based on a formula [not reproduced in this text] whose variables are the hit proportion of the current user, the last two digits of the user account, and the offset;
the client acquires a classification identifier of the current user based on the gray service interface under the condition that the user account of the current user is not in the initial login state;
and under the condition that the classification identifier is a gray scale classification identifier, the client adds a first gray scale field in a front-end resource request corresponding to the current user, and adds a second gray scale field in a back-end resource request corresponding to the current user.
CN202311286247.XA 2023-10-08 2023-10-08 Gray scale publishing method, device and system Active CN117032991B (en)


Publications (2)

Publication Number Publication Date
CN117032991A CN117032991A (en) 2023-11-10
CN117032991B true CN117032991B (en) 2024-01-26

Family

ID=88641418



Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107864175A (en) * 2017-08-24 2018-03-30 平安普惠企业管理有限公司 Gray scale distribution control method, device, equipment and storage medium
CN110311989A (en) * 2019-08-02 2019-10-08 中国工商银行股份有限公司 A kind of gray scale dissemination method, device, storage medium, equipment and system
CN110650163A (en) * 2018-06-26 2020-01-03 马上消费金融股份有限公司 Gray scale publishing method, system, equipment and computer readable storage medium
CN111538507A (en) * 2020-03-30 2020-08-14 中国平安人寿保险股份有限公司 Gray scale distribution method, device, equipment and readable storage medium
CN112653579A (en) * 2020-12-16 2021-04-13 中国人寿保险股份有限公司 OpenResty-based gray scale publishing method and related equipment
CN113014651A (en) * 2021-03-03 2021-06-22 中国工商银行股份有限公司 Gray scale publishing method, application server and gray scale publishing system
CN113391823A (en) * 2021-06-15 2021-09-14 中国工商银行股份有限公司 Gray scale publishing method, device and system
CN114615135A (en) * 2022-02-18 2022-06-10 佐朋数科(深圳)信息技术有限责任公司 Front-end gray level publishing method, system and storage medium
WO2022142536A1 (en) * 2020-12-28 2022-07-07 京东科技控股股份有限公司 Grayscale publishing method, system and apparatus, and device and storage medium
CN115665162A (en) * 2022-10-26 2023-01-31 广州明动软件股份有限公司 Intelligent shunting engine for gray scale release
CN116126372A (en) * 2022-12-12 2023-05-16 中国电信股份有限公司 Application program upgrading method and device, electronic equipment and storage medium
CN116776030A (en) * 2023-07-19 2023-09-19 金蝶征信有限公司 Gray release method, device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of Gray Publishing System under Distributed Application Microservice; Hui Chen; 2021 11th International Conference on Power and Energy Systems (ICPES); full text *
Education Publishing and Management Platform Based on Microservice Architecture; Zheng Fangxiang; China Master's Theses Full-text Database (Social Sciences II), No. 5; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant