CN111311336B - Method and system for testing and tracking policy execution - Google Patents


Info

Publication number
CN111311336B
CN111311336B (application CN202010186955.6A)
Authority
CN
China
Prior art keywords
policy, group, comparison, strategy, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010186955.6A
Other languages
Chinese (zh)
Other versions
CN111311336A (en
Inventor
周佳欢
王洵
欧黎源
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202010186955.6A priority Critical patent/CN111311336B/en
Publication of CN111311336A publication Critical patent/CN111311336A/en
Application granted granted Critical
Publication of CN111311336B publication Critical patent/CN111311336B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 — Commerce
    • G06Q30/02 — Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 — Market modelling; Market analysis; Collecting market data

Landscapes

  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a method and a system for testing and tracking policy execution, which test a modified policy on a small scale so as to limit any adverse impact the test might cause. The test tracking system for policy execution may be an online experiment platform: each test policy in the system may correspond to an experiment on the platform, and the platform may run multiple experiments simultaneously. For example, each time a product is changed, a small number of users is assigned to an experiment on the platform; when the experiment shows that the change achieves the expected effect or meets the corresponding evaluation criteria, the number of users adopting the policy is gradually increased.

Description

Method and system for testing and tracking policy execution
Technical Field
The present disclosure relates to the field of policy testing, and in particular, to a method and system for testing and tracking policy execution.
Background
Products in the internet and similar industries typically serve large user bases, and even slight changes to a product design policy, such as an adjustment of the product's appearance, can drive away large numbers of users and reduce competitiveness. Any modification of a product's design policy therefore needs to be evaluated effectively, for example by assessing whether the response to the modification meets the requirements of a certain proportion of the user base. At present, some policy test methods produce results that are insufficiently accurate, and when multiple policies must be tested at the same time, some policy testing systems also suffer from low testing efficiency. It is therefore necessary to provide a method and a system that evaluate the validity of one or more policy modifications, improve the accuracy of policy testing, and improve the testing efficiency of the overall test system.
Disclosure of Invention
One of the embodiments of the present specification provides a method for testing and tracking policy execution, the method comprising: acquiring related information of a first user group through a network, screening the first user group according to a third preset condition, and determining a second user group; grouping the second user group according to a first preset condition and a first allocation algorithm, wherein the corresponding first grouping category at least comprises a first experiment group and a first comparison group; controlling, through a network, user terminals of the user groups in the first experiment group and the first comparison group to execute a first policy and a first comparison policy, respectively; monitoring, through a network, the corresponding first policy result parameters and first comparison policy result parameters after the first policy and the first comparison policy are executed; grouping at least part of the second user group according to a second preset condition and a second allocation algorithm, wherein the corresponding second grouping category at least comprises a second experiment group and a second comparison group; controlling, through a network, user terminals of the user groups in the second experiment group and the second comparison group to execute a second policy and a second comparison policy, respectively; monitoring, through a network, the corresponding second policy result parameters and second comparison policy result parameters after the second policy and the second comparison policy are executed; comparing the first policy result parameters with the first comparison policy result parameters to obtain a first comparison result, and judging the validity of the first policy based on the first comparison result; and comparing the second policy result parameters with the second comparison policy result parameters to obtain a second comparison result, and judging the validity of the second policy based on the second comparison result.
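The claimed flow above — split a user group into an experiment group and a comparison group, execute a policy in each, and judge validity by comparing result parameters — can be sketched as follows. All names, the 50/50 split, and the minimum-uplift threshold are illustrative assumptions, not part of the specification.

```python
import hashlib

def assign_group(user_id: str, salt: str = "exp1") -> str:
    """Deterministically assign a user to the experiment or comparison group.

    `salt` is a hypothetical per-experiment key; hashing keeps the split
    stable across sessions. The 50/50 split is an illustrative choice.
    """
    bucket = int(hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest(), 16) % 100
    return "experiment" if bucket < 50 else "comparison"

def judge_validity(policy_metric: float, comparison_metric: float,
                   min_uplift: float = 0.02) -> bool:
    """Judge a policy effective if its result parameter (e.g. conversion
    rate) beats the comparison group's by a preset margin (assumed here)."""
    return policy_metric - comparison_metric >= min_uplift

# both group labels appear once enough users are assigned
groups = {assign_group(f"user{i}") for i in range(1000)}
```

The hash keeps each user's assignment stable, so repeated visits by the same user always land in the same group.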
One of the embodiments of the present specification provides a test tracking system for policy execution, the system comprising: a user information acquisition module, configured to acquire related information of a first user group through a network; a user group screening module, configured to screen the first user group according to a third preset condition and determine a second user group; a first grouping module, configured to group the second user group according to a first preset condition and a first allocation algorithm, the corresponding first grouping category at least comprising a first experiment group and a first comparison group; a first policy execution module, configured to control, through a network, the user terminals of the user groups in the first experiment group and the first comparison group to execute a first policy and a first comparison policy, respectively; a first monitoring module, configured to monitor, through a network, the corresponding first policy result parameters and first comparison policy result parameters after the first policy and the first comparison policy are executed; a second grouping module, configured to group at least part of the second user group according to a second preset condition and a second allocation algorithm, the corresponding second grouping category at least comprising a second experiment group and a second comparison group; a second policy execution module, configured to control, through a network, the user terminals of the user groups in the second experiment group and the second comparison group to execute a second policy and a second comparison policy, respectively; a second monitoring module, configured to monitor, through a network, the corresponding second policy result parameters and second comparison policy result parameters after the second policy and the second comparison policy are executed; a first result comparison module, configured to compare the first policy result parameters with the first comparison policy result parameters to obtain a first comparison result and judge the validity of the first policy based on the first comparison result; and a second result comparison module, configured to compare the second policy result parameters with the second comparison policy result parameters to obtain a second comparison result and judge the validity of the second policy based on the second comparison result.
One of the embodiments of the present disclosure provides a test tracking apparatus for policy execution, where the apparatus includes a processor, and the processor is configured to perform the method for test tracking for policy execution described above.
One of the embodiments of the present disclosure provides a computer-readable storage medium storing computer instructions that, when read by a computer, cause the computer to perform the method of test tracking of policy execution described above.
Drawings
The present application will be further illustrated by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the figures, like numerals represent like structures:
FIG. 1 is a schematic illustration of an application scenario of a test tracking system for policy enforcement according to some embodiments of the present application;
FIG. 2 is an exemplary flow chart of a test tracking method for policy enforcement according to some embodiments of the present application;
FIG. 3 is a schematic diagram of an experimental platform corresponding to a test tracking system for policy enforcement according to some embodiments of the present application; and
FIG. 4 is a block diagram of a test tracking system for policy enforcement according to some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. The drawings in the following description are merely some examples or embodiments of the present application, and those skilled in the art can apply the present application to other similar situations according to these drawings without inventive effort. Unless otherwise apparent from the context or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit," and/or "module" as used herein is one way of distinguishing between different components, elements, parts, portions, or assemblies at different levels. The words may be replaced by other expressions that achieve the same purpose.
As used in this application and in the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules or units in a system according to embodiments of the present application, any number of different modules or units may be used and run on clients and/or servers. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
One or more embodiments of the present specification relate to a test tracking system for policy execution that may be applied to test the effectiveness of product design change policies in the internet and similar industries. For example, when the test results indicate that a modification policy for an internet product achieves the expected effect or meets the corresponding evaluation criteria, the modification policy may be considered effective; conversely, when the test results indicate that the modification policy cannot achieve the expected effect or meet the corresponding evaluation criteria, it may be considered ineffective. Internet industry products may include, but are not limited to, tool-type products, media-type products, community-type products, game-type products, platform-type products, complex-type products, and the like, for example products that push online vehicle-fueling information to users, shared-travel products, products that push news to users, online shopping platforms, and so on.
In some embodiments, the test tracking system for policy execution may test a product's modification policy on a small scale to reduce any impact the test may have. The test tracking system for policy execution may be an online experiment platform: each test policy in the system may correspond to an experiment on the platform, and the platform may run multiple experiments simultaneously. For example, each time a product is changed, a small number of users is assigned to an experiment on the platform; when the experiment shows that the change achieves the expected effect or meets the corresponding evaluation criteria, the number of users adopting the policy is gradually increased. Even if the experiment shows that the expected effect cannot be achieved, or the result worsens, only a small proportion of users is affected and the impact remains limited.
In some embodiments, all experiments of an experiment platform may be located in the same experiment layer, and when traffic (e.g., a certain number of online users) enters the experiment platform, the total traffic of the platform may be partitioned among all the experiments in that layer according to rules. For example, all experiments may divide the total traffic of the platform equally. The platform can count the policy result parameters generated after traffic enters any one experiment; these parameters are embodied in the information of each user in the traffic and may include, but are not limited to, the click volume, order volume, conversion rate, browsing time, and waiting time of the corresponding traffic. In this embodiment, the effectiveness of every modification policy of the product can be tested by experiment, but when the product policy is adjusted frequently, the traffic allocated to each experiment must also be adjusted frequently, which prevents a quick response to business needs.
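A minimal sketch of the single-layer scheme described above, in which the platform's total traffic is divided equally among all experiments in the layer. The experiment names and the hash-based routing rule are assumptions for illustration; the specification does not prescribe a particular partitioning mechanism.

```python
import hashlib

def route_to_experiment(user_id: str, experiments: list[str]) -> str:
    """Split total traffic equally among all experiments in one layer.
    Each user lands in exactly one experiment (mutually exclusive buckets)."""
    h = int(hashlib.sha1(user_id.encode()).hexdigest(), 16)
    return experiments[h % len(experiments)]

# hypothetical same-layer experiments, each testing one font-color policy
experiments = ["font_yellow", "font_red", "font_blue"]
counts = {e: 0 for e in experiments}
for i in range(30000):
    counts[route_to_experiment(f"user{i}", experiments)] += 1
# each experiment receives roughly a third of the traffic
```

Because the buckets are exclusive, adding a new experiment to this layer changes every experiment's share — the drawback the paragraph above notes.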
In some embodiments, the experiment platform may be divided into a plurality of different experiment layers. After the traffic in the platform has passed through one experiment layer, it can enter the next layer to participate in further experiments. When a new policy experiment needs to be run on the platform, it can be carried out in the next experiment layer. In this embodiment, the traffic can be reused across multiple experiment layers, realizing traffic sharing and improving traffic utilization; moreover, after a new experiment is added, the traffic of the existing experiments does not need to be adjusted, so this mode can respond quickly to business needs.
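One common way to realize the layered traffic sharing described above is to hash each user with a per-layer salt, so that bucket assignments in different layers are statistically independent. This is a hypothetical sketch under that assumption; the specification does not mandate this mechanism, and the layer names are invented.

```python
import hashlib

def layer_bucket(user_id: str, layer_salt: str, n_buckets: int = 2) -> int:
    """Bucket a user within one experiment layer. A distinct salt per layer
    makes assignments across layers statistically independent, so the same
    traffic can be reused by every layer (traffic sharing)."""
    digest = hashlib.sha256(f"{layer_salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# the same user flows through both layers and gets an independent bucket in each
ui_bucket = layer_bucket("user42", layer_salt="ui_layer")
rec_bucket = layer_bucket("user42", layer_salt="recommendation_layer")
```

Adding a new layer only requires a new salt; no existing layer's assignments change, which is why the traffic of the original experiments needs no adjustment.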
In some embodiments, the experiment platform may be used to test the policy validity of online internet fueling products. The platform can test the effectiveness of different interface display policies of a product; the interface display modes may include, but are not limited to, the interface background, font settings, the placement of each functional area, copy content, information presentation modes, and the like. For example, each experiment in the first layer may correspond to a different font-color change policy: the first experiment in the first layer may test changing the font color to yellow, and the second experiment may test changing it to red. In some embodiments, the platform may also be used to test the effectiveness of different background data processing policies of the product, which may include, but are not limited to, the recommendation and ordering of various information (e.g., gas stations). For example, a first experiment in the second layer may implement a policy of recommending gas stations by distance, and a second experiment in the second layer may implement a policy of recommending gas stations by brand. In some embodiments, the platform may also be used to test the policy validity of shared-travel products, for example the validity of different pick-up point recommendation policies, different driver push policies, different order dispatch type policies, and the like.
FIG. 1 is a schematic illustration of an application scenario of a test tracking system for policy enforcement according to some embodiments of the present application.
In some embodiments, the policy-enforced test tracking system 100 may include a server 110, a network 120, a user terminal 130, and a storage device 140. The server 110 may include a processor 112.
In some embodiments, a server may be used to process information and/or data related to policy testing. For example, the server 110 may automatically select a user group for a product policy experiment based on the business domain of the product. In some embodiments, the server 110 may also obtain information about the user group from the user terminals 130 via the network 120. For example, the server 110 may obtain the location of the corresponding user group by acquiring GPS signals from the user terminals. For another example, the server 110 may also obtain attribute information and preference information of the user group from the corresponding storage device through the network. In some embodiments, the server 110 may also automatically control the user terminals 130 of the corresponding user groups to execute the corresponding policies based on the grouping results. In some embodiments, the server 110 may also record and evaluate the validity of a policy. For example, the server 110 may apply the first policy to the user terminals of one user group through the network and the first comparison policy to the user terminals of another user group through the network, and determine the validity of the first policy by comparing the execution results of the two.
In some embodiments, the server 110 may be a stand-alone server or a server group. The server group may be centralized or distributed (e.g., the server 110 may be a distributed system). In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or material stored in the user terminal 130 and the storage device 140 via the network 120. In some embodiments, the server 110 may be directly connected to the user terminal 130 and the storage device 140 to access the information and/or material stored therein. In some embodiments, the server 110 may run on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, or the like, or any combination thereof.
In some embodiments, the server 110 may include a processor 112. The processor 112 may process data and perform one or more of the functions described herein. For example, the processor 112 may determine the second user group by screening the first user group. For another example, the processor 112 may divide the second user group into an experiment group and a comparison group. For another example, the processor 112 may determine the validity of the first policy by comparing the first policy result parameter generated by executing the first policy with the first comparison policy result parameter generated by executing the first comparison policy. In some embodiments, the processor 112 may include one or more sub-processors (e.g., single-core or multi-core processing devices). By way of example only, the processor 112 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, and the like, or any combination thereof.
The network 120 may facilitate the exchange of data and/or information. In some embodiments, one or more components in the policy-enforced test tracking system 100 (e.g., server 110, user terminal 130, storage device 140) may send data and/or information to other components in the policy-enforced test tracking system 100 over the network 120. For example, the server 110 may obtain relevant information of a user group through a network. For another example, the server 110 may send information to the user terminal 130 over a network to control it to execute the product policy. Also for example, server 110 may monitor and/or obtain policy result parameters generated by executing product policies in user terminal 130 via network 120. In some embodiments, network 120 may be any type of wired or wireless network. For example, the network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an internal network, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, and the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base station and/or Internet switching points 120-1, 120-2, …, through which one or more components of the policy-enforced test tracking system 100 may be connected to the network 120 to exchange data and/or information.
The user terminal 130 may be the client through which a user uses the product, for example the cell phone, held by each user in the user group of a fueling application product, that can display the fueling application interface. In some embodiments, the user terminal 130 may be configured to execute a product policy and generate corresponding policy result parameters. In some embodiments, the user terminals 130 of different user groups may execute different product policies. For example, the server 110 may control the user terminals 130 of the first experiment group users through the network 120 to execute the first policy; the server 110 may also control the user terminals 130 of the first comparison group users through the network 120 to execute the first comparison policy. For another example, the user terminal 130 may generate a first comparison policy result parameter after executing the first comparison policy, and this parameter may be transmitted to the server 110 through the network 120. In some embodiments, the user group may generate related information during use of the user terminals 130, which may be viewed as result parameters corresponding to a policy, e.g., how many user terminals in the user group clicked on an advertisement page. In some embodiments, the user terminal 130 may include one or any combination of a mobile device 130-1, a tablet 130-2, a notebook 130-3, a vehicle-mounted device 130-4, and the like. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include intelligent lighting devices, control devices for intelligent appliances, intelligent monitoring devices, and the like, or any combination thereof.
In some embodiments, the user terminal 130 may include a device with positioning functionality to determine the location of the user and/or the user terminal 130.
The storage device 140 may store data and/or instructions. For example, the storage device 140 may store preference information and the like for each user in the user group. For another example, the storage device 140 may acquire the result parameters generated after policy execution from the user terminal 130 through the network 120 and store them as related information of the user. In some embodiments, the storage device 140 may store information and/or instructions for the server 110 to execute or use in performing the exemplary methods described herein. In some embodiments, the storage device 140 may include mass storage, removable storage, volatile read-write memory (e.g., random access memory, RAM), read-only memory (ROM), and the like, or any combination thereof. In some embodiments, the storage device 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, etc., or any combination thereof.
In some embodiments, storage device 140 may be connected to network 120 to communicate with one or more components of system 100 (e.g., server 110, user terminal 130, etc.). One or more components of the policy-enforced test tracking system 100 may access materials or instructions stored in the storage device 140 via the network 120. In some embodiments, the storage device 140 may be directly connected to or in communication with one or more components (e.g., server 110, user terminal 130) in the policy-enforced test tracking system 100. In some embodiments, the storage device 140 may be part of the server 110.
In some embodiments, the test tracking system for policy execution in the present application may automatically screen, from the total online user traffic, a user group matched to the policy to be tested; automatically group the tested traffic according to preset conditions and algorithms when a policy test experiment is run; automatically monitor the results of the policy executed on the user terminals of the corresponding user group; and automatically produce a judgment of the policy's validity based on those results. A user of the policy testing system only needs to input the policy to be tested, and the system automatically outputs a judgment matched to the user's requirements, improving the efficiency of testing policy validity online.
In some embodiments, if two or more policies do not conflict with each other, the system can also test them on the same user group; by sharing test traffic across multiple policy test experiments, the utilization of the test traffic is improved, which alleviates, to a certain extent, the problems of insufficient test traffic and low testing efficiency.
In some embodiments, in the test comparison experiments of the same policy, the system further groups the user population scientifically according to preset conditions and allocation standard parameters based on a preset allocation algorithm, so that the characteristics of each group remain as consistent as possible. This avoids the differences in user-group characteristics between groups that manual or purely random grouping can introduce, and thereby improves the accuracy of the corresponding policy test results.
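The allocation idea in this paragraph — grouping so that each group keeps a consistent mix of user characteristics — might be sketched as a stratified split. The round-robin-within-stratum scheme and the field names are assumptions for illustration, not the patent's prescribed algorithm.

```python
from collections import defaultdict
from itertools import cycle

def stratified_split(users, key, groups=("experiment", "comparison")):
    """Allocate users to groups stratum by stratum so that each group
    keeps a near-identical mix of the stratifying characteristic
    (e.g. city or preferred gas-station brand)."""
    strata = defaultdict(list)
    for u in users:
        strata[key(u)].append(u)          # partition users by characteristic
    assignment = defaultdict(list)
    for members in strata.values():
        for u, g in zip(members, cycle(groups)):
            assignment[g].append(u)       # deal each stratum out evenly
    return assignment

# hypothetical user records stratified by city
users = [{"id": i, "city": "Hangzhou" if i % 3 else "Shanghai"} for i in range(12)]
split = stratified_split(users, key=lambda u: u["city"])
```

Each group ends up with the same city mix, unlike a purely random split, which can leave one group skewed toward a single stratum.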
FIG. 2 is an exemplary flow chart of a test tracking method for policy enforcement according to some embodiments of the present application.
Step 210, acquiring related information of the first user group through the network, and screening the first user group according to a third preset condition to determine a second user group. In some embodiments, step 210 may be performed by user information acquisition module 410.
In some embodiments, the first user group may be all online users accessing the internet product over a period of time. In some embodiments, the users accessing an internet product over a period of time may be counted as traffic. In some embodiments, the first user group may correspond to the full traffic of the internet product.
In some embodiments, the related information may include location information of the user. In some embodiments, the user's location information may be reflected by the geographic location of the user's terminal 130. In some embodiments, the geographic location of the user terminal 130 may be an absolute geographic location, e.g., the user terminal 130 is in Hangzhou or Shanghai. In some embodiments, it may also be a relative geographic location, e.g., the user terminal 130 is 5 kilometers from a target gas station. In some embodiments, the server 110 may obtain the location of the user terminal 130 through a positioning technique, including, but not limited to, satellite navigation positioning techniques (e.g., GPS positioning, Beidou positioning) and positioning by IP address. For example, the server 110 may acquire a GPS signal of the user terminal 130 through the network 120 and determine the user's location information based on the GPS signal.
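The relative geographic location mentioned above (e.g., the distance from a user terminal to a target gas station) can be derived from two GPS fixes with the haversine formula. The coordinates below are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes,
    using a mean Earth radius of 6371 km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# hypothetical coordinates: a user terminal and a target gas station
within_5km = haversine_km(30.25, 120.16, 30.28, 120.18) <= 5.0
```

A screening rule like "within 5 kilometers of the target gas station" then reduces to a single comparison on this distance.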
In some embodiments, the related information may also include preference information of the user. In some embodiments, the user's preference information may be set in the product by the user. In some embodiments, it may also be obtained from the user's history of using the product. For example, when using an internet fueling product, the user may set his or her preferred brand of gas station. For another example, the server 110 may determine the user's preferred gas station brand from the user's historical fueling records with the internet fueling product. In some embodiments, the user's preference information may be stored in the storage device 140 as a tag for the user, and the server 110 may obtain the preference information corresponding to a given user from the storage device 140 through a network.
In some embodiments, the related information may further include time information of the user using the product in the user terminal 130. In some embodiments, the time information may have an impact on the user's use of the product. For example, when a user uses a shared travel product during peak hours, the waiting time for a ride is longer. In some embodiments, the time information may be the time displayed on the user terminal when the user operates it, and the server 110 may acquire the time of the user's terminal through the network.
In some embodiments, the second user group may be understood as a user group for testing whether a certain policy is valid; more specifically, a user group for testing whether the first policy is valid. In some embodiments, the total traffic of the first user group may be treated as the second user group for policy testing. For example, all traffic data in the database may be used as the second user group to test whether the first policy is valid. In some embodiments, a portion of the traffic may also be selected from the first user group as the second user group. In some embodiments, the second user group may be selected randomly or according to a preset condition. For example, the second user group may be understood as a portion of the traffic selected from the total traffic using a third preset condition, used to perform a test experiment for determining whether the first policy is valid. In some embodiments, the third preset condition may be understood as a screening rule for screening the second user group from the batch of traffic. In some embodiments, the screening rule may be determined based on at least one of: the number of users, the location information of the users, whether the users are customers of the related products, etc. In some embodiments, the server 110 may screen out a corresponding amount of traffic as the second user group according to a preset number of users. For example, the server 110 may randomly screen out 20% of the total traffic as the second user group, or randomly screen out 20,000 users as the second user group. In some embodiments, the server 110 may designate a group of users whose location information falls within the same geographic range as the second user group. For example, the server 110 may determine the user group whose location information is Hangzhou as the second user group.
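The screening rules above (a random fraction of the traffic, a fixed user count, or a location filter) can be sketched as follows. The field names and the use of `random.sample` are illustrative assumptions, not part of the described system.

```python
import random

def screen_second_user_group(traffic, ratio=None, count=None, city=None):
    """Screen the second user group out of the first user group's traffic
    using one preset rule: a location filter, a fixed user count, or a
    random fraction of the total traffic."""
    if city is not None:
        # location-based screening rule, e.g. city == "Hangzhou"
        return [u for u in traffic if u["city"] == city]
    if count is not None:
        # fixed-size rule, e.g. 20,000 users drawn at random
        return random.sample(traffic, count)
    # ratio-based rule, e.g. 20% of the total traffic
    return random.sample(traffic, int(len(traffic) * ratio))

traffic = [{"id": i, "city": "Hangzhou" if i % 2 else "Suzhou"} for i in range(1000)]
by_ratio = screen_second_user_group(traffic, ratio=0.2)       # 200 users
by_city = screen_second_user_group(traffic, city="Hangzhou")  # 500 users
```

In practice only one rule would be configured per experiment; combining rules (e.g., 20% of Hangzhou users) is a straightforward extension.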
In some embodiments, the server 110 may treat the customer base of the product corresponding to the first policy as the second user group. For example, if the first policy includes a display policy for fueling application pages, the server 110 may treat users who have used and are using fueling products as the second user group.
In some embodiments, an evaluation experiment of the effectiveness of a policy may be performed based on the screened second user group; in such an experiment, the policy needs to be experimentally compared with its comparison policy in order to determine whether the policy is valid. Correspondingly, the subsequent step requires grouping the second user group for testing the above-mentioned policy and its comparison policy.
Step 220, grouping the second user group according to the first preset condition and the first allocation algorithm. In some embodiments, step 220 may be performed by the first grouping module 430.
In some embodiments, the server 110 may automatically group the second user group according to a first preset condition and a first allocation algorithm, where the groups include at least a first experiment group and a first control group, corresponding to tests of the first policy and the first control policy, respectively.
In some embodiments, the first allocation algorithm may be understood as the algorithm by which the server 110 automatically groups the second user group under the constraints of the first preset condition.
In some embodiments, the first preset condition may be understood as the constraint conditions under which the server 110 groups the second user group, including but not limited to the number of groups and the traffic ratio corresponding to each group. For example, the first preset condition may include dividing the second user group into two groups, an experiment group and a control group, with 30% of the traffic allocated to the experiment group and 70% to the control group.
In some embodiments, the first allocation algorithm may comprise a random allocation algorithm, i.e., each user in the second user group is assigned to a group at random. For example, the server 110 may randomly draw 30% of the users in the second user group as experiment group members, with the remaining 70% as control group members.
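A minimal sketch of such a random 30/70 split, assuming a shuffle-and-cut implementation (the patent does not prescribe one):

```python
import random

def random_split(users, experiment_ratio=0.3, seed=2020):
    """Randomly draw `experiment_ratio` of the second user group as the
    first experiment group; the remainder forms the first control group."""
    users = list(users)
    rng = random.Random(seed)  # fixed seed -> reproducible grouping
    rng.shuffle(users)
    cut = int(len(users) * experiment_ratio)
    return users[:cut], users[cut:]

experiment_group, control_group = random_split(range(10_000))
# 3,000 users (30%) in the experiment group, 7,000 (70%) in the control group
```

Fixing the seed makes the grouping reproducible across runs, which helps when the same experiment is re-evaluated later.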
In some implementations, when different groups of users are used to test the effects of a changed policy, the more consistent the feature information of the user groups in the different groups is, the better the differences in effect caused by the policy change can be reflected. The feature information of a user group may include at least one of preference information (for example, whether the users pay more attention to price, brand, or distance) and location information (for example, the city in which the users are located). Correspondingly, in order to improve the accuracy of the experimental result, the feature information of the user group can be used as an allocation standard parameter when grouping the second user group, making the automatic grouping performed by the first allocation algorithm more scientifically sound. The allocation standard parameters include preference information and/or location information of the user group. In some embodiments, when the allocation standard parameter of the first allocation algorithm includes preference information, the first allocation algorithm may allocate user groups with consistent preference types according to the ratio of the experiment group to the control group defined in the first preset condition. Specifically, the first allocation algorithm may classify the second user group by preference type, and then divide the user group in each category into the corresponding groups according to the corresponding proportion requirement. For example, the first allocation algorithm may first divide the second user group into two categories: the first category of users prefers low prices, and the second category of users prefers a certain brand.
The first category of users is then divided into experiment group and control group members at a ratio of 30% to 70%, and the second category of users is likewise divided into experiment group and control group members at a ratio of 30% to 70%. In some embodiments, when the allocation standard parameter of the first allocation algorithm includes location information, the first allocation algorithm may allocate user groups with consistent locations according to the ratio of the experiment group to the control group defined in the first preset condition. For example, the first allocation algorithm may divide the Hangzhou users in the second user group into experiment group and control group members at a ratio of 30% to 70%, and the Suzhou users in the second user group into experiment group and control group members at the same ratio. In some embodiments, when the allocation standard parameters of the first allocation algorithm include both location information and preference information, user groups of the same preference type and the same location area may be divided into an experiment group and a control group according to a preset ratio by the foregoing method.

Step 230, controlling, by the network, the user terminals of the user groups in the first experiment group and the first comparison group to execute the first policy and the first comparison policy, respectively. In some embodiments, step 230 may be executed by the first policy execution module 440.
In some embodiments, the server 110 may control the user terminals 130 of the user groups in the first experimental group and the first control group to execute the first policy and the first control policy, respectively, through the network 120. In some embodiments, the first policy may be a post-product-change policy and the first control policy may be a pre-product-change policy.
In some embodiments, controlling the user terminal 130 to execute a policy may be understood as the server 110 applying the corresponding policy to the user terminal 130 through the network 120, and the user's response on the user terminal 130 may be understood as the result of the policy's execution by the user terminal. In some embodiments, after a certain policy is executed by the user terminal 130, the policy may be perceived by the user through the interface display; for example, the policy may be displayed directly on the front-end interface, or it may be a background data policy or background model policy that causes a visible change in the front-end interface. In some embodiments, the user terminal 130 may not display the policy through the front-end interface after executing it, so the policy cannot be perceived by the user; an example is a dispatch algorithm policy among the background data policies.
Whether or not executing a policy on the user terminal brings about a change in the front-end interface that is perceived by the user, the user's response or operation on the user terminal within a certain time can be used as the execution result of the corresponding policy. For example, a change in the gas station ranking algorithm policy of a fueling product on the user terminal can change how gas stations are displayed in the front-end interface, and the user's click operations or ordering operations on the user terminal can be used as the execution result after the policy change. For another example, a changed dispatch algorithm policy is executed in the background data corresponding to a travel product on the user terminal; although the user cannot perceive the dispatch algorithm policy in the front-end interface, the boarding time recorded when the user operates the user terminal can be used as the execution result corresponding to the policy.
Policies mentioned in one or more embodiments of the present description may include combinations of one or more of the following: front-end interface presentation policies, data call policies of the front-end interface, background model-related policies, and background data-related policies.
In some embodiments, the front-end interface presentation policy may include, but is not limited to, the interface background, font-related settings, the placement location of each functional area, the copy content, the information rendering mode, and the like. For example, the font color may be yellow or red; the interface background may be white or colored with a holiday atmosphere; the plurality of function buttons may be placed in different orders; the copy may be written in a Mandarin style or a dialect style; the information may be presented in text form or picture form.
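For illustration, such a presentation policy could be represented as a plain configuration object that the experiment platform pushes to user terminals. All field names below are hypothetical, not taken from the patent.

```python
# Hypothetical presentation-policy variants for an A/B test of the
# front-end interface; the field names are illustrative assumptions.
first_comparison_policy = {
    "font_color": "yellow",
    "background": "white",
    "button_order": ["refuel", "orders", "profile"],
    "copy_style": "mandarin",
    "info_format": "text",
}
# The first policy changes only the fields under test.
first_policy = {**first_comparison_policy,
                "font_color": "red", "info_format": "picture"}

changed_fields = {k for k in first_comparison_policy
                  if first_comparison_policy[k] != first_policy[k]}
# only font_color and info_format differ between the two variants
```

Keeping the two variants identical except for the fields under test is what lets the later result comparison attribute any metric difference to those fields.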
In some embodiments, the data call policy of the front-end interface may include, but is not limited to, policies on the call path of data, the call content of data, the call scope of data, and the like. For example, a policy of storing invoked data in memory to speed up the next presentation. For example, when certain gas station data is called in the internet fueling product, a policy on which data to call, such as the specific address, fuel price, and historical fueling count. For example, when gas station data is invoked, a policy of invoking only the gas station data within 5 km of the user.
In some embodiments, the background model-related policies may refer to policies on the use of different background data processing frameworks or algorithm models, including, but not limited to, policies on ranking models, image recognition models, natural language models, deep learning models, and the like. For example, a policy of using a ranking model to recommend pick-up points in a shared travel product. For example, a policy of using a natural language model to retrieve historical orders based on a user query.
In some embodiments, the background data-related policies may include, but are not limited to, policies on the acquisition, invocation, adjustment, and evaluation of data. For example, a policy of acquiring the data of gas stations within 3 km of the user. For example, a policy of evaluating data priority and using the higher-priority data. For example, a parameter adjustment policy in an order dispatch algorithm.
Step 240, monitoring, by the network, a first policy result parameter and a first comparison policy result parameter corresponding to the first policy and the first comparison policy after being executed. In some embodiments, step 240 may be performed by the first monitoring module 450.
In some embodiments, after the server 110 controls the corresponding user terminal 130 to execute the corresponding policy through the network 120, an execution result related to user behavior is triggered, and at least part of the execution result may be regarded as a policy result parameter. Specifically, after the user terminals 130 corresponding to the user group of the first experiment group execute the first policy, they trigger an execution result related to that user group's behavior, yielding the corresponding first policy result parameter; after the user terminals 130 corresponding to the user group of the first comparison group execute the first comparison policy, they trigger an execution result related to that user group's behavior, yielding the corresponding first comparison policy result parameter. In some embodiments, the server 110 may monitor and record the first policy result parameter and the first comparison policy result parameter through the network 120 to facilitate subsequent comparison and analysis of the policy execution results. For example, the server 110 may record the monitored first policy result parameter and first comparison policy result parameter via the network 120 and store them in the storage device 140 for subsequent comparison of the two parameters. In some embodiments, the server 110 may monitor the policy execution result for each user in the first experiment group and the first comparison group. The policy result parameters differ across application scenarios and are described in detail below in connection with different scenarios.
In some embodiments, the policy result parameters may include, but are not limited to, click volume, order volume, conversion rate, browsing time, waiting time, etc. of the corresponding user group. In some embodiments, the click volume may be the number of clicks or the click rate of the users in the corresponding user group; the order volume may be the number of orders or the total amount of orders of the users in the corresponding user group; the browsing time may be the dwell time of users in the corresponding user group on the browsing interface; the waiting time may be the time from the generation of a ride-hailing order to the passenger boarding when users in the corresponding user group use a shared travel product. In some embodiments, an increase in click volume, order volume, conversion rate, or browsing time indicates that the corresponding policy result parameter has improved, and a decrease indicates that it has worsened; conversely, a decrease in waiting time indicates that the corresponding policy result parameter has improved, and an increase indicates that it has worsened.
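The better/worse direction of each result parameter can be captured in a small lookup, as sketched below. The metric names and the sign convention are illustrative, not the patent's implementation.

```python
# +1: higher is better (click volume, order volume, conversion, browsing time)
# -1: lower is better (waiting time)
METRIC_DIRECTION = {
    "click_volume": +1,
    "order_volume": +1,
    "conversion_rate": +1,
    "browsing_time": +1,
    "waiting_time": -1,
}

def is_improvement(metric, experiment_value, comparison_value):
    """True if the experiment group's metric moved in the good direction
    relative to the comparison (control) group."""
    return (experiment_value - comparison_value) * METRIC_DIRECTION[metric] > 0

is_improvement("order_volume", 1200, 1000)  # order volume rose -> improvement
is_improvement("waiting_time", 4.0, 6.5)    # waiting time fell -> improvement
```

Encoding the direction once keeps the later result-comparison step from special-casing waiting time.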
In some embodiments, the front-end interface may include a fueling application interface, the corresponding front-end interface presentation policy may include a policy related to address selection of a fueling station or a presentation order of related information, the related information may include information of a brand, a price, a distance, etc. of the fueling station, and the corresponding policy result parameter may include at least one of a click-through amount, an order amount, and a browsing time.
In some embodiments, the experimental platform may test policies related to the address selection of gas stations. For example, the original address selection policy of the gas station may be selection by clicking on a text list, and the policy to be tested may be: selection by clicking on a gas station tag in a map. The first comparison policy then corresponds to the policy of selecting by clicking on the text list, and the first policy corresponds to the policy of selecting by clicking on the gas station tag in the map. The first comparison policy result parameter corresponds to the browsing time required when the user selects by clicking on the text list, and the first policy result parameter corresponds to the browsing time required when the user selects by clicking on the gas station tag in the map. In a subsequent step, the validity of the policy change may be determined by comparing the two browsing times. In some embodiments, a decrease in browsing time indicates that the policy result parameter has improved and the policy change is valid.
In some embodiments, the experimental platform may test the policy governing the presentation order of gas station information. For example, the original presentation order policy of the gas station information may be presentation by distance, and the policy to be tested may be: presentation by brand. At this time, the first comparison policy corresponds to the policy of presenting by distance, and the first policy corresponds to the policy of presenting by brand. The first comparison policy result parameter corresponds to the order volume when presenting by distance, and the first policy result parameter corresponds to the order volume when presenting by brand. In a subsequent step, the validity of the policy change may be determined by comparing the two order volumes. In some embodiments, an increase in order volume indicates that the policy result parameter has improved and the policy change is valid.
In some embodiments, the experimental platform may be used to test internet resource delivery products online. In some embodiments, the front-end interface may include a resource delivery application interface, characterized by, e.g., the proportion and position of the delivered resource in the application interface. In some embodiments, the corresponding policies may include, but are not limited to, page design policies related to the content or form of the resource display and adjustment policies of the background algorithms related to the page design. In some embodiments, the corresponding policy result parameters may include click volume and/or conversion rate. In some embodiments, the conversion rate may be understood as the proportion of users achieving the conversion goal among all users to whom the resource is delivered.
In some embodiments, the experimental platform may test page design policies related to the content or form of the resource display. For example, the internet resource delivery product delivers advertisements in an application, the original delivery policy is to deliver advertisements in picture form, and the policy to be tested may be: deliver advertisements in video form. At this time, the first comparison policy corresponds to the policy of delivering picture-form advertisements, and the first policy corresponds to the policy of delivering video-form advertisements. The first comparison policy result parameter corresponds to the click volume of users clicking on the picture advertisements, and the first policy result parameter corresponds to the click volume of users clicking on the video advertisements. In a subsequent step, the validity of the policy change may be determined by comparing the two click volumes. If the click volume increases, the policy result parameter has improved and the policy change is valid.
In some embodiments, the experimental platform may test policies of the background algorithm related to the page design. For example, the internet resource delivery product delivers activity information in an application, the delivered content is determined by a background algorithm, the original policy uses activity popularity as the main parameter of the algorithm, and the policy to be tested may be: use user preference as the main parameter of the algorithm. At this time, the first comparison policy corresponds to the policy with activity popularity as the main algorithm parameter, and the first policy corresponds to the policy with user preference as the main algorithm parameter. The first comparison policy result parameter corresponds to the conversion rate of the activity when activity popularity is the main algorithm parameter, and the first policy result parameter corresponds to the conversion rate of the activity when user preference is the main algorithm parameter. In a subsequent step, the validity of the policy change may be determined by comparing the two conversion rates. If the conversion rate increases, the policy result parameter has improved and the policy change is valid.
In some embodiments, the experimental platform may be used to test the policy validity of shared travel products online. In some embodiments, the background data may include algorithm data related to order dispatch, e.g., the algorithm data used to dispatch a user's order to a driver. In some embodiments, the policy includes adjustments to relevant parameters in the order dispatch algorithm, such as adjusting algorithm parameters based on road congestion conditions. In some embodiments, the corresponding policy result parameter may include the waiting time.
For example, when the shared travel product receives a passenger order, a background algorithm dispatches the order to a driver. The original background algorithm dispatches orders according to the distance between the driver and the passenger, and the algorithm to be tested may be: dispatch orders according to the congestion condition of the road between the driver and the passenger. At this time, the first comparison policy corresponds to the algorithm that dispatches orders by driver-passenger distance, and the first policy corresponds to the algorithm that dispatches orders by the congestion condition of the road between the driver and the passenger. The first comparison policy result parameter corresponds to the passenger's waiting time when orders are dispatched by driver-passenger distance, and the first policy result parameter corresponds to the passenger's waiting time when orders are dispatched by road congestion condition. In a subsequent step, the validity of the policy change may be determined by comparing the two waiting times. If the waiting time decreases, the policy result parameter has improved and the policy change is valid.
Step 250, grouping at least a portion of the second user group according to a second preset condition and a second allocation algorithm. In some embodiments, step 250 may be performed by the second grouping module 431.
In some embodiments, when there are multiple product policies to test at the same time, the experimental platform may be divided into multiple different experimental layers. Policy test experiments that interfere with each other can be performed in the same experimental layer, and policy test experiments that do not interfere with each other can be performed in different experimental layers.
As previously described, the first policy and its comparison policy are experimentally compared using the second user group. In some embodiments, to increase the utilization of data traffic, a policy experiment that does not interfere with the first policy (e.g., a second policy experiment) may be performed simultaneously on the second user group. The experimental traffic also needs to be regrouped before the second policy experiment starts. In this grouping process, the entire second user group may be grouped again, or only a part of the user groups in the second user group may be grouped.
In some embodiments, the experimental test of a certain policy and its comparison policy may be performed in the same experiment. For example, the first policy and the first comparison policy are grouped and tested in the same experiment (e.g., in experiment B of fig. 3, the groups are tested against the first policy and the first comparison policy). At this time, the server 110 may group the entire second user group again through the network. In some embodiments, the test of a certain policy and its comparison policy may be performed in different experiments. For example, the first policy is tested in a first experiment and the first comparison policy is tested in a second experiment (e.g., in fig. 3, experiment A conducts the first policy test and experiment B conducts the first comparison policy test). At this time, the server 110 may regroup, through the network, the user group corresponding to the first policy in the second user group, or regroup the user group corresponding to the first comparison policy.
In some embodiments, the server 110 may automatically perform the second grouping of at least a portion of the second user group via the network according to the second preset condition and the second allocation algorithm. In some embodiments, the second preset condition may be understood as the constraint conditions of grouping for the second policy experiment, including the number of groups, e.g., a second experiment group and a second control group, and the corresponding traffic ratio of each group, e.g., the second experiment group and the second control group are assigned 40% and 60% of the users, respectively. A specific description of the second preset condition may be found in step 220 above. Correspondingly, like the first allocation algorithm, the second allocation algorithm may use a random allocation algorithm, or may allocate based on the allocation standard parameters so that the users in the second experiment group and the second control group have consistent preference types and/or location areas. For a specific description of the second allocation algorithm, reference may be made to step 220 above.
In some embodiments, when a user group that has already executed other policy tests is used to test the validity of the next policy, that user group needs to be uniformly scattered, so that in the grouping experiment of the second policy the influence of the other policies on the user groups in each group is consistent; this improves the accuracy of the validity judgment for the second policy. Correspondingly, in some embodiments, the second allocation algorithm further comprises an algorithm capable of uniformly scattering the test traffic. In some embodiments, the second allocation algorithm may include a hash algorithm; for example, the preset algorithm may be the murmur3_128 hash algorithm, which can uniformly scatter the traffic and then evenly distribute it into each group to be compared in the experiment.
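The hash-based scattering can be sketched as follows. The text names murmur3_128; the sketch substitutes the standard library's SHA-256 so it runs without a third-party MurmurHash package, but the bucketing logic is the same: salt the hash with an experiment name so each layer re-scatters users independently of earlier groupings, then map the hash uniformly onto [0, 1) and cut it by the traffic ratios.

```python
import hashlib

def assign_group(user_id: str, experiment: str, ratios: dict) -> str:
    """Hash (experiment, user) to a point in [0, 1) and pick a group by
    cumulative traffic ratio; the experiment-specific salt makes the
    assignment independent of any earlier experiment's grouping."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for group, ratio in ratios.items():
        cumulative += ratio
        if point < cumulative:
            return group
    return group  # guard against floating-point rounding

counts = {"second_experiment": 0, "second_control": 0}
for uid in range(100_000):
    group = assign_group(str(uid), "second_policy",
                         {"second_experiment": 0.4, "second_control": 0.6})
    counts[group] += 1
# the split converges to 40/60 as the user population grows
```

Because the assignment is a pure function of (experiment, user), it is deterministic across servers with no shared state, which is why production experiment platforms favor hashing over random draws here.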
Step 260, controlling, by the network, the user terminals of the user groups in the second experiment group and the second comparison group to execute the second policy and the second comparison policy, respectively. In some embodiments, step 260 may be performed by the second policy execution module 441.
In some embodiments, the server 110 may control the user terminals 130 of the user groups in the second experiment group and the second control group to execute the second policy and the second control policy, respectively, through the network 120. In some embodiments, the second policy may be the post-product-change policy and the second control policy may be the pre-product-change policy. In some embodiments, the second policy may be a policy that does not interfere with the first policy; two policies interfere with each other when executing them simultaneously prevents either one from being implemented as intended. For example, a policy that turns the font of the front-end page red and a policy that turns the background of the front-end page red interfere with each other: when the two are implemented simultaneously, the font color and the background color of the front-end page become the same color, so the front-end page cannot be displayed normally. By contrast, a policy that turns the font of the front-end page red and a policy that changes the query result ordering rule do not interfere with each other, and both may be implemented simultaneously.
Step 270, monitoring, by the network, a second policy result parameter and a second comparison policy result parameter corresponding to the second policy and the second comparison policy after being executed. In some embodiments, step 270 may be performed by the second monitoring module 451.
Similar to step 240, after the server 110 controls the corresponding user terminal 130 to execute the corresponding policy through the network 120, an execution result related to user behavior is triggered, and at least part of the execution result may be regarded as a policy result parameter. Specifically, after the user terminals 130 corresponding to the user group of the second experiment group execute the second policy, they trigger an execution result related to that user group's behavior, yielding the corresponding second policy result parameter; after the user terminals 130 corresponding to the user group of the second comparison group execute the second comparison policy, they trigger an execution result related to that user group's behavior, yielding the corresponding second comparison policy result parameter. In some embodiments, the server 110 may monitor and record the second policy result parameter and the second comparison policy result parameter through the network 120 to facilitate subsequent comparison and analysis of the policy execution results. For example, the server 110 may record the monitored second policy result parameter and second comparison policy result parameter via the network 120 and store them in the storage device 140 for subsequent comparison of the two parameters. In some embodiments, the server 110 may monitor the policy execution result for each user in the second experiment group and the second comparison group.
Taking the internet fueling product as an example, in the first experimental layer, the information presentation order policy of the gas station is tested. In the second layer, the activity information push policy may be tested. Because the activity information push policy and the gas station information presentation order policy do not interfere with each other, the experiment in the second layer can be performed on top of the traffic of the first layer. In the second layer, the original activity information push policy may be a policy of pushing by time, and the policy to be tested may be: pushing by frequency of use. At this time, the second comparison policy corresponds to the policy of pushing by time, and the second policy corresponds to the policy of pushing by frequency of use. The second comparison policy result parameter corresponds to the click volume of the activity information when pushing by time, and the second policy result parameter corresponds to the click volume of the activity information when pushing by frequency of use. The server 110 may monitor the second policy result parameter and the second comparison policy result parameter via the network 120. In a subsequent step, the server 110 may determine the validity of the policy change by comparing the two click volumes.
Step 280, comparing the first policy result parameter with the first control policy result parameter, and determining the validity of the first policy. In some embodiments, step 280 may be performed by the first result comparison module 460.
In some embodiments, the server 110 may compare the first policy result parameter with the first control policy result parameter to obtain a first comparison result, and determine the validity of the first policy based on the first comparison result. Obtaining the first comparison result may include determining whether there is a significant difference between the first control policy result parameter and the first policy result parameter. In some embodiments, the significance determination may be based on a predetermined criterion formulated according to the specific experimental content. For example, in the internet fueling product policy test described above, the order volumes of the first control policy and the first policy may be compared: if the difference between them is less than 3%, there is no significant difference; if it is greater than or equal to 3%, the difference is significant. As another example, in an internet resource delivery product policy test, the click volumes of the first control policy and the first policy may be compared: a difference of less than 5% is not significant, while a difference of 5% or more is significant. As yet another example, in a shared travel product policy test, the waiting times under the first control policy and the first policy may be compared: a difference of less than 3 minutes is not significant, while a difference of 3 minutes or more is significant.
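The significance criteria above can be sketched as follows. This is an illustrative reading, not a fixed part of the method: the patent leaves the base of the percentage comparison unspecified, so the relative variant below measures the gap against the control value, and the thresholds are the example values quoted above.

```python
def significantly_different(policy: float, control: float, threshold: float) -> bool:
    """Relative criterion: the gap between the policy result parameter and the
    control policy result parameter, measured against the control value, must
    reach the experiment-specific threshold (e.g. 3% for order volume)."""
    if control == 0:
        return policy != 0
    return abs(policy - control) / control >= threshold

def significantly_different_abs(policy_minutes: float, control_minutes: float,
                                threshold_minutes: float) -> bool:
    """Absolute criterion, e.g. the 3-minute waiting-time example."""
    return abs(policy_minutes - control_minutes) >= threshold_minutes

# order volume, 3% criterion: a 4% gap is significant, a 2% gap is not
orders_significant = significantly_different(10400, 10000, 0.03)      # True
orders_not_significant = significantly_different(10200, 10000, 0.03)  # False
# waiting time, 3-minute criterion: a 4-minute gap is significant
waiting_significant = significantly_different_abs(12.0, 16.0, 3.0)    # True
```

In practice a statistical significance test over per-user observations could replace these fixed thresholds; the patent only requires that the criterion be set according to the experimental content.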
In some embodiments, when the first comparison result shows no significant difference, no further comparison is necessary. When the first comparison result shows a significant difference, the magnitudes of the first policy result parameter and the first control policy result parameter are compared to determine their relation, so that the validity of the first policy and the first control policy can be determined in a subsequent step.
In some embodiments, obtaining the first comparison result may further include directly calculating a change value of the first policy result parameter relative to the first control policy result parameter, and automatically determining whether the first policy is valid based on the change value and a preset change threshold. In some embodiments, the calculation may include, but is not limited to, the difference between the two, their growth ratio, and the like. In some embodiments, the change threshold may be set according to the specific experimental content. For example, in a gas station scenario, the change value of the first policy result parameter (e.g., the click rate) relative to the first control policy result parameter may be calculated as +50%. As another example, in a fueling scenario, the change value of the first policy result parameter (e.g., the order volume) relative to the first control policy result parameter may be calculated as -30%.
In some embodiments, validity indicates whether a policy brings about a better effect: when a policy brings about a better effect, it may be considered valid; when it does not, it may be considered invalid.
In some embodiments, when there is no significant difference between the first policy result parameter and the first control policy result parameter, the effect of the first policy after the policy modification is substantially the same as that of the first control policy before the modification; the expected improvement is not obtained, and the first policy may be determined to be invalid. In some embodiments, when there is a significant difference between the first policy result parameter and the first control policy result parameter, the validity of the first policy and the first control policy may be determined according to the magnitude relation of the two. For example, in the internet fueling product policy test, if there is a significant difference in order volumes and the order volume of the first policy is larger, the first policy is valid; conversely, if the order volume of the first policy is smaller, the first policy is invalid. As another example, in the internet resource delivery product policy test, if there is a significant difference in click volumes and the click volume of the first policy is larger, the first policy is valid; otherwise, if the click volume of the first policy is smaller, the first policy is invalid. As yet another example, in the shared travel product policy test, if there is a significant difference in waiting times and the waiting time of the first policy is shorter, the first policy is valid; conversely, if the waiting time of the first policy is longer, the first policy is invalid.
In some embodiments, the server 110 may also determine the validity of the first policy directly from the change value in the first comparison result and a preset threshold: if the change value does not reach the preset threshold, the first policy is invalid; if the change value is greater than or equal to the preset threshold, the first policy is valid. For example, in the gas station scenario, the result parameter corresponding to the first policy is the click rate and the preset threshold is +40%; a change value of +50% for the first policy indicates that the first policy is valid.
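The change-value route to validity can be sketched as follows, using the growth-ratio variant of the calculation and the example figures quoted above (+50% click-rate change against a +40% threshold, and a -30% order-volume change). The concrete metric values are illustrative.

```python
def change_value(policy: float, control: float) -> float:
    """Growth ratio of the policy result parameter relative to the control
    policy result parameter (e.g. +0.5 means +50%)."""
    return (policy - control) / control

def policy_is_valid(policy: float, control: float, threshold: float) -> bool:
    """A policy is valid when its change value reaches the preset threshold."""
    return change_value(policy, control) >= threshold

# gas-station scenario: click rate rises from 0.10 to 0.15 (+50%), threshold +40%
click_change = change_value(0.15, 0.10)          # ~ +0.5
click_valid = policy_is_valid(0.15, 0.10, 0.40)  # True: policy is valid

# fueling scenario: order volume falls from 1000 to 700 (-30%)
order_change = change_value(700, 1000)           # -0.3
order_valid = policy_is_valid(700, 1000, 0.40)   # False: policy is invalid
```

The difference of the two parameters could be used in place of the growth ratio; the threshold would then be expressed in absolute units of the metric.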
Step 290, comparing the second policy result parameter with the second control policy result parameter, and determining the validity of the second policy. In some embodiments, step 290 may be performed by the second result comparison module 461.
In some embodiments, the server 110 may compare the second policy result parameter with the second control policy result parameter to obtain a second comparison result, and determine the validity of the second policy based on the second comparison result. The method of comparing the second policy result parameter and the second control policy result parameter is similar to that used for the first policy result parameter and the first control policy result parameter; those skilled in the art may refer to the related description in step 280, which is not repeated here.
In some embodiments, the server 110 may also determine the validity of the second policy and the second control policy based on the second comparison result of the second policy result parameter and the second control policy result parameter. The method of determining their validity is similar to that used for the first policy and the first control policy; those skilled in the art may refer to the related description in step 280, which is not repeated here.
It should be noted that the above description of process 200 is for purposes of illustration only and is not intended to limit the scope of applicability of the present application. Various modifications and changes to process 200 will be apparent to those skilled in the art in light of the present disclosure; such modifications and variations remain within the scope of the present application. For example, step 280 may be split into two steps: one for comparing the first policy result parameter with the first control policy result parameter, and one for determining the validity of the first policy and the first control policy.
In some embodiments, because each user in a user group behaves with randomness, the user group may exhibit large volatility when executing a policy, and the comparison result may show a significant difference caused merely by this volatility of the user group itself. In that case, the comparison of the first policy result parameter and the first control policy result parameter becomes untrustworthy. In some embodiments, to ensure the reliability of the comparison of the first policy result parameter and the first control policy result parameter, the first grouping category may further include a first verification group, which also executes the first control policy in order to verify the reliability of the first control policy result parameter. In some embodiments, the grouping of the second user group based on the first preset condition and the first allocation algorithm in the foregoing steps then divides the users into three groups: a first experiment group, a first control group, and a first verification group. The user group in the first verification group and the user group in the first control group should remain consistent in the distribution of preference information and location information. In some embodiments, the server 110 may control the user terminals 130 of the user group of the first verification group to execute the first control policy through the network 120, and monitor, through the network 120, the first verification policy result parameter generated after the user group of the first verification group executes the first control policy on the user terminals 130. In some embodiments, the server 110 may compare the first verification policy result parameter with the first control policy result parameter to determine the confidence level of the first comparison result.
In some embodiments, if there is a significant difference between the first verification policy result parameter and the first control policy result parameter, the first comparison result is not trusted; otherwise the first comparison result is trusted. If the first comparison result is trusted, the determination of the validity of the first policy in the experiment is credible; otherwise, that determination is not credible.
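The verification-group check can be sketched as follows. The key idea is that the verification group runs the same control policy as the control group, so any significant gap between their result parameters can only come from population volatility, which invalidates the experiment's comparison. The 3% threshold and metric values below are illustrative.

```python
def comparison_is_trusted(verification: float, control: float,
                          threshold: float) -> bool:
    """The verification group executed the same control policy as the control
    group; a significant relative gap between their result parameters signals
    volatility of the user population itself, so the experiment's comparison
    result should not be trusted."""
    if control == 0:
        return verification == 0
    return abs(verification - control) / control < threshold

trusted = comparison_is_trusted(1010, 1000, 0.03)    # 1% gap  -> trusted
untrusted = comparison_is_trusted(1100, 1000, 0.03)  # 10% gap -> not trusted
```

When the check fails, the validity determination for the tested policy is discarded rather than acted upon.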
Similarly, to ensure the reliability of the comparison of the second policy result parameter and the second control policy result parameter, the second grouping category may further include a second verification group, which also executes the second control policy to verify the reliability of the second control policy result parameter. In some embodiments, traffic corresponding to the second verification group may be allocated from the traffic of the second grouping category as needed. In some embodiments, the server 110 may control the user terminals 130 of the user group of the second verification group to execute the second control policy through the network 120, and monitor, through the network 120, the second verification policy result parameter generated after the user group of the second verification group executes the second control policy on the user terminals 130. In some embodiments, the server 110 may compare the second verification policy result parameter with the second control policy result parameter to determine the confidence level of the second comparison result. The method of determining the confidence level of the second comparison result is similar to that of the first comparison result described above and is not repeated here.
Fig. 3 is a schematic diagram of an experimental platform corresponding to a test tracking system for policy enforcement according to some embodiments of the present application.
To clearly show the application of the present scheme in a relevant scenario, the multi-layer experiment platform corresponding to the test tracking system for policy execution in fig. 3 is described in detail below.
In some embodiments, the experiment platform may automatically obtain user groups from the total online traffic for experiments. In some embodiments, the experiment platform may perform multiple experiments (e.g., experiments A-G) simultaneously, increasing the experimental efficiency of the system. The experiment platform can also perform multiple experiments (e.g., experiments B-G) based on a single user group n, improving the utilization of online sample data and, to a certain extent, alleviating the problem of insufficient sample size.
In some embodiments, referring to fig. 3, the experiment platform may conduct online policy testing of an internet fueling product. The platform has three experiment layers: two experiments in the first layer (experiment A and experiment B); three experiments in the second layer (experiment C, experiment D, and experiment E); and two experiments in the third layer (experiment F and experiment G). In some embodiments, any experiment in the platform may include a control group, a verification group, and an experiment group.
The total traffic of the experiment platform is screened into two parts as it enters the first layer: a user group m and a user group n. In some embodiments, user group m is used to conduct experiment A and user group n is used to conduct experiment B.
In the first layer, take experiment B, a policy experiment in which gas station information is displayed by brand, as an example: after user group n enters experiment B, it is automatically divided into three parts serving as the control group, verification group, and experiment group of experiment B. According to the result of experiment B, the experiment platform can determine whether the policy of displaying gas station information by brand is valid.
In some embodiments, before the traffic flows from experiment B to the second layer, the experiment platform may uniformly shuffle the traffic in experiment B based on a preset algorithm and then divide it into three flows n1, n2, and n3 for experiment C, experiment D, and experiment E, respectively. Take experiment D, a policy experiment in which the interface font is yellow, as an example: after the traffic enters experiment D, it is automatically divided into three parts serving as the control group, verification group, and experiment group of experiment D. After experiment D ends, whether the yellow interface font policy is valid can be determined according to the experimental result.
Similarly, as shown in fig. 3, the third layer may be tested using traffic from any one of experiments C, D, or E in the second layer. Taking experiment D in the second layer as an example: before the traffic flows from experiment D to the third layer, the experiment platform may uniformly shuffle the traffic used for experiment D based on a preset algorithm, divide the shuffled traffic into n21 and n22, and allocate them to experiments F and G in the third layer, respectively. When flows n21 and n22 enter experiment F or experiment G, each is further divided equally into three parts for the control group, verification group, and experiment group. The validity of the policy in experiment F or experiment G is then determined from the corresponding experimental result in a similar manner. After the seven experiments are distributed across the different layers of the experiment platform, the system can automatically screen and assign the user terminals of the user groups to execute the policies in the different experiments according to preset rules, and output a policy validity determination result for each experiment.
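The layered traffic splitting described above can be sketched with salted hashing. The patent leaves the "preset algorithm" unspecified; hashing each user ID with a per-layer (or per-experiment) salt is one common way to shuffle traffic uniformly so that the split in one layer is statistically independent of the split in the layer above. All identifiers and counts below are illustrative.

```python
import hashlib
from collections import Counter

def bucket(user_id: str, layer_salt: str, n_buckets: int) -> int:
    """Assign a user to one of n_buckets. Changing the salt re-shuffles the
    traffic, so experiments on different layers do not interfere."""
    digest = hashlib.md5(f"{layer_salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

users = [f"user-{i}" for i in range(9000)]

# second layer: shuffle experiment B's traffic into flows n1/n2/n3 for C, D, E
layer2_flow = {u: bucket(u, "layer2", 3) for u in users}
exp_d_users = [u for u, b in layer2_flow.items() if b == 1]

# within experiment D: split again into control / verification / experiment groups
groups = Counter(bucket(u, "experiment-D", 3) for u in exp_d_users)

flow_sizes = Counter(layer2_flow.values())  # roughly 3000 users per flow
```

Because the per-experiment salt differs from the per-layer salt, each group in experiment D receives a near-uniform mix of the upper layer's traffic, which is the "uniformly shuffle" property the platform relies on.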
FIG. 4 is a block diagram of a test tracking system for policy execution according to some embodiments of the present application.
As shown in fig. 4, the test tracking system 400 for policy execution may include a user information acquisition module 410, a user group screening module 420, a first grouping module 430, a first policy execution module 440, a first monitoring module 450, a second grouping module 431, a second policy execution module 441, a second monitoring module 451, a first result comparison module 460, and a second result comparison module 461.
In some embodiments, the user information acquisition module 410 may be configured to acquire information related to the first user group over a network. In some embodiments, the user information obtaining module 410 is further configured to obtain, through the network, a GPS signal of a user terminal corresponding to the first user group, where the GPS signal can reflect the location information.
In some embodiments, the user group screening module 420 may be configured to screen the first user group according to a third preset condition, and determine a second user group.
In some embodiments, the first grouping module 430 may be configured to group the second user group according to a first preset condition and a first allocation algorithm, and the corresponding first grouping category includes at least a first experiment group and a first control group.
In some embodiments, the first policy execution module 440 may be configured to control, through a network, the user terminals of the user groups in the first experiment group and the first control group to execute the first policy and the first control policy, respectively. In some embodiments, the first policy execution module 440 is further configured to control, through a network, the user terminals of the user group in the first verification group to execute the first control policy.
In some embodiments, the first monitoring module 450 may be configured to monitor, through a network, the first policy result parameter and the first control policy result parameter corresponding to the first policy and the first control policy after execution. In some embodiments, the first monitoring module 450 is further configured to monitor, through the network, the first verification policy result parameter of the user group of the first verification group after the first control policy is executed on the user terminals.
In some embodiments, the second grouping module 431 may be configured to group at least a portion of the second user group according to a second preset condition and a second allocation algorithm, where the corresponding second grouping category includes at least a second experiment group and a second control group.
In some embodiments, the second policy execution module 441 may be configured to control, through a network, the user terminals of the user groups in the second experiment group and the second control group to execute the second policy and the second control policy, respectively. In some embodiments, the second policy execution module 441 is further configured to control, through a network, the user terminals of the user group in the second verification group to execute the second control policy.
In some embodiments, the second monitoring module 451 may be configured to monitor, through a network, the second policy result parameter and the second control policy result parameter corresponding to the second policy and the second control policy after execution. In some embodiments, the second monitoring module 451 is further configured to monitor, through the network, the second verification policy result parameter of the user group of the second verification group after the second control policy is executed on the user terminals.
In some embodiments, the first result comparison module 460 may be configured to compare the first policy result parameter with the first control policy result parameter to obtain a first comparison result, and determine the validity of the first policy based on the first comparison result. In some embodiments, the first result comparison module 460 is further configured to compare the first verification policy result parameter with the first control policy result parameter to determine the confidence level of the first comparison result.
In some embodiments, the second result comparison module 461 may be configured to compare the second policy result parameter with the second control policy result parameter to obtain a second comparison result, and determine the validity of the second policy based on the second comparison result. In some embodiments, the second result comparison module 461 is further configured to compare the second verification policy result parameter with the second control policy result parameter to determine the confidence level of the second comparison result.
It should be understood that the system shown in fig. 4 and its modules may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only with hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software, such as executed by various types of processors, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that the above description of the policy-implemented test tracking system and its modules is for convenience of description only and is not intended to limit the application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art, having the benefit of this disclosure, that various modules may be combined arbitrarily or a subsystem may be constructed to connect with other modules without departing from this concept. For example, in some embodiments, the user information acquisition module 410, the user group screening module 420, the first grouping module 430, the first policy enforcement module 440, the first monitoring module 450, the second grouping module 431, the second policy enforcement module 441, the second monitoring module 451, the first result comparison module 460, and the second result comparison module 461 disclosed in fig. 4 may be different modules in one system, or may be one module to implement the functions of two or more modules. For example, the first result comparing module 460 may be one module, or may be two modules for obtaining the first comparison result and for determining the confidence of the first comparison result, respectively. For another example, each module may share one memory module, or each module may have a respective memory module. Such variations are within the scope of the present application.
Possible beneficial effects of embodiments of the present application include, but are not limited to: (1) The method divides multiple policy experiments into layers; experiments that do not interfere with each other can be placed on different layers, and the same traffic can undergo multi-layer experiments simultaneously, realizing traffic sharing. (2) When traffic enters a lower-layer experiment from an upper-layer experiment, it is uniformly shuffled so that different experiments do not interfere with each other. (3) A portion of the total traffic is set aside as a verification group and compared with the test result of the control group to verify the confidence level of the corresponding policy determination result. It should be noted that different embodiments may produce different advantages; in any given embodiment, the advantages may be any one or a combination of the above, or any other advantage that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the present application may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested in this application and remain within the spirit and scope of the exemplary embodiments of this application.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in a number of patentable categories or circumstances, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for the operation of portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or used in a cloud computing environment as a service such as software as a service (SaaS).
Furthermore, the order in which elements and sequences are presented, the use of numbers and letters, and other designations in this application are not intended to limit the order of the processes and methods of this application unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative, and that the appended claims are not limited to the disclosed embodiments but are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of the present application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation disclosed herein and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all features of a single disclosed embodiment.

Claims (22)

1. A method of test tracking of policy execution, the method comprising:
acquiring related information of a first user group through a network, and determining a second user group for testing;
grouping the second user group according to a first preset condition and a first allocation algorithm, wherein the corresponding first grouping category at least comprises a first experiment group and a first comparison group;
controlling user terminals of user groups in the first experiment group and the first comparison group to execute a first strategy and a first comparison strategy respectively through a network;
monitoring corresponding first strategy result parameters and first comparison strategy result parameters after the first strategy and the first comparison strategy are executed through a network;
grouping at least part of the second user group according to a second preset condition and a second distribution algorithm, wherein the corresponding second grouping category at least comprises a second experiment group and a second control group;
controlling user terminals of user groups in the second experiment group and the second comparison group to execute a second strategy and a second comparison strategy respectively through a network;
monitoring corresponding second strategy result parameters and second comparison strategy result parameters after the second strategy and the second comparison strategy are executed through a network;
Comparing the first strategy result parameter with the first comparison strategy result parameter to obtain a first comparison result, and judging the effectiveness of the first strategy based on the first comparison result;
and comparing the second strategy result parameter with the second comparison strategy result parameter to obtain a second comparison result, and judging the effectiveness of the second strategy based on the second comparison result.
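The two-phase flow recited in claim 1 can be sketched in code. This is an illustrative reading only: the function names (`allocate`, `run_test`) and the 50/50 random split are assumptions for the sketch, not details taken from the patent, which leaves the preset condition and allocation algorithm open.

```python
import random

def allocate(users, preset_condition, seed=0):
    """Split users satisfying the preset condition into an experiment
    group and a comparison group (random 50/50 split assumed here)."""
    eligible = [u for u in users if preset_condition(u)]
    rng = random.Random(seed)
    experiment, comparison = [], []
    for user in eligible:
        (experiment if rng.random() < 0.5 else comparison).append(user)
    return experiment, comparison

def run_test(users, preset_condition, policy, comparison_policy, metric):
    """Execute the policy on the experiment group and the comparison
    policy on the comparison group, then compare the monitored metric."""
    experiment, comparison = allocate(users, preset_condition)
    policy_result = sum(metric(policy(u)) for u in experiment) / len(experiment)
    comparison_result = sum(metric(comparison_policy(u)) for u in comparison) / len(comparison)
    # The policy is judged effective when its result parameter improves
    # on the comparison policy's result parameter.
    return policy_result, comparison_result, policy_result > comparison_result
```

Claim 1 runs this loop twice (first/second policy) over possibly overlapping subsets of the second user group, with independent preset conditions and allocation algorithms per run.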
2. The method of claim 1, wherein the obtaining, via the network, information about the first group of users and determining the second group of users for testing further comprises:
and screening the first user group according to a third preset condition to determine the second user group.
3. The method of claim 1, wherein the information related to the first group of users includes at least location information of the group of users;
the acquiring the related information of the first user group through the network comprises the following steps:
and acquiring GPS signals of the user terminals corresponding to the first user group through a network, wherein the GPS signals can reflect the position information.
4. The method according to claim 1, wherein the relevant information of the first user group comprises preference information and/or location information of the user group; the allocation standard parameters in the first allocation algorithm and/or the second allocation algorithm comprise at least preference information and/or location information of the user group.
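Claim 4 names preference and/or location information as allocation standard parameters but does not fix an algorithm. One common realization, assumed here purely for illustration, is a deterministic hash bucket keyed on the user ID and stratified by the location parameter so that each city's users are split at the same ratio:

```python
import hashlib

def assign_group(user_id: str, city: str, experiment_share: float = 0.5) -> str:
    """Deterministically map a user to 'experiment' or 'comparison',
    hashing within each city so the split is balanced per location.
    (Hypothetical sketch; the patent does not specify this scheme.)"""
    digest = hashlib.sha256(f"{city}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "experiment" if bucket < experiment_share else "comparison"
```

A deterministic hash keeps each user in the same group across sessions without storing the assignment, which matters when the first and second tests of claim 1 reuse parts of the same second user group.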
5. The method of claim 1, wherein the policy comprises a combination of one or more of: a front-end interface display policy, a front-end interface data calling policy, a background-model-related policy, and a background-data-related policy; wherein the policies include at least the first policy and the second policy.
6. The method of claim 1, wherein the policy result parameters include at least one of a click volume, an order volume, a conversion rate, a browsing time, and a waiting time of the corresponding network data; wherein the policy result parameters include at least the first policy result parameter and the second policy result parameter.
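The result parameters listed in claim 6 are ordinary engagement metrics; a minimal illustration of how they might be monitored and differenced is below. The field names are assumptions for the sketch, not part of the patent.

```python
def conversion_rate(clicks: int, orders: int) -> float:
    """Conversion rate as orders per click (0 when there are no clicks)."""
    return orders / clicks if clicks else 0.0

def compare_result_parameters(policy_stats: dict, comparison_stats: dict) -> dict:
    """Per-parameter difference, policy minus comparison policy."""
    return {k: policy_stats[k] - comparison_stats[k] for k in policy_stats}
```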
7. The method of claim 5, wherein the front-end interface comprises a fueling application interface, and the policy comprises a presentation sequence of fueling station address selections or related fueling station information; the corresponding policy result parameters include at least one of click volume, order volume, and browsing time.
8. The method of claim 5, wherein the front-end interface comprises a resource delivery application interface, and wherein the policy comprises an adjustment of a page design related to the content or form of a resource display, or of a background algorithm related to the page design; the corresponding policy result parameters include click volume and/or conversion rate.
9. The method of claim 5, wherein the background data comprises algorithm data related to order dispatch, and the policy comprises adjustment of related parameters in an order dispatch algorithm; the corresponding policy result parameters include latency.
10. The method of claim 1, wherein the first grouping category further comprises a first validation group; the second grouping category further includes a second validation group; the method further comprises the steps of:
controlling user terminals of a user group in the first verification group to execute the first comparison strategy through a network;
monitoring a first verification policy result parameter of a user group of the first verification group after executing a first comparison policy on a user terminal through a network;
controlling user terminals of a user group in the second verification group to execute the second comparison strategy through a network;
monitoring second verification policy result parameters of the user group of the second verification group after executing a second comparison policy on the user terminal through a network;
comparing the first verification policy result parameter with the first comparison policy result parameter, and judging the confidence of the first comparison result;
and comparing the second verification policy result parameter with the second comparison policy result parameter, and judging the confidence of the second comparison result.
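The verification groups of claim 10 run the same comparison policy as the comparison groups, so any gap between the two reflects noise rather than a policy effect (an "A/A" check). The patent does not say how that gap becomes a confidence judgement; a two-proportion z-test on conversions is one assumed realization:

```python
import math

def aa_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se if se else 0.0

def comparison_is_trustworthy(conv_ver, n_ver, conv_cmp, n_cmp, z_limit=1.96):
    """Trust the main comparison result only when the verification and
    comparison groups (which ran the same policy) do not differ
    significantly. The 1.96 threshold (95% level) is an assumption."""
    return abs(aa_z_score(conv_ver, n_ver, conv_cmp, n_cmp)) < z_limit
```

If the two same-policy groups differ significantly, the grouping or monitoring pipeline is suspect and the first or second comparison result is assigned low confidence.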
11. A system for testing and tracking policy enforcement, the system comprising:
the user information acquisition module is used for acquiring the related information of the first user group through a network and determining a second user group for testing;
the first grouping module is used for grouping the second user group according to a first preset condition and a first allocation algorithm, and the corresponding first grouping category at least comprises a first experiment group and a first comparison group;
the first policy executing module is used for controlling the user terminals of the user groups in the first experiment group and the first comparison group to execute a first policy and a first comparison policy respectively through a network;
the first monitoring module is used for monitoring corresponding first policy result parameters and first comparison policy result parameters after the first policy and the first comparison policy are executed through a network;
the second grouping module is used for grouping at least part of the second user group according to a second preset condition and a second allocation algorithm, wherein the corresponding second grouping category at least comprises a second experiment group and a second comparison group;
the second policy executing module is used for controlling the user terminals of the user groups in the second experiment group and the second comparison group to execute a second policy and a second comparison policy respectively through a network;
the second monitoring module is used for monitoring corresponding second policy result parameters and second comparison policy result parameters after the second policy and the second comparison policy are executed through a network;
the first result comparison module is used for comparing the first policy result parameter with the first comparison policy result parameter to obtain a first comparison result, and judging the effectiveness of the first policy based on the first comparison result;
and the second result comparison module is used for comparing the second policy result parameter with the second comparison policy result parameter to obtain a second comparison result, and judging the effectiveness of the second policy based on the second comparison result.
12. The system of claim 11, wherein the system further comprises:
and the user group screening module is used for screening the first user group according to a third preset condition and determining the second user group.
13. The system of claim 11, wherein the information related to the first group of users includes at least location information of the group of users;
the user information acquisition module is further configured to acquire a GPS signal of a user terminal corresponding to the first user group through a network, where the GPS signal can reflect the location information.
14. The system of claim 11, wherein the information related to the first group of users includes preference information and/or location information for the group of users; the allocation standard parameters in the first allocation algorithm and/or the second allocation algorithm comprise at least preference information and/or location information of the user group.
15. The system of claim 11, wherein the policy comprises a combination of one or more of: a front-end interface display policy, a front-end interface data calling policy, a background-model-related policy, and a background-data-related policy; wherein the policies include at least the first policy and the second policy.
16. The system of claim 11, wherein the policy result parameters include at least one of a click volume, an order volume, a conversion rate, a browsing time, and a waiting time of the corresponding network data; wherein the policy result parameters include at least the first policy result parameter and the second policy result parameter.
17. The system of claim 15, wherein the front-end interface comprises a fueling application interface, and the policy comprises a presentation sequence of fueling station address selections or related fueling station information; the corresponding policy result parameters include at least one of click volume, order volume, and browsing time.
18. The system of claim 15, wherein the front-end interface comprises a resource delivery application interface, and wherein the policy comprises an adjustment of a page design related to the content or form of a resource display, or of a background algorithm related to the page design; the corresponding policy result parameters include click volume and/or conversion rate.
19. The system of claim 15, wherein the background data comprises algorithm data related to order dispatch, and the policy comprises adjustment of related parameters in an order dispatch algorithm; the corresponding policy result parameters include latency.
20. The system of claim 11, wherein the first grouping category further comprises a first validation group; the second grouping category further includes a second validation group;
the first policy executing module is further configured to control, through a network, user terminals of a user group in the first authentication group to execute the first comparison policy;
the first monitoring module is further configured to monitor, through a network, a first verification policy result parameter of the user group of the first verification group after the user terminal executes the first comparison policy;
the second policy executing module is further configured to control, through a network, user terminals of a user group in the second authentication group to execute the second comparison policy;
the second monitoring module is further configured to monitor, through a network, a second verification policy result parameter of the user group of the second verification group after the second comparison policy is executed on the user terminal;
the first result comparison module is further configured to compare the first verification policy result parameter with the first comparison policy result parameter, and determine a confidence level of the first comparison result;
the second result comparison module is further configured to compare the second verification policy result parameter with the second comparison policy result parameter, and determine a confidence level of the second comparison result.
21. An apparatus for policy-implemented test tracking comprising a processor, wherein the processor is configured to perform the policy-implemented test tracking method of any of claims 1-10.
22. A computer readable storage medium storing computer instructions which, when read by a computer in the storage medium, perform the method of test tracking of policy enforcement as claimed in any one of claims 1 to 10.
CN202010186955.6A 2020-03-17 2020-03-17 Method and system for testing and tracking policy execution Active CN111311336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010186955.6A CN111311336B (en) 2020-03-17 2020-03-17 Method and system for testing and tracking policy execution


Publications (2)

Publication Number Publication Date
CN111311336A CN111311336A (en) 2020-06-19
CN111311336B true CN111311336B (en) 2023-07-04

Family

ID=71158776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010186955.6A Active CN111311336B (en) 2020-03-17 2020-03-17 Method and system for testing and tracking policy execution

Country Status (1)

Country Link
CN (1) CN111311336B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131079B (en) * 2020-09-22 2024-05-14 北京达佳互联信息技术有限公司 Data monitoring method, device, electronic equipment and storage medium
CN112546632A (en) * 2020-12-09 2021-03-26 百果园技术(新加坡)有限公司 Game map parameter adjusting method, device, equipment and storage medium
CN113159815B (en) * 2021-01-25 2023-04-18 腾讯科技(深圳)有限公司 Information delivery strategy testing method and device, storage medium and electronic equipment
CN112732765B (en) * 2021-04-01 2021-07-13 北京世纪好未来教育科技有限公司 Method and device for determining experimental path and electronic equipment
CN113485931B (en) * 2021-07-14 2024-03-22 广州虎牙科技有限公司 Test method, test device, electronic equipment and computer readable storage medium
CN113657930B (en) * 2021-08-12 2024-05-28 广州虎牙科技有限公司 Method and device for testing policy effectiveness, electronic equipment and readable storage medium
CN113448876B (en) * 2021-08-31 2021-11-19 腾讯科技(深圳)有限公司 Service testing method, device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6321206B1 (en) * 1998-03-05 2001-11-20 American Management Systems, Inc. Decision management system for creating strategies to control movement of clients across categories
CN108845936A (en) * 2018-05-31 2018-11-20 阿里巴巴集团控股有限公司 A kind of AB test method and system based on mass users
CN110619548A (en) * 2019-09-20 2019-12-27 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for determining media content delivery strategy
CN110808872A (en) * 2019-10-21 2020-02-18 微梦创科网络科技(中国)有限公司 Method and device for realizing flow experiment and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7080027B2 (en) * 2003-04-17 2006-07-18 Targetrx, Inc. Method and system for analyzing the effectiveness of marketing strategies
US20160034468A1 (en) * 2014-07-23 2016-02-04 Attune, Inc. Testing of and adapting to user responses to web applications
US20180357654A1 (en) * 2017-06-08 2018-12-13 Microsoft Technology Licensing, Llc Testing and evaluating predictive systems


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant