
Method and computing device for optimizing search performance of electronic commerce

Info

Publication number
CN118229378A
Authority
CN
China
Prior art keywords
full link
search
module
searching
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410203248.1A
Other languages
Chinese (zh)
Inventor
张贺
罗俊鸿
孙海涛
贺菊华
张驰
李凯琪
卢小康
张咪
侯国睿
李宣余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Network Technology Co Ltd
Original Assignee
Alibaba China Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Network Technology Co Ltd filed Critical Alibaba China Network Technology Co Ltd
Priority to CN202410203248.1A
Publication of CN118229378A
Legal status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a method and a computing device for optimizing the search performance of electronic commerce. The method comprises: acquiring, from a plurality of service modules of the search full link, search request identifiers and the corresponding response times of the respective service module categories; associating the response times with the search request identifiers so as to obtain response time data of the search full link; and establishing a full-link performance monitoring dashboard using the response time data of the search full link so as to verify the full-link optimization strategy. According to the technical scheme provided by the application, optimization can be performed based on the response time (RT) changes of the full link, and the optimization effect is improved.

Description

Method and computing device for optimizing search performance of electronic commerce
Technical Field
The invention relates to the technical field of electronic commerce, and in particular to a method and a computing device for optimizing the search performance of electronic commerce.
Background
The e-commerce search service is a vital part of the e-commerce platform, and improves the shopping experience of users and the sales efficiency of merchants through a series of interrelated components. For example, when a user inputs a search term, the search engine can quickly find related commodities through the indexing system, then determine the display order of the commodities through the sorting algorithm, and finally display the commodities to the user through the user interface. Meanwhile, the data analysis can continuously monitor the search behavior, and data support is provided for personalized recommendation and optimization of a search algorithm. Searching is an important traffic scenario where the e-commerce platform connects buyers and sellers, requiring continuous optimization of the customer's search experience.
Disclosure of Invention
The application aims to provide a method and a computing device for optimizing the search performance of electronic commerce, which can perform optimization based on the response time (RT) changes of the full link and improve the optimization effect.
According to an aspect of the present application, there is provided a method for e-commerce search performance optimization, the method comprising:
acquiring, from a plurality of service modules of the search full link, search request identifiers and the corresponding response times of the respective service module categories;
associating the response times with the search request identifiers so as to obtain response time data of the search full link;
and establishing a full-link performance monitoring dashboard by using the response time data of the search full link so as to verify the full-link optimization strategy.
According to some embodiments, the full-link optimization strategy includes full-link performance optimization with the response time of a predetermined period as the optimization objective.
According to some embodiments, the full-link performance optimization includes code-layer optimization of the plurality of service modules of the search full link.
According to some embodiments, the plurality of service modules of the search full link include: a search back-end module, a search query understanding module, a search engine hub module, a search recall module, a search ranking module, and a mobile open platform module.
According to some embodiments, the foregoing method further comprises: presetting buried points in the plurality of service modules of the search full link so as to acquire search request identifiers and the corresponding response times of the respective service module categories.
According to some embodiments, the full-link performance monitoring dashboard includes a real-time performance dashboard and an offline performance dashboard.
According to some embodiments, the foregoing method further comprises: constructing an anti-degradation mechanism that performs module attribution by monitoring, in real time, abnormal fluctuations in the response times of the plurality of service modules of the search full link.
According to some embodiments, performing module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link comprises: monitoring whether the year-on-year and/or period-on-period (ring) change of the response times of the plurality of service modules of the search full link is higher than a set threshold.
According to some embodiments, performing module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link further comprises: excluding traffic disturbances and/or excluding upstream disturbances.
According to another aspect of the present application, there is provided a computing device comprising:
A processor; and
A memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any one of the preceding claims.
According to another aspect of the application there is provided a non-transitory computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to perform the method of any of the above.
According to an example embodiment, the technical scheme of the application provides a method for optimizing the search performance of electronic commerce in which a search full-link response time (RT) dashboard is constructed, so that the influence of the optimization results at the business-facing layer can be conveniently monitored, the optimization process is guided, and the optimization effect is improved.
According to some embodiments, an anti-degradation mechanism is constructed, which performs module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link, so as to prevent the performance of the optimized system from degrading again after a period of time.
According to some embodiments, by connecting the buried points of the full link in series, the full-link real-time performance dashboard and offline performance dashboard can be constructed easily, which facilitates monitoring RT changes across the full link.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below.
FIG. 1 illustrates a method flow diagram for e-commerce search performance optimization in accordance with an example embodiment.
Fig. 2A shows a real-time dashboard of the total response time of an application server.
Fig. 2B shows a real-time dashboard of the first-page response time of the application server.
Fig. 2C shows a real-time dashboard of the middle-page response time of the application server.
Fig. 2D shows a real-time dashboard of the rear-page response time of the application server.
Fig. 3A shows an offline dashboard of the second opening rate on the iOS mobile system.
Fig. 3B shows an offline dashboard of the second opening rate on the Android mobile system.
Fig. 3C shows an offline dashboard of the P50 response time of an application server.
Fig. 3D shows an offline dashboard of the P99 response time of the application server.
FIG. 4 illustrates a flow chart of a method for performing module attribution by monitoring, in real time, abnormal fluctuations in the response times of a plurality of service modules of the search full link, according to an example embodiment.
FIG. 5 illustrates a block diagram of a computing device in accordance with an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any one of the associated listed items and all combinations of one or more.
The user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of related data is required to comply with the relevant laws and regulations and standards of the relevant country and region, and is provided with corresponding operation entries for the user to select authorization or rejection.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the application and therefore should not be taken to limit the scope of the application.
In an e-commerce search system, the number of service modules involved in the search full link is particularly large, and so is the amount of content involved in optimization. On top of past optimizations, achieving a further breakthrough requires many careful and innovative measures, including but not limited to architecture innovation, code complexity reduction, and algorithm and model innovation. However, current optimization work is generally performed independently by each module in stages, so it is difficult to see the influence of the optimization results at the business-facing layer or to evaluate the weight of a module's optimization result within the whole system. In addition, current optimization points mainly stay at the architecture layer, and optimization and evaluation that go deep into the code layer of each service are largely absent. Moreover, there is currently no anti-degradation mechanism in the system, so performance degrades again some time after an optimization.
Therefore, the application provides a method for optimizing the search performance of e-commerce and constructs a search full-link RT dashboard, so that the influence of the optimization results at the business-facing layer can be conveniently monitored, the optimization process is guided, and the optimization effect is improved.
Before describing embodiments of the present application, some terms or concepts related to the embodiments of the present application are explained.
Category prediction: category prediction is an important link in an e-commerce search system, and helps to improve search accuracy and relevance. By predicting the category of the commodity to which the user query belongs, the search system can recall related commodities more accurately and display the commodities in order according to the relevance with the user query. Category prediction may be achieved by a variety of methods, including statistical-based methods, content-based methods, feature fusion-based methods, and the like. In order to ensure that the search results do not deviate from the search intent of the user, e-commerce search systems typically rely on a relatively complete category tree, in combination with user behavior data, such as clicking, browsing, collecting, etc., to continually optimize and calibrate the accuracy of category predictions.
Second opening rate: the proportion of searches in which less than 1000 ms elapses from the time the user enters keywords to the time the results are presented.
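By way of non-limiting illustration, the second opening rate can be computed from recorded end-to-end response times as in the following Python sketch (the function name and sample values are illustrative assumptions, not part of the claimed method):

def second_open_rate(response_times_ms):
    """Fraction of searches whose end-to-end time, from keyword input to
    result presentation, is below the 1000 ms 'second opening' threshold."""
    if not response_times_ms:
        return 0.0
    fast = sum(1 for rt in response_times_ms if rt < 1000)
    return fast / len(response_times_ms)

# Example: 3 of 4 searches finish within one second -> 0.75
print(second_open_rate([420, 980, 1350, 760]))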
AB bucket: in technical implementations, user traffic is typically distributed into different "buckets" in order to test the effect of new functions, algorithms, or optimizations. These buckets run different versions of the product or service. For example, bucket A may run the current stable version, while bucket B runs a version containing new functionality. By comparing the performance of the two buckets, the impact of the new function can be evaluated.
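As a non-limiting illustration of how traffic may be distributed into buckets, the following Python sketch uses deterministic hashing so that the same user always lands in the same bucket within one experiment (the hashing scheme and names are assumptions for illustration only):

import hashlib

def assign_bucket(user_id: str, experiment: str, num_buckets: int = 2) -> str:
    """Deterministically map a user to an experiment bucket so that the same
    user always sees the same variant within a given experiment."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % num_buckets
    return chr(ord("A") + index)  # "A" = stable version, "B" = new version

print(assign_bucket("user_42", "ranking_v2"))  # e.g. "B"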
First page: after the user inputs the keyword search, the system returns a first commodity list. This portion of the merchandise is considered to be most relevant to the search term or top-ranked according to some priority policy, with a key effect on attracting user clicks and conversions.
Middle page: the middle part page of the search result, i.e. the second page and the pages after but before the last few pages. As the user turns pages to view more merchandise, the merchandise exposure of the middle page is relatively low, but still an important component of merchandise display, providing more options for the user.
Rear page: the pages displayed when the user continues to page down to near the end of the search results. The merchandise on the rear pages often has weaker relevance to the search keywords or lower sales, but still has some exposure value, especially in scenarios where users browse deeply. Reasonably designing the presentation form and content of the rear pages helps to improve the overall search effect and mine potential business opportunities.
Exemplary embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 illustrates a method flow diagram for e-commerce search performance optimization in accordance with an example embodiment.
Referring to fig. 1, in S101, a search request identification and a corresponding response time of a corresponding service module category are acquired from a plurality of service modules searching for a full link.
According to some embodiments, the plurality of service modules of the search full link include, but are not limited to, a search back-end module, a search query understanding module, a search engine hub module, a search recall module, a search ranking module, and a mobile open platform module.
For example, the search back-end module covers the background data processing and storage and the index building and maintenance that support the operation of the search engine; it is capable of handling large amounts of data, such as log processing services, and of efficiently processing and storing TB-scale data.
The query understanding module carries out deep analysis of the user's search intention, covering basic analysis (such as preprocessing, word segmentation and part-of-speech recognition), query rewriting (including error correction, expansion and synonym substitution) and intention recognition; it aims to ensure that the user's requirement is accurately understood, thereby improving the relevance of recall and the precision of the final ranking.
The search engine hub module coordinates the work of each service module and ensures that the search flow proceeds smoothly. The hub module may be involved in request processing, request routing, data distribution, result summarization and merging, and so on. In addition, the hub module can also perform cross-service task scheduling, status monitoring, and the like.
The search recall module quickly finds, from the index library, all candidate document sets relevant to the user query. This process does not involve refined ranking; rather, it finds as widely as possible all data records that may be relevant, to ensure that no potentially relevant information is missed in the subsequent refined ranking. In addition, because rich user and merchandise features are needed, multiple types of recall sources, and thus multiple recall engines, may be required.
The search ranking module finely ranks the relevant content after it has been acquired in the recall phase. Ranking algorithms typically consider a variety of factors, such as relevance of the content to the query, user behavior data, timeliness and business value, and rank the search results according to a variety of ranking strategies, such as relevance, sales, ratings, new products, and the like. According to some embodiments, the ranking service is completed by a deep-learning search ranking function or ranks according to ranking rules specified in the search request.
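By way of non-limiting illustration, ranking according to a strategy specified in the search request could look like the following Python sketch (item fields and strategy names are illustrative assumptions):

def rank_results(items, strategy="relevance"):
    """Order recalled items according to the ranking strategy named in the
    search request (relevance, sales, rating, or newest)."""
    keys = {
        "relevance": lambda it: it["relevance"],
        "sales": lambda it: it["sales"],
        "rating": lambda it: it["rating"],
        "newest": lambda it: it["listed_at"],
    }
    return sorted(items, key=keys[strategy], reverse=True)

items = [
    {"id": 1, "relevance": 0.92, "sales": 120, "rating": 4.6, "listed_at": "2024-01-10"},
    {"id": 2, "relevance": 0.88, "sales": 540, "rating": 4.8, "listed_at": "2024-02-01"},
]
print([it["id"] for it in rank_results(items, "sales")])  # [2, 1]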
The mobile open platform module provides service for mobile application and is used for processing various application requests of a mobile terminal (mobile phone terminal). The system can provide a unified service access mode and support the efficient development and realization of various application logics. In addition, the mobile open platform module can also provide support for interfaces for direct interaction of users, including UI components for users to input query words, interfaces for displaying search results, and personalized processing of the search results.
In addition, the search full link can also include a caching module: for hot or repeated query requests, a caching service is provided to reduce the pressure on the database or index library and to improve response speed. For example, in-memory databases such as Redis or Memcached may be used to store hot query results, reducing the need to directly access the underlying data sources.
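As a non-limiting illustration of such a caching module, the following Python sketch caches hot query results in Redis with a time-to-live (it assumes the redis-py client and a reachable Redis instance; the key scheme, TTL and the stub engine call are illustrative assumptions):

import json

import redis  # assumes the redis-py client and a Redis instance on localhost:6379

cache = redis.Redis(host="localhost", port=6379, db=0)

def run_search_engine(query: str):
    # Stub standing in for the real index/database lookup.
    return {"query": query, "items": []}

def cached_search(query: str, ttl_seconds: int = 60):
    """Serve hot or repeated queries from the cache; on a miss, query the
    engine and cache the result so the index/database is not hit again
    within the TTL."""
    key = f"search:{query}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    result = run_search_engine(query)
    cache.setex(key, ttl_seconds, json.dumps(result))
    return result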
For finer-grained performance monitoring and analysis, according to some embodiments, buried points (instrumentation points) are preset in the plurality of service modules of the search full link so as to obtain search request identifiers and the corresponding response times of the respective service module categories. Specifically, full-link performance buried points are a performance monitoring and data acquisition process carried out from the full-link perspective: rather than focusing only on the performance of a single module or link, they track the performance indicators of each service module across the whole application flow, from the user's triggering action until the operation target is finally completed (such as completing one transaction or finishing content loading). Full-link performance buried points need to consider not only the loading speed and rendering efficiency of the front-end page, but also the speed at which the back-end server processes requests, the time consumed by calls between different service modules, various processing times, database operation performance, and so on. The buried point of each service module records each search request identifier and the response time to that search request, so that the service module nodes affecting overall performance can be rapidly located, enabling overall improvement and optimization of system performance.
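By way of non-limiting illustration, a buried point that records the search request identifier and the time spent in one service module could be sketched in Python as follows (the decorator, module names and in-memory sink are illustrative assumptions; in practice the records would go to a logging or metrics pipeline):

import functools
import time
import uuid

def instrument(module_name, sink):
    """Buried point: record, for each search request, the request identifier
    and the response time spent in this service module."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(request, *args, **kwargs):
            request_id = request.setdefault("request_id", str(uuid.uuid4()))
            start = time.perf_counter()
            try:
                return handler(request, *args, **kwargs)
            finally:
                rt_ms = (time.perf_counter() - start) * 1000
                sink.append({"request_id": request_id,
                             "module": module_name,
                             "rt_ms": rt_ms})
        return wrapper
    return decorator

records = []  # in-memory sink for illustration

@instrument("search_query_understanding", records)
def understand_query(request):
    return {"tokens": request["keyword"].split()}

understand_query({"keyword": "wireless earphones"})
print(records[0]["module"], round(records[0]["rt_ms"], 2))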
At S103, the response times are associated through the search request identifiers, so that response time data of the search full link is obtained.
According to an example embodiment, the buried point of each service module records each search request identifier and the response time to that search request, so that the response time data of the search full link for a specific search request can be obtained by associating the corresponding response times of the service modules with the search request identifier; the response time data includes real-time data and statistical data.
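For illustration, the association step can be sketched in Python as grouping the per-module records by request identifier (summing module times into a total assumes sequential, non-overlapping calls, which is an illustrative simplification):

from collections import defaultdict

def correlate_by_request(records):
    """Group per-module response-time records by search request identifier,
    yielding the full-link response-time breakdown for each request."""
    full_link = defaultdict(dict)
    for rec in records:
        full_link[rec["request_id"]][rec["module"]] = rec["rt_ms"]
    return {rid: {"modules": mods, "total_ms": sum(mods.values())}
            for rid, mods in full_link.items()}

records = [
    {"request_id": "r1", "module": "query_understanding", "rt_ms": 18.0},
    {"request_id": "r1", "module": "recall", "rt_ms": 42.5},
    {"request_id": "r1", "module": "ranking", "rt_ms": 31.0},
]
print(correlate_by_request(records)["r1"]["total_ms"])  # 91.5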
At S105, a full-link performance monitoring dashboard is built using the response time data of the search full link to verify a full-link optimization strategy.
According to an example embodiment, the full-link performance monitoring dashboard may include a real-time performance dashboard and an offline performance dashboard. The real-time performance dashboard can provide up-to-date performance information in real time, which is very useful in situations that require fast response and handling, so that problems are found and solved in time. The offline performance dashboard may be used to analyze historical data and helps understand the performance of the system at different times or in different scenarios. According to the embodiment of the application, because the buried point of each service module records each search request identifier and the response time to that search request, a full-link performance monitoring dashboard can be established, and the full-link optimization strategy and the optimization results can be verified.
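By way of non-limiting illustration, the indicators shown on such a dashboard can be aggregated from response-time samples as in the following Python sketch (the nearest-rank percentile method and sample values are illustrative assumptions):

import math
import statistics

def dashboard_stats(rt_ms_samples):
    """Aggregate response-time samples into dashboard indicators:
    average, median (P50) and tail latency (P99, nearest-rank method)."""
    ordered = sorted(rt_ms_samples)
    p99_index = min(len(ordered) - 1, math.ceil(len(ordered) * 0.99) - 1)
    return {
        "avg": statistics.mean(ordered),
        "p50": statistics.median(ordered),
        "p99": ordered[p99_index],
    }

samples = [120, 135, 142, 150, 160, 175, 190, 230, 410, 980]
print(dashboard_stats(samples))  # {'avg': 269.2, 'p50': 167.5, 'p99': 980}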
According to some embodiments, the full-link optimization strategy may include full-link performance optimization with the response time of a predetermined period as the optimization objective. According to some embodiments, the full-link performance optimization includes code-layer optimization of the plurality of service modules of the search full link.
For example, according to some embodiments, for the search back-end module, the average response time can be reduced by 20 ms by allocating resources to its bottleneck service; by optimizing the copying of commodity summary information, the average response time can be reduced by 8 ms; by switching object data copying from JSON to MapStruct bytecode, the average response time is reduced by 15 ms; by optimizing the general AB bucket conflict checking logic on the application server module, limiting the third-party interface timeout, reducing remote requests for user information, and the like, the average response time can be reduced by 17 ms; and through generic AB performance optimization, the average response time can be reduced by 7 ms.
According to some embodiments, the average response time can be reduced by 35 ms by properly configuring the index ranking module to balance the per-machine load.
According to some embodiments, for the search query understanding module, the average response time may be reduced by 15 ms by limiting the number of category prediction results; the average response time can be reduced by 20 ms by splitting the search chain into a personalization chain, a flow-control chain and a result-scattering chain; and by caching portions of the search chain, the average response time can be reduced by 4 ms.
According to some embodiments, for the search engine hub module, by optimizing the invoke reflection mechanism and caching member functions, the average response time can be reduced by 15 ms; and by reducing computational complexity from O(n²) to O(n log n), presetting the initial size of maps, string optimization, and the like, the average response time can be reduced by 18 ms.
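The invoke-reflection optimization described above is a Java-level change; purely as a rough analogue of the underlying idea (resolve a member function once and reuse the cached lookup), a Python sketch could look like the following (class and method names are illustrative assumptions):

class MethodCache:
    """Rough analogue of caching reflectively resolved member functions:
    resolve the method once per (class, name) and reuse the lookup."""
    def __init__(self):
        self._cache = {}

    def call(self, obj, method_name, *args):
        key = (type(obj), method_name)
        fn = self._cache.get(key)
        if fn is None:
            fn = getattr(type(obj), method_name)  # the "reflection" lookup
            self._cache[key] = fn
        return fn(obj, *args)

cache = MethodCache()
print(cache.call("e-commerce", "upper"))  # 'E-COMMERCE'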
Therefore, by recording each search request identifier and the response time to the search request through the buried point of each service module, a full-link performance monitoring dashboard can be established and the full-link optimization strategy verified, so that the optimization results are clear at a glance.
According to some embodiments, the full-link optimization strategy may include full-link resource optimization and/or stability construction with the response time of a predetermined period as the optimization objective, which may be unified with the performance optimization. For example, performance may be further optimized by adjusting resource allocations, such as the number of CPU cores and the memory size, for each service module.
Referring to figs. 2A-2D, fig. 2A illustrates a real-time dashboard of the total response time of an application server, fig. 2B a real-time dashboard of the first-page response time, fig. 2C a real-time dashboard of the middle-page response time, and fig. 2D a real-time dashboard of the rear-page response time. In the figures, the abscissa indicates the time of day and the ordinate indicates the response time; ios indicates that the mobile client is the iOS platform and aod that it is the Android platform; avg indicates the average response time, p50 the median of the distribution of all request response times, and p99 the value exceeded by the slowest 1% of request response times. P50 is commonly used to describe the average performance level of a system. P99 focuses on tail latency, which is particularly important for systems with high service-quality requirements, because it reflects the slowest responses of the system at peak times and directly affects user experience. In e-commerce search or other high-concurrency scenarios, the concern with peak-period P99 is that, even though most requests respond quickly, a high P99 means that a small portion of requests may experience very significant delays, which users perceive as performance degradation or unavailability. Optimizing P99 performance therefore helps to improve the overall quality of service and the user experience.
Referring to figs. 3A-3D, fig. 3A shows an offline dashboard of the second opening rate on the iOS mobile system, fig. 3B an offline dashboard of the second opening rate on the Android mobile system, fig. 3C an offline dashboard of the P50 response time of an application server, and fig. 3D an offline dashboard of the P99 response time of the application server. The second opening rate represents the proportion of searches in which less than 1000 ms elapses from the time the user inputs the keyword to the time the results are presented. The offline dashboard monitoring results of fig. 3A show a 3.9% decrease week over week, a 4.9% decrease versus the previous period, and a 0.5% increase month over month. The offline dashboard monitoring results of fig. 3B show a 6.9% decrease week over week, a 4.9% decrease versus the previous period, and a 5.5% increase month over month. The offline dashboard monitoring results of fig. 3C show, for the P50 response time, a 1.4% increase week over week, a 5.3% increase versus the previous period, and a 1.2% increase month over month. The offline dashboard monitoring results of fig. 3D show, for the P99 response time, a 4.9% decrease week over week, an 8.4% increase versus the previous period, and a 9.0% increase month over month.
In this way, according to the example embodiment, constructing the search full-link RT dashboard makes it convenient to monitor the influence of the optimization results at the business-facing layer, thereby guiding the optimization process and improving the optimization effect. According to some embodiments, by connecting the buried points of the full link in series, the full-link real-time performance dashboard and offline performance dashboard can be constructed easily, which facilitates monitoring RT changes across the full link.
FIG. 4 illustrates a flow chart of a method for performing module attribution by monitoring, in real time, abnormal fluctuations in the response times of a plurality of service modules of the search full link, according to an example embodiment.
According to an example embodiment, an anti-degradation mechanism is constructed, which performs module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link, so as to prevent the performance of the optimized system from degrading again after a period of time.
Referring to fig. 4, at S401, the year-on-year and/or period-on-period (ring) change of the response times of the plurality of service modules of the search full link is monitored.
According to example embodiments, a monitoring trigger time may be set, for example by configuring the monitoring period and whether holidays are excluded.
At S403, it is determined whether the year-on-year or period-on-period change is higher than a set threshold; if so, the flow proceeds to S405.
According to an example embodiment, the response time thresholds of the various service modules may be preconfigured. If the year-on-year and/or period-on-period change of the response times is above the set threshold, the subsequent process is triggered.
At S405, it is determined whether the fluctuation falls under a case that should be excluded as interference; if not, an optimization alarm is issued.
According to some embodiments, it is further determined whether traffic disturbances and/or upstream disturbances should be excluded. If the case is one to be excluded, only the event is recorded and no optimization alarm is sent; otherwise, an optimization alarm may be issued, so that subsequent optimization can be triggered for the service module whose year-on-year or period-on-period change is higher than the set threshold.
According to some embodiments, different levels of optimization alarms may be triggered based on the magnitude of the increase in response time.
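As a non-limiting illustration of this anti-degradation check, the following Python sketch compares the current response time with the same period of the previous cycle and with the immediately preceding period, excludes interference cases, and issues a graded alarm (the threshold, alarm levels and function name are illustrative assumptions):

def check_degradation(current_rt, same_period_rt, previous_period_rt,
                      threshold=0.10, interference=False):
    """Anti-degradation check: alarm only when the year-on-year or
    period-on-period rise in response time exceeds the threshold and is not
    explained by traffic or upstream interference."""
    yoy = (current_rt - same_period_rt) / same_period_rt
    ring = (current_rt - previous_period_rt) / previous_period_rt
    rise = max(yoy, ring)
    if rise <= threshold:
        return "ok"
    if interference:
        return "recorded_only"  # excluded: traffic or upstream disturbance
    return "alarm_high" if rise > 2 * threshold else "alarm_low"

print(check_degradation(current_rt=230, same_period_rt=200, previous_period_rt=195))
# 'alarm_low': roughly a 15-18% rise, above the 10% threshold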
According to some embodiments, upstream and downstream performance monitoring data may be recorded simultaneously, with monitoring in series to facilitate analysis of upstream and downstream effects and correlations of performance changes.
According to some embodiments, QPS (queries per second) data attribution and/or release data attribution is also performed on abnormal fluctuations in response time. QPS data attribution refers to analyzing the change in search response time at different QPS levels, so as to find the reason why the response time of the system grows under high concurrent requests and attribute it to the performance bottleneck of the system. Release data attribution refers to associating the search response time with different release factors (such as commodity information, an optimized version of the search algorithm, server load conditions, index update frequency, and the like) and determining, through data analysis, which release-level factors have a significant influence on the search response time. For example, the search system may be analyzed to determine whether the search response time has changed after a new commodity is released, a commodity ranking rule is adjusted, or the index structure is updated; the relationship between server resource usage (such as CPU, memory, and I/O) and search response time in a specific time period may be analyzed to find the technical bottleneck causing the response time to grow; and after functional iterations or technical upgrades, changes in search response time are monitored to determine whether these changes affect search speed.
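For illustration, QPS data attribution can be sketched in Python by grouping response-time samples by QPS level, so that latency growth under high concurrency is exposed and can be attributed to a capacity bottleneck (the sample tuples and bucket size are hypothetical):

from collections import defaultdict
import statistics

def qps_attribution(samples, bucket_size=100):
    """Average response time per QPS bucket: if latency grows with QPS,
    the fluctuation is attributed to concurrency/capacity rather than code."""
    buckets = defaultdict(list)
    for qps, rt_ms in samples:
        buckets[(qps // bucket_size) * bucket_size].append(rt_ms)
    return {level: round(statistics.mean(rts), 1)
            for level, rts in sorted(buckets.items())}

samples = [(120, 140), (180, 150), (420, 210), (450, 260), (900, 480)]
print(qps_attribution(samples))  # {100: 145.0, 400: 235.0, 900: 480.0}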
In this way, according to the example embodiment, the constructed anti-degradation mechanism performs module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link, preventing the performance of the optimized system from degrading again after a period of time.
FIG. 5 illustrates a block diagram of a computing device according to an example embodiment of the application.
As shown in fig. 5, computing device 30 includes processor 12 and memory 14. Computing device 30 may also include a bus 22, a network interface 16, and an I/O interface 18. The processor 12, memory 14, network interface 16, and I/O interface 18 may communicate with each other via a bus 22.
The processor 12 may include one or more general-purpose central processing units (CPUs), microprocessors, or application-specific integrated circuits for executing associated program instructions. According to some embodiments, computing device 30 may also include a high-performance graphics processing unit (GPU) 20 that accelerates processor 12.
Memory 14 may include machine-readable media in the form of memory, such as random access memory (RAM), read-only memory (ROM), and/or cache memory. Memory 14 is used to store one or more programs including instructions as well as data. The processor 12 may read instructions stored in the memory 14 to perform the methods according to embodiments of the application described above.
Computing device 30 may also communicate with one or more networks through network interface 16. The network interface 16 may be a wireless network interface.
Bus 22 may be a bus including an address bus, a data bus, a control bus, etc. Bus 22 provides a path for exchanging information between the components.
It should be noted that, in the implementation, the computing device 30 may further include other components necessary to achieve normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), network storage devices, cloud storage devices, or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above.
It will be clear to a person skilled in the art that the solution according to the application can be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, where the hardware may be, for example, a field programmable gate array, an integrated circuit, or the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some service interfaces, devices or units, and may be electrical or take other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application.
The exemplary embodiments of the present application have been particularly shown and described above. It is to be understood that this application is not limited to the precise arrangements and instrumentalities described herein; on the contrary, the application is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method for e-commerce search performance optimization, the method comprising:
acquiring, from a plurality of service modules of the search full link, search request identifiers and the corresponding response times of the respective service module categories;
associating the response times with the search request identifiers so as to obtain response time data of the search full link;
and establishing a full-link performance monitoring dashboard by using the response time data of the search full link so as to verify the full-link optimization strategy.
2. The method of claim 1, wherein the full-link optimization strategy comprises full-link performance optimization with the response time of a predetermined period as the optimization objective.
3. The method of claim 2, wherein the full-link performance optimization comprises code-layer optimization of the plurality of service modules of the search full link.
4. The method of claim 3, wherein the plurality of service modules of the search full link comprise: a search back-end module, a search query understanding module, a search engine hub module, a search recall module, a search ranking module, and a mobile open platform module.
5. The method as recited in claim 1, further comprising: presetting buried points in the plurality of service modules of the search full link so as to acquire search request identifiers and the corresponding response times of the respective service module categories.
6. The method of claim 1, wherein the full-link performance monitoring dashboard comprises a real-time performance dashboard and an offline performance dashboard.
7. The method as recited in claim 1, further comprising: constructing an anti-degradation mechanism that performs module attribution by monitoring, in real time, abnormal fluctuations in the response times of the plurality of service modules of the search full link.
8. The method of claim 7, wherein performing module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link comprises: monitoring whether the year-on-year and/or period-on-period change of the response times of the plurality of service modules of the search full link is higher than a set threshold.
9. The method of claim 8, wherein performing module attribution by monitoring in real time abnormal fluctuations in the response times of the plurality of service modules of the search full link further comprises: excluding traffic disturbances and/or excluding upstream disturbances.
10. A computing device, comprising:
A processor; and
A memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any one of claims 1-9.
CN202410203248.1A 2024-02-22 2024-02-22 Method and computing device for optimizing search performance of electronic commerce Pending CN118229378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410203248.1A CN118229378A (en) 2024-02-22 2024-02-22 Method and computing device for optimizing search performance of electronic commerce

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410203248.1A CN118229378A (en) 2024-02-22 2024-02-22 Method and computing device for optimizing search performance of electronic commerce

Publications (1)

Publication Number Publication Date
CN118229378A true CN118229378A (en) 2024-06-21

Family

ID=91507935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410203248.1A Pending CN118229378A (en) 2024-02-22 2024-02-22 Method and computing device for optimizing search performance of electronic commerce

Country Status (1)

Country Link
CN (1) CN118229378A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination