CN116738081B - Front-end component binding method, device and storage medium - Google Patents


Info

Publication number
CN116738081B
CN116738081B (application CN202310987015.0A)
Authority
CN
China
Prior art keywords
data
commodity
query
module
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310987015.0A
Other languages
Chinese (zh)
Other versions
CN116738081A
Inventor
王文林
Current Assignee
Guizhou Youteyun Technology Co ltd
Original Assignee
Guizhou Youteyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Youteyun Technology Co ltd filed Critical Guizhou Youteyun Technology Co ltd
Priority to CN202310987015.0A priority Critical patent/CN116738081B/en
Publication of CN116738081A publication Critical patent/CN116738081A/en
Application granted granted Critical
Publication of CN116738081B publication Critical patent/CN116738081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F16/9538 Presentation of query results
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/9574 Browsing optimisation of access to content, e.g. by caching
    • G06F18/24323 Tree-organised classifiers
    • G06F18/253 Fusion techniques of extracted features
    • G06F18/27 Regression, e.g. linear or logistic regression
    • G06F8/20 Software design
    • G06F8/38 Creation or generation of source code for implementing user interfaces
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/045 Combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/048 Activation functions
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06Q30/0631 Item recommendations
    • H04L67/1396 Protocols specially adapted for monitoring users' activity
    • H04L67/306 User profiles
    • H04L67/535 Tracking the activity of the user
    • H04L67/55 Push-based network services
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a front-end component binding method, device and storage medium, offering a way to effectively connect commodity data with front-end components during component design. The method is aimed mainly at front-end development projects for light applications: front-end components are displayed dynamically by querying on key parameters and storing the query results. The invention also introduces a caching mechanism that preserves previously queried commodity data, improving the efficiency of data query and acquisition. In addition, the invention uses several artificial-intelligence models to accelerate data binding and retrieval, and a cascaded AI model outputs the user's preference for each commodity, so that preferred commodities are displayed first on the front-end page and recommended to the user.

Description

Front-end component binding method, device and storage medium
Technical Field
The invention relates to the technical fields of artificial intelligence and the Internet, and in particular to a front-end component binding method, device and storage medium.
Background
In front-end development, acquiring, processing and presenting data are critical steps. The conventional practice is to request data directly from the server through an API and then parse and render it at the front end. However, this approach has several significant drawbacks:
Low data-acquisition efficiency: every time a user accesses or refreshes a page, an API request must be sent to fetch data from the server, which slows data acquisition and increases server load.
Poor user experience: because data must be fetched from the server on every access, users may experience slow page loads, especially under poor network conditions, which greatly degrades the experience.
Disclosure of Invention
The application provides a front-end component binding method, device and storage medium, offering a way to effectively connect commodity data with front-end components, in order to solve the prior-art problems of low data-acquisition efficiency and poor user experience.
In view of the above, the present application provides a front-end component binding method, device and storage medium.
The embodiment of the application provides a front-end component binding method, which comprises the following steps:
constructing a front-end development platform, and defining components and corresponding parameters under the front-end development platform;
using a user interface module and an editor module under the front-end development platform to bind data;
after the data binding is successful, the component inquires commodity data from the corresponding business system interface;
Storing the acquired commodity data into a storage module;
defining a display mode of the component, and rendering the commodity data so as to display the commodity data on a front-end page through the component;
wherein performing data binding using the user interface module and the editor module under the front-end development platform comprises the following steps:
collecting user behavior data;
preprocessing the collected user behavior data;
constructing a long short-term memory network LSTM;
training the LSTM using the collected user behavior data;
predicting commodity categories and labels of interest to a user by using the trained LSTM;
converting the predicted result into default parameters of the component;
and the user interface module sends a calling instruction to the editor module, wherein the calling instruction comprises default parameters of the component so that the editor module can acquire commodity data in a business system corresponding to the default parameters of the component.
Optionally, the method further comprises:
and when the component queries the same commodity data again, acquiring the data directly from the storage module rather than querying the corresponding business system interface.
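The cache-first lookup described in this optional step can be sketched minimally as follows. `CommodityStore` and `query_business_system` are illustrative names, not part of the actual codebase; real cache keys and eviction policies would be more elaborate.

```python
# Minimal sketch of the cache-first lookup: results of earlier queries are
# kept in the storage module, and a repeated query never reaches the backend.

class CommodityStore:
    """Caches commodity query results so repeated queries skip the backend."""

    def __init__(self, query_business_system):
        self._query = query_business_system
        self._cache = {}            # query parameters (as a sorted tuple) -> result
        self.backend_calls = 0

    def get(self, **params):
        key = tuple(sorted(params.items()))
        if key in self._cache:      # hit: serve from the storage module
            return self._cache[key]
        self.backend_calls += 1     # miss: fall through to the business system
        result = self._query(**params)
        self._cache[key] = result
        return result
```

Keying on the sorted parameter items makes the cache insensitive to keyword order, so `get(category=..., label=...)` and `get(label=..., category=...)` share one entry.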
Optionally, constructing the long short-term memory network LSTM includes:
determining the shape of the input data and the shape of the output data;
creating an LSTM comprising an input layer, one or more LSTM layers, and an output layer;
said training said LSTM using said collected user behavior data, comprising:
defining a loss function;
selecting an optimization algorithm for updating parameters of the LSTM to minimize a loss function;
training the LSTM using the collected user behavior data and the loss function, and updating the parameters of the LSTM on each traversal via the optimization algorithm to reduce the value of the loss function.
Optionally, the component querying the commodity data from the corresponding business system interface includes:
using a Bayesian network to represent the dependencies between query parameters;
computing the maximum-probability query path using the Bayesian network;
and using the query path in a query module of the front-end development platform to acquire a commodity list.
Optionally, using a Bayesian network to represent the dependencies between the query parameters includes:
collecting the query parameters, including the possible values of each parameter and the conditional probabilities of those values;
creating a Bayesian network and defining its nodes and edges from the collected query data, wherein the nodes represent the query parameters and the edges represent the dependencies between parameters;
then computing the most probable query path by Bayesian network inference, including:
setting the distributions of all nodes of the Bayesian network to uniform distributions;
each node transmitting its probability distribution to its neighbors, the process continuing until the distributions converge or a preset number of iterations is reached;
obtaining the final probability distribution with a maximum probability propagation (MPMP) algorithm;
selecting the state sequence with the highest probability as the maximum-probability path according to the final probability distribution;
wherein using the query path in a query module of the front-end development platform to obtain a commodity list includes:
generating the corresponding SQL query statement from the maximum-probability query path obtained via the Bayesian network;
executing the generated SQL query statement in the query module and acquiring the commodity list from the database;
and sorting and filtering the obtained commodity list.
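A minimal sketch of the steps above, under the simplifying assumption that the query parameters form a chain, so a Viterbi-style max-product pass stands in for general belief propagation over an arbitrary network. All parameter names, probabilities and the table name are illustrative.

```python
# Pick the most probable value per query parameter along a dependency chain,
# then generate the SQL query statement for the chosen assignment.

def max_probability_path(priors, transitions, order):
    """priors: {first_param: {value: P(value)}};
    transitions: {(parent, child): {parent_value: {child_value: P(child|parent)}}};
    order: parameter names from root to leaf.
    Returns (most probable assignment, its probability)."""
    first = order[0]
    best = dict(priors[first])          # best path probability ending in each value
    path = {v: (v,) for v in best}      # the path itself, per ending value
    for parent, child in zip(order, order[1:]):
        table = transitions[(parent, child)]
        new_best, new_path = {}, {}
        for pv, pprob in best.items():
            for cv, cprob in table[pv].items():
                prob = pprob * cprob
                if prob > new_best.get(cv, 0.0):
                    new_best[cv] = prob
                    new_path[cv] = path[pv] + (cv,)
        best, path = new_best, new_path
    winner = max(best, key=best.get)
    return dict(zip(order, path[winner])), best[winner]

def to_sql(assignment, table="commodity"):
    """Generate the SQL query statement for the chosen parameter values."""
    where = " AND ".join("{} = '{}'".format(k, v) for k, v in assignment.items())
    return "SELECT * FROM {} WHERE {}".format(table, where)
```

A real query module would of course use parameterized queries rather than string interpolation; the string form here only mirrors the "generate an SQL query statement" step of the text.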
Optionally, after storing the acquired commodity data in the storage module, the method further includes:
predicting which commodity data will be queried at high frequency, using a random forest;
adjusting the data cache list of the storage module according to the prediction, placing the high-frequency commodity data at the head of the cache list;
wherein predicting high-frequency commodity data using a random forest includes:
collecting commodity history data, including commodity features and commodity query frequencies;
numerically encoding the commodity history data;
constructing a random forest model composed of multiple decision trees, and setting the forest parameters, including the number of trees, the maximum depth of each tree, and the number of features considered at each split, each decision tree being trained on a random bootstrap subset of the training samples;
training the decision trees on the numerically encoded commodity history data, each decision tree predicting commodity query frequency;
aggregating the predictions of the individual decision trees to determine the forest's prediction, namely the commodity data expected to be queried at high frequency.
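The bagging-and-averaging idea behind this step can be sketched as follows. For clarity each "tree" is a depth-1 stump over a single numeric feature, whereas a real implementation would use a proper library such as scikit-learn with full decision trees; all names and data are illustrative.

```python
import random

# Each "tree" is trained on a bootstrap sample and predicts query frequency;
# the forest averages the trees. The cache list is then reordered so that
# commodities predicted to be queried most often sit at the head.

def train_stump(samples):
    """samples: list of (feature, frequency). A depth-1 tree: split at the
    median feature value, predict the mean frequency on each side."""
    split = sorted(f for f, _ in samples)[len(samples) // 2]
    left = [y for f, y in samples if f <= split] or [0.0]
    right = [y for f, y in samples if f > split] or [0.0]
    lmean, rmean = sum(left) / len(left), sum(right) / len(right)
    return lambda f: lmean if f <= split else rmean

def train_forest(samples, n_trees=25, seed=0):
    rng = random.Random(seed)
    trees = [train_stump([rng.choice(samples) for _ in samples])  # bootstrap
             for _ in range(n_trees)]
    return lambda f: sum(t(f) for t in trees) / len(trees)       # average

def reorder_cache(cache_list, predict, feature_of):
    """Move commodities predicted to be queried most often to the head."""
    return sorted(cache_list, key=lambda item: -predict(feature_of(item)))
```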
Optionally, after the commodity data is displayed on the front-end page by the component, the method further includes:
collecting the user's behavior data from the business system interface, including the user's browsing history and purchase history;
preprocessing the user's behavior data;
analyzing the user's behavior sequence with a long short-term memory network LSTM to extract the user's behavior features;
analyzing commodity images with a convolutional neural network CNN model to extract commodity image features;
combining the user's behavior features and the commodity image features with a random forest to predict a preference score for each commodity;
adjusting, based on the prediction results, the commodity list displayed by the component on the front-end page.
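The cascade step can be sketched as follows: behavior features (a stand-in for LSTM output) and image features (a stand-in for CNN output) are concatenated and scored, and the commodity list is reordered by predicted preference. The scorer here is a fixed linear model purely for illustration; the method above uses a random forest at this stage, and all names and numbers are invented.

```python
# Combine per-user behavior features with per-commodity image features,
# score each commodity, and sort the front-end list by descending preference.

def preference_score(behavior_feats, image_feats, weights):
    combined = list(behavior_feats) + list(image_feats)
    return sum(w * x for w, x in zip(weights, combined))

def rank_commodities(commodities, behavior_feats, weights):
    """commodities: list of (commodity_id, image_feats). Returns ids sorted
    by descending predicted preference for this user."""
    scored = [(preference_score(behavior_feats, img, weights), cid)
              for cid, img in commodities]
    return [cid for _, cid in sorted(scored, reverse=True)]
```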
The embodiment of the invention also provides a front-end component binding device, which comprises:
the user interface module and the editor module are used for binding data;
the inquiry module is used for inquiring commodity data from the corresponding business system interface;
the storage module is used for storing the acquired commodity data;
the display module is used for defining the display mode of the component and rendering commodity data, so as to display the commodity data on a front-end page through the component;
The user interface module and the editor module are used for binding data, and comprise:
collecting user behavior data;
preprocessing the collected user behavior data;
constructing a long short-term memory network LSTM;
training the LSTM using the collected user behavior data;
predicting commodity categories and labels of interest to a user by using the trained LSTM;
converting the predicted result into default parameters of the component;
and the user interface module sends a calling instruction to the editor module, wherein the calling instruction comprises default parameters of the component so that the editor module can acquire commodity data in a business system corresponding to the default parameters of the component.
Embodiments of the present application also provide a computer-readable storage medium in which a computer program is stored; when executed by a processor, the program implements the steps of the front-end component binding method described above.
One or more technical solutions provided by the application have at least the following technical effects or advantages:
According to the technical solution provided by the embodiments of the application, commodity data is effectively connected to front-end components by constructing a front-end platform and binding data through its UI module and editor module. The method is aimed mainly at Magic Cube projects for light applications: results are queried and stored via key parameters, so only the required information is queried from a given business system rather than synchronously loading all commodity data, which both enables dynamic display of the front-end components and improves the efficiency of data query and acquisition. The solution also introduces a caching mechanism that preserves previously queried commodity data to further improve query efficiency. In addition, the application uses several artificial-intelligence models to accelerate data binding and retrieval, and a cascaded AI model outputs the user's preference for each commodity, so that preferred commodities are displayed first on the front-end page and recommended to the user.
Drawings
FIG. 1 is a schematic flow chart of the front-end component binding method provided by the present application;
FIG. 2 is a schematic diagram of a front-end development platform architecture provided by the present application;
FIG. 3 is a schematic flow chart of data binding by using the artificial intelligence method provided by the application;
FIG. 4 is a schematic flow chart of efficient commodity inquiry by adopting a Bayesian network;
FIG. 5 is a schematic diagram of the front-end component binding apparatus provided by the present application;
fig. 6 is a schematic structural diagram of a front end component binding system according to the present application.
Detailed Description
The application provides a front-end component binding method, device and storage medium, aiming to solve the prior-art problems of low data-acquisition efficiency and poor user experience by effectively connecting commodity data with front-end components.
Example 1
As shown in fig. 1, the present application provides a front-end component binding method, which includes:
s101, constructing a front-end development platform, and defining components and corresponding parameters under the front-end development platform;
As shown in fig. 2, the relationships and roles of the main modules of the front-end development platform (the Magic Cube project) are as follows:
The Magic Cube project is a typical product of the MERN (MongoDB/Express/React/Node) architecture; it has a relatively large number of modules, managed with lerna. Area 1 (upper left) contains tool modules such as x-site-core-osd and x-site-core-db, which provide underlying support. Area 2 (upper right) is the x-site-editor module, the module most commonly modified during secondary development of the Magic Cube front end. Area 3 (lower left) contains packaged client modules used to render the Magic Cube editing page. Area 4 (lower right) contains the Magic Cube server-side modules, including the API service and the template-component management service. Arrows represent dependencies or interactions between modules.
The definitions (in part) of the components and modules mentioned in fig. 2 are as follows:
MERN is a full-stack JavaScript solution for quickly building efficient websites and applications. It comprises four open-source components: MongoDB as the database, Express as the back-end web framework, React as the front-end framework, and Node.js as the back-end runtime.
Lerna is a tool for managing JavaScript projects that contain multiple packages; it optimizes the workflow of such projects (version control, dependency management, etc.) and can handle multiple packages simultaneously.
The storage module is a tool class used for accessing static resources; it supports Aliyun OSS, Amazon AWS, Tencent Cloud COS and Huawei Cloud OBS object storage methods.
The query module provides MongoDB access tool methods, covering information such as templates and pages.
x-site-editor is a Web project built on React and Plume2; it is called by the x-site-web-openapi module and is mainly used for site-building editing.
x-site-web-client, x-site-web-openapi and x-site-admin-client are all Web projects; they are packaged into bundle resources with webpack for service invocation.
x-site-web-server is an EJS-based server-side rendering project that provides API interface services while rendering views through the response object res.
x-site-admin-api is a NestJS-based service that provides an API interface.
x-site-ui (not shown) is the user interface (UI) module: a project containing PC-side and mobile-side components, managed with lerna. Components have a fixed format, comprising an edit directory and a view directory.
xwidget-cli is a tool for modifying, publishing and debugging components, assisting the development of Magic Cube components.
x-site-admin-server is a service that manages updated and new components and publishes them on Aliyun OSS.
magic-box is a Web project that displays Magic Cube pages on C-end (consumer-facing) pages through configOrder and other information, using infinite scrolling to optimize performance.
S102, performing data binding by using a user interface module and an editor module under the front-end development platform;
In designing the components of a Magic Cube application's operations-activity page, the components need to be connected with the commodity data. For example, a commodity-list component on the front-end page does not synchronously acquire all commodity categories and labels; instead it queries by key parameters, stores the query results, and displays them in the front-end component. This docking process is implemented mainly with the x-site-ui components and the x-site-editor module.
Specifically, data binding is performed using the x-site-ui components and x-site-editor. A Magic Cube component has a fixed format: an edit directory and a view directory. Under the edit directory, the component's parameters are defined and data is acquired from the business system interface. Under the view directory, the component's display mode is defined and rendering uses the data acquired via the edit directory.
For example, parameters of the component, such as commodity category and commodity label, may be defined under the edit directory of the x-site-ui component.
Under the component's edit directory, first create a JavaScript file that defines the parameters the component needs to accept. For example, a file named 'parameters' may be created, and the various parameters defined in it. Then, commodity-list data is obtained from the business system interface according to the defined parameters, using the x-site-core-db module.
Acquiring commodity-list data requires the tool methods provided by the x-site-core-db module: the module must first be installed in the project, then imported wherever data is needed, and its methods used.
In one embodiment, data binding may also be performed using an artificial-intelligence method (e.g. an LSTM), as shown in fig. 3; this specifically includes steps A1-A7:
a1, collecting user behavior data;
including the pages they browse and the commodities they click on and purchase;
a2, preprocessing the collected user behavior data;
Preprocessing includes serializing user behaviors, normalizing feature values, handling missing values, and so on;
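The A2 preprocessing can be sketched as follows: per-day behavior counts are serialized into a fixed-length sequence, missing days are filled in, and the values are min-max normalized. Field names and the zero-fill policy are illustrative assumptions.

```python
# Serialize per-day browsing counts, fill missing days with 0, and
# min-max normalize the resulting sequence into [0, 1].

def preprocess(daily_counts, window=10):
    """daily_counts: {day_index: items_browsed}, possibly with gaps.
    Returns a normalized sequence of `window` values in [0, 1]."""
    seq = [float(daily_counts.get(day, 0)) for day in range(window)]
    lo, hi = min(seq), max(seq)
    if hi == lo:                    # constant sequence: avoid divide-by-zero
        return [0.0] * window
    return [(x - lo) / (hi - lo) for x in seq]
```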
A3, constructing a long and short-term memory network LSTM;
LSTM (Long Short-Term Memory) is a special kind of recurrent neural network (RNN) capable of capturing long-term dependencies in long sequence data. By introducing a structure called a "gate" to control the flow of information, it helps mitigate the vanishing-gradient and exploding-gradient problems that RNNs may encounter when processing long sequences.
The model requires an input layer (matching the number of user-behavior features), one or more LSTM layers (to capture the temporal dependence of user behavior), and an output layer (matching the predicted commodity categories and labels).
The A3 specifically comprises the following steps:
A31. determining the shape of the input data and the shape of the output data;
The shape of the input data refers to its format and meaning, i.e. the specific format and content required of the input. For example, if the data is the number of commodities each user browsed per day over the past 10 days, the input shape may be (10, 1), representing a sequence of 10 days with one feature (number of commodities browsed) per day.
Similarly, the shape of the output data refers to its format and meaning, and generally depends on what is to be predicted. For example, to predict the commodity category a user may be interested in on a future day, one output node representing that category may be required; defining the shape of the output data means defining its output format and meaning.
A32. Create an LSTM comprising an input layer, one or more LSTM layers, and an output layer; each LSTM layer needs to know the shape of its input.
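To make the gate structure of A3 concrete, here is a single LSTM cell and a forward pass over a (10, 1) sequence in numpy. The weights are random stand-ins for trained parameters; in practice a framework (e.g. Keras) supplies and trains them, and this sketch only shows the gate arithmetic.

```python
import numpy as np

# One LSTM step: forget/input/output gates plus a candidate cell state.

def lstm_step(x, h, c, W, b):
    """x: (n_in,), h/c: (n_hidden,), W: (4*n_hidden, n_in+n_hidden)."""
    z = W @ np.concatenate([x, h]) + b
    n = h.shape[0]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    f, i, o = sig(z[:n]), sig(z[n:2*n]), sig(z[2*n:3*n])  # forget/input/output gates
    c_tilde = np.tanh(z[3*n:])                            # candidate cell state
    c_new = f * c + i * c_tilde                           # gated memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_lstm(sequence, n_hidden=4, seed=0):
    """sequence: (time_steps, n_features), e.g. the (10, 1) window from A31."""
    rng = np.random.default_rng(seed)
    n_in = sequence.shape[1]
    W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
    b = np.zeros(4 * n_hidden)
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x in sequence:              # one step per day in the 10-day window
        h, c = lstm_step(x, h, c, W, b)
    return h                        # final hidden state summarizes the sequence
```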
A4, training the LSTM by using the collected user behavior data;
a4 specifically includes:
A41. defining a loss function;
The loss function is the quantity the model tries to minimize during training. For example, mean squared error may be selected for regression problems, and cross-entropy loss for classification problems.
A42. Selecting an optimization algorithm for updating parameters of the LSTM to minimize a loss function;
The optimization algorithm defines how the model parameters are updated to minimize the loss function. Common optimization algorithms include stochastic gradient descent (SGD) and its variants, such as Adam.
A43. Training the LSTM using the collected user behavior data and the loss function, and updating parameters of LSTM at each traversal by the optimization algorithm to reduce the value of the loss function.
A43 typically involves traversing the entire training dataset multiple times; each traversal updates the model parameters to reduce the value of the loss function.
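The traversal-and-update loop of A41-A43 can be illustrated with a one-parameter linear model standing in for the LSTM; the structure (epochs, loss gradient, parameter update) is the same, and the data and learning rate are invented for the example.

```python
# Plain gradient descent on a mean-squared-error loss: each epoch traverses
# the whole dataset and moves the parameter against the loss gradient.

def train(data, epochs=200, lr=0.05):
    """data: list of (x, y) pairs assumed to follow y ~ w * x."""
    w = 0.0
    for _ in range(epochs):             # one traversal of the training set
        grad = 0.0
        for x, y in data:
            pred = w * x
            grad += 2 * (pred - y) * x  # d/dw of the squared error
        w -= lr * grad / len(data)      # gradient-descent update
    return w
```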
A5, predicting commodity categories and labels of interest of the user by using the trained LSTM;
Given data on the number of commodities a user browsed daily over the past 10 days, the trained LSTM model can be used to predict the commodity category the user may be interested in on a future day. The specific steps are as follows:
A51. Use the number of commodities the user browsed daily over the past 10 days as the input data of the LSTM model.
A52. Predict on the input data with the LSTM model to obtain a prediction result: a number indicating the commodity category the user may be interested in on a future day.
A53. Convert the prediction into an actual commodity category, i.e. round the predicted result and map it to a commodity category.
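A53's round-and-map conversion can be sketched in a few lines; the category table is an invented example, and the clamping of out-of-range predictions is an assumption rather than something the text specifies.

```python
# Round the raw model output to an index and map it to a commodity category.

CATEGORIES = {0: "clothing", 1: "electronics", 2: "books"}  # illustrative table

def to_category(raw_prediction, categories=CATEGORIES):
    index = int(round(raw_prediction))
    index = max(0, min(index, max(categories)))  # clamp to the known range
    return categories[index]
```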
A6, converting the prediction result into default parameters of the component;
For example, if a component requires a commodity category and a commodity label as parameters, the most likely category and label predicted by the LSTM may be used as the component's parameters. In addition, a JavaScript file is created in the edit directory of the x-site-ui component to define the component's parameters; in this file, the LSTM model's predictions are introduced as the component's default parameters.
And A7, the user interface module sends a calling instruction to the editor module, wherein the calling instruction comprises default parameters of the component so that the editor module can acquire commodity data in a business system corresponding to the default parameters of the component.
In the embodiment of the invention, the user interface module is x-site-ui and the editor module is x-site-editor. Default parameters are defined in the edit directory of the x-site-ui component; based on these default parameters, an API call instruction containing them is sent to the editor. The editor module then obtains only the commodity data related to the default parameters. For example, for a commodity list, only the list matching the commodity categories of interest to the user needs to be obtained from a given e-commerce business system; commodity categories the user is not interested in need not be obtained.
The advantages of using the LSTM model mainly include:
the LSTM model is capable of processing long sequences of data, and is advantageous in processing user behavior data that contains time sequences.
The LSTM model may capture long-term dependencies in the data, helping to more accurately predict items of interest to the user.
Personalized commodity recommendation can be realized by using the LSTM model, and the use experience and purchase conversion rate of a user are improved.
In embodiments of the present invention, other artificial intelligence techniques besides LSTM may be used to analyze user behavior, such as:
convolutional Neural Network (CNN): if the user behavior data contains images (e.g., merchandise pictures viewed by the user), then the CNN can be used to analyze the image data.
Transformer model (Transformer): if the user behavior data is sequence data, the data may be analyzed using a Transformer model. The Transformer can process long sequence data and capture long-distance dependencies in the sequence.
Reinforcement learning: if the user behavior data can be represented as a decision process (e.g., a process where a user browses, clicks, purchases goods on an e-commerce web site), reinforcement learning can be used to analyze the decision process. Reinforcement learning may allow the model to learn optimal behavior strategies in interactions with the environment.
S103, after the data binding is successful, the component inquires commodity data from the corresponding service system interface;
specifically, when performing the parameter query, a query module (the x-site-core-db module) is used, which provides MongoDB access tool methods covering information such as templates and pages. Querying by key parameters such as commodity category and commodity label yields a commodity list. For example, by querying an interface of an e-commerce business system, the commodity list corresponding to that business system is obtained.
In addition, in one embodiment, S103 may also use a bayesian network to represent the dependency relationship between the query parameters, and use bayesian inference to calculate the query path with the highest probability. This query path may be used in the x-site-core-db module to more efficiently obtain the item list.
A Bayesian Network (Bayesian Network) is a probabilistic graphical model representing a set of random variables and their conditional dependencies. A Bayesian network is composed of nodes and directed edges: each node represents a random variable, and each directed edge represents a dependency between variables. Bayesian networks aim to provide an explicit, quantitative, and visual way to represent complex relationships between variables.
In S103, as shown in fig. 4, the steps of S103 for performing efficient commodity query using bayesian network include B1-B3:
b1, using a Bayesian network to represent the dependency relationship between query parameters;
wherein, B1 specifically includes:
B11. collecting query parameters, wherein the query parameters comprise parameter possible values and conditional probabilities of the parameter possible values; such data may be obtained from historical query records.
B12. And creating a Bayesian network, and defining nodes and edges of the Bayesian network according to the collected query data, wherein the nodes represent the query parameters, and the edges represent the dependency relationships among the parameters.
B2, calculating a query path with the highest probability by using the Bayesian network;
b2 specifically comprises:
b21, setting all node distribution of the Bayesian network to be uniform distribution;
this process is also called initialization, i.e. the probability distribution of all nodes is set to be uniform (or set according to a priori knowledge).
B22, each node transmits the probability distribution to the neighbor thereof, and the transmission process is continued until the probability distribution converges or reaches the preset iteration times;
this process is also called messaging, in which each node's own probability distribution is calculated from the information of its neighbors and its own observations.
B23, acquiring final probability distribution by adopting a maximum probability propagation (MPMP) algorithm;
Maximum probability propagation (Max-Product Message Passing, MPMP) is a message-passing algorithm for graphical models such as Bayesian networks or Markov random fields, used to obtain the final probability distribution and find the most likely state sequence.
B24, selecting a state sequence with the highest probability as a maximum probability path according to the final probability distribution;
this process is also called decoding, i.e. selecting the most probable state sequence as the most probable path based on the final probability distribution.
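Steps B21-B24 can be illustrated on a toy three-node chain (all probability values below are assumed for illustration). For a chain this small, exhaustively multiplying the conditional probabilities finds the same maximum-probability path that max-product message passing would return:

```python
# Toy chain: Category -> Color -> Price, with made-up conditional probabilities
p_category = {"Electronics": 0.6, "Clothing": 0.4}
p_color = {  # P(Color | Category)
    "Electronics": {"Black": 0.7, "White": 0.3},
    "Clothing": {"Black": 0.4, "White": 0.6},
}
p_price = {  # P(Price | Color)
    "Black": {"500-1000": 0.8, "0-500": 0.2},
    "White": {"500-1000": 0.3, "0-500": 0.7},
}

# Enumerate all state sequences and keep the most probable one (the "decoding" step)
best_path, best_prob = None, 0.0
for cat, pc in p_category.items():
    for col, pcol in p_color[cat].items():
        for pr, pp in p_price[col].items():
            prob = pc * pcol * pp
            if prob > best_prob:
                best_path, best_prob = (cat, col, pr), prob
print(best_path)  # → ('Electronics', 'Black', '500-1000')
```

On larger networks, enumeration becomes infeasible and the dynamic-programming form of max-product message passing is what makes the computation tractable.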
And B3, using the query path to a query module of the front-end development platform to acquire a commodity list.
B3 specifically comprises:
b31, generating a corresponding SQL query sentence according to the query path with the maximum probability obtained by the Bayesian network;
for example, each node in the query path may be converted to a WHERE clause in an SQL statement.
Assume three nodes in the Bayesian network, representing the commodity's Category, Color, and Price respectively. The maximum probability path of the Bayesian network is: the electronics category ('Electronics') -> black ('Black') -> price 500-1000 yuan ('500-1000').
Then this path can be converted into the following SQL query statement:
```sql
SELECT * FROM Products
WHERE Category = 'Electronics' AND Color = 'Black' AND Price BETWEEN 500 AND 1000;
```
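The node-to-WHERE-clause conversion of B31 can be sketched as follows (column names follow the example above; in production code, parameterized queries should be used instead of string interpolation to avoid SQL injection):

```python
# Maximum-probability path from the Bayesian network, as column -> value pairs
path = {"Category": "Electronics", "Color": "Black", "Price": (500, 1000)}

clauses = []
for column, value in path.items():
    if isinstance(value, tuple):  # a range node becomes a BETWEEN clause
        clauses.append(f"{column} BETWEEN {value[0]} AND {value[1]}")
    else:                         # a single-value node becomes an equality clause
        clauses.append(f"{column} = '{value}'")
sql = "SELECT * FROM Products WHERE " + " AND ".join(clauses)
print(sql)
```

The generated statement matches the hand-written SQL query above and can then be handed to the query module for execution.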
b32, executing the generated SQL query statement in the query module, and acquiring a commodity list from a database;
in the x-site-core-db module, there is typically a database connection object or an interface to database queries through which SQL query statements can be executed.
For example, if a SQLite library of Python is used, then the SQL query statement may be executed as follows:
```python
import sqlite3

# Create a database connection (the Products table is assumed to exist in example.db)
conn = sqlite3.connect('example.db')

# Create a cursor object
cur = conn.cursor()

# Execute the SQL query statement
cur.execute("SELECT * FROM Products WHERE Category = 'Electronics' AND Color = 'Black' AND Price BETWEEN 500 AND 1000")

# Fetch the query results
rows = cur.fetchall()
for row in rows:
    print(row)
```
And B33, sequencing and filtering the obtained commodity list.
For example, assume there are two query parameters: one is the commodity category and the other is the commodity tag. A Bayesian network can represent the dependency between these two parameters, and Bayesian inference can then be used to compute which query path has the greatest probability. Suppose the most probable query path is to query the electronics category first and then the 'latest release' tag. The commodity list can then be obtained by the electronics category first and filtered by the 'latest release' tag afterwards, acquiring the commodities the user may be interested in more effectively.
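The query-then-filter order described above can be sketched with a hypothetical in-memory commodity list (the product names, categories, and tags are made up for illustration):

```python
products = [
    {"name": "Phone A", "category": "electronics", "tags": ["latest release"]},
    {"name": "Shirt B", "category": "clothing", "tags": ["sale"]},
    {"name": "Laptop C", "category": "electronics", "tags": ["sale"]},
]

# Step 1: restrict to the most probable category from the query path
electronics = [p for p in products if p["category"] == "electronics"]
# Step 2: filter the reduced list by the predicted tag
featured = [p["name"] for p in electronics if "latest release" in p["tags"]]
print(featured)  # → ['Phone A']
```

Applying the high-probability parameter first shrinks the candidate set early, which is exactly why ordering the query path by probability speeds up the overall query.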
The advantages of using bayesian networks compared to conventional techniques mainly include:
the bayesian network can clearly represent the dependency between query parameters, helping to understand and interpret the query process.
Through Bayesian inference, the query path with the maximum probability can be calculated, so that the query process is more effective, and the query speed is improved.
Bayesian networks can easily handle uncertainty and missing data, making it more robust in handling actual query problems.
S104, storing the acquired commodity data into a storage module;
in one embodiment, where the commodity data is commodities, a machine learning model such as a random forest may be used to predict which commodities are likely to be queried at high frequency. The data cache in the x-site-core-osd module may then be adjusted based on this prediction, placing the commodity data of high-frequency queries at the front/head of the cache list (the front and head ranges may be customized; for example, the first 5 items of the cache list may be defined as front or head positions and the last 5 items as rear or tail positions) so that it can be queried quickly.
Specifically, after S104, the method further includes the following steps C1-C2:
c1, predicting commodity data of the high-frequency query by using a random forest;
A random forest is an ensemble learning algorithm composed of many decision trees. Each tree is trained on a random subset of the training samples and attempts to predict the target variable. Finally, the random forest makes its prediction by voting (for classification problems) or averaging (for regression problems) over the predictions of all trees.
Wherein, C1 includes:
C11. collecting historical data of commodities, wherein the historical data of the commodities comprise characteristics (such as category, price, score and the like) of the commodities and commodity inquiry frequency;
C12. Carrying out numerical processing on the historical data of the commodity;
random forests can only process numeric data, so non-numeric attributes (e.g., brands) need to be converted into numeric data. For classification problems, the class labels need to be encoded as integers.
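The numerical encoding of C12 can be sketched as follows, using a hypothetical non-numeric brand attribute as the example:

```python
# Hypothetical non-numeric attribute: the brand of each commodity
brands = ["Apple", "Samsung", "Apple", "Xiaomi"]

# Map each distinct brand to an integer code, since the forest needs numeric input
mapping = {brand: code for code, brand in enumerate(sorted(set(brands)))}
encoded = [mapping[b] for b in brands]
print(encoded)  # → [0, 1, 0, 2]
```

In practice one would use a reusable encoder (e.g., scikit-learn's `LabelEncoder` or one-hot encoding) so the same mapping can be applied to new data at prediction time.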
C13. Constructing a random forest model, wherein the random forest model is composed of a plurality of decision trees, and parameters of the random forest are set, the parameters of the random forest comprise the number of the trees, the maximum depth of each tree and the feature number considered by each splitting, and each decision tree is obtained by training on a training sample of a random subset;
C14. training the plurality of decision trees by utilizing the commodity historical data after numerical processing, wherein each decision tree is used for predicting commodity inquiry frequency;
during the training process, the model learns which commodity features are associated with high query frequencies. Each decision tree is used to predict query frequency. During training, each split selects the best feature from the randomly selected features.
C15. Summarizing the prediction results of each decision tree, and determining the prediction results of the random forest, wherein the prediction results of the random forest are commodity data of high-frequency query.
Finally, the prediction result of the random forest is the average of all the decision tree predictions (for regression problems) or the voting result (for classification problems).
And C2, adjusting a data cache list of the storage module according to the predicted information, and placing commodity data of the high-frequency query at the head position of the cache list.
In the embodiment of the invention, the head position range may be defined manually; for example, the first 5 items of the cache list are defined as head positions, the last 5 items as tail positions, and the remaining items as middle positions. The cache list is then traversed in order from head to tail, so commodity data of high-frequency queries can be found quickly.
In the embodiment of the invention, high, medium, and low frequency each denote a definite range, which may be defined manually; for example, high frequency means a commodity browsed/visited more than 100 times a day, medium frequency 50-100 times a day, and low frequency fewer than 50 times a day.
The following is a simple example of building and training a random forest model using the sklearn library of Python:
```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import pandas as pd

# Read the data
data = pd.read_csv('product_data.csv')
X = data.drop('query_frequency', axis=1)
y = data['query_frequency']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create the random forest model
model = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=42)

# Train the model
model.fit(X_train, y_train)

# Predict the query frequency
y_pred = model.predict(X_test)

# Compute the mean squared error
mse = mean_squared_error(y_test, y_pred)
print('Mean Squared Error:', mse)
```
The above code first reads a CSV file containing commodity attributes and query frequency. The data is then divided into a training set and a test set. Next, a random forest model is created and trained using the training set data. And finally, predicting the test set data, and calculating the mean square error of the prediction result.
Wherein, the specific steps of C2 include:
first, a random forest model is used to predict the query frequency of all items.
Then, the commodities are ordered according to the predicted query frequency, and the commodities with high query frequency are ranked in front.
And finally, storing commodity data into a data cache of the x-site-core-osd module according to the sequencing result.
In the data cache, the storage location of commodity data may be regarded as its priority, and commodity data stored in the front has a higher priority because it is found preferentially at the time of inquiry.
Through the above steps, commodity data predicted as a high-frequency query have been placed in front of the cache list, thereby raising their priority.
For example, suppose a random forest model predicts that the following three commodities are likely to be queried at high frequency: iPhone 13, MacBook Pro, and AirPods Pro. The information for these three commodities may then be placed at the front of the data cache. When a user queries these commodities, their data can be obtained directly from the data cache without sending a query request to the database, improving query efficiency.
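The reordering described in C2 can be sketched as follows; the commodity names and predicted daily query frequencies are hypothetical stand-ins for the random forest's output:

```python
from collections import OrderedDict

# Hypothetical predicted daily query frequencies from the random forest
predicted_freq = {"iPhone 13": 320, "MacBook Pro": 210, "AirPods Pro": 180, "Desk Lamp": 12}

# Build the cache list so high-frequency commodities occupy the head positions
cache = OrderedDict(sorted(predicted_freq.items(), key=lambda kv: kv[1], reverse=True))
head = list(cache)[:3]  # e.g., define the first few entries as "head" positions
print(head)  # → ['iPhone 13', 'MacBook Pro', 'AirPods Pro']
```

Because the cache list is traversed from head to tail at query time, commodities placed at the head are found first, which is what gives them higher effective priority.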
The advantages of using random forests compared to conventional techniques mainly include:
random forests are a non-parametric model that can process various types of data, including continuous values, discrete values, ordered values, and unordered values.
Random forests can evaluate the importance of features, helping to understand which features are most important to predicting the commodity of a high frequency query.
The random forest is an integrated method, so that the risk of overfitting can be reduced, and the generalization capability of the model can be improved.
By predicting the inquiry frequency of the commodity, the data cache can be better managed, and the response speed of the system is improved.
In addition to random forests, many other artificial intelligence techniques can be used to predict query frequency, such as support vector machines (SVMs), neural networks, gradient-boosted trees (Gradient Boosting Trees), and linear regression. Each model has its own advantages and applicable scenarios, and an appropriate model should be selected for the practical problem.
S105, defining a display mode of the assembly, and rendering the commodity data so as to display the commodity data on a front-end page through the assembly.
In addition, when the component inquires the same commodity data again, the data is directly obtained from the storage module, and commodity data is not inquired from a corresponding service system interface, so that the inquiry efficiency is improved.
In one embodiment, if the commodity data is a commodity, the method further includes the following steps D1-D6 after S105:
D1. collecting behavior data of a user from the service system interface, wherein the behavior data comprise browsing history and purchase history of the user; these data may be acquired from a business system interface.
D2. Preprocessing behavior data of the user; after the data is collected, the data needs to be preprocessed, such as normalization processing, filling in missing values, and the like.
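The preprocessing of D2 (filling in missing values, then normalizing) can be sketched in NumPy; the daily view counts below are hypothetical:

```python
import numpy as np

views = np.array([3.0, np.nan, 7.0, 5.0])  # daily view counts with one missing day

# Fill the missing value with the mean of the observed values
filled = np.where(np.isnan(views), np.nanmean(views), views)

# Min-max normalization to the [0, 1] range
scaled = (filled - filled.min()) / (filled.max() - filled.min())
print(scaled)  # → [0.   0.5  1.   0.5]
```

Other common choices are filling with the median (more robust to outliers) or z-score standardization instead of min-max scaling; the right choice depends on the downstream model.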
D3. Analyzing a behavior sequence of a user by using LSTM, and extracting behavior characteristics of the user;
for example, the user's browsing history may be treated as a time series, and the LSTM model used to learn the patterns of this time series. The specific implementation of D3 includes constructing the LSTM, training it, and analyzing with it; parts of this are the same as the descriptions in A1-A7 and are not repeated here. Analyzing a user's behavior sequence with an LSTM is prior art, so embodiments of the present invention do not describe it further.
D4. Analyzing a commodity image by using a convolutional neural network CNN model, and extracting the characteristics of the commodity image;
convolutional neural networks (Convolutional Neural Networks, CNN) are a deep learning algorithm that has very high accuracy in the field of image processing. CNN extracts features of an image by performing a series of convolution and pooling operations on the image. The following is a basic step of using CNN for image classification:
Data preprocessing: the image data needs to be preprocessed before training the model. Preprocessing may include scaling the image to match the input layer of the network or normalizing the pixel values to be in the range of 0-1.
Constructing a CNN model: a basic CNN model generally consists of the following parts:
convolution layer: these layers extract image features by sliding small windows (also known as convolution kernels) over the input image and performing dot product operations. Each convolutional layer learns some specific features, such as edges, textures, etc.
An activation layer: the convolution operation is typically followed by an activation function, such as ReLU, to introduce nonlinearities that enable the model to learn more complex features.
Pooling layer: the pooling operation can reduce the spatial size of the image, reduce the calculation amount, and simultaneously retain important characteristic information.
Fully connected layer: after a series of convolution and pooling operations, the network may have one or more fully connected layers for the final classification or regression task.
Training a model: training CNN models typically requires a large amount of annotated image data. In the training process, the model optimizes parameters of the model through algorithms such as back propagation, gradient descent and the like, so that a prediction result of the model is as close to a real label as possible.
Predicting a new image: once the model training is complete, it can be used to predict new images. In prediction, a new image is input into a model after preprocessing, the model outputs the probability of each category, and the category with the highest probability is generally selected as a prediction result.
In practice, there may be many variations and extensions, such as using multiple parallel convolution kernels, or using techniques of depth separable convolution (depthwise separable convolution), batch normalization (batch normalization), residual connection (residual connection), etc. to refine the model.
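The convolution → activation → pooling pipeline above can be sketched in plain NumPy; a toy 6×6 "image" and a made-up edge-detection kernel stand in for real commodity pictures, and a deep learning framework would be used in practice:

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)           # toy 6x6 "image"
kernel = np.array([[-1.0, -1.0], [1.0, 1.0]])            # simple horizontal-edge filter
features = max_pool(np.maximum(conv2d(img, kernel), 0))  # convolution -> ReLU -> pooling
print(features.shape)  # → (2, 2)
```

A real CNN stacks many such layers with learned kernels and ends in fully connected layers; this sketch only shows how a single layer shrinks an image into a small feature map.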
D5. Combining the behavior characteristics of the user and the characteristics of the commodity image by using a random forest, and predicting the preference degree score of the user for each commodity;
the extracted features need to be combined. This step typically involves concatenating the features extracted by the LSTM and CNN together to form a larger feature vector. Thus, there is a large feature vector that contains both time series features and visual features.
These merged features can then be used to train a random forest model. During the training phase, the random forest model will attempt to find the relationship between these features and the target variables.
After training is completed, new data can be predicted by using the trained random forest model. In the prediction stage, the same feature extraction and combination steps as those in the training stage are required to be carried out on the new data, and then the features are input into a random forest model to obtain a prediction result.
Specifically, the following are more concrete steps, taking the prediction of a user's preference score for a commodity as an example. Assume that feature vectors have already been obtained from the LSTM and the CNN respectively.
Let the feature array extracted by the LSTM be 'lstm_features', with shape (n_samples, n_lstm_features), where n_samples is the number of samples and n_lstm_features is the number of features extracted by the LSTM.
Similarly, assume the feature array extracted by the CNN is 'cnn_features', with shape (n_samples, n_cnn_features), where n_cnn_features is the number of features extracted by the CNN.
The two feature arrays can be joined together using numpy's 'concatenate' function.
```python
import numpy as np
# Combine the features extracted by LSTM and CNN
all_features = np.concatenate((lstm_features, cnn_features), axis=1)
```
Thus, a feature array 'all_features' is obtained, with shape (n_samples, n_lstm_features + n_cnn_features).
Assuming the target variable (i.e., the preference score for each commodity) is 'interest', the random forest model can be trained using scikit-learn's 'RandomForestRegressor'.
```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(all_features, interest, test_size=0.2, random_state=42)

# Create the random forest model
model = RandomForestRegressor(n_estimators=100, random_state=42)

# Train the model
model.fit(X_train, y_train)

# Predict on the test set
y_pred = model.predict(X_test)
```
Thus, a random forest model is trained and can be used for predicting the preference degree of commodities.
D6. Based on the prediction result, the commodity list displayed by the component on the front-end page is adjusted.
After obtaining the predicted score for each commodity, the commodities may be cached preferentially in order of score from high to low, so that commodities with high predicted scores (e.g., scores above 90) are displayed preferentially on the front-end page and recommended to the user.
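Adjusting the displayed commodity list from the predicted scores can be sketched as follows; the commodity names and scores are hypothetical, and 90 is the example threshold used above:

```python
# Hypothetical preference scores predicted by the LSTM/CNN + random forest cascade
scores = {"iPhone 13": 95.2, "Desk Lamp": 41.0, "AirPods Pro": 91.8, "Notebook": 88.5}

# Rank commodities from the highest predicted score down
ranked = sorted(scores, key=scores.get, reverse=True)

# Keep only the high scorers for preferential display and recommendation
featured = [name for name in ranked if scores[name] > 90]
print(featured)  # → ['iPhone 13', 'AirPods Pro']
```

The front-end component would then render `featured` first, falling back to the rest of `ranked` for the remainder of the page.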
One or more technical schemes provided by the application have at least the following technical effects or advantages:
according to the technical scheme provided by the embodiment of the application, effective integration between each source of commodity data and the front-end components is achieved by constructing a front-end platform and performing data binding with the UI module and editor module within it. The scheme is mainly aimed at lightweight-application Magic Cube projects: query results are queried and stored through key parameters, so that only the required information is queried from a given business system instead of synchronously loading all commodity data. This both realizes dynamic display of the front-end components and improves the efficiency of data query and acquisition. The scheme also introduces a caching mechanism that effectively saves commodity data queried earlier, further improving query and acquisition efficiency. In addition, the application adopts multiple artificial intelligence models for data binding and data searching to accelerate these processes, and adopts an artificial intelligence cascade model to output the user's preference for commodities, which are preferentially displayed on the front-end page and recommended to the user.
Example two
Based on the same inventive concept as one of the front-end component binding methods in the foregoing embodiments, as shown in fig. 5, an embodiment of the present application further provides a front-end component binding apparatus (i.e., a front-end development platform of the first embodiment), where the apparatus includes:
a user interface module 51 and an editor module 52 for data binding;
a query module 53, configured to query commodity data from a corresponding service system interface;
a storage module 54, configured to store the acquired commodity data;
and the display module 55 is used for defining the display mode of the component and rendering commodity data, so as to display the commodity data on a front-end page through the component.
The related art description is the same as that of the first embodiment, and thus will not be described again.
Example III
Based on the same inventive concept as the front-end component binding method in the previous embodiment, the present application also provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the method as in the first embodiment.
Example IV
The embodiment of the present application further provides a front-end component binding system 6000, as shown in fig. 6, including a memory 64 and a processor 61, where the memory stores computer-executable instructions, and the processor implements the method when executing the computer-executable instructions stored in the memory. In practical applications, the system may further include other necessary elements, including but not limited to any number of input devices 62, output devices 63, processors 61, controllers, memories 64, etc., and all systems that can implement the front-end component binding method of the embodiments of the present application are within the scope of the present application.
The memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM), for the associated instructions and data.
The input means 62 are for inputting data and/or signals and the output means 63 are for outputting data and/or signals. The output device 63 and the input device 62 may be separate devices or may be an integral device.
A processor may include one or more processors, including for example one or more central processing units (central processing unit, CPU), which in the case of a CPU may be a single core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for acceleration processing.
The memory is used to store program codes and data for the network device.
The processor is used to call the program code and data in the memory to perform the steps of the method embodiments described above. Reference may be made specifically to the description of the method embodiments, and no further description is given here.
In the several embodiments provided by the present application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the division of units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. The coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interface, system, or unit, and may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a read-only memory (ROM), or a random-access memory (random access memory, RAM), or a magnetic medium such as a floppy disk, a hard disk, a magnetic tape, a magnetic disk, or an optical medium such as a digital versatile disk (digital versatile disc, DVD), or a semiconductor medium such as a Solid State Disk (SSD), or the like.
The specification and figures are merely exemplary illustrations of the present application and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, the present application is intended to include such modifications and alterations insofar as they come within the scope of the application or the equivalents thereof.

Claims (7)

1. A front-end component binding method, comprising:
constructing a front-end development platform, and defining components and corresponding parameters under the front-end development platform;
performing data binding by using a user interface module and an editor module under the front-end development platform;
after the data binding succeeds, querying, by the component, commodity data from the corresponding business system interface;
storing the acquired commodity data in a storage module;
defining a display mode of the component, and rendering the commodity data so as to display the commodity data on a front-end page through the component;
wherein the performing data binding by using a user interface module and an editor module under the front-end development platform comprises:
collecting user behavior data;
preprocessing the collected user behavior data;
constructing a long short-term memory (LSTM) network;
training the LSTM using the collected user behavior data;
predicting commodity categories and labels of interest to the user using the trained LSTM;
converting the prediction result into default parameters of the component; and
sending, by the user interface module, a calling instruction to the editor module, wherein the calling instruction comprises the default parameters of the component, so that the editor module acquires commodity data in the business system corresponding to the default parameters of the component;
wherein the component querying commodity data from the corresponding business system interface comprises:
representing dependencies between query parameters using a Bayesian network;
calculating a query path with a maximum probability using the Bayesian network; and
using the query path in a query module of the front-end development platform to acquire a commodity list;
wherein the representing dependencies between query parameters using a Bayesian network comprises:
collecting query parameters, wherein the query parameters comprise possible parameter values and conditional probabilities of the possible parameter values; and
creating a Bayesian network, and defining nodes and edges of the Bayesian network according to the collected query data, wherein the nodes represent the query parameters and the edges represent the dependencies between the parameters;
wherein the calculating a query path with a maximum probability by Bayesian network inference comprises:
setting the distributions of all nodes of the Bayesian network to uniform distributions;
transmitting, by each node, its probability distribution to its neighbors, and continuing the transmission process until the probability distributions converge or a preset number of iterations is reached;
acquiring a final probability distribution using a maximum probability propagation (MPMP) algorithm; and
selecting, according to the final probability distribution, the state sequence with the highest probability as the maximum probability path;
wherein the using the query path in a query module of the front-end development platform to acquire a commodity list comprises:
generating a corresponding SQL query statement according to the maximum probability query path obtained by the Bayesian network;
executing the generated SQL query statement in the query module to acquire the commodity list from a database; and
sorting and filtering the acquired commodity list.
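The query-path steps of claim 1 can be sketched as follows. The network here is a hypothetical two-parameter chain (category, with label depending on category) with made-up probabilities, and the SQL template, table name, and column names are illustrative, not from the patent; real implementations would run belief propagation over a larger network rather than enumerate assignments.

```python
from itertools import product

# Hypothetical two-node Bayesian network over query parameters:
# "category" is a root node; "label" is conditioned on "category".
P_CATEGORY = {"electronics": 0.6, "clothing": 0.4}
P_LABEL_GIVEN_CATEGORY = {
    "electronics": {"hot": 0.7, "new": 0.3},
    "clothing":    {"hot": 0.2, "new": 0.8},
}

def most_probable_query_path():
    """Enumerate joint assignments and keep the one with maximum probability.

    Exhaustive enumeration is enough to illustrate the idea on two nodes."""
    best_path, best_p = None, -1.0
    for cat, lab in product(P_CATEGORY, ("hot", "new")):
        p = P_CATEGORY[cat] * P_LABEL_GIVEN_CATEGORY[cat][lab]
        if p > best_p:
            best_path, best_p = (cat, lab), p
    return best_path, best_p

def path_to_sql(path):
    """Turn the maximum-probability parameter assignment into a
    parameterized SQL statement for the query module to execute."""
    category, label = path
    sql = ("SELECT * FROM commodity WHERE category = %s AND label = %s "
           "ORDER BY sales DESC")
    return sql, (category, label)

path, prob = most_probable_query_path()   # ("electronics", "hot"), 0.42
sql, params = path_to_sql(path)
```

The sorting and filtering of the resulting commodity list would then happen on the rows returned by the generated statement.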
2. The method according to claim 1, wherein the method further comprises:
when the component queries the same commodity data again, acquiring the data directly from the storage module, without querying the commodity data from the corresponding business system interface.
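A minimal sketch of the cache-first lookup in claim 2, with an in-memory dict standing in for the storage module and a lambda standing in for the business system interface (all names are illustrative):

```python
class CommodityStore:
    """Storage module: repeated queries are served from the cache,
    so the business system interface is hit at most once per key."""

    def __init__(self, fetch_from_business_system):
        self._fetch = fetch_from_business_system
        self._cache = {}          # query key -> commodity data
        self.backend_calls = 0    # counted for demonstration only

    def get(self, key):
        if key not in self._cache:            # first query: go to the backend
            self._cache[key] = self._fetch(key)
            self.backend_calls += 1
        return self._cache[key]               # repeat queries: cache only

store = CommodityStore(lambda k: {"id": k, "name": f"commodity {k}"})
first = store.get("sku-1")
again = store.get("sku-1")   # same data, no second backend call
```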
3. The method of claim 1, wherein the constructing a long short-term memory (LSTM) network comprises:
determining the shape of the input data and the shape of the output data; and
creating an LSTM comprising an input layer, one or more LSTM layers, and an output layer;
and wherein the training the LSTM using the collected user behavior data comprises:
defining a loss function;
selecting an optimization algorithm for updating the parameters of the LSTM to minimize the loss function; and
training the LSTM using the collected user behavior data and the loss function, wherein the optimization algorithm updates the parameters of the LSTM at each traversal to reduce the value of the loss function.
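The training loop of claim 3 has the usual shape: a loss function, an optimizer, and a parameter update per traversal. The sketch below keeps that loop structure but replaces the LSTM with a one-parameter linear model and uses plain gradient descent so it stays dependency-free; none of this is the patent's actual model or data.

```python
# Toy (behavior feature, target) pairs; the true mapping is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0      # the model's single parameter (stand-in for LSTM weights)
lr = 0.05    # learning rate of the chosen optimization algorithm (SGD)

def loss(w):
    """Mean squared error: the loss function the optimizer minimizes."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

for _ in range(200):   # each traversal updates parameters once
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad     # update in the direction that reduces the loss
```

After training, `w` converges to 2.0 and the loss approaches zero, illustrating the "update parameters at each traversal to reduce the loss" step.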
4. The method of claim 1, wherein after the storing the acquired commodity data in the storage module, the method further comprises:
predicting commodity data of high-frequency queries using a random forest; and
adjusting a data cache list of the storage module according to the predicted information, and placing the commodity data of high-frequency queries at the head of the cache list;
wherein the predicting commodity data of high-frequency queries using a random forest comprises:
collecting historical commodity data, wherein the historical commodity data comprises features of the commodities and commodity query frequencies;
numerically processing the historical commodity data;
constructing a random forest model composed of a plurality of decision trees, and setting parameters of the random forest, wherein the parameters comprise the number of trees, the maximum depth of each tree, and the number of features considered at each split, and each decision tree is trained on a random subset of the training samples;
training the plurality of decision trees using the numerically processed historical commodity data, wherein each decision tree predicts a commodity query frequency; and
aggregating the prediction results of the decision trees to determine the prediction result of the random forest, wherein the prediction result of the random forest is the commodity data of high-frequency queries.
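Claim 4's forest-then-reorder step can be sketched with decision stumps standing in for trained decision trees: each stump votes on a commodity's query frequency, the votes are averaged as a random forest averages its trees, and the cache list is reordered so predicted-hot items sit at the head. The features, thresholds, and SKU names below are invented for illustration.

```python
import random

def make_stump(feature_idx, threshold):
    """One decision tree reduced to a single split for brevity."""
    return lambda feats: 1.0 if feats[feature_idx] > threshold else 0.0

random.seed(0)  # fixed seed so the sketch is reproducible
forest = [make_stump(random.randrange(2), random.uniform(0, 1))
          for _ in range(25)]

def predict_frequency(feats):
    """Average the per-tree votes (the random forest aggregation step)."""
    return sum(tree(feats) for tree in forest) / len(forest)

catalog = {                 # commodity -> (price score, recent-views score)
    "sku-a": (0.9, 0.9),    # looks like a high-frequency item
    "sku-b": (0.1, 0.2),    # looks like a low-frequency item
    "sku-c": (0.8, 0.7),
}
# Adjust the storage module's cache list: predicted-hot items first.
cache_list = sorted(catalog,
                    key=lambda sku: predict_frequency(catalog[sku]),
                    reverse=True)
```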
5. The method of claim 1, wherein after the displaying the commodity data on a front-end page through the component, the method further comprises:
collecting behavior data of the user from the business system interface, wherein the behavior data comprises the browsing history and purchase history of the user;
preprocessing the behavior data of the user;
analyzing the behavior sequence of the user using a long short-term memory (LSTM) network, and extracting the behavior features of the user;
analyzing commodity images using a convolutional neural network (CNN) model, and extracting the features of the commodity images;
combining the behavior features of the user and the features of the commodity images using a random forest, and predicting a preference score of the user for each commodity; and
adjusting, based on the prediction result, the commodity list displayed by the component on the front-end page.
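A sketch of the feature-combination step in claim 5: behavior features (standing in for LSTM output) and image features (standing in for CNN output) are concatenated and scored per commodity, and the displayed list is re-ranked by score. The trained random forest is replaced by a fixed linear combiner, and every feature value and weight below is made up.

```python
def preference_score(behavior_feats, image_feats, weights):
    """Concatenate the two feature vectors and score the combination."""
    combined = behavior_feats + image_feats
    return sum(w * f for w, f in zip(weights, combined))

user_behavior = [0.8, 0.3]     # e.g. category affinity, recency (toy values)
image_features = {             # commodity -> visual feature vector (toy)
    "sku-a": [0.2, 0.9],
    "sku-b": [0.9, 0.1],
}
weights = [0.5, 0.2, 0.2, 0.1]

# Adjust the commodity list shown by the component: highest preference first.
ranked = sorted(
    image_features,
    key=lambda sku: preference_score(user_behavior, image_features[sku], weights),
    reverse=True,
)
```

With these toy numbers, sku-b scores 0.65 against sku-a's 0.59, so the component would display sku-b first.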
6. A front-end component binding apparatus, the apparatus comprising:
a user interface module and an editor module, configured to perform data binding;
a query module, configured to query commodity data from the corresponding business system interface;
a storage module, configured to store the acquired commodity data; and
a display module, configured to define a display mode of the component and render the commodity data, so as to display the commodity data on a front-end page through the component;
wherein the data binding performed by the user interface module and the editor module comprises:
collecting user behavior data;
preprocessing the collected user behavior data;
constructing a long short-term memory (LSTM) network;
training the LSTM using the collected user behavior data;
predicting commodity categories and labels of interest to the user using the trained LSTM;
converting the prediction result into default parameters of the component; and
sending, by the user interface module, a calling instruction to the editor module, wherein the calling instruction comprises the default parameters of the component, so that the editor module acquires commodity data in the business system corresponding to the default parameters of the component;
wherein the query module querying commodity data from the corresponding business system interface comprises:
representing dependencies between query parameters using a Bayesian network;
calculating a query path with a maximum probability using the Bayesian network; and
using the query path in a query module of the front-end development platform to acquire a commodity list;
wherein the representing dependencies between query parameters using a Bayesian network comprises:
collecting query parameters, wherein the query parameters comprise possible parameter values and conditional probabilities of the possible parameter values; and
creating a Bayesian network, and defining nodes and edges of the Bayesian network according to the collected query data, wherein the nodes represent the query parameters and the edges represent the dependencies between the parameters;
wherein the calculating a query path with a maximum probability by Bayesian network inference comprises:
setting the distributions of all nodes of the Bayesian network to uniform distributions;
transmitting, by each node, its probability distribution to its neighbors, and continuing the transmission process until the probability distributions converge or a preset number of iterations is reached;
acquiring a final probability distribution using a maximum probability propagation (MPMP) algorithm; and
selecting, according to the final probability distribution, the state sequence with the highest probability as the maximum probability path;
wherein the using the query path in a query module of the front-end development platform to acquire a commodity list comprises:
generating a corresponding SQL query statement according to the maximum probability query path obtained by the Bayesian network;
executing the generated SQL query statement in the query module to acquire the commodity list from a database; and
sorting and filtering the acquired commodity list.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1-5.
CN202310987015.0A 2023-08-08 2023-08-08 Front-end component binding method, device and storage medium Active CN116738081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310987015.0A CN116738081B (en) 2023-08-08 2023-08-08 Front-end component binding method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310987015.0A CN116738081B (en) 2023-08-08 2023-08-08 Front-end component binding method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116738081A CN116738081A (en) 2023-09-12
CN116738081B true CN116738081B (en) 2023-10-27

Family

ID=87901462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310987015.0A Active CN116738081B (en) 2023-08-08 2023-08-08 Front-end component binding method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116738081B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932588A (en) * 2018-06-29 2018-12-04 华中科技大学 A kind of the GROUP OF HYDROPOWER STATIONS Optimal Scheduling and method of front and back end separation
CN109670267A (en) * 2018-12-29 2019-04-23 北京航天数据股份有限公司 A kind of data processing method and device
CN110807808A (en) * 2019-10-14 2020-02-18 浙江理工大学 Commodity identification method based on physical engine and deep full convolution network
CN111652653A (en) * 2020-06-10 2020-09-11 创新奇智(南京)科技有限公司 Price determination and prediction model construction method, device, equipment and storage medium
CN112528525A (en) * 2020-12-31 2021-03-19 河钢数字技术股份有限公司 Visual industrial process management and control platform based on modeling technology
CN112835570A (en) * 2021-03-15 2021-05-25 深圳中科西力数字科技有限公司 Machine learning-based visual mathematical modeling method and system
CN113597629A (en) * 2019-03-28 2021-11-02 脸谱公司 Generating digital media clusters corresponding to predicted distribution categories from a repository of digital media based on network distribution history
CN113919797A (en) * 2021-09-02 2022-01-11 用友网络科技股份有限公司 Artificial intelligence service generation method and device and computer readable storage medium
CN114663135A (en) * 2022-03-03 2022-06-24 支付宝(杭州)信息技术有限公司 Information sending method, device, equipment and readable medium
CN114912972A (en) * 2022-04-06 2022-08-16 刘二松 Block chain based network promotion marketing platform and method thereof
CN115686492A (en) * 2021-07-30 2023-02-03 青岛海尔科技有限公司 H5 page editing method and device
CN115760300A (en) * 2022-11-28 2023-03-07 天翼电子商务有限公司 Method and system for managing sales products based on labels

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11736973B2 (en) * 2018-08-29 2023-08-22 Carleton University Enabling wireless network personalization using zone of tolerance modeling and predictive analytics
JP2023513314A (en) * 2020-02-13 2023-03-30 ザイマージェン インコーポレイテッド Metagenome library and natural product discovery platform
US20220075877A1 (en) * 2020-09-09 2022-03-10 Self Financial, Inc. Interface and system for updating isolated repositories


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Commodity classification based on feature enhancement; Yige Fang et al.; International Conference on Electronic Information Engineering and Data Process; 1-6 *
Design and Implementation of a Medical Insurance Data Visualization System; Chen Xu et al.; Software Guide (No. 06); 62-65 *
Design and Implementation of an Online Mall Based on SSH-Integrated MVC Layering; Xu Hongsheng et al.; Journal of Luoyang Normal University (No. 02); 81-84 *
Design and Implementation of a Web-Based Meteorological Project Management System; Qiu Zhongyang; Lei Zhengcui; Liu Wenwei; Computer Technology and Development (No. 07); 211-216 *
Research on Predicting Online Purchase Behavior Based on a Machine Learning Fusion Algorithm; Zhu Xin et al.; Statistics & Information Forum (No. 12); 108-114 *
Link Prediction for Knowledge Graphs Based on Bayesian Networks; Han Lu; Yin Zidu; Wang Yujie; Hu Kuang; Yue Kun; Journal of Frontiers of Computer Science and Technology (No. 05); 67-76 *
Research on Building the Dream Cloud Application Store; Wei Chunliu et al.; China Petroleum Exploration (No. 05); 108-114 *

Also Published As

Publication number Publication date
CN116738081A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
US11038976B2 (en) Utilizing a recommendation system approach to determine electronic communication send times
EP4242955A1 (en) User profile-based object recommendation method and device
CN112308650B (en) Recommendation reason generation method, device, equipment and storage medium
US20210133612A1 (en) Graph data structure for using inter-feature dependencies in machine-learning
CN110110233B (en) Information processing method, device, medium and computing equipment
US11416754B1 (en) Automated cloud data and technology solution delivery using machine learning and artificial intelligence modeling
US11741111B2 (en) Machine learning systems architectures for ranking
US20230096118A1 (en) Smart dataset collection system
US20230083891A1 (en) Methods and systems for integrated design and execution of machine learning models
Penchikala Big data processing with apache spark
CN110264277B (en) Data processing method and device executed by computing equipment, medium and computing equipment
US20230040412A1 (en) Multi-language source code search engine
CN112784157A (en) Training method of behavior prediction model, behavior prediction method, device and equipment
CN116738081B (en) Front-end component binding method, device and storage medium
CN112749325A (en) Training method and device for search ranking model, electronic equipment and computer medium
CN114429384B (en) Intelligent product recommendation method and system based on e-commerce platform
US20230030341A1 (en) Dynamic user interface and machine learning tools for generating digital content and multivariate testing recommendations
CN112328899B (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN115965089A (en) Machine learning method for interface feature display across time zones or geographic regions
CN113743973A (en) Method and device for analyzing market hotspot trend
Liu Apache spark machine learning blueprints
CN113159877A (en) Data processing method, device, system and computer readable storage medium
Yuan et al. Research of intelligent reasoning system of Arabidopsis thaliana phenotype based on automated multi-task machine learning
US20240005146A1 (en) Extraction of high-value sequential patterns using reinforcement learning techniques
US20240135283A1 (en) Multi-layer micro model analytics framework in information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant