WO2022040972A1 - 产品信息可视化处理方法、装置、计算机设备 (Product information visualization processing method, apparatus, and computer device) - Google Patents

产品信息可视化处理方法、装置、计算机设备 (Product information visualization processing method, apparatus, and computer device) Download PDF

Info

Publication number
WO2022040972A1
WO2022040972A1 (PCT/CN2020/111320, CN2020111320W)
Authority
WO
WIPO (PCT)
Prior art keywords
attribute data
dimension
model
sorting
probability
Prior art date
Application number
PCT/CN2020/111320
Other languages
English (en)
French (fr)
Inventor
胡瑞珍
黄惠
陈滨
许聚展
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Priority to US17/997,491 priority Critical patent/US20230162254A1/en
Publication of WO2022040972A1 publication Critical patent/WO2022040972A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/26Visual data mining; Browsing structured data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • G06Q30/0625Directed, with specific intent or strategy
    • G06Q30/0627Directed, with specific intent or strategy using item specifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/15Vehicle, aircraft or watercraft design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282Rating or review of business operators or products

Definitions

  • the present application relates to the field of computer technology, and in particular, to a product information visualization processing method, device, computer equipment and storage medium.
  • a product information visualization processing method comprising:
  • a product information visualization processing device comprising:
  • the sorting model is used to identify the attribute data set of each dimension until sorting results corresponding to multiple dimensions are output according to the preset number of attribute dimensions.
  • a computer device comprising a memory and one or more processors, the memory having computer-readable instructions stored therein which, when executed by the one or more processors, cause the one or more processors to perform the following steps: obtain product information;
  • One or more computer storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: obtain product information;
  • FIG. 1 is an application environment diagram of a product information visualization processing method in one embodiment.
  • FIG. 2 is a schematic flowchart of a method for visualizing product information in one embodiment.
  • FIG. 3A is a schematic flowchart of a step of identifying attribute data sets of each dimension by using a ranking model in one embodiment.
  • FIG. 3B is a schematic diagram of a network structure of a ranking model in one embodiment.
  • FIG. 4 is a schematic flowchart of the steps of training a ranking model in one embodiment.
  • FIG. 5 is a schematic flowchart of the steps of training a ranking model in another embodiment.
  • FIG. 6A is a schematic flowchart of training steps of a distance prediction model in one embodiment.
  • FIG. 6B is a schematic diagram of a network structure of a distance prediction model in one embodiment.
  • FIG. 7 is a structural block diagram of an apparatus for visualizing product information in one embodiment.
  • FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
  • the product information visualization processing method provided in the embodiment of the present application can be applied to the application environment as shown in FIG. 1 , where the terminal 102 communicates with the server 104 through the network.
  • the terminal 102 sends a product information acquisition request to the server 104 , and the server 104 queries the corresponding multiple-dimensional product information according to the received product information acquisition request and returns the corresponding sorting result to the terminal 102 .
  • the server 104 obtains the product information, extracts attribute data sets of multiple dimensions corresponding to the product information, inputs the attribute data sets of the multiple dimensions into the pre-trained sorting model, and uses the sorting model to identify the attribute data set of each dimension until sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • the terminal 102 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server 104 can be implemented by an independent server or a server cluster composed of multiple servers.
  • the following embodiments illustrate that the product information visualization processing method is applied to the server in FIG. 1 as an example, but it should be noted that, in practical application, the method is not limited to the above-mentioned server.
  • as shown in FIG. 2, a flowchart of a product information visualization processing method in one of the embodiments; the method specifically includes the following steps:
  • Step 202 obtain product information.
  • the server can receive product information query instructions sent by different terminals, and the product information query instructions can come from users or staff.
  • the server may receive information query instructions in different ways in different scenarios.
  • the server may receive a product information query instruction by detecting a gesture of a user or a staff member.
  • the server can also receive the product information query instruction by detecting the voice instruction of the user or staff.
  • when the server detects a product information query instruction sent by a user or a staff member, the server can acquire the corresponding product information from the database according to the received product information query instruction.
  • Product information refers to news, intelligence, data, etc. related to a product.
  • a product is anything that is offered to the market as a commodity and can satisfy a certain need of people, including tangible items, intangible services, organizations, ideas, or combinations thereof.
  • product information can cover various types of products, such as clothing, food and automobiles; for different product information, the server cluster can pinpoint the different types of product resource information in a more fine-grained way according to the product information query instruction.
  • Step 204 extracting attribute data sets of multiple dimensions corresponding to the product information.
  • the server extracts attribute data sets of multiple dimensions corresponding to the product information.
  • An attribute data set refers to a collection of multiple dimension attribute data corresponding to a product, that is, a collection of product attribute data.
  • the product attribute data may include product performance data, such as strength, hardness, and safety.
  • for example, when the product information in the query instruction sent by the user is cars, then after obtaining multiple types of car information, the server extracts attribute data sets of multiple dimensions corresponding to the various types of car information.
  • the performance data of a car may include performance data of multiple dimensions, such as power, fuel economy, braking, handling stability, ride comfort, emission pollution, and noise.
  • the performance data set corresponding to each dimension may include multiple data.
  • the performance data set of the power dimension of a car may include the following three key indicators: (1) the maximum speed parameter of the car; (2) the acceleration capability parameter of the car; (3) the climbing ability of the car.
  • Step 206 Input the attribute data sets of multiple dimensions into the pre-trained sorting model, and use the sorting model to identify the attribute data sets of each dimension until the sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • after the server extracts the attribute data sets of multiple dimensions corresponding to the product information, the server can input the attribute data sets of the multiple dimensions into the pre-trained sorting model, and use the sorting model to identify the attribute data set of each dimension until the sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • the pre-trained sorting model refers to a neural network model that has been trained in advance on a high-dimensional data sample set until a training stop condition is met.
  • the sorting result refers to the result obtained by sorting the attribute data of the multiple dimensions according to the attributes of each dimension of the product.
  • specifically, the server inputs the attribute data sets of multiple dimensions corresponding to the product information into the pre-trained sorting model, and uses the sorting model to identify the attribute data set of each dimension until the sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • for example, the server inputs the multiple dimension attribute data sets corresponding to different types of car information into the pre-trained sorting model, and uses the sorting model to identify the attribute data set of each dimension of the cars, until the performance sorting results corresponding to the multiple dimensions of the different types of cars are output according to the preset number of attribute dimensions.
  • in this embodiment, when an effective visual difference assessment needs to be performed on the multi-dimensional information contained in different categories of products, the server obtains the product information, extracts the attribute data sets of multiple dimensions corresponding to the product information, and inputs the attribute data sets of the multiple dimensions into the pre-trained sorting model; the server uses the sorting model to identify the attribute data set of each dimension until the sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • compared with traditional ways of processing product information, the neural network model pre-trained by reinforcement learning can quickly and effectively process data with different data volumes, numbers of dimensions and numbers of categories, and ensures that the output sorting results of the multiple dimensions give the user an intuitive, global view of the visual structural differences of the multi-dimensional product information.
  • the step of identifying the attribute data set of each dimension by using the ranking model includes:
  • Step 302 Calculate, through the encoder, the cluster center corresponding to each category of attribute data in the attribute data set, to obtain the corresponding dimension attribute data to be selected.
  • Step 304 Calculate the probability of the attribute data of the dimension to be selected by using the attention mechanism, and select the attribute data of the target dimension corresponding to the maximum probability in the attribute data of the dimension to be selected as the input data of the decoder.
  • Step 306 Input the target dimension attribute data into the decoder, output the sorting result of the dimension corresponding to the target dimension attribute data, and set the probability corresponding to the target dimension attribute data to zero.
  • as shown in FIG. 3B, which is a schematic diagram of the network structure of the sorting model according to one or more embodiments, RNN, RNN1 and RNN2 all denote recurrent neural networks.
  • X_1 to X_n denote the input data of the n dimensions.
  • X_1 is the target dimension data corresponding to the maximum probability selected by the attention mechanism, and y_t = X_1 is the optimal dimension output by the decoder for the current step t according to the target dimension X_1.
  • ω_t denotes the feature data computed by the RNN at the current step t, ω_{t+1} the feature data computed at the next step t+1, and ω_{t-1} the feature data computed at the previous step t-1.
  • y_{t-1} is the optimal dimension output by the decoder for the previous step t-1 according to the previous target dimension.
  • based on the input data of the current step, the decoder computes the optimal dimension y_{t+1} for the next step t+1, and this loop is repeated until a sequence corresponding to the n dimensions is obtained.
  • in the example shown in FIG. 3B, the data x_1 of the first dimension is selected as the output; once y_t has been selected, the coordinate axis corresponding to x_1 is no longer a valid option, and a masking mechanism sets the log-probability of invalid options to negative infinity.
  • the sorting model then uses y_t as the input data of the decoder and computes the optimal dimension y_{t+1} of the next step t+1; this process is repeated until a sequence of n coordinate axes is obtained.
  • This ranking model is built on an encoder-decoder architecture, where the encoder uses a recurrent neural network to encode input multi-dimensional data points and class label information.
  • the decoder is also composed of a recurrent neural network.
  • the decoder uses the dimension attribute data corresponding to the maximum probability currently selected by the attention mechanism as input data to calculate, and outputs the sorting result corresponding to the next dimension.
  • the server sets the probability corresponding to the target dimension attribute data to zero through the decoder, in order to exclude the output dimension attribute data and calculate the remaining unsorted dimensions in the next calculation cycle.
  • the encoder and the decoder together determine a sequence of optimal coordinate axes; when the input is n-dimensional data, the sorting model is executed n times, each time outputting one dimension index, so the final output sequence also has length n.
  • specifically, the server calculates, through the encoder, the cluster center corresponding to each category of attribute data in the attribute data set and obtains the corresponding dimension attribute data to be selected; further, the server uses the attention mechanism to calculate the probability of the dimension attribute data to be selected, and selects the target dimension attribute data corresponding to the maximum probability as the input data of the decoder.
  • the server inputs the target dimension attribute data into the decoder, outputs the sorting result of the dimension corresponding to the target dimension attribute data, and sets the probability corresponding to the target dimension attribute data to zero.
  • for example, after the server inputs the attribute data sets of multiple dimensions corresponding to the car information into the pre-trained sorting model, the server calculates, through the encoder, the cluster center corresponding to each dimension of attribute data in the attribute data sets of the different types of cars and obtains the corresponding dimension attribute data to be selected, for example: the first dimension X_1 is the attribute data of the power dimension, the second dimension X_2 is the attribute data of the fuel economy dimension, the third dimension X_3 is the attribute data of the braking dimension, and so on.
  • further, the server uses the attention mechanism to calculate the probability of each candidate dimension attribute data and selects the target dimension attribute data corresponding to the maximum probability, that is, it selects the power dimension data as the target dimension attribute data and uses it as the input data of the decoder.
  • the server inputs the power dimension data into the decoder, outputs the sorting result corresponding to the power dimension, sets the probability corresponding to the power dimension attribute data to zero, and executes the above steps cyclically until the sorting result of each dimension corresponding to the different types of cars is output.
  • in this embodiment, the recurrent neural sorting model is executed in a loop n times, outputting the data of one dimension each time; the dimension output at each step is selected from the dimensions that have not yet been output, the attention mechanism is used to calculate the probability of each candidate dimension, and the dimension with the highest probability is selected as the output, so that after n cycles the obtained dimension sequence yields a better visualization effect.
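  • As a concrete illustration of this loop, the following Python sketch scores the remaining dimensions at each step, masks the dimensions that have already been output by setting their log-probability to negative infinity, and selects the most probable dimension. The function scores_fn is a hypothetical placeholder standing in for the decoder RNN with attention described above; it is an assumption for illustration, not the network defined in this application.

```python
import numpy as np

def order_axes(scores_fn, n_dims):
    """Greedy masked selection loop: run n_dims steps, each step outputting one
    not-yet-selected dimension, mirroring the cyclic process described above."""
    order = []
    mask = np.zeros(n_dims, dtype=bool)            # True = dimension already output
    state = None                                   # decoder state placeholder
    for _ in range(n_dims):
        logits, state = scores_fn(state, order)    # attention scores over all dimensions
        logits = np.where(mask, -np.inf, logits)   # invalid options get log-probability -inf
        probs = np.exp(logits - logits[~mask].max())
        probs /= probs.sum()                       # valid probability map (zeros for used dims)
        chosen = int(np.argmax(probs))             # dimension with the highest probability
        order.append(chosen)
        mask[chosen] = True                        # its probability is zero from now on
    return order

# Toy usage with random stand-in scores for a 6-dimensional data set:
rng = np.random.default_rng(0)
print(order_axes(lambda state, order: (rng.normal(size=6), None), 6))
```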
  • the attention mechanism is used to calculate the effective probability map corresponding to the attribute data set of multiple dimensions.
  • the ordinate of the probability map is used to represent the probability, and the abscissa of the probability map is used to represent the dimension.
  • after the server calculates, through the encoder, the cluster center corresponding to each category of attribute data in the attribute data set and obtains the corresponding dimension attribute data to be selected, the server uses the attention mechanism to calculate the probability of the dimension attribute data to be selected, and selects the target dimension attribute data corresponding to the maximum probability as the input data of the decoder.
  • specifically, as shown in FIG. 3B, which is a schematic diagram of the network structure of the sorting model of one or more embodiments, the server uses the attention mechanism to calculate the valid probability map corresponding to the attribute data sets of the multiple dimensions; the ordinate of the probability map represents the probability and the abscissa represents the dimension.
  • the server selects the target dimension attribute data corresponding to the ordinate of the maximum probability in the probability map as the input data of the decoder.
  • when the input data is a set of m n-dimensional data points {p_i}, the input of the encoder module is expressed as X = {x_i}, where x_i = [p_i, c_i] ∈ R^{m×2} represents the values of the i-th coordinate axis of the m data points {p_i} together with the class information c_i of these data points.
  • the encoder calculates the cluster center of each of the K classes of data in the n-dimensional space; each data point corresponds to the cluster center of the class it belongs to, which yields a matrix C ∈ R^{m×n}, where C_i is the i-th column of C.
  • the decoder takes the dimension data y_{t-1} selected by the attention mechanism as input and calculates the next coordinate axis on this basis; for each step t, the attention mechanism is used to accumulate the information calculated up to step t-1 and to output a probability map over all valid dimensions, in which the dimension with the highest probability is selected as the output y_t.
  • a valid probability map contains only the probability values that are not zero; by using a masking mechanism that records the indices of the dimensions already output and sets the log-probability of invalid dimension options to negative infinity (i.e., their probability to zero), the recurrent neural network model is guaranteed not to output duplicate dimensions, which improves the efficiency of producing the sorting result.
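  • A minimal sketch of this masking step is shown below: the scores of dimensions that have already been output are set to negative infinity before the softmax, so their probability in the resulting probability map is exactly zero. The dot-product scoring used here is an illustrative assumption; the application only specifies that an attention mechanism produces the probability map.

```python
import torch
import torch.nn.functional as F

def valid_probability_map(decoder_state, dim_encodings, already_output):
    """decoder_state: (d,) current decoder feature; dim_encodings: (n, d) encoder
    outputs for the n dimensions; already_output: (n,) boolean mask."""
    scores = dim_encodings @ decoder_state                      # (n,) attention scores
    scores = scores.masked_fill(already_output, float("-inf"))  # masking mechanism
    return F.softmax(scores, dim=-1)                            # zero probability for used dims

# Example: 4 dimensions, dimension 2 already output.
probs = valid_probability_map(torch.randn(8), torch.randn(4, 8),
                              torch.tensor([False, False, True, False]))
print(probs)            # probs[2] == 0; the argmax gives the next dimension y_t
```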
  • the steps of training the ranking model include:
  • Step 402 Input the attribute data sample set into the initial ranking model.
  • Step 404 Obtain a first function corresponding to the attribute data sample set, use the first function as the objective function, and determine a loss value based on the objective function.
  • the first function is calculated from the predicted distance values output by the distance prediction model and is used to evaluate a global metric of the multi-dimensional data set.
  • Step 406 iterative training is performed by adjusting the parameters of the initial sorting model according to the loss value, until the determined loss value reaches the training stop condition, and a trained sorting model is obtained.
  • before the server obtains the corresponding product information according to the information query instruction sent by the user, the server may pre-train the sorting model. Specifically, the server may input the attribute data sample set corresponding to the product information into the initial sorting model, obtain the first function corresponding to the attribute data sample set, use the first function as the objective function, and determine the loss value based on the objective function. The first function is calculated from the predicted distance values output by the distance prediction model and is used to evaluate a global metric of the multi-dimensional data set. The server adjusts the parameters of the initial sorting model according to the loss value for iterative training until the determined loss value reaches the training stop condition, and the trained sorting model is obtained.
  • the server may input the attribute data sample set corresponding to the product information into the initial sorting model, and the attribute data sample set may be a set of attribute data of multiple dimensions, for example, a star chart sample set.
  • a star chart is a visualization method for high-dimensional data in which each coordinate axis corresponds to one dimension of the data; therefore, the server can train the sorting model with a star chart sample set.
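  • For illustration, the sketch below draws a single star chart under a given axis order with matplotlib; each coordinate axis corresponds to one data dimension, as stated above. The plotting details (polar axes, filled polygon) are assumptions made for this example, not part of the application.

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_star_chart(values, order):
    """Draw one star chart: the i-th axis shows the dimension order[i]."""
    v = np.asarray(values, dtype=float)[list(order)]
    angles = np.linspace(0.0, 2.0 * np.pi, len(v), endpoint=False)
    angles = np.concatenate([angles, angles[:1]])   # close the polygon
    v = np.concatenate([v, v[:1]])
    ax = plt.subplot(projection="polar")
    ax.plot(angles, v)
    ax.fill(angles, v, alpha=0.2)
    return ax

draw_star_chart([0.9, 0.4, 0.7, 0.2, 0.6], order=[0, 2, 4, 1, 3])
plt.show()
```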
  • specifically, the server may input a star chart sample set containing attribute data of multiple dimensions into the initial sorting model; the server obtains the first function corresponding to the star chart sample set, uses the first function as the objective function, and determines the loss value based on the objective function.
  • the first function may be a silhouette coefficient function, defined as the largest value among the per-class average silhouette values of the star chart shapes, i.e. SC = max_k ( (1/|C_k|) Σ_{S_i ∈ C_k} S_i ), where SC denotes the silhouette coefficient and the inner term is the average silhouette value of the shapes of class k.
  • for a set of shapes with different class labels, a silhouette value S_i is defined to measure how similar shape S_i is to the other shapes of the class it belongs to compared with the shapes of the other classes; it is computed as S_i = (b_i - a_i) / max(a_i, b_i).
  • a_i refers to the average distance between shape S_i and the other shapes of the same class, and b_i refers to the minimum distance between shape S_i and the shapes of the different classes; C_i denotes the class to which S_i belongs.
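  • The silhouette coefficient defined above can be computed from a pairwise shape-distance matrix, for example the matrix of distances predicted by the distance prediction model. The sketch below follows the standard silhouette formulation; the exact form of b_i (nearest other class by mean distance) is an assumption where the text only says "minimum distance to shapes of different classes".

```python
import numpy as np

def silhouette_coefficient(dist, labels):
    """dist: (n, n) pairwise shape distances; labels: (n,) class labels.
    Returns SC, the largest per-class average silhouette value."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    s = np.zeros(len(labels))
    for i in range(len(labels)):
        same = labels == labels[i]
        same[i] = False
        a_i = dist[i, same].mean() if same.any() else 0.0          # mean distance within own class
        b_i = min(dist[i, labels == c].mean()                      # closest other class
                  for c in classes if c != labels[i])
        s[i] = (b_i - a_i) / max(a_i, b_i) if max(a_i, b_i) > 0 else 0.0
    return max(s[labels == c].mean() for c in classes)             # SC = max per-class average

# Example with two well-separated classes of three shapes each (SC close to 1):
vals = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
d = np.abs(np.subtract.outer(vals, vals))
print(silhouette_coefficient(d, [0, 0, 0, 1, 1, 1]))
```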
  • the server uses the silhouette coefficient function as the objective function, and determines the loss value based on the objective function.
  • the silhouette coefficient function is calculated from the predicted distance values output by the distance prediction model and is used to evaluate a global metric of the star chart set. Further, the server adjusts the parameters of the initial sorting model for iterative training according to the determined loss value, until the determined loss value reaches the training stop condition and the trained sorting model is obtained.
  • to train the above coordinate-axis sorting network, a policy-gradient strategy is adopted, that is, the neural network is trained with reinforcement learning.
  • to measure the visual quality of the star chart set after the coordinate axes have been sorted, the reward function is defined as the silhouette coefficient SC of the star chart set: the larger the SC, the better the visualization effect after sorting.
  • the silhouette coefficient SC is calculated with the help of the pre-trained shape context distance prediction model to improve the training efficiency of the sorting network; that is, the server uses the silhouette coefficient function as the objective function and the silhouette coefficient SC of the star chart set as the loss value.
  • when the gradient corresponding to the silhouette coefficient SC approaches zero, i.e. when SC no longer changes, training stops and the trained sorting model is obtained.
  • by using the silhouette coefficient as the metric for evaluating the sorting effect, the neural network sorting model is trained by reinforcement learning, so that the star chart set drawn after sorting lets users better distinguish data of different categories; the model can handle data with different data volumes, numbers of dimensions and numbers of categories while achieving a better sorting effect.
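  • A hedged sketch of the reinforcement-learning update described above is given below: the reward of a sampled axis order is the silhouette coefficient SC of the resulting star chart set, and the policy gradient pushes the sorting network toward orders with higher SC. The callables policy and silhouette_reward are hypothetical stand-ins for the sorting network and for the SC computation (backed by the distance prediction model); a reward baseline and batching details are omitted.

```python
import torch

def reinforce_step(policy, optimizer, batch, silhouette_reward):
    """One REINFORCE update for the axis-sorting network.
    policy(data) is assumed to sample an axis order and return (order, log_prob);
    silhouette_reward(data, order) is assumed to return the SC of the sorted charts."""
    optimizer.zero_grad()
    losses = []
    for data in batch:
        order, log_prob = policy(data)              # log_prob: summed log-probability (tensor)
        reward = silhouette_reward(data, order)     # larger SC = better ordered layout
        losses.append(-reward * log_prob)           # gradient ascent on expected reward
    loss = torch.stack(losses).mean()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```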
  • in one embodiment, the attribute data set includes a star chart set; the star chart sets of multiple dimensions are input into the pre-trained sorting model, and the sorting model is used to identify the star chart of each dimension until the sorting results corresponding to the multiple dimensions of the star chart set are output according to the preset number of attribute dimensions.
  • specifically, the server pre-trains the sorting model with the star chart sample set; after obtaining the trained sorting model, the server can input a star chart set containing data of multiple dimensions into the pre-trained sorting model, and use the sorting model to identify the star chart of each dimension until the sorting results corresponding to the star chart set are output according to the preset number of attribute dimensions.
  • compared with the initially input star chart set, after the star chart set has been identified by the pre-trained sorting model, the average silhouette coefficient of the optimized coordinate-axis ordering is higher than that of the initial coordinate-axis ordering, so a better sorting effect can be achieved; the neural network model can thus solve the problem of coordinate-axis ordering in high-dimensional data visualization, and in turn the problem of visualizing the multi-dimensional data contained in different types of product information. Even when a large amount of multi-dimensional data exists in big data, the multi-dimensional information output after sorting is ensured to have a better sorting effect, so that users can distinguish the features of different categories of data more intuitively.
  • the steps of training the ranking model include:
  • Step 502 Input the scatter plot set into the initial sorting model.
  • Step 504 taking the second function as the objective function, and determining the loss value based on the objective function.
  • the second function is used to evaluate the global indicator of the scatter plot.
  • Step 506 Adjust the parameters of the initial sorting model according to the loss value to perform iterative training until the training stop condition is met, and obtain a trained sorting model.
  • the server may pre-train the sorting model. Specifically, the server may input the scatterplot into the initial ranking model. The server may use the second function as an objective function, and determine the loss value based on the objective function. The second function is used to evaluate the global metrics of the scatter plot. Further, the server adjusts the parameters of the initial sorting model according to the loss value to perform iterative training until the training stop condition is met, and the trained sorting model is obtained. For example, in the field of information visualization, scatter plots are also used as a visualization method for high-dimensional data.
  • in high-dimensional data scatter plots, radial coordinate visualization (RadViz for short) is a scatter plot method for visualizing high-dimensional data; similar to the coordinate-axis sorting problem of the star chart, radial coordinate visualization also requires an evaluation metric to be defined first, after which an algorithm is used to optimize the ordering. Therefore, the RadViz objective function is used as the reward function for training the network.
  • the reward function is defined as the ratio of the Davies-Bouldin index of the original data points to the Davies-Bouldin index of the points mapped onto the two-dimensional plane; the larger this value, the better the RadViz visualization effect, which makes it possible to effectively solve the coordinate-axis sorting problem of radial coordinate visualization.
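  • The sketch below illustrates this reward: it maps the reordered, normalized data points onto the RadViz unit circle and returns the ratio of the Davies-Bouldin index of the original points to that of the projected points (sklearn's davies_bouldin_score is used for the index). The normalization and anchor placement follow the usual RadViz construction and are assumptions, since the application does not spell them out.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def radviz_project(X, order):
    """Project n-dimensional points onto the 2-D RadViz plane for a given axis order."""
    n = X.shape[1]
    angles = 2.0 * np.pi * np.arange(n) / n
    anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # anchors on the unit circle
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)            # per-dimension [0, 1] scaling
    W = Xn[:, order]                                               # place axes in the chosen order
    return (W @ anchors) / (W.sum(1, keepdims=True) + 1e-12)       # weighted mean of the anchors

def radviz_reward(X, labels, order):
    """Reward = DB index of the original points / DB index of the RadViz-projected points."""
    return davies_bouldin_score(X, labels) / davies_bouldin_score(radviz_project(X, order), labels)

# Usage: radviz_reward(X, labels, order) for X of shape (m, n) and an axis permutation `order`.
```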
  • the steps of training the distance prediction model include:
  • Step 602 Obtain sampling point sets corresponding to two attribute data samples in the attribute data sample set.
  • Step 604 Input the sample point set into the initial distance prediction model to obtain the corresponding predicted value.
  • Step 606 Obtain the supervision value of the distance between the sampling point sets, compare the predicted value with the supervision value, and obtain the corresponding loss value.
  • Step 608 Adjust the parameters of the initial distance prediction model according to the loss value to perform iterative training until the training stop condition is met, and obtain the distance prediction model after training.
  • before the server trains the sorting model with the star chart sample set, the server can first train the distance prediction model with a sample set, using a supervised learning training method.
  • the loss function is the mean squared error between the predicted value and the supervision value, where the supervision value is the ground-truth distance.
  • the server may obtain sampling point sets corresponding to two attribute data samples in the attribute data sample set. The server inputs the sampling point sets corresponding to the two attribute data samples into the initial distance prediction model to obtain the corresponding predicted values. Further, the server obtains the supervised value of the distance between the sampling point sets, and compares the predicted value with the supervised value to obtain the corresponding loss value.
  • the server adjusts the parameters of the initial distance prediction model according to the loss value to perform iterative training until the training stop condition is met, and the trained distance prediction model is obtained.
  • as shown in FIG. 6B, a schematic structural diagram of the shape context distance prediction model in one or more embodiments.
  • the distance prediction model is composed of a recurrent neural network plus two fully connected layers, and finally outputs the predicted distance value through the sigmoid activation layer.
  • the input data is the point set obtained by sampling each of the two shapes.
  • for example, the server can obtain two shape samples in advance and sample 80 points from each shape.
  • the server inputs the 80 sampling points corresponding to each of the two shape samples into the initial distance prediction model and obtains the corresponding predicted shape context distance, i.e. the predicted distance value between the shape context descriptors S_1 and S_2 of the two input shapes; in FIG. 6B, "C" denotes a concatenation operation, "FC" a fully connected layer, RNN a recurrent neural network, and ReLU and Sigmoid activation functions. Estimating the shape context distance with the pre-trained neural network avoids the repeated heavy computation required by the traditional approach and thus effectively improves computational efficiency.
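  • Below is a minimal sketch of the described architecture: a recurrent network that encodes the two sampled point sets, two fully connected layers, and a Sigmoid output producing the predicted distance. The choice of a GRU, the hidden size of 128 and the concatenation of the two encodings are assumptions; the application only states that the model is a recurrent neural network followed by two fully connected layers and a Sigmoid activation.

```python
import torch
import torch.nn as nn

class ShapeContextDistanceNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.fc1 = nn.Linear(2 * hidden, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, pts_a, pts_b):
        # pts_a, pts_b: (batch, 80, 2) point sets sampled from the two shapes
        _, ha = self.rnn(pts_a)                           # final hidden state of shape A
        _, hb = self.rnn(pts_b)                           # final hidden state of shape B
        h = torch.cat([ha[-1], hb[-1]], dim=-1)           # "C": concatenate the two encodings
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(h))))

# Supervised training with the mean squared error against the ground-truth distance:
model = ShapeContextDistanceNet()
pred = model(torch.randn(4, 80, 2), torch.randn(4, 80, 2))    # (4, 1) predicted distances
loss = nn.MSELoss()(pred, torch.rand(4, 1))                   # supervision value = true distance
```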
  • although the steps in the flowcharts of FIGS. 1-6 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-6 may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; the execution order of these sub-steps or stages is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • a product information visualization processing device including: an acquisition module, an extraction module and an identification module, wherein:
  • the obtaining module 702 is used for obtaining product information.
  • the extraction module 704 is configured to extract attribute data sets of multiple dimensions corresponding to the product information.
  • the identification module 706 is used to input the attribute data sets of multiple dimensions into the pre-trained sorting model and use the sorting model to identify the attribute data set of each dimension until the sorting results corresponding to the multiple dimensions are output according to the preset number of attribute dimensions.
  • the apparatus further includes: a calculation module, a selection module and an input module.
  • the calculation module is used to calculate the cluster center corresponding to each category of attribute data in the attribute data set through the encoder, and obtain the corresponding dimension attribute data to be selected.
  • the selection module is used to calculate the probability of the attribute data of the dimension to be selected by using the attention mechanism, and select the attribute data of the target dimension corresponding to the maximum probability in the attribute data of the dimension to be selected as the input data of the decoder.
  • the input module inputs the target dimension attribute data into the decoder, outputs the sorting result of the dimension corresponding to the target dimension attribute data, and sets the probability corresponding to the target dimension attribute data to zero.
  • the calculation module is further configured to use the attention mechanism to calculate the effective probability maps corresponding to the attribute data sets of multiple dimensions.
  • the selection module is also used to select the target dimension attribute data corresponding to the ordinate of the maximum probability in the probability map as the input data of the decoder.
  • the apparatus further includes: a training module.
  • the input module is also used to input a sample set of attribute data into the initial ranking model.
  • the obtaining module is further configured to obtain the first function corresponding to the attribute data sample set, use the first function as the objective function, and determine the loss value based on the objective function, wherein the first function is calculated from the predicted distance values output by the distance prediction model and is used to evaluate a global metric of the multi-dimensional data set.
  • the training module is used for iterative training by adjusting the parameters of the initial sorting model according to the loss value, until the determined loss value reaches the training stop condition, and the trained sorting model is obtained.
  • the identification module is further configured to input the star chart sets of multiple dimensions into the pre-trained sorting model and use the sorting model to identify the star chart of each dimension until the sorting results corresponding to the multiple dimensions of the star chart set are output according to the preset number of attribute dimensions.
  • the apparatus further includes: a determining module.
  • the input module is also used to input the scatter plot set into the initial sorting model.
  • the determining module is configured to use the second function as the objective function, and determine the loss value based on the objective function, wherein the second function is used to evaluate the global index of the scatter plot.
  • the training module is also used for iterative training by adjusting the parameters of the initial sorting model according to the loss value, until the training stop condition is met, and the trained sorting model is obtained.
  • the apparatus further includes: a comparison module.
  • the acquisition module is also used for acquiring the sampling point sets corresponding to the two attribute data samples in the attribute data sample set; the input module is used for inputting the sampling point sets into the initial distance prediction model to obtain corresponding predicted values.
  • the comparison module is used to obtain the supervision value of the distance between the sampling point sets, compare the predicted value with the supervision value, and obtain the corresponding loss value.
  • the training module is also used for iterative training by adjusting the parameters of the initial distance prediction model according to the loss value, until the training stop condition is met, and the trained distance prediction model is obtained.
  • each module in the above-mentioned product information visualization processing device may be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided, and the computer device may be a server, and its internal structure diagram may be as shown in FIG. 8 .
  • the computer device includes a processor, memory, and a network interface connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium, an internal memory.
  • the nonvolatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used for storing product information visualization processing data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer program is executed by the processor, a product information visualization processing method is realized.
  • those skilled in the art can understand that FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device includes a memory and one or more processors, the memory storing computer-readable instructions (one or more non-volatile storage media storing the computer-readable instructions); when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to implement the steps of the product information visualization processing method provided in any one of the embodiments of the present application.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)

Abstract

一种产品信息可视化处理方法包括：获取产品信息；提取产品信息对应的多个维度的属性数据集；将多个维度的属性数据集输入预先训练好的排序模型中，利用排序模型对每个维度的属性数据集进行识别，直至按照预设的属性维度数量输出多个维度对应的排序结果。
A product information visualization processing method includes: obtaining product information; extracting attribute data sets of multiple dimensions corresponding to the product information; and inputting the attribute data sets of the multiple dimensions into a pre-trained sorting model, and using the sorting model to identify the attribute data set of each dimension until sorting results corresponding to the multiple dimensions are output according to a preset number of attribute dimensions.

Description

产品信息可视化处理方法、装置、计算机设备
相关申请的交叉引用
本申请要求于2020年08月24日提交中国专利局,申请号为2020108568456,申请名称为“产品信息可视化处理方法、装置、计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种产品信息可视化处理方法、装置、计算机设备和存储介质。
背景技术
随着计算机技术的发展,5G时代的来临,各种产品信息呈现出海量增长的模式。在云计算网络中,大数据是云计算的基础和核心技术,在大数据中存在大量的高维数据,对高维数据进行可视化处理可以更加快速准确的掌握复杂多变的产品信息中蕴含的深层信息。
然而,当前人类认知能力具有一定的局限性,传统的多种产品信息处理方式中,无法对不同类别产品的多个维度信息进行有效的可视化差异评估,比如,如何对不同类别汽车的多个维度的属性信息进行有效的可视化差异评估,即无法给用户提供一个直观的全局视角的产品多维信息的视觉结构差异,因此,如何有效的对产品信息中包含的多维数据进行可视化处理成为了当前亟待解决的主要问题。
发明内容
根据本申请公开的各种实施例,提供一种产品信息可视化处理方法、装置、计算机设备和存储介质。
一种产品信息可视化处理方法,包括:
获取产品信息;提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
一种产品信息可视化处理装置,包括:
获取模块,用于获取产品信息;提取模块,用于提取所述产品信息对应的多个维度的属性数据集;及识别模块,用于将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
一种计算机设备,包括存储器和一个或多个处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行以下步骤:获取产品信息;
提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
一个或多个存储有计算机可读指令的计算机存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:获取产品信息;
提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1为一个实施例中产品信息可视化处理方法的应用环境图。
图2为一个实施例中产品信息可视化处理方法的流程示意图。
图3A为一个实施例中利用排序模型对每个维度的属性数据集进行识别步骤的流程示意图。
图3B为一个实施例中排序模型的网络结构示意图。
图4为一个实施例中对排序模型进行训练步骤的流程示意图。
图5为另一个实施例中对排序模型进行训练步骤的流程示意图。
图6A为一个实施例中对距离预测模型的训练步骤的流程示意图。
图6B为一个实施例中距离预测模型的网络结构示意图。
图7为一个实施例中产品信息可视化处理装置的结构框图。
图8为一个实施例中计算机设备的内部结构图。
具体实施方式
为了使本申请的技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例中所提供的产品信息可视化处理方法可以应用于如图1所示的应用环境中,终端102通过网络与服务器104通过网络进行通信。终端102向服务器104发送产品信息获取请求,服务器104根据接收的产品信息获取请求,查询对应的多个维度产品信息并将对应的排序结果返回至终端102。服务器104获取产品信息,服务器104提取产品信息对应的多个维度的属性数据集,服务器104将多个维度的属性数据集输入预先训练的排序模型中,服务器104利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量 输出多个维度对应的排序结果。终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
下述实施方式以产品信息可视化处理方法应用于图1的服务器为例进行说明,但需要说明的是,实际应用中该方法并不仅限应用于上述服务器。
如图2所示,在其中一个实施例中的产品信息可视化处理方法的流程图,该方法具体包括以下步骤:
步骤202,获取产品信息。
服务器可以接收不同终端发送的产品信息查询指令,产品信息查询指令可以来自用户或工作人员等。服务器可以在不同场景下,通过不同的方式接收信息查询指令。例如,服务器可以通过检测到用户或工作人员的手势动作,接收产品信息查询指令。服务器也可以通过检测用户或工作人员的语音指令,接收产品信息查询指令。具体的,当服务器检测到用户或工作人员发送的产品信息查询指令时,服务器可以根据接收的产品信息查询指令,从数据库中获取对应的产品信息。产品信息是指与产品有关的消息、情报、数据等。产品是指一种作为商品提供给市场,能满足人们某种需求的任何东西,包括有形的物品、无形的服务、组织、观念或它们的组合。例如,产品信息中可以包含多种类型产品信息,比如:服饰、食品、汽车等,针对不同的产品信息,服务器集群可以根据产品信息查询指令,对不同类型的产品资源信息进行更加细化的锁定。
步骤204,提取产品信息对应的多个维度的属性数据集。
服务器获取产品信息之后,服务器提取产品信息对应的多个维度的属性数据集。属性数据集是指与产品对应的多个维度属性数据的集合,即产品属性数据的集合,产品属性数据可以包括产品的性能数据,比如强度、硬度、安全性等。例如,当用户发送的产品信息查询指令中的产品信息为汽车时,则服务器获取多种类型的汽车信息之后,服务器提取多种类型汽车信息对应的多个维度的属性数据集。比如,汽车的性能数据,汽车的性能数据可以包括动多个维度的性能数据,比如动力性、燃油经济性、制动性、操纵稳定性、行驶平顺性、 排放污染及噪声等。每个维度对应的性能数据集中可以包括多个数据,比如,汽车的动力性维度的性能数据集中可以包括以下三个关键指标:(1)汽车的最高车速参数;(2)汽车的加速能力参数;(3)汽车的爬坡能力。
步骤206,将多个维度的属性数据集输入预先训练的排序模型中,利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
当服务器提取产品信息对应的多个维度的属性数据集之后,服务器可以将多个维度的属性数据集输入预先训练的排序模型中,利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。预先训练的排序模型是指预先利用高维数据样本集对神经网络模型进行训练,直至满足训练停止条件时,得到训练完成的神经网络模型。排序结果是指根据产品每个维度的属性进行排序,得到的多个维度属性数据对应排序结果。具体的,服务器将与产品信息对应的多个维度的属性数据集输入预先训练的排序模型中,利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。例如,服务器将与不同类型的汽车信息对应的多个维度属性数据集,输入预先训练的排序模型中,利用排序模型对汽车每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出不同类型汽车多个维度对应的性能排序结果。
本实施例中,当需要对不同类别产品所包含的多个维度信息进行有效的可视化差异评估时,服务器通过获取产品信息,提取产品信息对应的多个维度的属性数据集,并将多个维度的属性数据集输入预先训练的排序模型中,服务器利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。相对于传统的多种产品信息处理方式,基于强化学习预先训练好的神经网络模型,能够快速有效的处理不同数据量、维度数和类别量的数据,确保输出的多个维度的排序结果,能够给用户提供一个直观的全局视角的产品多维信息的视觉结构差异,解决了对不同类型产品信息中包含的多维数据进行可视化处理的问题,即使当大数据中存在大量的多维数据时,也能够确保排序后输出的多维信息有更好的排序效果,使得用户能够更直 观的区分不同类别数据的特征。
在其中一个实施例中,如图3A所示,利用排序模型对每个维度的属性数据集进行识别的步骤,包括:
步骤302,通过编码器计算属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据。
步骤304,利用注意力机制计算待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据。
步骤306,将目标维度属性数据输入解码器,输出与目标维度属性数据对应维度的排序结果,并将目标维度属性数据对应的概率设置为零。
服务器将与产品信息对应的多个维度的属性数据集,输入预先训练的排序模型之后,服务器利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。具体的,如图3B所示,为根据一个或多个实施例排序模型的网络结构示意图,其中RNN、RNN 1、RNN 2均表示循环神经网络。X 1~X n表示输入的n个维度的数据。X 1为通过注意力机制筛选出的最大概率对应的目标维度数据。y t=X 1为解码器根据目标维度X 1输出的当前步骤时刻t对应的最优维度y t。ω t表示当前步骤t时刻由RNN计算得到的特征数据。ω t+1表示下一个步骤t+1时刻由RNN计算得到的特征数据。ω t-1表示前一个步骤t-1时刻由RNN计算得到的特征数据。y t-1为解码器根据前一个目标维度输出的前一个步骤时刻t-1对应的最优维度y t-1。解码器根据当前步骤中的输入数据,计算得到的下一个步骤t+1时刻的对应的最优维度y t+1,该循环过程反复执行,直到得到了n个维度对应的序列。如图3B所示的例子中,第一个维度的数据x 1被选择为输出。一旦选择了y t,就意味着x 1对应的坐标轴已不再是有效的选项,通过采用蒙版机制,将无效的状态选项的对数概率设置为负无穷。排序模型将y t用作为解码器的输入数据,计算下一个步骤t+1的最优维度y t+1。该过程反复执行,直到获得了n个坐标轴的序列。该排序模型建立在编码器-解码器体系结构上,编码器使用循环神经网络对输入的多维数据点和类别标签信息进行编码。解码器也是由循环神经网络构成,解码器以当前利用注意力机制选取的最大概率对应的维度属性数据作为输入数据进行计算,输出下一 个维度对应的排序结果。服务器通过解码器将目标维度属性数据对应的概率设置为零,目的是为了使下一个循环计算过程中,排除已经输出的维度属性数据,对剩下的还未排序的维度进行计算。编码器和解码器共同决定了一组最佳坐标轴的序列
当输入的是n维的数据时,该排序模型会重复执行n次,每次输出其中一个维度下标,最终的输出数据的长度也是n。
具体的,服务器将与产品信息对应的多个维度的属性数据集输入预先训练的排序模型中之后,服务器通过编码器计算属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据。进一步的,服务器利用注意力机制计算待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据,作为解码器的输入数据。服务器将目标维度属性数据输入解码器,输出与目标维度属性数据对应维度的排序结果,并将目标维度属性数据对应的概率设置为零。例如,服务器将与汽车信息对应的多个维度的属性数据集输入预先训练的排序模型之后,服务器通过编码器计算不同类型汽车属性数据集中每个维度属性数据对应的聚类中心,得到对应的待选维度属性数据,比如:第一维度即X 1为动力性维度的属性数据、第二维度即X 2为燃油经济性维度的属性数据以及第三维度即X 3为制动性维度的属性数据等。进一步的,服务器利用注意力机制计算每个待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据,即选取动力性维度数据作为目标维度属性数,服务器将动力性维度数据作为解码器的输入数据。服务器将动力性维度数据输入解码器,输出与动力性维度对应的排序结果,并将动力性维度属性数据对应的概率设置为零,循环执行上述步骤,直至输出不同类型汽车对应的每个维度的排序结果。
本实施例中,实现了将循环网络神经排序模型循环执行n次,每一次输出一个维度的数据,每次输出的维度数据是从未输出的数据中选择的,并通过注意力机制计算每个待选维度数据的概率,选取概率最大的一个维度数据作为输出,从而实现了n次循环后,得到的维度序列具有更好的可视化效果。
在其中一个实施例中,利用注意力机制计算待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入 数据的步骤,包括:
利用注意力机制计算多个维度的属性数据集对应的有效的概率图,概率图的纵坐标用于表示概率大小,概率图的横坐标用于表示维度。
选取概率图中最大概率纵坐标对应的目标维度属性数据作为解码器的输入数据。
服务器通过编码器计算属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据之后,服务器利用注意力机制计算待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据,作为解码器的输入数据。具体的,如图3B所示,为一个或多个实施例排序模型的网络结构示意图,服务器利用注意力机制计算多个维度的属性数据集对应的有效的概率图,概率图的纵坐标用于表示概率大小,概率图的横坐标用于表示维度。服务器选取概率图中最大概率纵坐标对应的目标维度属性数据,作为解码器的输入数据。例如,当输入的数据为m个n维数据点集{p i},编码器模块的输入表示为X={x i},其中,x i=[p i,c i]∈R m×2,表示m个数据点{p i}的第i个坐标轴的数值与这些数据点对应的类别信息C i,通过编码器计算K个类别的数据各自在n维空间上的聚类中心,每个数据点对应着它所在类的聚类中心,由此得到一个矩阵C∈R m×n,C i就是矩阵C的第i列数据。解码器将通过注意力机制选择的维度数据y t-1作为输入,在此基础上计算出下一个坐标轴。对于每个步骤t,使用注意力机制积累计算直到步骤t-1的信息,并输出所有有效维度的概率图,其中,拥有最大概率的维度会被选中作为输出y t。有效的概率图是指概率不为零的有效概率值,通过采用蒙板机制,记录已输出维度对应的下标数据,将无效的维度选项对应的对数概率设置为负无穷,即将其概率设置为零,能够保证循环神经网络模型不会输出重复的数据,从而提高输出排序结果的效率。
在其中一个实施例中,如图4所示,对排序模型进行训练的步骤,包括:
步骤402,将属性数据样本集输入初始排序模型中。
步骤404,获取属性数据样本集对应的第一函数,将第一函数作为目标函数,基于目标函数确定损失值。第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标。
步骤406,根据损失值调节初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。
在服务器根据用户发送的信息查询指令,获取对应的产品信息之前,服务器可以预先对排序模型进行训练。具体的,服务器可以将与产品信息对应的属性数据样本集输入初始排序模型中。服务器获取属性数据样本集对应的第一函数,将第一函数作为目标函数,基于目标函数确定损失值。第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标。服务器根据损失值调节初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。例如,服务器可以将与产品信息对应的属性数据样本集输入初始排序模型中,该属性数据样本集可以为多个维度的属性数据的集合,比如,星状图样本集。在信息可视化领域中,星状图作为一种高维数据的可视化方法,星状图的每个坐标轴对应着一个维度的数据。因此,服务器可以利用星状图样本集对排序模型进行训练。具体的,服务器可以将包含多个维度属性数据的星状图样本集,输入初始排序模型中,服务器获取星状图样本集对应的第一函数,将第一函数作为目标函数,基于目标函数确定损失值。第一函数可以是轮廓系数函数,定义为每个类别所有星状图形状的平均轮廓值中最大的数值,计算公式如下:
SC = max_k ( (1/|C_k|) Σ_{S_i∈C_k} S_i )
其中，SC表示轮廓系数；(1/|C_k|) Σ_{S_i∈C_k} S_i 表示类别k的形状的平均轮廓值；
对于带有不同类别标签的形状集合,我们定义了一个轮廓值S i来衡量形状S i与它所属的类的其他形状与其他类的形状的相似性,轮廓值S i计算公式如下:
S_i = (b_i - a_i) / max(a_i, b_i)
其中,a i是指形状S i与其他同一个类的形状的平均距离;b i是指形状S i与不同个类的所有形状的最小距离,计算公式如下:
a_i = (1/(|C_i|-1)) Σ_{S_j∈C_i, j≠i} d(S_i, S_j)
b_i = min_{C_k≠C_i} (1/|C_k|) Σ_{S_j∈C_k} d(S_i, S_j)
其中,C i表示S i所在的类;
服务器将轮廓系数函数作为目标函数,基于目标函数确定损失值。轮廓系数函数是根据距离预测模型输出的预测距离值计算生成的,用于评估星状图集的全局指标。进一步的,服务器根据确定的损失值,调节初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型,为了训练上述坐标轴排序网络,通过采用梯度策略,即采用强化学习的神经网络训练方式,为了衡量坐标轴排序后的星状图集视觉效果,将奖励函数定义为星状图集的轮廓系数SC,即轮廓系数SC越大,说明排序后的可视化效果越好,轮廓系数SC是结合预先训练的形状上下文距离预测模型,计算得到,以提高排序网络的训练效率。即服务器将轮廓系数函数作为目标函数,将星状图集的轮廓系数SC作为损失值,当损失值即轮廓系数SC对应的斜率趋于零时,即轮廓系数SC不再变化时,则停止训练,得到训练完成的排序模型,由此使得,通过将轮廓系数作为评估排序效果的指标,对神经网络排序模型进行强化学习训练,使得排序后绘制的星状图集可以让用户更好地区分不同类别的数据,可以处理不同数据量、维度数和类别量的数据,同时具有更好的排序效果。
在其中一个实施例中,属性数据集包括星状图集,将多个维度的星状图集输入预先训练的排序模型中,利用排序模型对每个维度的星状图集进行识别,直至按照预设的属性维度数量输出星状图集的多个维度对应的排序结果。
具体的,服务器预先利用星状图样本集对排序模型进行训练,得到完成训练的排序模型之后,服务器可以将包含多个维度数据的星状图集输入预先训练的排序模型中,利用排序模型对每个维度的星状图集进行识别,直至按照预设 的属性维度数量输出星状图集对应的排序结果,相对于初始输入的星状图集,通过利用预先训练的排序模型对星状图集进行识别之后,优化后的坐标轴排序的平均轮廓系数的数值高于初始的坐标轴排序的平均轮廓系数的数值,因此能够实现更好的排序效果,使得通过神经网络模型能够解决高维数据可视化中坐标轴排序的问题,从而解决了对不同类型产品信息中包含的多维数据进行可视化处理的问题,即使当大数据中存在大量的多维数据时,也能够确保排序后输出的多维信息有更好的排序效果,使得用户能够更直观的区分不同类别数据的特征。
在其中一个实施例中,如图5所示,对排序模型进行训练的步骤,包括:
步骤502,将散点图集输入初始排序模型中。
步骤504,将第二函数作为目标函数,基于目标函数确定损失值。其中,第二函数用于评估散点图的全局指标。
步骤506,根据损失值调节初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。
在服务器根据用户发送的信息查询指获取对应的产品信息之前,服务器可以预先对排序模型进行训练。具体的,服务器可以将散点图集输入初始排序模型中。服务器可以将第二函数作为目标函数,基于目标函数确定损失值。第二函数用于评估散点图的全局指标。进一步的,服务器根据损失值调节初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。例如,在信息可视化领域中,散点图也作为一种高维数据的可视化方法使用。在高维数据散点图中,径向坐标可视化(简称RadViz)是一种可视化高维数据的散点图,与星状图的坐标轴排序问题类似,径向坐标可视化也需要先定义好评估指标,再使用算法去优化排序,因此,将RadViz目标函数作为奖励函数训练网络,该奖励函数定义为原始数据点与映射到二维平面后的点的Davies-Bouldin指标的比值,该数值越大表示RadViz的可视化效果越好,由此使得能够有效解决径向坐标可视化的坐标轴排序问题。
在其中一个实施例中,如图6A所示,对距离预测模型的训练的步骤,包括:
步骤602,获取属性数据样本集中两个属性数据样本对应的采样点集。
步骤604,将采样点集输入初始距离预测模型中,得到对应的预测值。
步骤606,获取采样点集之间的距离的监督值,将预测值与监督值进行比较,得到对应的损失值。
步骤608,根据损失值调节初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。
服务器利用星状图样本集对排序模型进行训练之前,服务器可以利用样本集先对距离预测模型进行训练,通过采用监督学习的训练方法训练该距离预测模型,其损失函数为预测值与监督值的均方误差,监督值即为真实值。具体的,服务器可以获取属性数据样本集中两个属性数据样本对应的采样点集。服务器将两个属性数据样本对应的采样点集输入初始距离预测模型中,得到对应的预测值。进一步的,服务器获取采样点集之间的距离的监督值,并将预测值与监督值进行比较,得到对应的损失值。服务器根据损失值调节初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。如图6B所示,为一个或多个实施例形状上下文距离的预测模型结构示意图。该距离预测模型是由一个循环神经网络加上两层全连接层,最后经过Sigmoid激活层输出预测的距离数值,输入的数据为两个形状各自采样得到的点集。例如,服务器可以预先获取两个形状样本,并获取每个形状对应的80个采样点。服务器将两个形状样本对应的80个采样点输入初始距离预测模型中,得到对应的形状上下文距离预测值。用于预测输入的两个形状的形状上下文描述子S 1和S 2的距离数值,如图6B所示,其中“C”表示数据的串联操作,“FC”表示全连接层,RNN表示循环神经网络;ReLU表示激活函数;Sigmoid表示激活函数。由此使得,通过预先训练好的神经网络模型估算形状上下文距离,输出预测的形状上下文距离,避免了传统方式中需要反复进行大量的计算,从而有效的提高了计算效率。
应该理解的是,虽然图1-6的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图1-6中的至少一部分步骤可以包括多个步骤或者多个阶段,这 些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图7所示,提供了一种产品信息可视化处理装置,包括:获取模块、提取模块和识别模块,其中:
获取模块702,用于获取产品信息。
提取模块704,用于提取产品信息对应的多个维度的属性数据集。
识别模块706,用于将多个维度的属性数据集输入预先训练好的排序模型中,利用排序模型对每个维度的属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
在一个实施例中,该装置还包括:计算模块、选取模块和输入模块。
计算模块用于通过编码器计算属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据。选取模块用于利用注意力机制计算待选维度属性数据的概率,并选取待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据。输入模块将目标维度属性数据输入解码器,输出与目标维度属性数据对应维度的排序结果,并将目标维度属性数据对应的概率设置为零。
在一个实施例中,计算模块还用于利用注意力机制计算多个维度的属性数据集对应的有效的概率图。选取模块还用于选取概率图中最大概率纵坐标对应的目标维度属性数据作为解码器的输入数据。
在一个实施例中,该装置还包括:训练模块。
输入模块还用于将属性数据样本集输入初始排序模型中。获取模块还用于获取属性数据样本集对应的第一函数,将第一函数作为目标函数,基于目标函数确定损失值,其中,第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标。训练模块用于根据损失值调节初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。
在一个实施例中,识别模块还用于将多个维度的星状图集输入预先训练好 的排序模型中,利用排序模型对每个维度的星状图集进行识别,直至按照预设的属性维度数量输出星状图集的多个维度对应的排序结果。
在一个实施例中,该装置还包括:确定模块。
输入模块还用于将散点图集输入初始排序模型中。确定模块用于将第二函数作为目标函数,基于目标函数确定损失值,其中,第二函数用于评估散点图的全局指标。训练模块还用于根据损失值调节初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。
在一个实施例中,该装置还包括:比较模块。
获取模块还用于获取属性数据样本集中两个属性数据样本对应的采样点集;输入模块用于将采样点集输入初始距离预测模型中,得到对应的预测值。比较模块用于获取采样点集之间的距离的监督值,将预测值与监督值进行比较,得到对应的损失值。训练模块还用于根据损失值调节初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。
关于产品信息可视化处理装置的具体限定可以参见上文中对于产品信息可视化处理方法的限定,在此不再赘述。上述产品信息可视化处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图8所示。该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储产品信息可视化处理数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种产品信息可视化处理方法。
本领域技术人员可以理解,图8中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定, 具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
一种计算机设备,包括存储器和一个或多个处理器,存储器中储存有计算机可读指令,一个或多个存储有计算机可读指令的非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器实现本申请任意一个实施例中提供的产品信息可视化处理方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种产品信息可视化处理方法,包括:
    获取产品信息;提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
  2. 根据权利要求1所述的方法,其中,所述利用所述排序模型对每个维度的所述属性数据集进行识别,包括:
    通过编码器计算所述属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据;利用注意力机制计算所述待选维度属性数据的概率,并选取所述待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据;及将所述目标维度属性数据输入解码器,输出与所述目标维度属性数据对应维度的排序结果,并将所述目标维度属性数据对应的概率设置为零。
  3. 根据权利要求2所述的方法,其中,所述利用注意力机制计算所述待选维度属性数据的概率,并选取所述待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据,包括:
    利用注意力机制计算多个维度的所述属性数据集对应的有效的概率图;所述概率图的纵坐标用于表示概率大小,所述概率图的横坐标用于表示维度;及选取所述概率图中最大概率纵坐标对应的目标维度属性数据作为解码器的输入数据。
  4. 根据权利要求1所述的方法,其中,所述排序模型的训练方式,包括:
    将属性数据样本集输入初始排序模型中;获取所述属性数据样本集对应的第一函数,将所述第一函数作为目标函数,基于所述目标函数确定损失值;其中,所述第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。
  5. 根据权利要求1所述的方法,其中,所述属性数据集包括星状图集;
    将多个维度的所述星状图集输入预先训练好的排序模型中,利用所述排序模型对每个维度的所述星状图集进行识别,直至按照预设的属性维度数量输出所述星状图集的多个维度对应的排序结果。
  6. 根据权利要求4所述的方法,其中,所述属性数据样本集包括散点图集;
    将所述散点图集输入初始排序模型中;将第二函数作为目标函数,基于所述目标函数确定损失值;其中,所述第二函数用于评估散点图的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。
  7. 根据权利要求4所述的方法,其中,所述距离预测模型的训练方式,包括:
    获取所述属性数据样本集中两个属性数据样本对应的采样点集;将所述采样点集输入初始距离预测模型中,得到对应的预测值;获取所述采样点集之间的距离的监督值,将所述预测值与所述监督值进行比较,得到对应的损失值;及根据所述损失值调节所述初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。
  8. 一种产品信息可视化处理装置,包括:
    获取模块,用于获取产品信息;提取模块,用于提取所述产品信息对应的多个维度的属性数据集;及识别模块,用于将多个维度的所述属性数据集输入预先训练好的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
  9. 一种计算机设备,包括存储器及一个或多个处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    获取产品信息;提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
  10. 根据权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还执行以下步骤:
    通过编码器计算所述属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据;利用注意力机制计算所述待选维度属性数据的概率,并选取所述待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据;及将所述目标维度属性数据输入解码器,输出与所述目标维度属性数据对应维度的排序结果,并将所述目标维度属性数据对应的概率设置为零。
  11. 根据权利要求10所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还执行以下步骤:
    利用注意力机制计算多个维度的所述属性数据集对应的有效的概率图;所述概率图的纵坐标用于表示概率大小,所述概率图的横坐标用于表示维度;及选取所述概率图中最大概率纵坐标对应的目标维度属性数据作为解码器的输入数据。
  12. 根据权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还执行以下步骤:
    将属性数据样本集输入初始排序模型中;获取所述属性数据样本集对应的第一函数,将所述第一函数作为目标函数,基于所述目标函数确定损失值;其中,所述第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。
  13. 根据权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还执行以下步骤:
    将多个维度的所述星状图集输入预先训练好的排序模型中,利用所述排序模型对每个维度的所述星状图集进行识别,直至按照预设的属性维度数量输出所述星状图集的多个维度对应的排序结果。
  14. 根据权利要求12所述的计算机设备,其中,所述处理器执行所述计算 机可读指令时还执行以下步骤:
    将所述散点图集输入初始排序模型中;将第二函数作为目标函数,基于所述目标函数确定损失值;其中,所述第二函数用于评估散点图的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。
  15. 根据权利要求12所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还执行以下步骤:
    获取所述属性数据样本集中两个属性数据样本对应的采样点集;将所述采样点集输入初始距离预测模型中,得到对应的预测值;获取所述采样点集之间的距离的监督值,将所述预测值与所述监督值进行比较,得到对应的损失值;及根据所述损失值调节所述初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。
  16. 一个或多个存储有计算机可读指令的计算机存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    获取产品信息;提取所述产品信息对应的多个维度的属性数据集;及将多个维度的所述属性数据集输入预先训练的排序模型中,利用所述排序模型对每个维度的所述属性数据集进行识别,直至按照预设的属性维度数量输出多个维度对应的排序结果。
  17. 根据权利要求16所述的存储介质,其中,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    通过编码器计算所述属性数据集中每个类别属性数据对应的聚类中心,得到对应的待选维度属性数据;利用注意力机制计算所述待选维度属性数据的概率,并选取所述待选维度属性数据中最大概率对应的目标维度属性数据作为解码器的输入数据;及将所述目标维度属性数据输入解码器,输出与所述目标维度属性数据对应维度的排序结果,并将所述目标维度属性数据对应的概率设置为零。
  18. 根据权利要求16所述的存储介质,其中,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    将属性数据样本集输入初始排序模型中;获取所述属性数据样本集对应的第一函数,将所述第一函数作为目标函数,基于所述目标函数确定损失值;其中,所述第一函数是根据距离预测模型输出的预测距离值计算生成的,用于评估多维数据集的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到所确定的损失值达到训练停止条件,得到训练完成的排序模型。
  19. 根据权利要求18所述的存储介质,其中,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    将所述散点图集输入初始排序模型中;将第二函数作为目标函数,基于所述目标函数确定损失值;其中,所述第二函数用于评估散点图的全局指标;及根据所述损失值调节所述初始排序模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的排序模型。
  20. 根据权利要求18所述的存储介质,其中,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    获取所述属性数据样本集中两个属性数据样本对应的采样点集;将所述采样点集输入初始距离预测模型中,得到对应的预测值;获取所述采样点集之间的距离的监督值,将所述预测值与所述监督值进行比较,得到对应的损失值;及根据所述损失值调节所述初始距离预测模型的参数进行迭代训练,直到满足训练停止条件,得到训练完成的距离预测模型。
PCT/CN2020/111320 2020-08-24 2020-08-26 产品信息可视化处理方法、装置、计算机设备 WO2022040972A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/997,491 US20230162254A1 (en) 2020-08-24 2020-08-26 Product information visualization processing method and apparatus, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010856845.6 2020-08-24
CN202010856845.6A CN112287014A (zh) 2020-08-24 2020-08-24 产品信息可视化处理方法、装置、计算机设备

Publications (1)

Publication Number Publication Date
WO2022040972A1 true WO2022040972A1 (zh) 2022-03-03

Family

ID=74420866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111320 WO2022040972A1 (zh) 2020-08-24 2020-08-26 产品信息可视化处理方法、装置、计算机设备

Country Status (3)

Country Link
US (1) US20230162254A1 (zh)
CN (1) CN112287014A (zh)
WO (1) WO2022040972A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191840A (zh) * 2021-04-25 2021-07-30 北京沃东天骏信息技术有限公司 物品信息显示方法、装置、电子设备和计算机可读介质
CN117453148B (zh) * 2023-12-22 2024-04-02 柏科数据技术(深圳)股份有限公司 基于神经网络的数据平衡方法、装置、终端及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299344A (zh) * 2018-10-26 2019-02-01 Oppo广东移动通信有限公司 排序模型的生成方法、搜索结果的排序方法、装置及设备
CN110007989A (zh) * 2018-12-13 2019-07-12 国网信通亿力科技有限责任公司 数据可视化平台系统
CN110704544A (zh) * 2019-08-20 2020-01-17 中国平安财产保险股份有限公司 一种基于大数据的对象处理方法、装置、设备及存储介质
CN110737805A (zh) * 2019-10-18 2020-01-31 网易(杭州)网络有限公司 图模型数据的处理方法、装置和终端设备
CN111160489A (zh) * 2020-01-02 2020-05-15 中冶赛迪重庆信息技术有限公司 基于大数据的多维对标分析服务器、系统、方法及电子终端

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116434146A (zh) * 2023-04-21 2023-07-14 河北信服科技有限公司 一种三维可视化综合管理平台
CN116434146B (zh) * 2023-04-21 2023-11-03 河北信服科技有限公司 一种三维可视化综合管理平台
CN117527449A (zh) * 2024-01-05 2024-02-06 之江实验室 一种入侵检测方法、装置、电子设备以及存储介质

Also Published As

Publication number Publication date
CN112287014A (zh) 2021-01-29
US20230162254A1 (en) 2023-05-25

Similar Documents

Publication Publication Date Title
WO2022040972A1 (zh) 产品信息可视化处理方法、装置、计算机设备
CN105320764A (zh) 一种基于增量慢特征的3d模型检索方法及其检索装置
CN112529068B (zh) 一种多视图图像分类方法、系统、计算机设备和存储介质
CN116596095B (zh) 基于机器学习的碳排放量预测模型的训练方法及装置
CN111325237A (zh) 一种基于注意力交互机制的图像识别方法
CN115982597A (zh) 语义相似度模型训练方法及装置、语义匹配方法及装置
CN116662839A (zh) 基于多维智能采集的关联大数据聚类分析方法及装置
Wang et al. AIS ship trajectory clustering based on convolutional auto-encoder
Yanmin et al. Research on ear recognition based on SSD_MobileNet_v1 network
Ma et al. A novel 3D shape recognition method based on double-channel attention residual network
Zhang et al. NAS4FBP: Facial beauty prediction based on neural architecture search
CN116186297A (zh) 一种基于图流形学习的文献关系发现方法及系统
CN115310606A (zh) 基于数据集敏感属性重构的深度学习模型去偏方法及装置
CN116484067A (zh) 目标对象匹配方法、装置及计算机设备
CN111552827B (zh) 标注方法和装置、行为意愿预测模型训练方法和装置
Tan et al. Fuzzy retrieval algorithm for film and television animation resource database based on deep neural network
Zhang et al. Channel compression optimization oriented bus passenger object detection
CN115358379B (zh) 神经网络处理、信息处理方法、装置和计算机设备
Wang et al. An early warning method for abnormal behavior of college students based on multimodal fusion and improved decision tree
Cuzzocrea Multidimensional Clustering over Big Data: Models, Issues, Analysis, Emerging Trends
Xue Comparison of conventional and lightweight convolutional neural networks for Image Classification
CN117910479B (zh) 聚合新闻判断方法、装置、设备及介质
CN116910186B (zh) 一种文本索引模型构建方法、索引方法、系统和终端
CN115345257B (zh) 飞行轨迹分类模型训练方法、分类方法、装置及存储介质
Bi et al. Social Network Recommendation Algorithm based on ICIP

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20950645; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.06.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 20950645; Country of ref document: EP; Kind code of ref document: A1)