CN117668372B - Virtual exhibition system on digital wisdom exhibition line

Publication number: CN117668372B (granted); earlier publication: CN117668372A
Application number: CN202410130817.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 丁浩, 陈海辉, 欧阳明珍, 戴硕
Applicant and current assignee: Jiangsu Jingzhe Yundong Technology Co., Ltd.
Legal status: Active (granted)

Abstract

The invention discloses an online virtual exhibition system for digital intelligent exhibitions, relates to the technical field of intelligent exhibition, and aims to solve the problems of poor online viewing effect and poor audience experience. The invention confirms the residence time on each work, determines the number of exhibition views of a product from the residence time, and obtains the gender and age of the audience from the view count of each work, thereby further improving artists' knowledge of the audience of their works. Exhibition model construction data are propagated forward and backward, so that works exhibited online can be customized and optimized for different application scenarios and requirements. Processing the skewness, definition and distortion of work pictures effectively improves picture quality and corrects picture defects, and compressing the optimized work pictures makes them easier to store, transmit and share.

Description

Virtual exhibition system on digital wisdom exhibition line
Technical Field
The invention relates to the technical field of intelligent exhibition, in particular to a virtual exhibition system on a digital intelligent exhibition line.
Background
An intelligent exhibition uses mobile internet technology to integrate new-generation information technologies such as the Internet of Things and cloud computing.
The Chinese patent with publication number CN117035929A discloses an online exhibition system based on artificial intelligence. During parameter aggregation it transmits only parameters protected by a local differential privacy mechanism and avoids sharing original user data directly, thereby protecting the privacy of different tenant users during cross-tenant collaborative training. A general layer learns cross-tenant common knowledge, and a tenant-exclusive layer is fine-tuned through transfer learning, solving the problem of personalized model construction. The patent also provides a unified representation of user behavior and exhibit characteristics and realizes data fusion across different tenants and different semantics through feature engineering, solving the problem of cross-tenant dataset construction. Although that patent addresses online exhibition, the following problems remain in actual operation:
1. The preference of each user for the works is not further collected in time, so an artist cannot learn the audience of the works promptly.
2. The works are not further modeled, so the text data and picture data of online works cannot be fused well.
3. The works are not given better display optimization, so the viewing experience is poor when users exhibit online.
Disclosure of Invention
The invention aims to provide an online virtual exhibition system for digital intelligent exhibitions. The system confirms the residence time on each work, determines the number of exhibition views of a product from the residence time, and obtains the gender and age of the audience from the view count of each work, further improving artists' knowledge of the audience of their works. Exhibition model construction data are propagated forward and backward, so that works exhibited online can be customized and optimized for different application scenarios and requirements. Processing the skewness, definition and distortion of work pictures effectively improves picture quality and corrects picture defects, and compressing the optimized work pictures makes them easier to store, transmit and share, thereby solving the problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A virtual showcase system on a digital wisdom exhibition line, comprising:
An exhibition data confirmation unit for:
recording online exhibition data and exhibition personnel data, wherein the exhibition data and the exhibition personnel data are respectively and independently stored, and standard exhibition data and standard exhibition personnel data are obtained after independent storage;
the data optimization processing unit is used for:
Based on the standard exhibition data acquired in the exhibition data confirmation unit, respectively extracting the text data and the picture data in the standard exhibition data, respectively carrying out optimization processing on the text data and the picture data, and obtaining the standard text data and the standard picture data after the optimization processing;
An exhibition plan creation unit configured to:
Based on the standard text data and the standard picture data obtained in the data optimization processing unit, importing the standard text data and the standard picture data into a neural network model for model construction, and obtaining exhibition model data after the model construction, wherein the neural network model is called from a model database;
The user interaction communication unit is used for:
Importing the exhibition model data and the standard exhibition staff data into an online exhibition system, confirming and logging by the exhibition staff according to the standard exhibition staff data, and watching and exchanging the exhibition model data after confirmation by the exhibition staff;
the exhibition data statistics unit is used for:
and acquiring the display quantity and the stay time of each display model data, and carrying out preference statistics on the display model data according to the standard exhibitor data in the display quantity and the stay time in each display model data.
Preferably, the exhibition data confirmation unit includes:
The exhibition data confirming module is used for:
the exhibition data comprises exhibition basic information, artist information and work information;
the basic information of the exhibition is title, time and place information of the exhibition; artist information is basic information of artists in the exhibitions; the work information is title, creation time, material and size information of the exhibited work;
Corresponding each artist and corresponding works and exhibition basic information in the exhibition of the artist, and carrying out unique coding marks on each artist and the works and exhibition basic information of the artist to obtain standard exhibition data after the unique coding marks;
the exhibitor data confirmation module is used for:
the exhibition personnel data comprise the exhibition times, residence time and user basic information of the user;
The user inputs own user basic information through the online exhibition terminal, wherein the user basic information comprises the name, the gender and the nationality of the user;
and after the data of the exhibitors are confirmed, standard data of the exhibitors are obtained.
Preferably, the data optimization processing unit includes:
The text data processing module is used for:
confirming the text information of each exhibited work;
after confirming the text information, preprocessing the text information;
the data preprocessing firstly removes irrelevant characters and blank characters in the text information;
Removing the repeated data of the text information after the irrelevant characters and blank characters are removed;
after the repeated data are removed, the text information is subjected to format unification, the morphology in the text information is restored, and then the restored morphology is restored to a text basic format, so that target text data are obtained;
and removing stop words and filter words from the target text data, extracting word features in the target text data after removing the stop words and the filter words, and obtaining standard text data after extracting the word features.
Preferably, the data optimization processing unit further includes:
The picture data processing module is used for:
confirming the picture information of each exhibited work;
Performing skewness, definition and distortion degree processing on the picture data;
firstly, edge information detection is carried out on the picture data in an edge detection mode, and the skewness of the picture data is calculated according to the edge information of the picture data;
When the calculated skewness value is inconsistent with the standard skewness value of the picture data, the calculated skewness value is adjusted to be consistent with the standard skewness value;
And the definition and distortion degree adjustment are sequentially carried out on the picture data with the skew angle adjustment completed;
The definition and distortion degree adjusting process comprises the following steps:
Firstly, carrying out definition detection on the picture data subjected to skewness adjustment, and adjusting the detected definition value to be consistent with the standard definition value when the detected definition value is inconsistent with the standard definition value;
Detecting the distortion degree of the picture data with the definition adjusted, and adjusting the detected distortion degree value to be consistent with the standard distortion degree value when the detected distortion degree value is inconsistent with the standard distortion degree value;
obtaining target picture data after finishing adjustment of the skewness, definition and distortion of the picture;
and carrying out picture compression on the target picture data, unifying the sizes of the pictures, and obtaining standard picture data after unifying the sizes.
Preferably, edge information detection is performed on the picture data by an edge detection method, and the skewness of the picture data is calculated according to the edge information of the picture data, including:
Extracting gray values of all pixel points in the picture data;
Extracting edge information of the picture data according to the gray value of each pixel point and the gray change relation in the surrounding area range;
Refining, denoising and connecting the extracted edge information to obtain the edge profile of the picture data;
Intercepting a target image area in the picture data according to the edge contour;
obtaining the offset amplitude and the direction of each pixel point in the target image area through calculation, wherein the offset amplitude and the direction of the pixel point are obtained through the following formula:
wherein G represents the gradient magnitude; G_x and G_y represent the gray-scale change rate of the pixel point in the x direction and in the y direction, respectively; θ represents the gradient direction; S represents the area of the picture data; S_m represents the area of the target image area; λ represents a gray coefficient; n represents the number of pixels included in the edge contour of the target image area; H_i represents the average value of the x-direction and y-direction gray-scale change rates of the ith pixel point included in the edge contour; H_pi represents the average value of the x-direction and y-direction gray-scale change rates of the non-target-image-area pixel points connected to the ith pixel point included in the edge contour.
Preferably, the exhibition plan creating unit is further configured to:
performing data correspondence on the standard text data and the standard picture data, and obtaining exhibition model construction data after the data correspondence is completed;
importing exhibition model construction data into a neural network model for model operation;
The model operation flow is as follows:
Firstly, forward propagation is carried out on exhibition model construction data, and forward propagation result data are confirmed;
When the forward propagation result does not accord with the standard forward propagation result, the result is propagated in the backward direction;
The forward propagation result that accords with the standard forward propagation result, or the backward propagation result, is used as model construction data;
And carrying out model construction on the model construction data, and obtaining exhibition model data after model construction.
Preferably, the user interaction communication unit is further configured to:
The user logs in through the online showcase terminal, and the online showcase terminal confirms the login permission of the user according to the data of the showcase personnel;
after the login permission is confirmed to be safe, the user performs product exhibition on the online exhibition terminal;
when the user performs product exhibition, the user can comment and share the product.
Preferably, the exhibition data statistics unit is further configured to:
acquiring exhibition data of each product in the exhibition model data;
firstly, confirming the display quantity data of the products, and confirming the stay time of each display according to the display quantity of the products;
Confirming the accurate display quantity of the product according to the residence time of the display;
when the residence time of the product is within a preset time range, the display confirmed by the product is an accurate display quantity;
Acquiring user information of accurate exhibition quantity, and confirming the gender and age layer of the user information;
counting the user population, gender and age of each product;
And obtaining user preference data corresponding to each display product after statistics is completed.
Preferably, the exhibition data statistics unit includes:
the learning subunit is used for acquiring the model type corresponding to each display model data, learning the model type and determining the type feature vector of the model type corresponding to each display model data;
An information matrix construction subunit for:
reading information data of M favorites corresponding to each exhibition model data, determining a plurality of information dimensions corresponding to the information data of each favorites, and simultaneously obtaining information feature vectors corresponding to each information dimension based on the information data;
Constructing an information matrix of corresponding favorites in the current exhibition model data based on the information feature vector corresponding to each information dimension;
A classification subunit for:
Calculating target similarity among information matrixes of M favorites, and acquiring a similarity threshold;
Dividing information matrixes whose target similarity is equal to or greater than the similarity threshold into the same class;
picking, based on the division result, the class with the largest number of objects as a first information matrix set, and taking the remaining information matrixes as a second information matrix set;
the comprehensive recommendation network construction subunit is configured to:
Associating the type feature vector of the model type with a first information matrix set corresponding to the type feature vector of the model type to obtain a first association relation, and associating the type feature vector of the model type with a second information matrix set corresponding to the type feature vector of the model type to obtain a second association relation;
Constructing sub-recommendation networks corresponding to each type of exhibition model data based on the first association relationship and the second association relationship, integrating a plurality of sub-recommendation networks, and constructing an integrated recommendation network for recommending the exhibition model data;
And the recommending subunit is used for transmitting information data of the exhibitors to the comprehensive recommending network when the exhibitors log in information, and outputting recommending exhibition model data based on the comprehensive recommending network.
Preferably, the exhibition data statistics unit further includes performing satisfaction evaluation, specifically:
the evaluation index acquisition subunit is used for acquiring the number of exhibitors of the online virtual exhibition and acquiring evaluation indexes for evaluating satisfaction with the online virtual exhibition;
a first computing subunit for:
obtaining the target scoring value given by each exhibitor of the online virtual exhibition to each evaluation index, and calculating a satisfaction score of the online virtual exhibition according to the target scoring values and the number of exhibitors of the online virtual exhibition;
wherein the satisfaction score formula involves: the satisfaction score of the online virtual exhibition; the total number of evaluation indexes; the serial number i of an evaluation index; the number of exhibitors of the online virtual exhibition; the serial number j of an exhibitor; the target scoring value given by the jth exhibitor to the ith evaluation index; the influence weight of the ith index on the satisfaction score; and an error factor taking a value in (0.01, 0.03);
a second computing subunit for:
obtaining a satisfaction threshold and judging whether the online virtual exhibition is qualified according to the following formula;
wherein the formula yields a qualification evaluation coefficient;
when the qualification evaluation coefficient is equal to or greater than 1, the online virtual exhibition is judged to be qualified;
when the qualification evaluation coefficient is less than 1, the online virtual exhibition is judged to be unqualified, and an alarm operation is performed.
Compared with the prior art, the invention has the following beneficial effects:
1. In the digital intelligent exhibition online virtual exhibition system provided by the invention, the text introduction information of each work is processed by the text data processing module so that works can be searched by keyword, which makes it convenient for users to browse different works. Processing the skewness, definition and distortion of the work pictures effectively improves picture quality and corrects picture defects, and compressing the optimized work pictures makes them easier to store, transmit and share.
2. In the digital intelligent exhibition online virtual exhibition system provided by the invention, the exhibition model construction data are propagated forward and backward, so works exhibited online can be customized and optimized for different application scenarios and requirements. The neural network model also has good parallel computing capability and can perform distributed computation on multiple computing nodes, which accelerates large-scale data processing.
3. In the digital intelligent exhibition online virtual exhibition system provided by the invention, each user is authenticated by the user interaction communication unit, and every work can be commented on and shared during online viewing, which further improves the interest of the online exhibition. The residence time on each work is confirmed, the number of exhibition views of a product is determined from the residence time, and the gender and age of the audience are obtained from the view count of each work, which further improves artists' knowledge of the audience of their works and improves the convenience and completeness of online exhibition.
4. By determining the association relationship between the type feature vector of the model type and the corresponding first information matrix set, the construction of the comprehensive recommendation network is effectively realized and the seemingly discrete information data of the favorites are associated, so that the recommendation standard for the exhibition model data can be clearly described. The effect of each favorite's information data on the recommendation is relatively static, but associating the favorites' information matrices with the type feature vectors of the exhibition model data produces dynamic association features, so recommendation to exhibitors based on the comprehensive recommendation network can be realized effectively, improving recommendation efficiency and intelligence as well as the exhibitors' experience.
5. Whether the online virtual exhibition is qualified can be effectively evaluated by calculating the exhibitors' satisfaction score for it, and an alarm operation is performed when it is unqualified.
Drawings
FIG. 1 is a schematic diagram of an overall display module of the present invention;
fig. 2 is a schematic diagram of the overall exhibition flow of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the problem in the prior art that information about exhibitors and exhibiting artists is collected in too limited a manner, resulting in inaccurate information collection, referring to fig. 1 and 2, this embodiment provides the following technical scheme:
A virtual showcase system on a digital wisdom exhibition line, comprising:
An exhibition data confirmation unit for:
recording online exhibition data and exhibition personnel data, wherein the exhibition data and the exhibition personnel data are respectively and independently stored, and standard exhibition data and standard exhibition personnel data are obtained after independent storage;
the data optimization processing unit is used for:
Based on the standard exhibition data acquired in the exhibition data confirmation unit, respectively extracting the text data and the picture data in the standard exhibition data, respectively carrying out optimization processing on the text data and the picture data, and obtaining the standard text data and the standard picture data after the optimization processing;
An exhibition plan creation unit configured to:
Based on the standard text data and the standard picture data obtained in the data optimization processing unit, importing the standard text data and the standard picture data into a neural network model for model construction, and obtaining exhibition model data after the model construction, wherein the neural network model is called from a model database;
The user interaction communication unit is used for:
Importing the exhibition model data and the standard exhibition staff data into an online exhibition system, confirming and logging by the exhibition staff according to the standard exhibition staff data, and watching and exchanging the exhibition model data after confirmation by the exhibition staff;
the exhibition data statistics unit is used for:
and acquiring the display quantity and the stay time of each display model data, and carrying out preference statistics on the display model data according to the standard exhibitor data in the display quantity and the stay time in each display model data.
Specifically, the exhibition data confirmation unit further improves the comprehensiveness of collecting the exhibited works and makes it convenient for users to view the works during an exhibition. The data optimization processing unit processes the skewness, definition and distortion of the pictures, which effectively improves the quality of the work pictures and corrects their defects. The exhibition plan creation unit builds a neural network model of the online works, which accelerates large-scale data processing and can handle multi-modal data such as the text and images of the exhibited works, so that different types of data are fused with each other and the overall utilization efficiency of the data is improved. The user interaction communication unit allows each work to be commented on and shared, which further improves the interest of the online exhibition. The exhibition data statistics unit confirms the residence time on each work and determines the number of exhibition views of a product from the residence time, improving the convenience and completeness of the online exhibition.
An exhibition data confirmation unit comprising:
The exhibition data confirming module is used for:
the exhibition data comprises exhibition basic information, artist information and work information;
the basic information of the exhibition is title, time and place information of the exhibition; artist information is basic information of artists in the exhibitions; the work information is title, creation time, material and size information of the exhibited work;
And corresponding the works and the exhibition basic information corresponding to each artist and the exhibition of the artist, and carrying out unique coding marks on the works and the exhibition basic information of each artist and the artist, wherein standard exhibition data are obtained after the unique coding marks.
The exhibitor data confirmation module is used for:
the exhibition personnel data comprise the exhibition times, residence time and user basic information of the user;
The user inputs own user basic information through the online exhibition terminal, wherein the user basic information comprises the name, the gender and the nationality of the user;
and after the data of the exhibitors are confirmed, standard data of the exhibitors are obtained.
Specifically, the exhibition data confirmation module first collects the exhibiting artists, the works they exhibit and the detailed information of those works. Collecting this information further improves the comprehensiveness of the collected exhibition works and makes it convenient for users to view the works during an exhibition. The exhibitor data confirmation module records user information: each user registers personal information including name, gender, age, registered account and registered password. The exhibitors' preferences are later collected from the number of views and the residence time during online exhibition, so that the appeal of the works to different groups of people can be better understood.
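As a rough illustration of this confirmation and independent-storage step, the sketch below assigns a unique code to each exhibition record and each exhibitor record and keeps the two collections in separate stores; the class and field names (ExhibitionRecord, ExhibitorRecord and so on) are illustrative assumptions rather than the patented implementation.

import uuid
from dataclasses import dataclass, field

@dataclass
class ExhibitionRecord:
    # work information plus the exhibition basic information it belongs to
    artist_name: str
    work_title: str
    creation_time: str
    material: str
    size: str
    exhibition_title: str
    exhibition_time: str
    exhibition_place: str
    code: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique coding mark

@dataclass
class ExhibitorRecord:
    name: str
    gender: str
    nationality: str
    visit_count: int = 0
    residence_seconds: float = 0.0
    code: str = field(default_factory=lambda: uuid.uuid4().hex)

# Independent storage: two separate collections keyed by the unique codes.
standard_exhibition_data = {}
standard_exhibitor_data = {}

def confirm_exhibition(record: ExhibitionRecord) -> str:
    standard_exhibition_data[record.code] = record
    return record.code

def confirm_exhibitor(record: ExhibitorRecord) -> str:
    standard_exhibitor_data[record.code] = record
    return record.code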
In order to solve the problem that in the prior art, better work display optimization processing is not performed on works, so that the exhibition experience effect is poor when users conduct online exhibition, please refer to fig. 1 and 2, the present embodiment provides the following technical scheme:
a data optimization processing unit comprising:
The text data processing module is used for:
confirming the text information of each exhibited work;
after confirming the text information, preprocessing the text information;
the data preprocessing firstly removes irrelevant characters and blank characters in the text information;
Removing the repeated data of the text information after the irrelevant characters and blank characters are removed;
after the repeated data are removed, the text information is subjected to format unification, the morphology in the text information is restored, and then the restored morphology is restored to a text basic format, so that target text data are obtained;
and removing stop words and filter words from the target text data, extracting word features in the target text data after removing the stop words and the filter words, and obtaining standard text data after extracting the word features.
The picture data processing module is used for:
confirming the picture information of each exhibited work;
Performing skewness, definition and distortion degree processing on the picture data;
firstly, edge information detection is carried out on the picture data in an edge detection mode, and the skewness of the picture data is calculated according to the edge information of the picture data;
When the calculated skewness value is inconsistent with the standard skewness value of the picture data, the calculated skewness value is adjusted to be consistent with the standard skewness value;
And the definition and distortion degree adjustment are sequentially carried out on the picture data with the skew angle adjustment completed;
The definition and distortion degree adjusting process comprises the following steps:
Firstly, carrying out definition detection on the picture data subjected to skewness adjustment, and adjusting the detected definition value to be consistent with the standard definition value when the detected definition value is inconsistent with the standard definition value;
Detecting the distortion degree of the picture data with the definition adjusted, and adjusting the detected distortion degree value to be consistent with the standard distortion degree value when the detected distortion degree value is inconsistent with the standard distortion degree value;
obtaining target picture data after finishing adjustment of the skewness, definition and distortion of the picture;
and carrying out picture compression on the target picture data, unifying the sizes of the pictures, and obtaining standard picture data after unifying the sizes.
Specifically, when the works are initially recorded, the picture information of each work and the corresponding text introduction of each work are recorded. The text data processing module processes the text introduction of each work: irrelevant characters and blank characters are removed, repeated data are removed, word forms are restored to a basic text format, and stop words and filter words are then removed, so that only the keywords of each work are retained. Users can later search works by keyword during online exhibition, which makes it convenient to view different works. The picture data processing module optimizes the picture information of each work: processing the skewness, definition and distortion of a picture effectively improves its quality and corrects its defects, and the work pictures are then compressed and unified in size, which saves storage space and network transmission.
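A minimal sketch of this text-data optimization chain is given below, assuming English-language work descriptions; the stop-word list, the crude word-form restoration and the top-keyword cutoff are illustrative stand-ins for tools the patent does not specify.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on", "is", "was", "by"}  # assumed list

def preprocess_description(text):
    # 1. remove irrelevant characters and collapse blank characters
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"\s+", " ", text).strip().lower()
    # 2. remove repeated data (here, consecutive duplicate tokens)
    tokens = text.split()
    deduped = [t for i, t in enumerate(tokens) if i == 0 or t != tokens[i - 1]]
    # 3. crude word-form restoration to a base form (stand-in for a real lemmatizer)
    restored = [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in deduped]
    # 4. remove stop words / filter words and extract word features (keyword frequencies)
    keywords = [t for t in restored if t not in STOP_WORDS]
    return [w for w, _ in Counter(keywords).most_common(10)]  # standard text data

print(preprocess_description("The exhibited exhibited oil paintings of the artist, painted in 2021!"))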
Specifically, edge information detection is performed on the picture data in an edge detection mode, and the skewness of the picture data is calculated according to the edge information of the picture data, including:
Extracting gray values of all pixel points in the picture data;
Extracting edge information of the picture data according to the gray value of each pixel point and the gray change relation in the surrounding area range;
Refining, denoising and connecting the extracted edge information to obtain the edge profile of the picture data;
Intercepting a target image area in the picture data according to the edge contour;
the offset magnitude and direction of each pixel point in the target image area are obtained by calculation, wherein,
The offset amplitude and the offset direction of the pixel point are obtained through the following formula:
Wherein G represents the gradient magnitude and characterizes the significant change between the gray value of a pixel point and the pixels in its neighborhood; G_x and G_y represent the gray-scale change rate of the pixel point in the x direction and in the y direction, respectively; θ represents the gradient direction and characterizes the edge direction at the pixel point; S represents the area of the picture data; S_m represents the area of the target image area; λ represents a gray coefficient; n represents the number of pixels included in the edge contour of the target image area; H_i represents the average value of the x-direction and y-direction gray-scale change rates of the ith pixel point included in the edge contour; H_pi represents the average value of the x-direction and y-direction gray-scale change rates of the non-target-image-area pixel points connected to the ith pixel point included in the edge contour.
The technical effects of the technical scheme are as follows: the gray values of all pixels are extracted from the input picture data, which is the basic step of edge detection, since edges are typically associated with drastic changes in gray values. And according to the gray value of each pixel point and the gray change relation in the surrounding area range, the system can extract the edge information of the image data. This typically involves calculation of a first or second derivative.
Then, the extracted edge information is subjected to thinning, denoising and connection processing in order to remove noise, smooth an image and connect the broken edges, thereby obtaining a more accurate edge profile. And cutting out the target image area from the original picture data according to the edge contour. This step facilitates subsequent skewness calculations, as skewness is generally associated with a particular image region. The system can then calculate the magnitude and direction of the offset for each pixel point in the target image region. Finally, the system can evaluate the skewness of the image data based on the magnitude and direction of the pixel shift.
On the other hand, the gradient magnitude of each pixel in the image is calculated, which represents the significant change between the gray value of the pixel and its neighborhood pixels. The gradient magnitude provides a measure of edge intensity and helps determine whether a pixel lies on an edge and how strong that edge is. When calculating the offset amplitude, the method considers not only the gray value of the pixel point but also the gray-change relation within the surrounding area, which helps capture more complex edge features and improves the accuracy of the offset amplitude calculation. The method also takes into account the total area (S) of the image and the area (S_m) of the target image area, so that the offset amplitude can be adjusted to image areas of different sizes.
In addition, the method considers the gray coefficient (λ) and the number (n) of pixels contained in the edge contour of the target image area, which are used to adjust and enhance the accuracy of the offset amplitude calculation. For each pixel included in the edge contour, the system calculates the average value (H_i) of its gray-scale change rates in the x and y directions, which helps capture edge direction information at the pixel point. The method also considers the average value (H_pi) of the gray-scale change rates of the non-target-image-area pixel points connected to the pixel points included in the edge contour, in order to exclude background noise and smooth the edge profile.
Finally, the method combines these factors and intermediate results and calculates the offset amplitude of each pixel point through the formula. Integrating multiple factors in this way helps improve the accuracy and robustness of the offset amplitude calculation and cope with edges of different types and complexity.
In summary, by the method for acquiring the offset amplitude, the system can more accurately measure and analyze the skew of the picture data, thereby providing more accurate image processing and analysis results.
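The sketch below illustrates only the standard gradient part of this measurement: per-pixel gray-change rates G_x and G_y, the gradient magnitude G and the gradient direction θ, plus a simple median-based skew estimate. The full weighted offset formula involving S, S_m, λ, n, H_i and H_pi appears only as an image in the original and is not reproduced here; the threshold value and the median estimate are assumptions.

import numpy as np

def gradient_magnitude_direction(gray):
    # gray: 2-D array of gray values
    gy, gx = np.gradient(gray.astype(float))    # gray-change rate in the y and x directions
    g = np.sqrt(gx ** 2 + gy ** 2)              # gradient magnitude G
    theta = np.arctan2(gy, gx)                  # gradient direction theta (radians)
    return g, theta

def estimate_skew_degrees(gray, magnitude_threshold=10.0):
    # A stand-in for the area/contour-weighted skew formula: take the dominant
    # edge direction among pixels with a strong gradient response.
    g, theta = gradient_magnitude_direction(gray)
    strong = g > magnitude_threshold
    if not strong.any():
        return 0.0
    return float(np.degrees(np.median(theta[strong])))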
In order to solve the problem that in the prior art, no further model construction is performed on a work, so that text data and picture data in an online work cannot be fused better, referring to fig. 1 and 2, the present embodiment provides the following technical scheme:
The exhibition plan creation unit is further configured to:
performing data correspondence on the standard text data and the standard picture data, and obtaining exhibition model construction data after the data correspondence is completed;
importing exhibition model construction data into a neural network model for model operation;
The model operation flow is as follows:
Firstly, forward propagation is carried out on exhibition model construction data, and forward propagation result data are confirmed;
When the forward propagation result does not accord with the standard forward propagation result, the result is propagated in the backward direction;
The forward propagation result that accords with the standard forward propagation result, or the backward propagation result, is used as model construction data;
And carrying out model construction on the model construction data, and obtaining exhibition model data after model construction.
Specifically, forward propagation passes the exhibition model construction data from low layers to high layers, and backward propagation passes the error from the high layers back to the bottom layers. Propagating the exhibition model construction data forward and backward allows works exhibited online to be customized and optimized for different application scenarios and requirements. The neural network model also has good parallel computing capability and can perform distributed computation on multiple computing nodes, which accelerates large-scale data processing; at the same time it can process multi-modal data such as the text and images of the exhibited works, so that different types of data are fused with each other and the overall utilization efficiency of the data is improved.
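A minimal sketch of this model-operation loop follows, assuming a single hidden layer, mean-squared error against the standard (target) forward-propagation result, and arbitrary feature shapes; none of these choices are fixed by the description.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 16))   # fused text + picture features per work (assumed shape)
Y = rng.normal(size=(32, 4))    # standard (target) forward-propagation result (assumed)

W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 4))
lr, tolerance = 0.1, 1e-2

for step in range(5000):
    # forward propagation: data passed from the low level to the high level
    H = np.tanh(X @ W1)
    out = H @ W2
    err = out - Y
    if float(np.mean(err ** 2)) < tolerance:   # result accords with the standard result
        break
    # backward propagation: error passed from the high level back to the bottom level
    grad_out = 2.0 * err / err.size
    grad_W2 = H.T @ grad_out
    grad_H = grad_out @ W2.T * (1.0 - H ** 2)
    grad_W1 = X.T @ grad_H
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1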
In order to solve the problem that in the prior art, when users conduct online exhibition, the preference degree of each user for works is not timely collected further, so that an artist cannot know the audience crowd of own works in time, please refer to fig. 1 and 2, the embodiment provides the following technical scheme:
The user interaction communication unit is further used for:
The user logs in through the online showcase terminal, and the online showcase terminal confirms the login permission of the user according to the data of the showcase personnel;
after the login permission is confirmed to be safe, the user performs product exhibition on the online exhibition terminal;
when the user performs product exhibition, the user can comment and share the product.
The exhibition data statistics unit is further used for:
acquiring exhibition data of each product in the exhibition model data;
firstly, confirming the display quantity data of the products, and confirming the stay time of each display according to the display quantity of the products;
Confirming the accurate display quantity of the product according to the residence time of the display;
when the residence time of the product is within a preset time range, the display confirmed by the product is an accurate display quantity;
Acquiring user information of accurate exhibition quantity, and confirming the gender and age layer of the user information;
counting the user population, gender and age of each product;
And obtaining user preference data corresponding to each display product after statistics is completed.
Specifically, the user interaction communication unit authenticates the identity of each user; after authentication, the user can view the exhibition online and can comment on and share each work, which further improves the interest of the online exhibition. The exhibition data statistics unit confirms the residence time on each work and counts a view as one exhibition view only when the residence time is within the preset time range. For example, if the preset time range requires at least one minute and thirty seconds, a user who stays on a work for one minute is not counted as one exhibition view, while a user who stays for two minutes is counted as one exhibition view. The gender and age of the audience are then obtained from the exhibition views of each work and counted per work, which further improves artists' knowledge of the audience of their works and improves the convenience and completeness of online exhibition.
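The sketch below illustrates the residence-time filter and the per-work gender and age statistics under the assumptions of the example above: a 90-second minimum residence time and hypothetical event fields.

from collections import defaultdict

PRESET_MIN_SECONDS = 90  # "one minute and thirty seconds" in the example above

def preference_statistics(view_events):
    # view_events: iterable of dicts such as
    # {"work": "W001", "seconds": 120, "gender": "F", "age": 34}
    stats = defaultdict(lambda: {"accurate_views": 0, "genders": defaultdict(int), "ages": []})
    for ev in view_events:
        if ev["seconds"] >= PRESET_MIN_SECONDS:   # counts as an accurate exhibition view
            entry = stats[ev["work"]]
            entry["accurate_views"] += 1
            entry["genders"][ev["gender"]] += 1
            entry["ages"].append(ev["age"])
    return stats

events = [{"work": "W001", "seconds": 60, "gender": "M", "age": 25},
          {"work": "W001", "seconds": 120, "gender": "F", "age": 34}]
print(dict(preference_statistics(events)))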
Specifically, in the exhibition data statistics unit, after performing preference statistics on the exhibition model data, the exhibition data statistics unit includes:
the learning subunit is used for acquiring the model type corresponding to each display model data, learning the model type and determining the type feature vector of the model type corresponding to each display model data;
An information matrix construction subunit for:
reading information data of M favorites corresponding to each exhibition model data, determining a plurality of information dimensions corresponding to the information data of each favorites, and simultaneously obtaining information feature vectors corresponding to each information dimension based on the information data;
Constructing an information matrix of corresponding favorites in the current exhibition model data based on the information feature vector corresponding to each information dimension;
A classification subunit for:
Calculating target similarity among information matrixes of M favorites, and acquiring a similarity threshold;
Dividing information matrixes whose target similarity is equal to or greater than the similarity threshold into the same class;
picking, based on the division result, the class with the largest number of objects as a first information matrix set, and taking the remaining information matrixes as a second information matrix set;
the comprehensive recommendation network construction subunit is configured to:
Associating the type feature vector of the model type with a first information matrix set corresponding to the type feature vector of the model type to obtain a first association relation, and associating the type feature vector of the model type with a second information matrix set corresponding to the type feature vector of the model type to obtain a second association relation;
Constructing sub-recommendation networks corresponding to each type of exhibition model data based on the first association relationship and the second association relationship, integrating a plurality of sub-recommendation networks, and constructing an integrated recommendation network for recommending the exhibition model data;
And the recommending subunit is used for transmitting information data of the exhibitors to the comprehensive recommending network when the exhibitors log in information, and outputting recommending exhibition model data based on the comprehensive recommending network.
In this embodiment, the model type feature vector may be displayed in a vector form, so as to determine crowd conditions corresponding to different exhibition models.
In this embodiment, the information data may be information of preference type, name, age, sex, etc. of the preference person for the exhibition model.
In this embodiment, the plurality of information dimensions may be specific information types and corresponding specific parameter contents, such as names and specific names corresponding to the names, included in the information data of the favorites.
In this embodiment, the information feature vector may be a data feature corresponding to different information data, and is displayed in a vector form, in order to determine the matching degree with the exhibition model.
In this embodiment, the information matrix may be constructed according to the information feature vector of each information dimension, so as to comprehensively consider the like parameters of different favorites on the same exhibition model under different information data, that is, the range of people corresponding to different exhibition models.
In this embodiment, the target similarity is a similarity between information data representing different favorites.
In this embodiment, the similarity threshold is set in advance, and is a measurement criterion for measuring whether different preference persons can be classified into one category, and can be adjusted.
In this embodiment, the number of objects refers to the number of favorites contained in the same class, where the first information matrix set is the class with the largest number of favorites.
In this embodiment, the second set of information matrices may be other sets of categories than the one with the greatest number of favorites.
In this embodiment, the first association is a matching correspondence between the characterization model type and the first information matrix set.
In this embodiment, the second association is a matching correspondence between the characterization model type and the second information matrix set.
In this embodiment, the sub-recommendation network is constructed according to the first association relationship and the second association relationship, and is a mechanism for recommending appropriate exhibition models to different favorite people.
In this embodiment, the integrated recommendation network may be obtained by summarizing sub recommendation networks corresponding to different types of exhibition models, that is, a total recommendation mechanism corresponding to all the exhibition models.
The working principle and beneficial effects of this technical solution are as follows: by determining the association relationship between the type feature vector of the model type and the corresponding first information matrix set, the construction of the comprehensive recommendation network is effectively realized and the seemingly discrete information data of the favorites are associated, so that the recommendation standard for the exhibition model data can be clearly described. The effect of each favorite's information data on the recommendation is relatively static, but associating the favorites' information matrices with the type feature vectors of the exhibition model data produces dynamic association features. Recommendation to exhibitors based on the comprehensive recommendation network can therefore be realized effectively, which improves recommendation efficiency and intelligence and also improves the exhibitors' experience.
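As a hedged illustration of the classification subunit, the sketch below computes pairwise cosine similarity between flattened information matrices, groups matrices at a similarity threshold, and splits the result into the first (largest) and second information matrix sets; the cosine measure, the greedy grouping and the 0.8 threshold are assumptions, since the description does not fix them.

import numpy as np

def split_information_sets(info_matrices, threshold=0.8):
    # info_matrices: list of 2-D arrays, one information matrix per favorite
    vecs = [m.ravel() / (np.linalg.norm(m) + 1e-12) for m in info_matrices]
    groups = []   # greedy grouping: join the first class whose representative is similar enough
    for i, v in enumerate(vecs):
        for g in groups:
            if float(v @ vecs[g[0]]) >= threshold:   # target similarity >= threshold
                g.append(i)
                break
        else:
            groups.append([i])
    groups.sort(key=len, reverse=True)
    first_set = [info_matrices[i] for i in groups[0]]                 # largest class
    second_set = [info_matrices[i] for g in groups[1:] for i in g]    # the rest
    return first_set, second_set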
Specifically, the exhibition data statistics unit further includes performing satisfaction evaluation, specifically:
the evaluation index acquisition subunit is used for acquiring the number of exhibitors of the online virtual exhibition and acquiring evaluation indexes for evaluating satisfaction with the online virtual exhibition;
a first computing subunit for:
obtaining the target scoring value given by each exhibitor of the online virtual exhibition to each evaluation index, and calculating a satisfaction score of the online virtual exhibition according to the target scoring values and the number of exhibitors of the online virtual exhibition;
wherein the satisfaction score formula involves: the satisfaction score of the online virtual exhibition; the total number of evaluation indexes; the serial number i of an evaluation index; the number of exhibitors of the online virtual exhibition; the serial number j of an exhibitor; the target scoring value given by the jth exhibitor to the ith evaluation index; the influence weight of the ith index on the satisfaction score; and an error factor taking a value in (0.01, 0.03);
a second computing subunit for:
obtaining a satisfaction threshold and judging whether the online virtual exhibition is qualified according to the following formula;
wherein the formula yields a qualification evaluation coefficient, and qualification requires the coefficient to be equal to or greater than 1, i.e. when the qualification evaluation coefficient is equal to or greater than 1, the online virtual exhibition is judged to be qualified;
when the qualification evaluation coefficient is less than 1, the online virtual exhibition is judged to be unqualified, and an alarm operation is performed.
In this embodiment, the evaluation index of the satisfaction evaluation may be an evaluation index including: evaluation indexes of the visual satisfaction degree of the exhibition model data, the hobby satisfaction degree of the exhibition model data and the fluency satisfaction degree when the exhibition model data is watched and communicated.
In this embodiment, the satisfaction threshold may be a verification criterion that is set in advance to measure whether the virtual exhibition on the line is qualified.
In this embodiment, the alarm operation may be one or more of sound, light, and vibration.
The working principle and beneficial effects of this technical solution are as follows: whether the online virtual exhibition is qualified can be effectively evaluated by calculating the exhibitors' satisfaction score for the online virtual exhibition, and an alarm operation is performed when it is found to be unqualified.
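A hedged sketch of the satisfaction evaluation follows, assuming one plausible reading of the quantities listed above (the exact formula appears only as an image in the original): a weighted score summed over the evaluation indexes, averaged over the exhibitors, reduced by an error factor in (0.01, 0.03), with qualification when the score-to-threshold ratio is at least 1.

def satisfaction_score(scores, weights, error_factor=0.02):
    # scores[j][i]: target scoring value of exhibitor j for evaluation index i
    # weights[i]: influence weight of index i on the satisfaction score
    n = len(scores)   # number of exhibitors of the online virtual exhibition
    total = sum(sum(w * s for w, s in zip(weights, row)) for row in scores)
    return total / n - error_factor

def trigger_alarm():
    print("ALARM: online virtual exhibition judged unqualified")  # e.g. sound, light or vibration

def is_qualified(score, threshold):
    coefficient = score / threshold   # assumed form of the qualification evaluation coefficient
    if coefficient >= 1:
        return True
    trigger_alarm()
    return False

print(is_qualified(satisfaction_score([[4, 5], [3, 4]], [0.6, 0.4]), threshold=3.5))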
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. A virtual showcase system on a digital wisdom exhibition line, comprising:
An exhibition data confirmation unit for:
recording online exhibition data and exhibition personnel data, wherein the exhibition data and the exhibition personnel data are respectively and independently stored, and standard exhibition data and standard exhibition personnel data are obtained after independent storage;
the data optimization processing unit is used for:
Based on the standard exhibition data acquired in the exhibition data confirmation unit, respectively extracting the text data and the picture data in the standard exhibition data, respectively carrying out optimization processing on the text data and the picture data, and obtaining the standard text data and the standard picture data after the optimization processing;
An exhibition plan creation unit configured to:
Based on the standard text data and the standard picture data obtained in the data optimization processing unit, importing the standard text data and the standard picture data into a neural network model for model construction, and obtaining exhibition model data after the model construction, wherein the neural network model is called from a model database;
The user interaction communication unit is used for:
Importing the exhibition model data and the standard exhibition staff data into an online exhibition system, confirming and logging by the exhibition staff according to the standard exhibition staff data, and watching and exchanging the exhibition model data after confirmation by the exhibition staff;
the exhibition data statistics unit is used for:
Acquiring the display quantity and the stay time of each display model data, and carrying out preference statistics on the display model data according to the standard exhibitor data in the display quantity and the stay time in each display model data;
The exhibition plan creation unit is further configured to:
performing data correspondence on the standard text data and the standard picture data, and obtaining exhibition model construction data after the data correspondence is completed;
importing exhibition model construction data into a neural network model for model operation;
The model operation flow is as follows:
Firstly, forward propagation is carried out on exhibition model construction data, and forward propagation result data are confirmed;
When the forward propagation result does not accord with the standard forward propagation result, the result is propagated in the backward direction;
The forward propagation result that accords with the standard forward propagation result, or the backward propagation result, is used as model construction data;
And carrying out model construction on the model construction data, and obtaining exhibition model data after model construction.
2. The virtual showcase system on a digital wisdom exhibition line of claim 1, wherein: the exhibition data confirmation unit includes:
The exhibition data confirming module is used for:
the exhibition data comprises exhibition basic information, artist information and work information;
the basic information of the exhibition is title, time and place information of the exhibition; artist information is basic information of artists in the exhibitions; the work information is title, creation time, material and size information of the exhibited work;
Associating each artist with the corresponding works and with the basic information of the exhibition in which the artist participates, and applying a unique coding mark to each artist together with the artist's works and exhibition basic information, so as to obtain standard exhibition data after the unique coding marks are applied;
the exhibitor data confirmation module is used for:
the exhibition personnel data comprise the exhibition times, residence time and user basic information of the user;
The user inputs own user basic information through the online exhibition terminal, wherein the user basic information comprises the name, the gender and the nationality of the user;
and after the data of the exhibitors are confirmed, standard data of the exhibitors are obtained.
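As a rough illustration of the unique coding mark described above, the sketch below stores each artist's work together with the exhibition basic information and attaches a generated code; the field names and the UUID-based coding scheme are assumptions, not claim language.

    # Hedged sketch; field names and the UUID-based coding scheme are assumed.
    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class ExhibitionRecord:
        artist_name: str
        work_title: str
        creation_time: str
        material: str
        size: str
        exhibition_title: str
        exhibition_time: str
        exhibition_place: str
        code: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique coding mark

    def mark_exhibition_data(raw_records):
        # Each artist/work/exhibition combination receives its own unique code.
        return [ExhibitionRecord(**record) for record in raw_records]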
3. The virtual showcase system on a digital wisdom exhibition line of claim 2, wherein: the data optimization processing unit comprises:
The text data processing module is used for:
confirming the text information of each exhibited work;
after confirming the text information, preprocessing the text information;
the data preprocessing firstly removes irrelevant characters and blank characters in the text information;
Removing the repeated data of the text information after the irrelevant characters and blank characters are removed;
after the repeated data are removed, the format of the text information is unified: word forms in the text information are restored to their base forms, and the text is then converted into a unified basic format, so that target text data are obtained;
and removing stop words and filter words from the target text data, extracting word features in the target text data after removing the stop words and the filter words, and obtaining standard text data after extracting the word features.
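A minimal Python sketch of this text-optimization pipeline follows; the stop-word list, the crude suffix-stripping used in place of true lemmatization, and the bag-of-words feature representation are all illustrative assumptions.

    # Hedged sketch; stop words, the suffix-stripping lemmatization stand-in
    # and the bag-of-words features are illustrative assumptions.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on", "is", "for"}

    def preprocess_text(raw_text):
        text = re.sub(r"[^\w\s]", " ", raw_text)           # remove irrelevant characters
        text = re.sub(r"\s+", " ", text).strip().lower()   # remove blanks, unify format
        tokens = text.split()
        seen, deduped = set(), []
        for tok in tokens:                                 # remove repeated data
            if tok not in seen:
                seen.add(tok)
                deduped.append(tok)
        base_forms = [tok.rstrip("s") for tok in deduped]  # restore word base forms (crude)
        kept = [tok for tok in base_forms if tok not in STOP_WORDS]
        return Counter(kept)                               # word-feature extraction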
4. The virtual showcase system on a digital wisdom exhibition line of claim 3, wherein: The data optimization processing unit further includes:
The picture data processing module is used for:
confirming the picture information of each exhibited work;
Performing skewness, definition and distortion degree processing on the picture data;
firstly, edge detection is carried out on the picture data, and the skewness of the picture data is calculated according to the detected edge information;
When the calculated skewness value is inconsistent with the standard skewness value of the picture data, the calculated skewness value is adjusted to be consistent with the standard skewness value;
definition and distortion degree adjustment are then sequentially carried out on the picture data for which the skewness adjustment has been completed;
The definition and distortion degree adjusting process comprises the following steps:
Firstly, carrying out definition detection on the picture data subjected to skewness adjustment, and adjusting the detected definition value to be consistent with the standard definition value when the detected definition value is inconsistent with the standard definition value;
Detecting the distortion degree of the picture data with the definition adjusted, and adjusting the detected distortion degree value to be consistent with the standard distortion degree value when the detected distortion degree value is inconsistent with the standard distortion degree value;
obtaining target picture data after finishing adjustment of the skewness, definition and distortion of the picture;
and carrying out picture compression on the target picture data, unifying the sizes of the pictures, and obtaining standard picture data after unifying the sizes.
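A hedged OpenCV sketch of these picture-optimization steps is given below; the deskew estimate, the Laplacian-variance sharpness threshold, the unsharp-masking correction, the target size and the JPEG quality are all assumptions, and distortion correction is omitted because the claim does not give a concrete distortion metric.

    # Hedged sketch; thresholds, deskew method, target size and JPEG quality
    # are assumptions, and distortion correction is intentionally omitted.
    import cv2
    import numpy as np

    def optimize_picture(path, target_size=(1024, 1024), jpeg_quality=85):
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Skewness: estimate a dominant angle from the edge pixels and rotate back.
        edges = cv2.Canny(gray, 50, 150)
        coords = np.column_stack(np.where(edges > 0)).astype(np.float32)
        angle = cv2.minAreaRect(coords)[-1]
        if angle < -45:
            angle += 90
        h, w = img.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        img = cv2.warpAffine(img, m, (w, h))

        # Definition: sharpen (unsharp masking) if the Laplacian variance is low.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < 100.0:
            blur = cv2.GaussianBlur(img, (0, 0), 3)
            img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)

        # Unify size and compress for storage, transmission and sharing.
        img = cv2.resize(img, target_size)
        ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
        return buf if ok else None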
5. The virtual showcase system on a digital wisdom exhibition line of claim 4, wherein edge detection is carried out on the picture data and the skewness of the picture data is calculated according to the detected edge information through the following steps:
Extracting gray values of all pixel points in the picture data;
Extracting edge information of the picture data according to the gray value of each pixel point and the gray change relation in the surrounding area range;
Refining, denoising and connecting the extracted edge information to obtain the edge profile of the picture data;
Intercepting a target image area in the picture data according to the edge contour;
obtaining the offset amplitude and the direction of each pixel point in the target image area through calculation, wherein the offset amplitude and the direction of the pixel point are obtained through the following formula:
wherein G represents the gradient magnitude; G_x and G_y represent the gray scale change rate of the pixel point in the x direction and in the y direction, respectively; θ represents the gradient direction; S represents the area of the picture data; S_m represents the area of the target image area; λ represents a gray coefficient; N represents the number of pixels included in the edge contour of the target image area; h_i represents the average value of the x-direction and y-direction gray scale change rates of the i-th pixel point included in the edge contour; h_pi represents the average value of the x-direction and y-direction gray scale change rates of the pixel points of the non-target image area connected to the i-th pixel point included in the edge contour.
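The formula image referenced above is not reproduced in this text, so only the two conventional quantities can be restated with confidence: the gradient magnitude G = sqrt(G_x^2 + G_y^2) and the gradient direction θ = arctan(G_y / G_x). The Sobel-based sketch below computes just these; the claim's additional weighting terms (λ, S, S_m, h_i, h_pi) are not implemented because their exact combination is not given.

    # Hedged sketch covering only the conventional terms G and theta; the claim's
    # extra weighting terms (lambda, S, S_m, h_i, h_pi) are omitted because their
    # exact formula is not reproduced in the text.
    import cv2
    import numpy as np

    def pixel_gradients(gray):
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # x-direction gray change rate G_x
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # y-direction gray change rate G_y
        magnitude = np.sqrt(gx ** 2 + gy ** 2)           # offset amplitude G
        direction = np.arctan2(gy, gx)                   # gradient direction theta
        return magnitude, direction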
6. The virtual showcase system on a digital wisdom exhibition line of claim 5, wherein: The user interaction communication unit is further configured to:
The user logs in through the online exhibition terminal, and the online exhibition terminal confirms the login permission of the user according to the exhibitor data;
after the login permission is confirmed, the user views the exhibited products on the online exhibition terminal;
while viewing the exhibited products, the user can comment on and share the products.
7. The virtual showcase system on a digital wisdom exhibition line of claim 6, wherein: the exhibition data statistics unit is further configured to:
acquiring exhibition data of each product in the exhibition model data;
firstly, confirming the display quantity data of the products, and confirming the stay time of each display according to the display quantity of the products;
Confirming the accurate display quantity of the product according to the residence time of the display;
when the residence time for a product display is within a preset time range, that display is counted toward the accurate display quantity of the product;
Acquiring the user information corresponding to the accurate display quantity, and confirming the gender and age group from the user information;
counting the user population, gender and age distribution for each product;
And obtaining user preference data corresponding to each display product after statistics is completed.
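A short sketch of the dwell-time filtering and audience statistics described above follows; the record field names and the preset time window are assumptions.

    # Hedged sketch; record field names and the preset time window are assumed.
    from collections import defaultdict

    def product_preferences(view_records, min_stay=5.0, max_stay=3600.0):
        stats = defaultdict(lambda: {"accurate_views": 0,
                                     "genders": defaultdict(int),
                                     "age_groups": defaultdict(int)})
        for rec in view_records:
            if min_stay <= rec["stay_seconds"] <= max_stay:   # accurate display only
                entry = stats[rec["product_id"]]
                entry["accurate_views"] += 1
                entry["genders"][rec["user_gender"]] += 1
                entry["age_groups"][rec["user_age_group"]] += 1
        return stats                                          # user preference data per product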
8. The virtual showcase system on a digital wisdom exhibition line of claim 7, wherein the exhibition data statistics unit, after performing preference statistics on the exhibition model data, further comprises:
the learning subunit is used for acquiring the model type corresponding to each display model data, learning the model type and determining the type feature vector of the model type corresponding to each display model data;
An information matrix construction subunit for:
reading information data of M favorites corresponding to each exhibition model data, determining a plurality of information dimensions corresponding to the information data of each favorites, and simultaneously obtaining information feature vectors corresponding to each information dimension based on the information data;
Constructing an information matrix of corresponding favorites in the current exhibition model data based on the information feature vector corresponding to each information dimension;
A classification subunit for:
Calculating target similarity among information matrixes of M favorites, and acquiring a similarity threshold;
Dividing information matrixes corresponding to the similarity of the targets being equal to or greater than a similarity threshold into the same class;
picking a first information matrix set with the largest number of objects in the same category based on the dividing result, and taking the remaining information matrices as a second information matrix set;
the comprehensive recommendation network construction subunit is configured to:
Associating the type feature vector of the model type with a first information matrix set corresponding to the type feature vector of the model type to obtain a first association relation, and associating the type feature vector of the model type with a second information matrix set corresponding to the type feature vector of the model type to obtain a second association relation;
Constructing sub-recommendation networks corresponding to each type of exhibition model data based on the first association relationship and the second association relationship, integrating a plurality of sub-recommendation networks, and constructing an integrated recommendation network for recommending the exhibition model data;
And the recommending subunit is used for transmitting the information data of an exhibitor to the comprehensive recommendation network when the exhibitor logs in, and outputting recommended exhibition model data based on the comprehensive recommendation network.
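The grouping performed by the classification subunit can be sketched as below; flattened cosine similarity, a single-pass grouping strategy and the assumption that all information matrices share the same dimensions are illustrative simplifications, not claim requirements.

    # Hedged sketch; cosine similarity on flattened matrices, single-pass
    # grouping and equally sized matrices are illustrative simplifications.
    import numpy as np

    def split_favorite_matrices(matrices, threshold=0.8):
        flat = [m.ravel() / (np.linalg.norm(m) + 1e-12) for m in matrices]
        groups = []
        for i, v in enumerate(flat):
            for g in groups:
                if float(np.dot(v, flat[g[0]])) >= threshold:  # target similarity test
                    g.append(i)
                    break
            else:
                groups.append([i])
        largest = max(groups, key=len)                         # category with most objects
        first_set = [matrices[i] for i in largest]
        second_set = [matrices[i] for i in range(len(matrices)) if i not in largest]
        return first_set, second_set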
9. The virtual showcase system on a digital wisdom exhibition line of claim 8, wherein the exhibition data statistics unit further performs satisfaction evaluation, specifically:
the evaluation index acquisition subunit is used for acquiring the number of exhibitors viewing the online virtual exhibition and acquiring evaluation indexes for evaluating the satisfaction degree with the online virtual exhibition;
a first computing subunit for:
Obtaining the target scoring values given by the exhibitors of the online virtual exhibition for the corresponding evaluation indexes, and calculating a satisfaction score of the online virtual exhibition according to the target scoring values and the number of exhibitors viewing the online virtual exhibition;
Wherein Z represents the satisfaction score of the online virtual exhibition; n represents the total number of evaluation indexes; i represents the serial number of an evaluation index; m represents the number of exhibitors viewing the online virtual exhibition; j represents the serial number of an exhibitor; X_ij represents the target scoring value given by the j-th exhibitor to the i-th evaluation index; k_i represents the impact weight of the i-th index on the satisfaction score; k represents an error factor and takes a value in the range (0.01, 0.03);
a second computing subunit for:
Obtaining a satisfaction threshold Z_max, and judging whether the online virtual exhibition is qualified according to the following formula;
τ = Z / Z_max, where τ represents a qualification evaluation coefficient;
When τ ≥ 1, the online virtual exhibition is judged to be qualified;
and when τ < 1, the online virtual exhibition is judged to be unqualified, and an alarm operation is performed.
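Because the claim's score formula is given only as an image that is not reproduced here, the sketch below approximates Z as a weight-averaged score adjusted by the error factor k, and then applies the stated qualification test τ = Z / Z_max; the exact form of the aggregation is therefore an assumption.

    # Hedged sketch; the aggregation of X_ij, k_i and the error factor k into Z
    # is an assumed form, since the claim's exact formula is not reproduced.
    def satisfaction_score(scores, weights, k=0.02):
        # scores[i][j]: target scoring value X_ij of exhibitor j on index i
        # weights[i]  : impact weight k_i of index i
        m = len(scores[0])                                  # number of exhibitors
        z = sum(w * sum(row) / m for w, row in zip(weights, scores))
        return z * (1.0 - k)                                # apply the error factor (assumed)

    def is_qualified(z, z_max):
        tau = z / z_max                                     # qualification evaluation coefficient
        return tau >= 1.0                                   # tau < 1 would trigger an alarm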
CN202410130817.4A 2024-01-31 2024-01-31 Virtual exhibition system on digital wisdom exhibition line Active CN117668372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410130817.4A CN117668372B (en) 2024-01-31 2024-01-31 Virtual exhibition system on digital wisdom exhibition line

Publications (2)

Publication Number Publication Date
CN117668372A (en) 2024-03-08
CN117668372B (en) 2024-04-19

Family

ID=90077273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410130817.4A Active CN117668372B (en) 2024-01-31 2024-01-31 Virtual exhibition system on digital wisdom exhibition line

Country Status (1)

Country Link
CN (1) CN117668372B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347920A (en) * 2020-11-06 2021-02-09 苏州金螳螂文化发展股份有限公司 Intelligent people flow statistics and acquisition system based on neural network exhibition hall region
CN112600888A (en) * 2020-12-04 2021-04-02 南京乐之飞科技有限公司 Digital multifunctional exhibition hall intelligent center control cloud platform based on data diversification processing
CN114627434A (en) * 2022-03-30 2022-06-14 今日汽车信息技术有限公司 Automobile sales exhibition room passenger flow identification system based on big data
CN115981516A (en) * 2023-03-17 2023-04-18 北京点意空间展览展示有限公司 Interaction method based on network virtual exhibition hall
KR20230077560A (en) * 2021-11-25 2023-06-01 한국로봇융합연구원 Appartus of providing service customized on exhibit hall and controlling method of the same
CN116308678A (en) * 2023-04-04 2023-06-23 北京农夫铺子技术研究院 Meta-universe electronic commerce platform and entity store interactive intelligent shopping system
CN116541576A (en) * 2023-07-06 2023-08-04 浙江档科信息技术有限公司 File data management labeling method and system based on big data application
CN116931732A (en) * 2023-07-21 2023-10-24 上海极度智慧展览股份有限公司 Vehicle exhibition implementation method, system, equipment and medium based on cloud exhibition
CN117035929A (en) * 2023-08-14 2023-11-10 王燕翎 Online exhibition system based on artificial intelligence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Adaptive PID controller based on Lyapunov function neural network for time delay temperature control; Muhammad Saleheen et al.; 2015 IEEE 8th GCC Conference & Exhibition; 2015-03-16; pp. 1-2 *
Library virtual services from the metaverse perspective; Yang Xinya et al.; Library Tribune (图书馆论坛); 2022-05-30; Vol. 42, No. 7; pp. 18-24 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant