CN117332179B - Webpage display method of ultra-large data curve - Google Patents

Webpage display method of ultra-large data curve

Info

Publication number
CN117332179B
Authority
CN
China
Prior art keywords
image
feature map
shallow
deep
compressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311630590.1A
Other languages
Chinese (zh)
Other versions
CN117332179A (en)
Inventor
彭尊
黄泽杰
杨威
徐肖伟
陶晓明
Current Assignee
Beijing Jhby Technology Co ltd
Original Assignee
Beijing Jhby Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jhby Technology Co ltd filed Critical Beijing Jhby Technology Co ltd
Priority to CN202311630590.1A
Publication of CN117332179A
Application granted
Publication of CN117332179B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a webpage display method of an ultra-large data curve, which relates to the technical field of the Internet. The method logs in to a website through a browser and completes the setting of a drawing image; a calculation instruction is sent to a calculation service through the WebAPI to obtain a calculation result; the calculation result is stored in a distributed database; according to the setting content of the drawing image in the calculation instruction, a drawing service is called to finish drawing the image; binary-based serialization is performed on the drawn image to obtain a serialized image to be compressed; the to-be-compressed serialized image is compressed to obtain compressed data; the compressed data is stored in the distributed database; the compressed data is returned to the browser through the WebAPI, and the browser parses the compressed data to obtain the serialized image and deserializes it to obtain and display the final picture. In this way, the drawing efficiency of the drawing plug-in can be improved.

Description

Webpage display method of ultra-large data curve
Technical Field
The application relates to the technical field of Internet, and more particularly, to a webpage display method of an ultra-large data curve.
Background
Generally, when performing scientific research or industrial process analysis, a large amount of data must be plotted. Two-dimensional line graphs, scatter plots, and the like are used to describe the variation of a dependent variable with an independent variable. Client software such as MATLAB, Excel, R, and Origin can conveniently draw curves from a two-dimensional array. The time needed to draw a curve generally grows as the data volume increases.
At present, scientific research services and data analysis services are all moving toward networked and cloud-based deployment. Traditional scientific drawing software is difficult to integrate with web page presentation. A relatively mature approach for drawing graphs directly on a web page does exist, in which a drawing plug-in is combined with a query script: the data is read once or periodically through the script, and the plug-in then renders the graph.
However, when a drawing plug-in renders tens of thousands or even hundreds of thousands of points on a web page at one time, there is a serious efficiency problem, and the plug-in may even stop responding altogether. Meanwhile, responsive scientific drawing clients are difficult to integrate deeply with network services.
Therefore, an optimized web page presentation method for a huge amount of data curves is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiment of the application provides a webpage display method of an ultra-large number of data curves, which can improve the drawing efficiency of drawing plug-ins.
According to one aspect of the present application, there is provided a web page presentation method of an ultra-large number of data curves, including:
logging in a website through a browser and finishing setting of drawing images;
the calculation instruction is sent to the calculation service through the WebAPI to obtain a calculation result;
the calculation result is stored in a distributed database;
according to the set content of the drawing image in the calculation instruction, calling drawing service to finish drawing of the drawing image;
carrying out binary-based serialization processing on the drawn image to obtain a serialized image to be compressed;
compressing the serialized image to be compressed to obtain compressed data, which comprises:
acquiring the to-be-compressed serialized image;
extracting shallow layer features and deep layer features of the to-be-compressed serialized image to obtain an image shallow layer feature map and an image deep layer feature map;
fusing the image shallow feature map and the image deep feature map to obtain a semantic information masking image shallow feature map;
and generating the compressed data based on the semantic information masking image shallow feature map;
storing the compressed data in the distributed database;
the compressed data is returned to the browser through the WebAPI, the browser analyzes the compressed data to obtain the serialized image, and the serialized image is deserialized to obtain a final picture;
and displaying the final picture.
Compared with the prior art, the webpage display method of the ultra-large data curve provided by the application logs in a website through a browser and completes setting of drawing images; the calculation instruction is sent to the calculation service through the WebAPI to obtain a calculation result; the calculation result is stored in a distributed database; according to the set content of the drawing image in the calculation instruction, calling drawing service to finish drawing of the drawing image; carrying out binary-based serialization processing on the drawn image to obtain a serialized image to be compressed; compressing the to-be-compressed serialized image to obtain compressed data; storing the compressed data in the distributed database; the compressed data is returned to the browser through the WebAPI, the browser analyzes the compressed data to obtain the serialized image, and the serialized image is deserialized to obtain a final picture; and displaying the final picture. In this way, the drawing efficiency of the drawing plug-in can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. The drawings are not intended to be to scale; the emphasis is on illustrating the gist of the present application.
FIG. 1 is a flow chart of a web page display method for an ultra-large volume data curve according to an embodiment of the present application.
Fig. 2 is a flowchart of substep S160 of a web page presentation method of an ultra-large volume data curve according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the sub-step S160 of the web page displaying method of the ultra-large data curve according to the embodiment of the present application.
Fig. 4 is a flowchart of substep S162 of the web page presentation method of the ultra-large volume data curve according to the embodiment of the present application.
Fig. 5 is a flowchart of substep S163 of the web page presentation method of the ultra-large volume data curve according to the embodiment of the present application.
Fig. 6 is a flowchart of sub-step S164 of the web page presentation method of the ultra-large volume data curve according to an embodiment of the present application.
FIG. 7 is a block diagram of a web page presentation system of an ultra-large volume data curve according to an embodiment of the present application.
Fig. 8 is an application scenario diagram of a web page display method of an ultra-large data curve according to an embodiment of the present application.
FIG. 9 is a flow chart of a method for web page presentation of an ultra-large volume data curve according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are also within the scope of the present application.
As used in this application and in the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified; they do not constitute an exclusive list, as a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
The application provides a webpage display method of an ultra-large number of data curves, and fig. 1 is a flow chart of the webpage display method of the ultra-large number of data curves according to an embodiment of the application. As shown in fig. 1, a web page displaying method for an ultra-large number of data curves according to an embodiment of the present application includes the steps of: S110, logging in a website through a browser and finishing setting of drawing images; S120, sending a calculation instruction to a calculation service through a WebAPI to obtain a calculation result; S130, storing the calculation result into a distributed database; S140, according to the set content of the drawing image in the calculation instruction, calling drawing service to finish drawing of the drawing image; S150, carrying out binary-based serialization processing on the drawn image to obtain a serialized image to be compressed; S160, compressing the to-be-compressed serialized image to obtain compressed data; S170, storing the compressed data into the distributed database; S180, returning the compressed data to the browser through the WebAPI, analyzing the compressed data by the browser to obtain the serialized image, and performing deserialization on the serialized image to obtain a final picture; and S190, displaying the final picture.
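The flow of steps S110 through S190 can be sketched as a serialize-compress-return loop. The sketch below is illustrative only: it uses Python's `zlib` as a stand-in for the learned compressor described in step S160, a JSON dictionary as the "drawn image", and function names that do not come from the patent.

```python
import json
import zlib

def draw_image(points):
    # S140: stand-in for the drawing service; returns the "drawn image"
    return {"type": "curve", "points": points}

def serialize_image(image):
    # S150: binary-based serialization of the drawn image
    return json.dumps(image).encode("utf-8")

def compress_image(serialized):
    # S160: stand-in compressor (the patent uses a learned CNN encoder/decoder pair)
    return zlib.compress(serialized)

def browser_side(compressed):
    # S180: parse the compressed data, then deserialize to obtain the final picture
    serialized = zlib.decompress(compressed)
    return json.loads(serialized.decode("utf-8"))

points = [(x, x * x) for x in range(5)]
image = draw_image(points)
blob = compress_image(serialize_image(image))  # what gets stored in the database
restored = browser_side(blob)                  # what the browser displays in S190
```

The round trip matters more than the particular codec: whatever compressor is substituted in S160, the browser must hold the matched decoder, which is also what gives the pairing its encryption-like property noted below.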
In particular, in the technical solution of the present application, compression is performed by an encoder during the compression stage, and decoding is performed by a matched decoder during the decoding stage. In this way, on the one hand, data can be effectively compressed and storage reduced; on the other hand, an effective encryption mechanism can be formed through the pairing of the encoder and the decoder.
Based on this, fig. 2 is a flowchart of sub-step S160 of the web page presentation method of the ultra-large data curve according to the embodiment of the present application. Fig. 3 is a schematic diagram of the sub-step S160 of the web page displaying method of the ultra-large data curve according to the embodiment of the present application. As shown in fig. 2 and fig. 3, according to an embodiment of the present application, a web page displaying method for an ultra-large data curve compresses the serialized image to be compressed to obtain compressed data, including: s161, acquiring the to-be-compressed serialized image; s162, extracting shallow layer features and deep layer features of the to-be-compressed serialized image to obtain an image shallow layer feature map and an image deep layer feature map; s163, fusing the image shallow feature map and the image deep feature map to obtain a semantic information masking image shallow feature map; and S164, masking an image shallow feature map based on the semantic information to generate the compressed data.
It should be understood that in step S161, the acquired serialized image data to be compressed may be original image data or image data that has undergone some kind of encoding or serialization processing. In step S162, shallow features and deep features are extracted from the serialized image to be compressed. Shallow features generally refer to low-level features of an image, such as edges, colors, textures, and the like. Deep features refer to high-level semantic information of images, such as objects, scenes, semantic segmentation and the like. In step S163, the shallow feature map and the deep feature map of the image are fused to generate a semantic information masked image shallow feature map. The purpose of this step is to combine low-level features with high-level semantic information in order to better utilize the semantic information of the image in subsequent steps. In step S164, the shallow feature map of the image is masked with semantic information as input, the image is compressed, and corresponding compressed data is generated, and the specific compression algorithm and method may be different according to the application scenario and requirement, and a conventional image compression method or a compression method based on deep learning may be used. In summary, this process involves extracting features from the original image and using those features to generate compressed data, and by fusing low-level features and high-level semantic information of the image, important information of the image can be better preserved in the compression process, thereby achieving more efficient image compression.
Specifically, in the technical scheme of the application, firstly, a serialization image to be compressed is obtained; and then, the to-be-compressed serialized image passes through an image shallow feature extractor based on a first convolutional neural network model to obtain an image shallow feature map. That is, the image shallow feature extractor is constructed by using the first convolutional neural network model to capture shallow feature information, such as edges, shapes, colors, etc., contained in the serialized image to be compressed, i.e., representing the original graph. These features may reflect the basic structure and style of the graph.
And then, the image shallow feature map passes through an image deep feature extractor based on a second convolution neural network model to obtain an image deep feature map. Here, the deep feature extractor can extract deep feature information, such as texture patterns, semantic information, and the like, from the image shallow feature map. These features may reflect the high-level semantic information and the inherent meaning of the graph.
Accordingly, as shown in fig. 4, in step S162, extracting the shallow features and the deep features of the serialized image to be compressed to obtain an image shallow feature map and an image deep feature map, which includes: s1621, passing the to-be-compressed serialized image through an image shallow feature extractor based on a first convolutional neural network model to obtain the image shallow feature map; and S1622, passing the image shallow feature map through an image deep feature extractor based on a second convolutional neural network model to obtain the image deep feature map.
Wherein in step S1621, the serialized image to be compressed is passed through an image shallow feature extractor based on a first convolutional neural network model to obtain the image shallow feature map, including: and respectively carrying out convolution processing, pooling processing and nonlinear activation processing on input data in forward transmission of layers by using each layer of the image shallow feature extractor based on the first convolutional neural network model so as to output the image shallow feature map by the shallow layer of the image shallow feature extractor based on the first convolutional neural network model, wherein the input of the first layer of the image shallow feature extractor based on the first convolutional neural network model is the serialized image to be compressed.
Wherein in step S1622, passing the image shallow feature map through an image deep feature extractor based on a second convolutional neural network model to obtain the image deep feature map, including: and respectively carrying out convolution processing, pooling processing and nonlinear activation processing on input data in forward transmission of layers by using each layer of the image deep feature extractor based on the second convolutional neural network model so as to output the image deep feature map from the deep layer of the image deep feature extractor based on the second convolutional neural network model, wherein the input of the first layer of the image deep feature extractor based on the second convolutional neural network model is the image shallow feature map.
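A single layer of such an extractor, applying convolution, pooling, and nonlinear activation in forward transmission, can be sketched with NumPy. This is a minimal single-channel illustration, not the patent's trained extractor: the kernel is hand-picked, and max pooling with ReLU stands in for whatever pooling and activation the trained models actually use.

```python
import numpy as np

def conv2d(x, kernel):
    # "Valid" convolution (really cross-correlation, as in CNN frameworks)
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # Reduce feature-map dimensionality by keeping the max in each window
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

def layer_forward(x, kernel):
    # One layer: convolution -> pooling -> nonlinear activation
    return relu(max_pool(conv2d(x, kernel)))

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
shallow = layer_forward(img, edge_kernel)   # shallow feature map (edges, texture)
deep = layer_forward(shallow, edge_kernel)  # feeding it onward yields a deeper map
```

Stacking `layer_forward` mirrors how the second extractor takes the first extractor's shallow feature map as its input, with each stage shrinking the spatial dimensions while abstracting the features.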
It is worth mentioning that convolutional neural network (Convolutional Neural Network, CNN) is a deep learning model, mainly used for processing tasks with grid structure data. The convolutional neural network is mainly characterized in that the characteristic representation of an image is automatically learned through components such as a convolutional layer, a pooling layer and a full-connection layer, local characteristics in the image are captured through convolution operation on a local receptive field, the dimensionality of a characteristic diagram is reduced through pooling operation, and finally tasks such as classification or regression are performed through the full-connection layer. The convolution layer uses a convolution kernel (also called a filter) to convolve the input image to extract features in the image. The convolution operation may capture spatial locality of the image, such as edges, textures, etc. The output of the convolution layer is called a feature map. The pooling layer is used to reduce the size of the feature map and retain the primary feature information. Common Pooling operations include Max Pooling and Average Pooling, which select the maximum or Average value in the receptive field as output, respectively. The full connection layer converts the feature map into a final output form, such as a class label or regression value. Neurons in the fully connected layer are connected with all neurons in the previous layer, and information mapping and conversion are performed by learning weights and biases. The convolutional neural network gradually extracts high-level feature representations of the image through multi-layer convolution and pooling operations. Because the weight sharing and the parameter quantity of the convolution layer are less, the convolution neural network can effectively learn and extract the spatial local features in the image, and has better feature extraction capability and generalization performance.
Further, a residual information enhancement fusion module is used for fusing the image shallow feature map and the image deep feature map to obtain a semantic information masking image shallow feature map. Here, the residual information enhancement fusion module aims to enhance different information expressed between different layers and keep target feature information as far as possible. Specifically, the module mainly uses the difference between the large-scale characteristic diagram and the small-scale characteristic diagram to realize the enhancement of the information of the small-scale characteristic diagram.
Accordingly, in step S163, fusing the image shallow feature map and the image deep feature map to obtain a semantic information masked image shallow feature map, including: and fusing the image shallow feature map and the image deep feature map by using a residual information enhancement fusion module to obtain the semantic information masking image shallow feature map.
It is worth mentioning that the residual information enhancement fusion module is a module for fusing the shallow feature map and the deep feature map of the image, and aims to improve the performance and the expression capability of the fusion result. In deep neural networks, residual connection is a cross-layer connection method that passes learned residual information directly to subsequent layers by summing the output of the previous layer with the input of the subsequent layers. The connection mode can help the network to learn and transfer information better, effectively solves the problems of gradient disappearance, gradient explosion and the like, and improves the training effect and performance of the network. The residual information enhancement fusion module fuses the shallow feature map and the deep feature map of the image by utilizing the idea of residual connection. The fusion mode can be simple addition operation or splicing operation. By fusing the shallow feature map and the deep feature map, the module can utilize the high-level semantic information of the low-level features and the deep feature map of the shallow feature map to improve the expression capacity of the image features and the reservation degree of the semantic information. The use of the residual information enhanced fusion module can help to improve the effect of image compression. By fusing the shallow layer features and the deep layer features, important information of the image can be better reserved, and compression quality and reconstruction effect of the image are improved.
Specifically, as shown in fig. 5, the fusing the image shallow feature map and the image deep feature map by using a residual information enhancement fusion module to obtain the semantic information masked image shallow feature map includes: s1631, performing up-sampling and convolution processing on the image deep feature map to obtain a reconstructed image deep feature map; s1632, calculating a difference value according to positions between the reconstructed image deep feature map and the image shallow feature map to obtain a difference feature map; s1633, performing nonlinear activation processing on the difference feature map based on the Sigmoid function to obtain a mask feature map; s1634, performing point multiplication on the image shallow feature map and the mask feature map to obtain a fusion feature map; and S1635, performing attention-based PMA pooling operation on the fusion feature map to obtain the semantic information masking image shallow feature map.
It will be appreciated that in step S1631, the image depth map is up-sampled and convolved to obtain a reconstructed image depth map, which is intended to be scaled to the same size as the shallow map for subsequent fusion operations. In step S1632, by calculating the difference of the two feature maps, a minute change and difference between them can be captured. In step S1633, the values of the difference feature map are mapped to a range of 0 to 1 by applying a Sigmoid function, resulting in a mask feature map, which can be regarded as an attention mechanism for indicating which positions in the shallow feature map need to be emphasized or suppressed. In step S1634, the shallow feature map may be weighted by applying the mask feature map to the shallow feature map to highlight or suppress a particular feature, so that semantic information may be introduced into the shallow feature map to enhance its semantic expressive power. In step S1635, the PMA (Positional Maximum Attention) pooling operation is an attention mechanism, which selects the most representative feature according to the position information in the feature map, and through the PMA pooling operation, semantic information in the image can be further extracted and enhanced, so as to obtain a shallow feature map of the semantic information masked image. Through the steps, the shallow layer characteristics and the deep layer characteristics of the image can be fused together by using the residual information enhancement fusion module, so that a shallow layer characteristic diagram which is richer and has semantic information is obtained and is used for subsequent image compression or other tasks.
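Steps S1631 through S1635 can be sketched with NumPy under clearly stated assumptions: nearest-neighbour upsampling stands in for the upsampling-plus-convolution of S1631, and a fixed per-window maximum stands in for the attention-based PMA pooling of S1635, which in the patent is a learned mechanism.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_fusion(shallow, deep):
    # shallow: (C, H, W); deep: (C, H//2, W//2)
    # S1631: upsample the deep map to the shallow map's size
    reconstructed = np.repeat(np.repeat(deep, 2, axis=1), 2, axis=2)
    # S1632: position-wise difference between the two maps
    diff = reconstructed - shallow
    # S1633: Sigmoid turns the difference into a 0..1 attention mask
    mask = sigmoid(diff)
    # S1634: point-multiply the shallow map by the mask to weight its features
    fused = shallow * mask
    # S1635: stand-in for PMA pooling -- per-channel maxima over 2x2 windows
    c, h, w = fused.shape
    return fused.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

rng = np.random.default_rng(1)
shallow_map = rng.standard_normal((3, 4, 4))
deep_map = rng.standard_normal((3, 2, 2))
masked = residual_fusion(shallow_map, deep_map)  # semantic-masked shallow map
```

The mask is the key design choice: positions where the deep and shallow maps disagree get weights near 0 or 1, so the deep map's semantics decide which shallow features survive into the fused result.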
And then, carrying out global pooling on the semantic information mask image shallow feature map along the channel dimension to obtain a semantic information mask image shallow feature matrix as compressed data. The global pooling operation can perform operations such as average or maximum value on all pixels on each channel, so as to obtain a feature matrix containing global information, namely, the semantic information mask image shallow feature matrix. This can greatly reduce the amount of data while retaining sufficient information.
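As a minimal illustration of this pooling step, assuming average pooling (the passage allows average or maximum), a (C, H, W) feature map collapses along the channel axis into an (H, W) matrix, one value per spatial position:

```python
import numpy as np

# Global average pooling along the channel dimension: every spatial position
# keeps one value, the mean of that position's values across all C channels.
feature_map = np.arange(24, dtype=float).reshape(2, 3, 4)  # (C=2, H=3, W=4)
feature_matrix = feature_map.mean(axis=0)                  # shape (3, 4)
```

Swapping `mean` for `max` gives the maximum-pooling variant; either way the data volume drops by a factor of C while global per-position information is retained.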
Meanwhile, in the technical scheme of the application, more specifically, in a decoding stage, the shallow feature matrix of the semantic information mask image is decoded by a decoder to generate the serialized image.
Accordingly, in step S180, the compressed data is returned to the browser through the WebAPI, and the browser parses the compressed data to obtain the serialized image, and deserializes the serialized image to obtain a final picture, including: and decoding the optimized semantic information mask image shallow feature matrix through a decoder to generate the serialized image.
It is worth mentioning that the decoder is a component or algorithm for converting compressed data into a visual image; in image compression, the decoder is responsible for decoding the compressed data and restoring the original image. In particular, to decode the optimized semantic information mask image shallow feature matrix, the decoder may employ a specific algorithm or model to restore the serialized image. The design of the decoder generally corresponds to the encoder of the compression algorithm or model to ensure interoperability of the compression and decompression processes. The function of the decoder is to invert the optimized semantic information mask image shallow feature matrix according to the information in the compressed data and the coding rule so as to restore the serialized image. This may involve decoding the compressed feature representation, decoding the quantized information, deconvolution operations, and unpooling operations, among other steps, to recover the detail and quality of the original image. The implementation of a particular decoder may be determined according to a particular compression algorithm or model. Common image compression algorithms include JPEG, PNG, HEVC, etc., each with a corresponding decoder implementation. In addition, some deep learning models, such as autoencoders and generative adversarial networks (GANs), may also be used as decoders to restore the image. In summary, a decoder is a component or algorithm for converting compressed data into a visual image.
Here, the image shallow feature map and the image deep feature map express the shallow and deep image semantic features of the to-be-compressed serialized image, respectively. After the two maps are fused by the residual information enhancement fusion module, the resulting semantic information masking image shallow feature map expresses both the shallow and the deep image semantics of the to-be-compressed serialized image, and additionally contains the interlayer residual image semantics extracted by the residual information enhancement fusion module, which enhances the image semantic expression dimension of the to-be-compressed serialized image. However, because of the semantic feature distribution differences across these cross-dimensional image semantics, the semantic information masking image shallow feature map exhibits sparse local feature distributions corresponding to the semantic feature distribution of each dimension; that is, its sub-manifolds are sparsely distributed relative to the overall high-dimensional feature manifold. Consequently, when the semantic information masking image shallow feature matrix, obtained by global pooling of the feature map along the channel dimension, undergoes probability regression mapping through the decoder, its convergence to the predetermined regression probability category representation in the probability space is poor, which affects the image quality of the decoded serialized image. Therefore, the semantic information mask image shallow feature matrix is preferably optimized by position-wise feature values.
Accordingly, in step S164, as shown in fig. 6, generating the compressed data based on the semantic information masking image shallow feature map includes: S1641, globally pooling the semantic information mask image shallow feature map along the channel dimension to obtain a semantic information mask image shallow feature matrix; and S1642, performing feature distribution optimization on the semantic information mask image shallow feature matrix to obtain an optimized semantic information mask image shallow feature matrix as the compressed data.
It should be appreciated that global pooling, in step S1641, is an operation that obtains a representation of the entire feature map by aggregating the features on each channel; its purpose is to reduce the feature map to a fixed-size representation for subsequent processing. In step S1642, feature distribution optimization is an operation that adjusts the distribution and weighting of the features in the feature matrix to achieve a better compression effect or a better representation capability; the purpose of this step is to optimize the semantic information mask image shallow feature matrix so that it is more suitable for compression and can restore the quality and detail of the original image at decoding time. Through these steps, the compressed data generated based on the semantic information mask image shallow feature map comprises the optimized semantic information mask image shallow feature matrix, obtained by global pooling followed by feature distribution optimization. This compressed data may be further transmitted or stored, and used by a decoder to restore the original image.
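As an illustrative sketch of step S1641, pooling a (C, H, W) feature map along the channel dimension yields an (H, W) feature matrix. Mean pooling is assumed here, since the embodiment does not fix the pooling operator:

```python
import numpy as np

def global_pool_channels(feature_map: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) feature map along the channel dimension,
    yielding an (H, W) feature matrix (mean pooling assumed)."""
    assert feature_map.ndim == 3, "expected (channels, height, width)"
    return feature_map.mean(axis=0)
```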
In step S1642, performing feature distribution optimization on the semantic information mask image shallow feature matrix to obtain the optimized semantic information mask image shallow feature matrix as the compressed data includes: performing feature distribution optimization on the semantic information mask image shallow feature matrix using the following optimization formula to obtain the optimized semantic information mask image shallow feature matrix; wherein the optimization formula is:
$$v_i' = \frac{\exp(v_i)}{\sum_{j} \exp(v_j)}$$

wherein $v_i$ is the $i$-th feature value of the semantic information mask image shallow feature matrix, $\exp(\cdot)$ denotes the exponential operation of raising the natural constant $e$ to the power of the given value, and $v_i'$ is the $i$-th feature value of the optimized semantic information mask image shallow feature matrix.
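Interpreting the optimization formula above as a softmax-style normalization over the feature values (an interpretation, since the embodiment names only the natural-exponential operation), a minimal standard-library sketch is as follows; the max-shift is a conventional numerical-stability detail, not part of the formula:

```python
import math

def optimize_feature_matrix(values):
    """Re-probability style normalization: v_i' = exp(v_i) / sum_j exp(v_j).
    Shifting every value by the maximum keeps exp() numerically stable
    without changing the result."""
    flat = [v for row in values for v in row]
    m = max(flat)
    denom = sum(math.exp(v - m) for v in flat)
    return [[math.exp(v - m) / denom for v in row] for row in values]
```

The normalized values are non-negative and sum to one, which matches the intent of mapping the feature manifold into a probability space.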
That is, the sparse distribution in the high-dimensional feature space is processed by re-probability-based regularization, which activates the natural distribution transfer of the geometric manifold of the semantic information mask image shallow feature matrix from the high-dimensional feature space to the probability space. In this way, the sparsely distributed sub-manifolds of the high-dimensional feature manifold of the semantic information mask image shallow feature matrix are smoothly regularized in a probabilistic manner, which improves the convergence of the complex, highly sparse high-dimensional feature manifold under the predetermined regression probability, and thereby improves the image quality of the serialized image generated by decoding the semantic information mask image shallow feature matrix.
Notably, re-probability-based regularization (probabilistic regularization) is a technique commonly used in machine learning and optimization to handle sparse distributions in high-dimensional feature spaces and to improve the generalization performance of a model. In image compression, re-probability-based regularization may be applied to the semantic information mask image shallow feature matrix to improve the image quality of the serialized image generated by decoding. Specifically, it maps the high-dimensional feature manifold to a natural distribution in probability space and introduces a smoothing regularization over the high-dimensional feature manifold, which facilitates the convergence of features on the sparse sub-manifolds under the predetermined regression probability. The purpose of this regularization is to constrain the distribution of the features so that they are smoother and denser in the high-dimensional space, thereby improving the generalization ability of the model and the image quality. By introducing the probability distribution and the regularization term, the features are smoothed and constrained, distributed more uniformly in the high-dimensional space, and the influence of sparsity and noise is reduced.
Re-probability-based regularization can be implemented by a variety of methods, for example using a probabilistic graphical model, a maximum entropy model, or a Gaussian distribution. In summary, re-probability-based regularization is a technique that handles sparse distributions in high-dimensional feature space by mapping the high-dimensional feature manifold to probability space and introducing a smoothing regularization. In image compression, it can be used to improve the image quality obtained from the semantic information mask image shallow feature matrix.
In summary, the web page display method for ultra-large data curves according to the embodiments of the present application has been described; it can improve the drawing efficiency of the drawing plug-in.
FIG. 7 is a block diagram of a web page presentation system 100 of an ultra-large volume data curve according to an embodiment of the present application. As shown in fig. 7, a web page display system 100 of an ultra-large volume data curve according to an embodiment of the present application includes: a drawing image setting module 110 for logging in a website through a browser and completing setting of drawing images; a sending module 120, configured to send the calculation instruction to a calculation service through WebAPI to obtain a calculation result; a calculation result storing module 130, configured to store the calculation result in a distributed database; a drawing service calling module 140, configured to call a drawing service to complete drawing of the drawing image according to the set content of the drawing image in the calculation instruction; a serialization processing module 150, configured to perform binary-based serialization processing on the drawn image to obtain a serialized image to be compressed; a compression module 160, configured to compress the serialized image to be compressed to obtain compressed data; a compressed data storing module 170, configured to store the compressed data into the distributed database; the deserializing module 180 is configured to return the compressed data to the browser through the WebAPI, and the browser parses the compressed data to obtain the serialized image, and deserializes the serialized image to obtain a final picture; and a final picture display module 190 for displaying the final picture.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective modules in the above-described ultra-large data curve web page display system 100 have been described in detail in the above description of the ultra-large data curve web page display method with reference to figs. 1 to 6, and thus repetitive descriptions thereof will be omitted.
As described above, the web page display system 100 of the ultra-large data curve according to the embodiments of the present application may be implemented in various wireless terminals, for example, a server deploying a web page display algorithm for ultra-large data curves. In one example, the web page display system 100 of the ultra-large data curve according to embodiments of the present application may be integrated into a wireless terminal as a software module and/or a hardware module. For example, the web page display system 100 of the ultra-large data curve may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the web page display system 100 of the ultra-large data curve may also be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the ultra-large data curve web page display system 100 and the wireless terminal may be separate devices, and the ultra-large data curve web page display system 100 may be connected to the wireless terminal through a wired and/or wireless network to transmit interactive information in an agreed data format.
Fig. 8 is an application scenario diagram of a web page display method of an ultra-large data curve according to an embodiment of the present application. As shown in fig. 8, in this application scenario, first, a to-be-compressed serialized image (e.g., D illustrated in fig. 8) is acquired, and then, the to-be-compressed serialized image is input to a server (e.g., S illustrated in fig. 8) where a web page display algorithm of an ultra-large number of data curves is deployed, where the server can process the to-be-compressed serialized image using the web page display algorithm of the ultra-large number of data curves to obtain a final picture.
It should be understood that the web page display method of the ultra-large data curve is intended to solve the problem that when a drawing plug-in draws tens of thousands or even hundreds of thousands of points on a web page at one time, it is severely inefficient or even unresponsive, and that a fast-responding scientific drawing client is difficult to integrate deeply with a network service.
In one example of the present application, in conjunction with fig. 9, a web page display method for ultra-large data curves includes the following steps: logging in to a website through a browser and, after completing the settings, triggering a calculation; sending the calculation instruction to the related calculation service through the WebAPI, obtaining the calculation result, and storing the calculation result in a distributed database; after the calculation is completed, calling the drawing service to complete drawing of the image according to the drawing image settings in the instruction; serializing the drawn image into binary form, compressing it in size, and storing it in the distributed database; returning the image serialization data to the browser through the WebAPI; and the browser parsing the data, deserializing it, and displaying the picture.
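The serialize-compress-store-return flow above can be sketched with standard-library stand-ins. The in-memory dictionary replaces the distributed database and the functions model the WebAPI endpoints, so all names here are hypothetical illustrations, not the embodiment's actual interfaces:

```python
import base64
import pickle
import zlib

DB = {}  # stand-in for the distributed in-memory database

def store_drawn_image(key: str, pixels) -> None:
    """Serialize the drawn image to binary, compress it, and store it."""
    blob = zlib.compress(pickle.dumps(pixels), level=9)
    DB[key] = blob

def fetch_for_browser(key: str) -> str:
    """Return the compressed image as text the WebAPI can ship to a browser."""
    return base64.b64encode(DB[key]).decode("ascii")

def browser_deserialize(payload: str):
    """Browser side: parse, decompress, and deserialize to the final picture."""
    return pickle.loads(zlib.decompress(base64.b64decode(payload)))
```

In a real deployment the browser side would of course run JavaScript and the store would be a networked database; the sketch only shows the round trip.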
The distributed database ensures a short response time when a large number of pictures are accessed, achieving both the high drawing efficiency of an integrated drawing client and the convenience of web page access; the pictures to be transmitted are compressed through serialization and deserialization, which improves the transmission efficiency.
Correspondingly, drawing a large amount of data at the server side reduces the data reading time, so that even larger data sets can be plotted very quickly. The main factor affecting picture display then becomes the transmission speed of the picture. The generated pictures are stored directly in a distributed in-memory database and serialized, which avoids time-consuming hard disk reads and writes, greatly reduces the image data volume, and reduces the transmission pressure. Most web page drawing plug-ins cannot handle the drawing of hundreds of thousands of points, whereas this method enables efficient, stable, highly concurrent, near-real-time drawing of large amounts of data at the web page end.
It should be understood that the drawn image may alternatively be stored in a common database, or transmitted directly by image transfer or another compression method without serialization/deserialization; the data volume may also be reduced by sampling, averaging, or similar methods on dense data points and drawn directly on the web page, although this may cause deformation and distortion of the graphics.
Furthermore, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of a number of patentable categories or circumstances, including any novel and useful process, machine, product, or composition of matter, or any novel and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media and comprising computer-readable program code.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present application and is not to be construed as limiting thereof. Although a few exemplary embodiments of this application have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this application. Accordingly, all such modifications are intended to be included within the scope of this application as defined in the claims. It is to be understood that the foregoing is illustrative of the present application and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The application is defined by the claims and their equivalents.

Claims (3)

1. A webpage display method of an ultra-large data curve is characterized by comprising the following steps:
logging in a website through a browser and finishing setting of drawing images;
the calculation instruction is sent to the calculation service through the WebAPI to obtain a calculation result;
the calculation result is stored in a distributed database;
according to the set content of the drawing image in the calculation instruction, calling drawing service to finish drawing of the drawing image;
carrying out binary-based serialization processing on the drawn image to obtain a serialization image to be compressed;
compressing the serialized image to be compressed to obtain compressed data, which comprises:
acquiring the to-be-compressed serialized image;
extracting shallow layer features and deep layer features of the to-be-compressed serialized image to obtain an image shallow layer feature map and an image deep layer feature map;
fusing the image shallow feature map and the image deep feature map to obtain a semantic information masking image shallow feature map;
and generating the compressed data based on the semantic information masking image shallow feature map;
storing the compressed data in the distributed database;
the compressed data is returned to the browser through the WebAPI, the browser analyzes the compressed data to obtain the serialized image, and the serialized image is deserialized to obtain a final picture;
displaying the final picture;
the extracting shallow layer features and deep layer features of the to-be-compressed serialized image to obtain an image shallow layer feature map and an image deep layer feature map comprises the following steps:
passing the to-be-compressed serialized image through an image shallow feature extractor based on a first convolutional neural network model to obtain the image shallow feature map;
and passing the image shallow feature map through an image deep feature extractor based on a second convolutional neural network model to obtain the image deep feature map;
the method for compressing the serialized image to be compressed through an image shallow feature extractor based on a first convolutional neural network model to obtain the image shallow feature map comprises the following steps:
using each layer of the image shallow feature extractor based on the first convolutional neural network model, respectively carrying out convolution processing, pooling processing and nonlinear activation processing on input data in forward transmission of the layers to output the image shallow feature map by the shallow layer of the image shallow feature extractor based on the first convolutional neural network model, wherein the input of the first layer of the image shallow feature extractor based on the first convolutional neural network model is the serialized image to be compressed;
wherein, the image shallow feature map is passed through an image deep feature extractor based on a second convolutional neural network model to obtain the image deep feature map, comprising:
using each layer of the image deep feature extractor based on the second convolutional neural network model, respectively performing convolution processing, pooling processing and nonlinear activation processing on input data in forward transmission of the layers to output the image deep feature map from the deep layer of the image deep feature extractor based on the second convolutional neural network model, wherein the input of the first layer of the image deep feature extractor based on the second convolutional neural network model is the image shallow feature map;
the step of fusing the image shallow feature map and the image deep feature map to obtain a semantic information masking image shallow feature map comprises the following steps:
fusing the image shallow feature map and the image deep feature map by using a residual information enhancement fusion module to obtain the semantic information masking image shallow feature map;
the method for fusing the image shallow feature map and the image deep feature map by using a residual information enhancement fusion module to obtain the semantic information masking image shallow feature map comprises the following steps:
performing up-sampling and convolution processing on the image deep feature map to obtain a reconstructed image deep feature map;
calculating a difference value according to positions between the reconstructed image deep feature map and the image shallow feature map to obtain a difference feature map;
performing nonlinear activation processing on the difference feature map based on a Sigmoid function to obtain a mask feature map;
performing point multiplication on the image shallow feature map and the mask feature map to obtain a fusion feature map;
and performing attention-based PMA pooling operation on the fusion feature map to obtain the semantic information masked image shallow feature map.
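The fusion steps recited above can be sketched as follows. For brevity, nearest-neighbour upsampling stands in for the claimed up-sampling-plus-convolution, and the attention-based PMA pooling is omitted, so this is an illustrative approximation rather than the claimed module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_fusion(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Sketch of the claimed fusion: upsample the deep map to the shallow
    map's size, take the position-wise difference, squash it into a mask
    with a Sigmoid, and gate the shallow map with that mask."""
    factor = shallow.shape[-1] // deep.shape[-1]
    reconstructed = np.repeat(np.repeat(deep, factor, axis=-2), factor, axis=-1)
    difference = reconstructed - shallow   # position-wise difference
    mask = sigmoid(difference)             # Sigmoid-based activation -> mask
    fused = shallow * mask                 # point multiplication
    return fused
```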
2. The method of claim 1, wherein generating the compressed data based on the semantic information masked image shallow feature map comprises:
global pooling is carried out on the semantic information mask image shallow feature map along the channel dimension so as to obtain a semantic information mask image shallow feature matrix;
and performing feature distribution optimization on the semantic information mask image shallow feature matrix to obtain an optimized semantic information mask image shallow feature matrix as the compressed data.
3. The method for displaying a web page with an ultra-large number of data curves according to claim 2, wherein the compressed data is returned to the browser through the WebAPI, the browser parsing the compressed data to obtain the serialized image and deserializing the serialized image to obtain a final picture, comprising:
and decoding the optimized semantic information mask image shallow feature matrix through a decoder to generate the serialized image.
CN202311630590.1A 2023-12-01 2023-12-01 Webpage display method of ultra-large data curve Active CN117332179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311630590.1A CN117332179B (en) 2023-12-01 2023-12-01 Webpage display method of ultra-large data curve


Publications (2)

Publication Number Publication Date
CN117332179A (en) 2024-01-02
CN117332179B (en) 2024-02-06

Family

ID=89277821


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404018A (en) * 2002-09-29 2003-03-19 西安交通大学 Intelligent scene drawing system and drawing & processing method in computer network environment
CN110427446A (en) * 2019-08-02 2019-11-08 武汉中地数码科技有限公司 A kind of huge image data service release quickly and browsing method and system
WO2022199143A1 (en) * 2021-03-26 2022-09-29 南京邮电大学 Medical image segmentation method based on u-shaped network
CN116189179A (en) * 2023-04-28 2023-05-30 北京航空航天大学杭州创新研究院 Circulating tumor cell scanning analysis equipment
CN116343015A (en) * 2023-04-10 2023-06-27 阜外华中心血管病医院 Medical food water content measurement system based on artificial intelligence


Non-Patent Citations (1)

Title
Recognition of gastric precancerous diseases based on fusion of shallow and deep features; Pan Yanqi; Chen Rui; Zhang Xu; Zhang Xinsen; Liu Jiquan; Hu Weiling; Duan Huilong; Si Jianmin; Chinese Journal of Biomedical Engineering (Issue 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant