CN115659175A - Multi-modal data analysis method, device and medium for micro-service resources

Info

Publication number: CN115659175A
Application number: CN202211258044.5A
Authority: CN (China)
Prior art keywords: data, text, model, image data, image
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventors: 乔林, 陈硕, 曲睿婷, 雷振江, 王飞, 胡楠, 齐俊, 教传铭, 李冬, 刘江, 宋跃明
Current and original assignees: State Grid Corp of China SGCC; Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Priority and filing date: 2022-10-13 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Publication date: 2023-01-31
Application filed by: State Grid Corp of China SGCC; Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd

Abstract

The invention discloses a multi-modal data analysis method, device and medium oriented to micro-service resources. The method comprises: acquiring multi-modal data of different micro-service component resources, the multi-modal data comprising text data and image data; encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and original text data; training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations; and classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data. The method encodes image data and text data with the ResNet model and the Transformer model, reducing the loss incurred in data vectorization, and uses the CLIP model to align the high-level feature representations of the multi-modal data, improving the accuracy of multi-modal data alignment.

Description

Multi-modal data analysis method, device and medium for micro-service resources
Technical Field
The invention relates to the technical field of data analysis, and in particular to a multi-modal data analysis method, device and medium oriented to micro-service resources.
Background
With the development of information technology, the cloud data centers of many large companies apply technologies such as cloud computing, virtualization and micro-services on a large scale. Compared with a traditional data center, a cloud data center generally adopts an internet micro-service architecture for its operation and maintenance applications. A business application system may be composed of multiple micro-services whose nodes are associated with one another, making business application relationships more complex; and because micro-services rely on various open-source components and software, the faults that occur demand higher processing skill and entail greater operation and maintenance difficulty.
Under the traditional architecture, physical servers and the like provide operating resources for business applications, and the resource supply is relatively fixed; when business application resources are insufficient, resource scheduling and arrangement are coordinated through information-department scheduling, and the time consumed is measured in days. In a cloud platform environment, a large number of business applications are distributed across a resource pool composed of underlying cloud platform computing resources, flexible and automatic horizontal and vertical scaling of resources can be achieved, and the time consumed can reach the level of seconds or minutes; however, the cloud platform resource pool is built from massive hardware resources with large differences in resource configuration, which places high demands on accurate resource identification and intelligent scheduling strategies.
The disadvantage of the prior art is that the strong coupling of resources and business applications leaves applications under the traditional architecture weak in robustness and flexibility. In a cloud environment, a micro-service architecture brings better scalability, independent upgradability, easier maintenance, service robustness and other capabilities, as a large service is split into independent service application modules by functional responsibility. However, as the scale of power grid services continues to grow, the number of corresponding services also increases and the calling relationships among services become ever more complicated, posing greater challenges for fault discovery and rapid localization.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a multi-modal data analysis method, device and medium oriented to micro-service resources, so as to solve the problems set out in the background.
In a first aspect, an embodiment of the present invention provides a multi-modal data analysis method for micro-service resources, comprising:
S1, acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
S2, encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
S3, training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and S4, classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
As a further aspect of the invention: the specific steps in step S2 include:
encoding the image data based on an improved ResNet model to obtain a high-level feature representation of the image data; and
encoding the text data based on a Transformer model to obtain a high-level feature representation of the text data.
As a further aspect of the invention: the specific steps of encoding image data based on the improved ResNet model include:
performing picture preprocessing on the image data extracted from the obtained multi-modal data: setting the picture input resolution, cropping the picture with a center-cropping method on the basis of picture scaling, and normalizing the scaled and cropped picture;
forming a feature set by extracting features of different dimensions from the normalized image data; selecting sample points and extracting their M-dimensional features, so that the features of each sample form an M×N matrix, and enhancing the original image data by random erasing and contrast transformation; splitting the data set into a training set and a test set in proportion, converting the training set into binary files, adding sample labels, and feeding the resulting TFRecords files to the ResNet model as input data;
improving the convolutional layers of the ResNet model with a projection shortcut, wherein the original projection shortcut is replaced by a 3×3 max-pooling layer with stride 2 followed by a 1×1 convolutional layer with stride 1, used to add together features of different sizes before the feature dimension of the residual network changes; and then automatically introducing sparsity into the ResNet model with the sparse activation function ReLU;
and training the ResNet model to obtain the high-level feature representation of the image data.
As a further aspect of the invention: the specific steps of encoding the text data based on the Transformer model include:
performing text preprocessing through word segmentation and stop-word removal, and using a BERT model to obtain a vectorized representation of the text;
and, according to the classification labels of the task, constructing a description text for each category from the vectorized text data, and using the encoder of a Transformer model as a feature extractor to extract features from the text data, obtaining the internal information of the text data and thereby the high-level feature representation of the text data.
As a further scheme of the invention: the specific steps in step S3 include:
using a ResNet model as the Image Encoder of the CLIP model and a Transformer model as the Text Encoder of the CLIP model to extract image features and text features respectively, the CLIP model performing contrastive learning on the extracted text features and image features;
for a training batch containing N text-image pairs, combining the N text features and the N image features pairwise, so that the CLIP model predicts the similarities of the N² possible text-image pairs, each similarity being computed directly as the cosine similarity between the text features and the image features; the training objective of the CLIP model is to maximize the similarity of the N positive pairs while minimizing the similarity of the N² - N negative pairs.
As a further aspect of the invention: the specific steps in the step S4 include:
adding a weight coefficient W_n on the basis of the traditional cross-entropy loss function, the expression of the improved cross-entropy loss function is:

L = -\frac{1}{N} \sum_{n=1}^{N} W_n \sum_{i} y_{n,i} \log p_{n,i}

where N denotes the total number of samples, y_{n,i} is the one-hot label indicating whether the class of the n-th sample is i, and p_{n,i} denotes the predicted probability that the class of the n-th sample is i;
and classifying the image data and the text data with the improved cross-entropy loss function to obtain the analyzed multi-modal data.
In a second aspect, an embodiment of the present invention provides a multi-modal data analysis device for micro-service resources, comprising:
the data acquisition module is used for acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
the data processing module is used for encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
the feature analysis module is used for training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and the data classification module is used for classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
As a further aspect of the invention: the data acquisition module further comprises a first acquisition unit and a second acquisition unit, wherein:
the first acquisition unit is used for acquiring text data in the multi-modal data;
the second acquisition unit is used for acquiring the image data in the multi-modal data.
As a further aspect of the invention: the feature analysis module is connected to the data output end of the data acquisition module and is used for analyzing the multi-modal data after data encoding.
In a third aspect, an embodiment of the present invention provides a storage medium storing processor-executable instructions which, when executed by a processor, implement the micro-service-resource-oriented multi-modal data analysis method described in any one of the above.
Compared with the prior art, the invention has the following technical effects:
By adopting the above technical solution, the improved ResNet model and the Transformer model can effectively encode image and text data, reducing the loss incurred when image and text data are vectorized. Meanwhile, the CLIP model is used to align the high-level feature representations of the multi-modal data, so that matched image-text pairs can be obtained effectively and the accuracy of multi-modal data alignment is improved. The characteristics of micro-service resources can thus be used effectively, realizing efficient analysis of multi-modal data.
A concrete application case is the multi-modal operation and maintenance data generated in the multi-level, diversified service scenarios run by associated micro-services, such as cloud platform host equipment, platform software, information systems and micro-services. Through the content of the invention, high-level feature representations of the multi-modal data (mainly comprising text and images) are obtained for fault scenarios occurring at the host equipment, platform software, information system and micro-service layers, realizing alignment of the multi-modal data and laying a foundation for the subsequent construction of an anomaly knowledge base, proactive fault early warning, fault cause analysis and scheduling decision support.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a schematic illustration of steps of a multi-modal data analysis method according to some embodiments of the present disclosure;
FIG. 2 is a block flow diagram of a multi-modal data analysis method according to some embodiments of the present disclosure;
FIG. 3 is a diagram of an improved ResNet model according to some embodiments of the present disclosure;
FIG. 4 is a diagram of a Transformer encoder architecture according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 and fig. 2, in an embodiment of the present invention, a multi-modal data analysis method for micro-service resources includes:
the method includes the steps that S1, multi-mode data of different micro-service component resources are obtained, wherein the multi-mode data comprise text data and image data;
specifically, text data and image data of multi-modal data of different micro-service component resources are extracted for subsequent data analysis;
s2, respectively coding the image data and the text data through a ResNet model and a Transformer model to obtain high-level feature representations of the original image data and the original text data, wherein the method specifically comprises the following steps:
s21, coding the image data based on the improved ResNet model to obtain high-level feature representation of the image data; in this embodiment, the specific steps include:
according to image data extracted from the obtained multi-modal data, carrying out picture preprocessing, setting picture input resolution, cutting the picture by adopting a center cutting method on the basis of picture scaling, and carrying out normalization processing on the scaled and cut picture;
the feature set is formed by extracting features of different dimensions of the image data after normalization processing; selecting sample points and extracting M-dimensional characteristics of the sample points, wherein the characteristics of each sample are an M multiplied by N matrix, and the original image data is enhanced by using a random erasing and contrast ratio conversion mode; splitting the data set into a training set and a test set according to a ratio, converting all the training sets into binary files, adding sample labels, and inputting the TFRrecords files obtained through conversion as ResNet model data;
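By way of illustration, the picture preprocessing and data enhancement described above can be sketched as follows, assuming a torchvision-style pipeline; the 224×224 input resolution and the augmentation parameters are illustrative assumptions, not values fixed by this disclosure:

```python
# A minimal sketch of the picture preprocessing and enhancement step.
# Assumes torchvision; resolution and parameters are illustrative only.
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize(256),                    # picture scaling
    T.CenterCrop(224),                # center-cropping to the set input resolution
    T.ColorJitter(contrast=0.4),      # contrast transformation (enhancement)
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],   # normalization processing
                std=[0.229, 0.224, 0.225]),
    T.RandomErasing(p=0.5),           # random erasing (enhancement)
])
```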
As shown in fig. 3, which depicts the improved ResNet model, the convolutional layers of the ResNet model are then improved with a projection shortcut: the original projection shortcut is replaced by a 3×3 max-pooling layer with stride 2 followed by a 1×1 convolutional layer with stride 1, which reduces information loss and adds together features of different sizes before the feature dimension of the residual network changes; then sparsity is automatically introduced with the sparse activation function ReLU in the ResNet model, which alleviates the vanishing-gradient phenomenon. A minimal sketch of the modified shortcut is given below;
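A minimal sketch of this modified projection shortcut, assuming a PyTorch-style ResNet implementation; the function name and channel sizes are illustrative:

```python
import torch
import torch.nn as nn

def improved_projection_shortcut(in_channels: int, out_channels: int) -> nn.Sequential:
    # Replace the original stride-2 1x1 projection with a 3x3 max-pooling
    # layer (stride 2) followed by a 1x1 convolution (stride 1), so that
    # downsampling discards less information before the feature dimension
    # of the residual network changes.
    return nn.Sequential(
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(out_channels),
    )

# Usage in a residual block whose feature dimension changes:
shortcut = improved_projection_shortcut(64, 128)
x = torch.randn(1, 64, 56, 56)
print(shortcut(x).shape)  # torch.Size([1, 128, 28, 28])
```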
Finally, the ResNet model is trained to obtain the high-level feature representation of the image data.
S22, encoding the text data based on the Transformer model to obtain a high-level feature representation of the text data.
In this embodiment, the specific steps include:
As shown in fig. 4, which depicts the structure of the Transformer encoder, text preprocessing is performed through word segmentation and stop-word removal, and a BERT model is used to obtain a vectorized representation of the text;
then, according to the classification labels of the task, a description text is constructed for each category from the vectorized text data, and the encoder of the Transformer model is used as a feature extractor to extract features from the text data, obtaining the internal information of the text data and thereby the high-level feature representation of the text data, for example as sketched below.
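A minimal sketch of this text-encoding step, assuming the Hugging Face transformers package; here BERT's own encoder stack plays the Transformer feature-extractor role, and the checkpoint name, placeholder input and [CLS]-pooling choice are illustrative assumptions:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoder = BertModel.from_pretrained("bert-base-chinese")

texts = ["<fault description text>"]  # placeholder for preprocessed text data
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**batch)
# Take the [CLS] vector as the high-level feature representation of the text.
text_features = outputs.last_hidden_state[:, 0]
print(text_features.shape)  # torch.Size([1, 768])
```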
S3, training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations, which specifically comprises the following steps:
using the ResNet model as the Image Encoder of the CLIP model and the Transformer model as the Text Encoder of the CLIP model to extract image features and text features respectively, the CLIP model performing contrastive learning on the extracted text features and image features;
for a training batch containing N text-image pairs, combining the N text features and the N image features pairwise, so that the CLIP model predicts the similarities of the N² possible text-image pairs, each similarity being computed directly as the cosine similarity between the text features and the image features; the training objective of the CLIP model is to maximize the similarity of the N positive pairs while minimizing the similarity of the N² - N negative pairs. A minimal sketch of this objective is given below.
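A minimal sketch of this contrastive objective, assuming PyTorch and batch feature matrices of shape (N, d); the temperature value is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # Cosine similarity of every image with every text: an N x N matrix
    # whose diagonal holds the N positive pairs and whose off-diagonal
    # entries hold the N^2 - N negative pairs.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    logits = image_features @ text_features.t() / temperature

    # Maximizing the diagonal while minimizing the rest is realized as a
    # symmetric cross-entropy over rows (image->text) and columns (text->image).
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)
    loss_t = F.cross_entropy(logits.t(), targets)
    return (loss_i + loss_t) / 2

# Usage with a batch of N pre-computed high-level features:
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt))
```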
S4, classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data, which specifically comprises the following steps:
adding a weight coefficient W_n to the traditional cross-entropy loss function, the expression of the improved cross-entropy loss function becomes:

L = -\frac{1}{N} \sum_{n=1}^{N} W_n \sum_{i} y_{n,i} \log p_{n,i}

where N denotes the total number of samples, y_{n,i} is the one-hot label indicating whether the class of the n-th sample is i, and p_{n,i} denotes the predicted probability that the class of the n-th sample is i;
and the image data and the text data are classified with the improved cross-entropy loss function to obtain the analyzed multi-modal data; a classification sketch follows.
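A minimal classification sketch, assuming PyTorch. nn.CrossEntropyLoss exposes per-class weights, which is one common way to realize a weight coefficient in the loss; note that this is a per-class simplification of the per-sample W_n above, and the class count and weight values are illustrative assumptions:

```python
import torch
import torch.nn as nn

num_classes = 5                            # assumed number of categories
class_weights = torch.ones(num_classes)    # replace with task-specific weights
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(16, num_classes)      # classifier outputs for 16 samples
labels = torch.randint(0, num_classes, (16,))
loss = criterion(logits, labels)           # weighted cross-entropy loss
print(loss.item())
```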
An embodiment of the invention also provides a multi-modal data analysis device for micro-service resources, comprising:
the data acquisition module is used for acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
the data processing module is used for encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
the feature analysis module is used for training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and the data classification module is used for classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
In this embodiment, multi-modal data of different micro-service component resources are acquired through the data acquisition module; the obtained image data and text data are encoded by the data processing module to obtain high-level feature representations of the data; the high-level feature representations are input into the feature analysis module to align the features of the image data and the text data in the space of the high-level feature representations; and finally the data are classified by the data classification module to obtain the analyzed multi-modal data. A schematic sketch of this pipeline follows.
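A schematic sketch of how the four modules could be wired together, assuming each module is available as a callable; all names here are illustrative and not part of the disclosure:

```python
class MultimodalAnalysisDevice:
    def __init__(self, acquire, encode_image, encode_text, align, classify):
        self.acquire = acquire            # data acquisition module
        self.encode_image = encode_image  # data processing module (ResNet branch)
        self.encode_text = encode_text    # data processing module (Transformer branch)
        self.align = align                # feature analysis module (CLIP alignment)
        self.classify = classify          # data classification module

    def run(self, sources):
        images, texts = self.acquire(sources)
        image_features = self.encode_image(images)
        text_features = self.encode_text(texts)
        aligned = self.align(image_features, text_features)
        return self.classify(aligned)
```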
In this embodiment, the data acquisition module further comprises a first acquisition unit and a second acquisition unit:
the first acquisition unit is used for acquiring text data in the multi-modal data;
the second acquisition unit is used for acquiring image data in the multi-modal data;
the first acquisition unit and the second acquisition unit are used for acquiring the different types of data respectively.
In this embodiment, the feature analysis module is connected to the data output end of the data acquisition module, and is configured to analyze the multi-modal data after data encoding.
An embodiment of the present invention further provides a storage medium in which processor-executable instructions are stored; when executed by a processor, the instructions are used to implement the multi-modal data analysis method for micro-service resources described in any one of the above, the method comprising:
S1, acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
S2, encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
S3, training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and S4, classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
As will be appreciated by one skilled in the art, the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The scheme in the embodiments of the invention can be implemented in various computer languages, such as the object-oriented programming language Java and the interpreted scripting language JavaScript.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A multi-modal data analysis method for micro-service resources, characterized by comprising the following specific steps:
S1, acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
S2, encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
S3, training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and S4, classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
2. The multi-modal data analysis method for micro-service resources according to claim 1, wherein the specific steps in step S2 comprise:
encoding the image data based on an improved ResNet model to obtain a high-level feature representation of the image data; and
encoding the text data based on a Transformer model to obtain a high-level feature representation of the text data.
3. The multi-modal data analysis method for micro-service resources according to claim 2, wherein the specific steps of encoding the image data based on the improved ResNet model comprise:
performing picture preprocessing on the image data extracted from the obtained multi-modal data: setting the picture input resolution, cropping the picture with a center-cropping method on the basis of picture scaling, and normalizing the scaled and cropped picture;
forming a feature set by extracting features of different dimensions from the normalized image data; selecting sample points and extracting their M-dimensional features, so that the features of each sample form an M×N matrix, and enhancing the original image data by random erasing and contrast transformation; splitting the data set into a training set and a test set in proportion, converting them into binary files, adding sample labels, and feeding the resulting TFRecords files to the ResNet model as input data;
improving the convolutional layers of the ResNet model with a projection shortcut, wherein the original projection shortcut is replaced by a 3×3 max-pooling layer with stride 2 followed by a 1×1 convolutional layer with stride 1, used to add together features of different sizes before the feature dimension of the residual network changes; and then automatically introducing sparsity into the ResNet model with the sparse activation function ReLU;
and training the ResNet model to obtain the high-level feature representation of the image data.
4. The multi-modal data analysis method for micro-service resources according to claim 2, wherein the specific steps of encoding the text data based on the Transformer model comprise:
performing text preprocessing through word segmentation and stop-word removal, and using a BERT model to obtain a vectorized representation of the text;
and, according to the classification labels of the task, constructing a description text for each category from the vectorized text data, and using the encoder of the Transformer model as a feature extractor to extract features from the text data, obtaining the internal information of the text data and thereby the high-level feature representation of the text data.
5. The multi-modal data analysis method for micro-service resources according to claim 1, wherein the specific steps in step S3 comprise:
using a ResNet model as the Image Encoder of the CLIP model and a Transformer model as the Text Encoder of the CLIP model to extract image features and text features respectively, the CLIP model performing contrastive learning on the extracted text features and image features;
for a training batch containing N text-image pairs, combining the N text features and the N image features pairwise, so that the CLIP model predicts the similarities of the N² possible text-image pairs, each similarity being computed directly as the cosine similarity between the text features and the image features; the training objective of the CLIP model is to maximize the similarity of the N positive pairs while minimizing the similarity of the N² - N negative pairs.
6. The multi-modal data analysis method for micro-service resources according to claim 1, wherein the specific steps in step S4 comprise:
adding a weight coefficient W_n to the traditional cross-entropy loss function, the expression of the improved cross-entropy loss function becomes:

L = -\frac{1}{N} \sum_{n=1}^{N} W_n \sum_{i} y_{n,i} \log p_{n,i}

where N denotes the total number of samples, y_{n,i} is the one-hot label indicating whether the class of the n-th sample is i, and p_{n,i} denotes the predicted probability that the class of the n-th sample is i;
and classifying the image data and the text data with the improved cross-entropy loss function to obtain the analyzed multi-modal data.
7. A multi-modal data analysis device for micro-service resources, characterized by comprising:
a data acquisition module, used for acquiring multi-modal data of different micro-service component resources, wherein the multi-modal data comprises text data and image data;
a data processing module, used for encoding the image data and the text data through a ResNet model and a Transformer model respectively to obtain high-level feature representations of the original image data and the original text data;
a feature analysis module, used for training a CLIP model with the obtained high-level feature representations of the image data and text data, performing data annotation, and aligning the features of the image data and the text data in the space of the high-level feature representations;
and a data classification module, used for classifying the image data and the text data through a cross-entropy loss function to obtain the analyzed multi-modal data.
8. The multi-modal data analysis device for micro-service resources according to claim 7, wherein the data acquisition module further comprises a first acquisition unit and a second acquisition unit, wherein:
the first acquisition unit is used for acquiring text data in the multi-modal data;
the second acquisition unit is used for acquiring the image data in the multi-modal data.
9. The multi-modal data analysis device for micro-service resources according to claim 7, wherein the feature analysis module is connected to the data output end of the data acquisition module and is configured to analyze the multi-modal data after data encoding.
10. A storage medium having processor-executable instructions stored therein, characterized in that the processor-executable instructions, when executed by a processor, are used to implement the multi-modal data analysis method for micro-service resources according to any one of claims 1 to 6.
Priority Applications (1)

Application number: CN202211258044.5A
Priority date / filing date: 2022-10-13
Title: Multi-modal data analysis method, device and medium for micro-service resources
Status: Pending

Publications (1)

Publication number: CN115659175A
Publication date: 2023-01-31

Family

ID=84988259

Country Status (1)

CN: CN115659175A (en)

Cited By (4)

* Cited by examiner, † Cited by third party

CN116578734A * (priority 2023-05-20, published 2023-08-11), 重庆师范大学: Probability embedding combination retrieval method based on CLIP
CN116578734B * (priority 2023-05-20, published 2024-04-30), 重庆师范大学: Probability embedding combination retrieval method based on CLIP
CN116881335A * (priority 2023-07-24, published 2023-10-13), 郑州华商科技有限公司: Multi-modal data intelligent analysis system and method
CN116796251A * (priority 2023-08-25, published 2023-09-22), 江苏省互联网行业管理服务中心: Poor website classification method, system and equipment based on image-text multi-modality


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination