CN116842141B - Alarm smoke linkage based digital information studying and judging method


Info

Publication number
CN116842141B
CN116842141B
Authority
CN
China
Prior art keywords
feature
training
text
vector
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311083689.4A
Other languages
Chinese (zh)
Other versions
CN116842141A (en)
Inventor
林添梁 (Lin Tianliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongan Technology Development Co ltd
Original Assignee
Beijing Zhongan Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongan Technology Development Co ltd filed Critical Beijing Zhongan Technology Development Co ltd
Priority to CN202311083689.4A priority Critical patent/CN116842141B/en
Publication of CN116842141A publication Critical patent/CN116842141A/en
Application granted granted Critical
Publication of CN116842141B publication Critical patent/CN116842141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks


Abstract

The invention provides a digital information research and judgment method based on alarm-smoke linkage, comprising a pre-training process and a recognition process. In pre-training, the images and text in a database are described, then encoded and decoded to obtain corresponding pre-training features. In the recognition process, the images and/or text to be analysed are encoded and decoded to obtain corresponding features, which are matched against the features in the pre-training database to judge tobacco-related information, yielding a more accurate result.

Description

Alarm smoke linkage based digital information studying and judging method
Technical Field
The invention relates generally to digital information research and judgment methods, and in particular to a digital information research and judgment method based on alarm-smoke linkage.
Background
In the Internet era, online shopping has gradually replaced offline transactions as the mainstream mode of consumption, and tobacco-related offenders have turned their attention to the wide reach of the Internet. In recent years, Internet trading combined with logistics delivery has increased the difficulty of researching and judging matters such as the case situations of tobacco-related personnel, their communication relationships, their co-residence relationships, violations by related personnel, the activity areas of related personnel, tobacco-related vehicles and their transport, and key checkpoints, posing heavy obstacles to traditional tobacco monopoly administration.
Disclosure of Invention
The invention aims to solve the problem that the prior art cannot research and judge the case situations of tobacco-related personnel, their communication relationships, their co-residence relationships, tobacco-related vehicles and their transport, key checkpoints, and the like, and provides a digital information research and judgment method based on alarm-smoke linkage.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a digital information research and judgment method based on alarm-smoke linkage comprises a pre-training process and a recognition process.
In pre-training, the images and text in the database are described, then encoded and decoded to obtain corresponding pre-training features. In the recognition process, the images and/or text to be analysed are encoded and decoded to obtain corresponding features, which are matched against the features in the pre-training database to judge tobacco-related information.
wherein the pre-training comprises the steps of:
step S100, given a task, performing word segmentation on the text and applying attention to it by adopting a multi-head attention mechanism and a gated recurrent unit (GRU) mechanism;
step S200, given a task, identifying an image by adopting an LSTM neural network, and comparing the acquired characteristics with the characteristics of a characteristic library;
step S300, associating the text and the image after the pre-training to form a text-image pair training set containing the characteristics.
The identification process comprises the following steps:
step S400, identifying the input text and/or image to be predicted, and obtaining corresponding characteristics;
step S500, comparing the features to be predicted with the feature-containing text-image pairs obtained by pre-training; if the match falls within the threshold range, the task type is recognised.
Further, in step S100, the attention mechanism based on the deep neural network helps the network model screen, from the complex input information, the information most closely associated with the task at the current stage. The gated recurrent unit enables the neural network both to memorise past information and to selectively discard unimportant information. Meanwhile, to increase the accuracy of pre-training, a multi-head attention mechanism performs contrastive learning on the same text information, reducing cases of text mismatch.
The specific process of applying attention to the segmented text by adopting the attention mechanism in step S100 includes:
step S101, an embedding model is applied to the text to obtain $X=\{x_1,x_2,\dots,x_U\}$, where $x_u$ is the vector corresponding to a word or phrase of the text, $a$ is the attention distribution, and $U$ is the number of words and phrases in the text;
step S102, the query vector $q_t$, key vector $k_t$ and value vector $v_t$ of the multi-head attention mechanism are obtained as $q_t=W_t^{Q}X$, $k_t=W_t^{K}X$, $v_t=W_t^{V}X$, where $W_t^{Q}$, $W_t^{K}$, $W_t^{V}$ are $U\times T$ matrices, $t=1,\dots,T$, and $T$ is the number of network channels of the multi-head attention mechanism;
step S103, each query vector $q_i$ is multiplied with all key vectors to obtain the corresponding scores $s_{ij}=q_i\cdot k_j$;
step S104, the Softmax function is used to obtain each query vector's attention probability distribution, i.e. the weight coefficients $\alpha_{ij}=\exp(s_{ij})/\sum_{j}\exp(s_{ij})$;
step S105, for each query vector $q_i$, the corresponding value vectors are weighted and summed to obtain the attention output after contrastive learning, $b_i=\sum_{j}\alpha_{ij}v_j$;
step S106, the attention outputs obtained from all query vectors are spliced together to obtain the attention $b$ after contrastive learning.
Further, in step S100, the text of interest differs for different tasks. For a tobacco-related personnel research and judgment task, the text vector of interest comprises {basic personnel information, travel data, call data, express-delivery data}; for a tobacco-related personnel communication-relationship task, it comprises {member name, sex, age, interaction frequency within the social network, relationship density}; a tobacco-related personnel co-residence task should focus on travel records, so it comprises {train number, carriage, adjacent seat, departure place, destination, check-in, check-out}; for a related-personnel violation task, it comprises {violation type, frequency, time, place, calls, persons involved, environment}; for a tobacco-related personnel specific-activity task, it comprises {person name, ID-card number, location, travel information, contacts, travel history, communication records, social-network behaviour}; for a related-personnel activity-area task, it comprises {the person's historical activity range, time node}; for a tobacco-related vehicle task, it comprises {basic vehicle information, checkpoint data, expressway data, time-series data}; for a tobacco-related vehicle transport task, it comprises {expressway toll-station data, expressway checkpoint data, truck passage data, the vehicle's driving-behaviour pattern, full-load weight, weight change before and after unloading}.
Further, in step S200, the features of the image are obtained by encoding it with an LSTM neural network (long short-term memory network); the LSTM network first reduces the dimensionality of the image and then models it.
Based on the LSTM neural network, the specific process of training the image in step S200 includes:
step S201, multi-layer convolution is performed on the description set $X$ of image $I$ to generate a continuous encoding vector $Z$, where each element $z$ of $Z$ is a $d$-dimensional vector and $Z$ is an $m\times m\times d$ tensor;
step S202, an embedding model performs a nearest-neighbour search for each $z$ to obtain the corresponding coding-table vector $Z_q$; the embedding model contains the coding table $E=\{e_1,e_2,\dots,e_K\}$, and the nearest-neighbour search maps each $z$ in $Z$ to one of these $K$ vectors, i.e. $z_q=e_k$, $k=\arg\min_{j}\|z-e_j\|_2$;
step S203, a decoder model reconstructs $Z_q$ to obtain the encoded image $\hat{I}$ of image $I$;
step S204, an objective function $L=\|I-\hat{I}\|_2^2+\alpha\|\mathrm{sg}[Z]-e\|_2^2+\beta\|Z-\mathrm{sg}[e]\|_2^2$ is set to train the encoded image $\hat{I}$, where $\alpha$ and $\beta$ are hyperparameters and $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator.
Further, in step S200, features are extracted from the trained image and compared with the features of the database to determine the feature type.
Further, in the step S400, for the text portion to be predicted, the feature of interest is obtained by performing encoding and recognition in the same manner as in the step S100; for the image part to be predicted, the same method as in step S200 is used for encoding and identifying to obtain the features, but the comparison of the feature library is not performed.
Further, in step S500, the obtained characteristics of the text portion to be predicted and the text characteristics in the training set are respectively input into two twin branches in the LSTM neural network for training. The specific process is as follows:
step S501, respectively inputting the features to be predicted and the features of the training set into two twin branches in the LSTM neural network for training;
step S502, obtaining the trained loss, and determining a first hidden variable feature of feature codes to be predicted and a second hidden variable feature of feature codes of a training set based on the loss;
and step S503, performing iterative training on the neural network according to the feature codes to be predicted and the corresponding first hidden variable features, and the feature codes of the training set and the corresponding second hidden variable features until the loss is minimum, and obtaining a similarity detection model.
Further, the loss function in step S502 is $L=\frac{1}{2m}\sum_{i=1}^{m}\|y_i-\hat{y}_i\|_2^2$, where $m$ is the number of samples, $y_i$ is the training-set feature vector and $\hat{y}_i$ is the feature vector to be predicted.
Further, the first hidden-variable feature in step S502 is obtained by differentiating the loss function with respect to $\hat{y}_i$, and the second hidden-variable feature by differentiating the loss function with respect to $y_i$.
Further, in step S503, the corresponding first hidden variable feature is superimposed according to the feature code to be predicted, so as to obtain an updated first feature code; superposing corresponding second hidden variable features according to the feature codes of the training set to obtain updated second feature codes; and performing iterative training on the neural network by adopting the updated first feature codes and the updated second feature codes.
Compared with the prior art, the invention has the following advantages: (1) the images and text in the database are described, then encoded and decoded to obtain corresponding pre-training features, achieving a better information-acquisition effect; (2) in the recognition process, the image and/or text to be analysed is encoded and decoded to obtain corresponding features, which are matched against the features in the pre-training database to judge tobacco-related information, so a more accurate research and judgment result is obtained.
The invention is further described below with reference to the drawings.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention.
Detailed Description
It should be noted that the information (including but not limited to user equipment information and personal user information), data (including but not limited to data for analysis, stored data and displayed data) and signals involved in the present disclosure are all authorised by the users or fully authorised by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions. The embodiment discloses no personal information, infringes no personal privacy, and conforms to national laws and regulations.
Referring to FIG. 1, a digital information research and judgment method based on alarm-smoke linkage comprises a pre-training process and a recognition process. In pre-training, the images and text in the database are described, then encoded and decoded to obtain corresponding pre-training features. In the recognition process, the image and/or text to be analysed is encoded and decoded to obtain corresponding features, which are matched against the features in the pre-training database to judge tobacco-related information.
In this embodiment, the method can be used to research and judge the case situations of tobacco-related personnel, their communication relationships, their co-residence relationships, violations by related personnel, specific activities of tobacco-related personnel, activity areas of related personnel, tobacco-related vehicles and their transport, key checkpoints, and the like.
Wherein the pre-training comprises the steps of:
step S100, given a task, performing word segmentation on the text and applying attention to it by adopting a multi-head attention mechanism and a gated recurrent unit (GRU) mechanism;
step S200, given a task, identifying an image by adopting an LSTM neural network, and comparing the acquired characteristics with the characteristics of a characteristic library;
step S300, associating the text and the image after the pre-training to form a text-image pair training set containing the characteristics.
The identification process comprises the following steps:
step S400, identifying the input text and/or image to be predicted, and obtaining corresponding characteristics;
step S500, comparing the features to be predicted with the feature-containing text-image pairs obtained by pre-training; if the match falls within the threshold range, the task type is recognised.
In step S100: when a person views an image or text containing a great deal of information, they do not attend to all of it at once but look first at the key areas. To simulate this cognitive attention, the attention mechanism based on the deep neural network helps the network model screen, from the complex input information, the information most closely associated with the task at the current stage. The gated recurrent unit enables the neural network both to memorise past information and to selectively discard unimportant information. Meanwhile, to increase the accuracy of pre-training, this embodiment applies a multi-head attention mechanism to perform contrastive learning on the same text information, reducing cases of text mismatch.
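The gating behaviour described above can be sketched with a single GRU cell. The weight shapes, random initialisation and gate convention below are illustrative assumptions for this sketch, not parameters taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U_, b):
    """One gated-recurrent-unit step: keep useful past state, discard the rest.
    W, U_, b each hold the (update, reset, candidate) parameter triplets."""
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U_
    bz, br, bh = b
    z = sigmoid(Wz @ x + Uz @ h + bz)      # update gate: how much past to keep
    r = sigmoid(Wr @ x + Ur @ h + br)      # reset gate: how much past to expose
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1 - z) * h_cand + z * h        # mix of new candidate and old state

rng = np.random.default_rng(0)
d_in, d_h = 6, 4
W = [rng.standard_normal((d_h, d_in)) for _ in range(3)]
U_ = [rng.standard_normal((d_h, d_h)) for _ in range(3)]
b = [np.zeros(d_h) for _ in range(3)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):   # run a 5-step input sequence
    h = gru_cell(x, h, W, U_, b)
print(h.shape)  # (4,)
```

Because the output is a convex combination of a tanh candidate and the previous state, the hidden state stays bounded while still carrying selected history forward.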
Based on the above principle, the specific process of applying attention to the segmented text in step S100 includes:
step S101, an embedding model is applied to the text to obtain $X=\{x_1,x_2,\dots,x_U\}$, where $x_u$ is the vector corresponding to a word or phrase of the text, $a$ is the attention distribution, and $U$ is the number of words and phrases in the text;
step S102, the query vector $q_t$, key vector $k_t$ and value vector $v_t$ of the multi-head attention mechanism are obtained as $q_t=W_t^{Q}X$, $k_t=W_t^{K}X$, $v_t=W_t^{V}X$, where $W_t^{Q}$, $W_t^{K}$, $W_t^{V}$ are $U\times T$ matrices, $t=1,\dots,T$, and $T$ is the number of network channels of the multi-head attention mechanism;
step S103, each query vector $q_i$ is multiplied with all key vectors to obtain the corresponding scores $s_{ij}=q_i\cdot k_j$;
step S104, the Softmax function is used to obtain each query vector's attention probability distribution, i.e. the weight coefficients $\alpha_{ij}=\exp(s_{ij})/\sum_{j}\exp(s_{ij})$;
step S105, for each query vector $q_i$, the corresponding value vectors are weighted and summed to obtain the attention output after contrastive learning, $b_i=\sum_{j}\alpha_{ij}v_j$;
step S106, the attention outputs obtained from all query vectors are spliced together to obtain the attention $b$ after contrastive learning.
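Steps S101 to S106 can be sketched end to end in a few lines of numpy. The weight matrices here are randomly initialised stand-ins (the patent trains them), and the scaling of scores by the square root of the dimension is a common convention assumed for numerical stability rather than something the patent specifies.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, T, rng):
    """Multi-head self-attention over U token embeddings (steps S102-S106).
    X: (U, d) matrix of word/phrase embeddings; T: number of heads/channels."""
    U, d = X.shape
    heads = []
    for _ in range(T):
        # S102: project tokens to query / key / value vectors
        Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # S103: score each query against all keys
        scores = Q @ K.T / np.sqrt(d)
        # S104: softmax turns scores into attention weight coefficients
        weights = softmax(scores, axis=-1)
        # S105: weighted sum of the value vectors
        heads.append(weights @ V)
    # S106: splice (concatenate) the per-head outputs
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))   # U=5 tokens, 8-dim embeddings (step S101 output)
b = multi_head_attention(X, T=2, rng=rng)
print(b.shape)  # (5, 16)
```

Each head attends independently; splicing the two 8-dimensional head outputs yields a 16-dimensional attention vector per token.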
In step S100, the text of interest differs for different tasks. For a tobacco-related personnel research and judgment task, the text vector of interest comprises {basic personnel information, travel data, call data, express-delivery data}; for a tobacco-related personnel communication-relationship task, it comprises {member name, sex, age, interaction frequency within the social network, relationship density}; a tobacco-related personnel co-residence task should focus on travel records, so it comprises {train number, carriage, adjacent seat, departure place, destination, check-in, check-out}; for a related-personnel violation task, it comprises {violation type, frequency, time, place, calls, persons involved, environment}; for a tobacco-related personnel specific-activity task, it comprises {person name, ID-card number, location, travel information, contacts, travel history, communication records, social-network behaviour}; for a related-personnel activity-area task, it comprises {the person's historical activity range, time node}; for a tobacco-related vehicle task, it comprises {basic vehicle information, checkpoint data, expressway data, time-series data}; for a tobacco-related vehicle transport task, it comprises {expressway toll-station data, expressway checkpoint data, truck passage data, the vehicle's driving-behaviour pattern, full-load weight, weight change before and after unloading}.
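The task-to-field mapping above can be held in a simple lookup table. The English key and field names below are paraphrases of the description and purely illustrative; the patent does not specify a storage format.

```python
# Hypothetical English field names paraphrasing the per-task text vectors above.
TASK_TEXT_VECTORS = {
    "personnel": ["basic_info", "travel_data", "call_data", "express_data"],
    "communication_relationship": ["name", "sex", "age",
                                   "interaction_frequency", "relation_density"],
    "co_residence": ["train_number", "carriage", "adjacent_seat",
                     "departure_place", "destination", "check_in", "check_out"],
    "violation": ["violation_type", "frequency", "time", "place",
                  "calls", "persons_involved", "environment"],
    "specific_activity": ["name", "id_number", "location", "travel_info",
                          "contacts", "travel_history",
                          "communication_records", "social_network_behaviour"],
    "activity_area": ["historical_activity_range", "time_node"],
    "vehicle": ["basic_vehicle_info", "checkpoint_data",
                "expressway_data", "time_series_data"],
    "vehicle_transport": ["toll_station_data", "expressway_checkpoint_data",
                          "truck_passage_data", "driving_pattern",
                          "full_load_weight", "unloading_weight_change"],
}

def fields_for(task: str) -> list[str]:
    """Return the text-vector fields the attention mechanism should focus on."""
    return TASK_TEXT_VECTORS[task]

print(fields_for("activity_area"))  # ['historical_activity_range', 'time_node']
```

Keeping the mapping in one table makes it easy to swap the attention mechanism's focus when the given task changes.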
In step S200, the image is encoded with an LSTM neural network (long short-term memory network) to obtain its features. Existing autoregressive methods encode and recognise images pixel by pixel, and because every pixel must be randomly sampled, they are slow. The LSTM network instead first reduces the dimensionality of the image and then models it. Based on the LSTM neural network, the specific process of training the image in step S200 includes:
step S201, multi-layer convolution is performed on the description set $X$ of image $I$ to generate a continuous encoding vector $Z$, where each element $z$ of $Z$ is a $d$-dimensional vector and $Z$ is an $m\times m\times d$ tensor;
step S202, an embedding model performs a nearest-neighbour search for each $z$ to obtain the corresponding coding-table vector $Z_q$; the embedding model contains the coding table $E=\{e_1,e_2,\dots,e_K\}$, and the nearest-neighbour search maps each $z$ in $Z$ to one of these $K$ vectors, i.e. $z_q=e_k$, $k=\arg\min_{j}\|z-e_j\|_2$;
step S203, a decoder model reconstructs $Z_q$ to obtain the encoded image $\hat{I}$ of image $I$;
step S204, an objective function $L=\|I-\hat{I}\|_2^2+\alpha\|\mathrm{sg}[Z]-e\|_2^2+\beta\|Z-\mathrm{sg}[e]\|_2^2$ is set to train the encoded image $\hat{I}$, where $\alpha$ and $\beta$ are hyperparameters and $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator.
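The nearest-neighbour coding-table lookup of step S202 can be sketched as follows. The codebook values and sizes are illustrative assumptions, and the convolutional encoder of step S201 and the decoder of step S203 are omitted; only the quantisation step is shown.

```python
import numpy as np

def vector_quantize(Z, codebook):
    """Nearest-neighbour codebook lookup (step S202): map each d-dim code z
    to the closest of the K coding-table vectors e_k."""
    # Z: (m*m, d) flattened encoder output; codebook: (K, d)
    d2 = ((Z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
    idx = d2.argmin(axis=1)        # index k of the nearest code per position
    return codebook[idx], idx      # Z_q and the chosen indices

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 4))   # K=16 codes, d=4 (illustrative sizes)
Z = rng.standard_normal((9, 4))           # a 3x3 feature map, flattened
Zq, idx = vector_quantize(Z, codebook)
print(Zq.shape, idx.shape)  # (9, 4) (9,)
```

In training, the stop-gradient terms of the step-S204 objective pull the chosen codebook entries toward the encoder output and vice versa, since the argmin itself is not differentiable.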
Further, in step S200, features are extracted from the trained image and compared with the features of the database to determine the feature type.
In step S400, for the text portion to be predicted, coding recognition is performed by the same method as in step S100 to obtain a feature of interest; for the image part to be predicted, the same method as in step S200 is used for encoding and identifying to obtain the features, but the comparison of the feature library is not performed.
In step S500, the obtained characteristics of the text portion to be predicted and the text characteristics in the training set are respectively input into two twin branches in the LSTM neural network for training. The specific process is as follows:
step S501, respectively inputting the features to be predicted and the features of the training set into two twin branches in the LSTM neural network for training;
step S502, obtaining the trained loss, and determining a first hidden variable feature of feature codes to be predicted and a second hidden variable feature of feature codes of a training set based on the loss;
and step S503, performing iterative training on the neural network according to the feature codes to be predicted and the corresponding first hidden variable features, and the feature codes of the training set and the corresponding second hidden variable features until the loss is minimum, and obtaining a similarity detection model.
The loss function in step S502 is $L=\frac{1}{2m}\sum_{i=1}^{m}\|y_i-\hat{y}_i\|_2^2$, where $m$ is the number of samples, $y_i$ is the training-set feature vector and $\hat{y}_i$ is the feature vector to be predicted.
The first hidden-variable feature in step S502 is obtained by differentiating the loss function with respect to $\hat{y}_i$, and the second hidden-variable feature by differentiating the loss function with respect to $y_i$.
In step S503, according to the feature code to be predicted, the corresponding first hidden variable feature is superimposed, so as to obtain an updated first feature code; superposing corresponding second hidden variable features according to the feature codes of the training set to obtain updated second feature codes; and performing iterative training on the neural network by adopting the updated first feature codes and the updated second feature codes.
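One plausible reading of steps S501 to S503 is gradient refinement of the two branches' feature codes under a squared-distance loss, with the loss derivatives playing the role of the hidden-variable features that get superimposed on the codes. The loss form, learning rate and update rule below are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def contrastive_l2_loss(F_train, F_pred):
    """Mean squared distance over m paired samples (one reading of step S502),
    plus the loss derivatives used as 'hidden-variable features'."""
    m = F_train.shape[0]
    diff = F_pred - F_train
    loss = (diff ** 2).sum() / (2 * m)
    grad_pred = diff / m      # dL/dF_pred  -> first hidden-variable feature
    grad_train = -diff / m    # dL/dF_train -> second hidden-variable feature
    return loss, grad_pred, grad_train

def refine(F_train, F_pred, steps=100, lr=0.5):
    """Step S503: superimpose the hidden-variable features on the feature
    codes and iterate, driving the loss toward its minimum."""
    for _ in range(steps):
        _, g_pred, g_train = contrastive_l2_loss(F_train, F_pred)
        F_pred = F_pred - lr * g_pred       # updated first feature code
        F_train = F_train - lr * g_train    # updated second feature code
    return contrastive_l2_loss(F_train, F_pred)[0]

rng = np.random.default_rng(2)
A, B = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
print(refine(A, B) < contrastive_l2_loss(A, B)[0])  # loss decreases: True
```

Each update shrinks the gap between the two branches' codes by a constant factor, so the loss decays geometrically toward zero, matching the "iterate until the loss is minimal" criterion.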

Claims (3)

1. A method for studying and judging digital information based on smoke alarm linkage is characterized by comprising a pre-training and recognition process, wherein
The pre-training comprises the following steps:
step S100, given a task, performing word segmentation on the text and applying attention to it by adopting a multi-head attention mechanism and a gated recurrent unit (GRU) mechanism;
step S200, given a task, identifying an image by adopting an LSTM neural network, and comparing the acquired characteristics with the characteristics of a characteristic library;
step S300, associating the text and the image after pre-training to form a text-image pair training set containing characteristics;
the identification process comprises the following steps:
step S400, identifying the input text and/or image to be predicted, and obtaining corresponding characteristics;
step S500, comparing the feature to be predicted and the text-image pair containing the feature obtained by pre-training, and if the feature is within the threshold range, identifying and obtaining the task type;
the specific process of applying attention to the segmented text by adopting the attention mechanism in the step S100 includes:
step S101, an embedding model is applied to the text to obtain $X=\{x_1,x_2,\dots,x_U\}$, where $x_u$ is the vector corresponding to a word or phrase of the text, $a$ is the attention distribution, and $U$ is the number of words and phrases in the text;
step S102, the query vector $q_t$, key vector $k_t$ and value vector $v_t$ of the multi-head attention mechanism are obtained as $q_t=W_t^{Q}X$, $k_t=W_t^{K}X$, $v_t=W_t^{V}X$, wherein $W_t^{Q}$, $W_t^{K}$, $W_t^{V}$ are $U\times T$ matrices, $t=1,\dots,T$, and $T$ is the number of channels of the neural network where the multi-head attention mechanism is located;
step S103, each query vector $q_i$ is multiplied with all key vectors to obtain the corresponding scores $s_{ij}=q_i\cdot k_j$;
step S104, the Softmax function is used to obtain each query vector's attention probability distribution, i.e. the weight coefficients $\alpha_{ij}=\exp(s_{ij})/\sum_{j}\exp(s_{ij})$;
step S105, for each query vector $q_i$, the corresponding value vectors are weighted and summed to obtain the attention output after contrastive learning, $b_i=\sum_{j}\alpha_{ij}v_j$; step S106, the attention outputs obtained from all query vectors are spliced together to obtain the attention $b$ after contrastive learning;
The specific process of training the image in step S200 includes:
step S201, multi-layer convolution is performed on the description set $X$ of image $I$ to generate a continuous encoding vector $Z$, wherein each element $z$ of $Z$ is a $d$-dimensional vector and $Z$ is an $m\times m\times d$ tensor;
step S202, an embedding model performs a nearest-neighbour search for each $z$ to obtain the corresponding coding-table vector $Z_q$; the embedding model contains the coding table $E=\{e_1,e_2,\dots,e_K\}$, and the nearest-neighbour search maps each $z$ in $Z$ to one of these $K$ vectors, i.e. $z_q=e_k$, $k=\arg\min_{j}\|z-e_j\|_2$;
step S203, a decoder model reconstructs $Z_q$ to obtain the encoded image $\hat{I}$ of image $I$; step S204, an objective function $L=\|I-\hat{I}\|_2^2+\alpha\|\mathrm{sg}[Z]-e\|_2^2+\beta\|Z-\mathrm{sg}[e]\|_2^2$ is set to train the encoded image $\hat{I}$, wherein $\alpha$ and $\beta$ are hyperparameters and $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator;
In the step S500, the obtained characteristics of the text portion to be predicted and the text characteristics in the training set are respectively input into two twin branches in the LSTM neural network for training; the specific process is as follows:
step S501, respectively inputting the features to be predicted and the features of the training set into two twin branches in the LSTM neural network for training;
step S502, obtaining the trained loss, and determining a first hidden variable feature of feature codes to be predicted and a second hidden variable feature of feature codes of a training set based on the loss;
step S503, carrying out iterative training on the neural network according to the feature codes to be predicted and the corresponding first hidden variable features, and the feature codes of the training set and the corresponding second hidden variable features until the loss is minimum, and obtaining a similarity detection model;
the loss function in the step S502 is $L=\frac{1}{2m}\sum_{i=1}^{m}\|y_i-\hat{y}_i\|_2^2$, wherein $m$ is the number of samples, $y_i$ is the training-set feature vector and $\hat{y}_i$ is the feature vector to be predicted;
the first hidden-variable feature in the step S502 is obtained by differentiating the loss function with respect to $\hat{y}_i$, and the second hidden-variable feature by differentiating the loss function with respect to $y_i$;
in step S503, according to the feature code to be predicted, the corresponding first hidden variable feature is superimposed, so as to obtain an updated first feature code; superposing corresponding second hidden variable features according to the feature codes of the training set to obtain updated second feature codes; and performing iterative training on the neural network by adopting the updated first feature codes and the updated second feature codes.
2. The method for studying and judging digitized information based on alarm smoke linkage according to claim 1, wherein in step S200, features are extracted from the trained image and compared with the features of the database to determine the feature type.
3. The method for studying and judging digitized information based on alarm smoke linkage according to claim 2, wherein in step S400, the text portion to be predicted is encoded and recognized by the same method as in step S100 to obtain the features of interest; the image portion to be predicted is encoded and recognized by the same method as in step S200 to obtain its features, but no comparison against the feature library is performed.
CN202311083689.4A 2023-08-28 2023-08-28 Alarm smoke linkage based digital information studying and judging method Active CN116842141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311083689.4A CN116842141B (en) 2023-08-28 2023-08-28 Alarm smoke linkage based digital information studying and judging method


Publications (2)

Publication Number Publication Date
CN116842141A (en) 2023-10-03
CN116842141B (en) 2023-11-07

Family

ID=88165482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311083689.4A Active CN116842141B (en) 2023-08-28 2023-08-28 Alarm smoke linkage based digital information studying and judging method

Country Status (1)

Country Link
CN (1) CN116842141B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257445A (en) * 2020-10-19 2021-01-22 浙大城市学院 Multi-modal tweet named entity recognition method based on text-picture relation pre-training
CN115134559A (en) * 2022-01-12 2022-09-30 北京环球森林科技有限公司 Bayonet cigarette end identification and suspicious behavior intelligent studying and judging method for forest fire prevention
CN115239937A (en) * 2022-09-23 2022-10-25 西南交通大学 Cross-modal emotion prediction method
CN115982350A (en) * 2022-12-07 2023-04-18 南京大学 False news detection method based on multi-mode Transformer
WO2023093574A1 (en) * 2021-11-25 2023-06-01 北京邮电大学 News event search method and system based on multi-level image-text semantic alignment model
CN116611021A (en) * 2023-04-19 2023-08-18 齐鲁工业大学(山东省科学院) Multi-mode event detection method and system based on double-transducer fusion model


Also Published As

Publication number Publication date
CN116842141A (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN111666588B (en) Emotion differential privacy protection method based on generation countermeasure network
CN112015901A (en) Text classification method and device and warning situation analysis system
CN112016313A (en) Spoken language element identification method and device and alarm situation analysis system
CN113343640B (en) Method and device for classifying customs commodity HS codes
CN114091462B (en) Case fact mixed coding based criminal case risk mutual learning assessment method
CN113836896A (en) Patent text abstract generation method and device based on deep learning
CN117993499B (en) Multi-mode knowledge graph construction method for four pre-platforms for flood control in drainage basin
CN116304984A (en) Multi-modal intention recognition method and system based on contrast learning
CN116610818A (en) Construction method and system of power transmission and transformation project knowledge base
CN115269836A (en) Intention identification method and device
CN111815485A (en) Sentencing prediction method and device based on deep learning BERT model
CN115063612A (en) Fraud early warning method, device, equipment and storage medium based on face-check video
CN117251685B (en) Knowledge graph-based standardized government affair data construction method and device
CN114461760A (en) Method and device for matching case fact with law bar
CN116842141B (en) Alarm smoke linkage based digital information studying and judging method
CN118035440A (en) Enterprise associated archive management target knowledge feature recommendation method
CN116578734B (en) Probability embedding combination retrieval method based on CLIP
CN117314623A (en) Loan fraud prediction method, device and storage medium integrating external knowledge
CN117037990A (en) Intelligent classification storage method and device based on medical records quality
CN116205350A (en) Reinforcement personal risk analysis and prediction system and method based on legal documents
CN117037017A (en) Video emotion detection method based on key frame erasure
KR102556450B1 (en) Customized wine recommendation method based on artificial intelligence and operating server for the method
CN115982388A (en) Case quality control map establishing method, case document quality testing method, case quality control map establishing equipment and storage medium
CN117935784A (en) Voice processing model training method, voice recognition method and device
CN117077680A (en) Question and answer intention recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 119, 1st Floor, Building 3, No. 20 Yong'an Road, Shilong Economic Development Zone, Mentougou District, Beijing, 102300

Patentee after: Beijing Zhongan Technology Development Co.,Ltd.

Address before: A502, No. 28 Dongjiaomin Lane, Dongcheng District, Beijing, 100010

Patentee before: Beijing Zhongan Technology Development Co.,Ltd.