CN116863368A - Artificial intelligent identification terminal - Google Patents

Artificial intelligent identification terminal

Info

Publication number
CN116863368A
Authority
CN
China
Prior art keywords
target
information
tracking target
module
entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310664770.5A
Other languages
Chinese (zh)
Inventor
张欢 (Zhang Huan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Apocalypse Intelligent Technology Co ltd
Original Assignee
Shenzhen Apocalypse Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Apocalypse Intelligent Technology Co ltd filed Critical Shenzhen Apocalypse Intelligent Technology Co ltd
Priority to CN202310664770.5A priority Critical patent/CN116863368A/en
Publication of CN116863368A publication Critical patent/CN116863368A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/172 - Caching, prefetching or hoarding of files
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/174 - Redundancy elimination performed by the file system
    • G06F16/1744 - Redundancy elimination performed by the file system using compression, e.g. sparse files
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/284 - Lexical analysis, e.g. tokenisation or collocates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/207 - Analysis of motion for motion estimation over a hierarchy of resolutions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an artificial intelligent recognition terminal, which belongs to the field of intelligent recognition and comprises an image acquisition device, a control terminal, a management platform, an analysis and comparison module, a cascade matching module, an alarm module, a performance optimization module and a data server. The image acquisition device collects image information of the surrounding environment. By constructing a comparison map, the terminal processes targets to be identified efficiently and markedly reduces the data-processing burden on managers. It also ensures full cross-camera matching of newly added confirmed tracking targets, avoiding missed identifications. In addition, the memory of the management platform can be compressed at large granularity, which keeps the connection between the control terminal and the management platform stable, keeps the manager's operations smooth, and improves the use experience.

Description

Artificial intelligent identification terminal
Technical Field
The invention relates to the field of intelligent identification, in particular to an artificial intelligent identification terminal.
Background
Computer artificial intelligence recognition technology developed out of the demand for automated office work and intelligent production, and originated largely in the field of speech recognition. During the development of the smartphone, voice control became a popular feature; in practice it is also a speech recognition technology, based on recording and comparing voice samples. People therefore hope to realize artificial intelligence recognition in other fields as well, reducing the burden of work and moving toward a fully intelligent life. Artificial intelligence has made breakthrough progress in scientific research and industrial application, and is widely used in data statistics and analysis, intelligent image analysis, virtual-reality interaction, speech recognition and other fields. Because of its excellent image processing capability, artificial intelligence recognition technology has become an important technical means of safeguarding residents' safety.
Existing artificial intelligence recognition terminals process targets to be identified inefficiently: the data-processing burden on managers is heavy, and missed identifications occur easily. In addition, the connection between the control terminal and the management platform of existing terminals is unstable, operations frequently stall, and the use experience suffers. For these reasons, we propose an artificial intelligent identification terminal.
Disclosure of Invention
The invention aims to solve the defects in the prior art, and provides an artificial intelligent identification terminal.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an artificial intelligent identification terminal comprises image acquisition equipment, a control terminal, a management platform, an analysis and comparison module, a cascade matching module, an alarm module, a performance optimization module and a data server;
the image acquisition equipment is used for acquiring surrounding environment image information;
the control terminal is used for issuing an operation instruction by a manager and carrying out retrieval and calling on related information;
the management platform is used for verifying login manager information and feeding back related data to managers for checking;
the contrast analysis module is used for constructing a contrast map and carrying out contrast identification on related targets according to the acquired image information;
the cascade matching module is used for performing cross-camera matching on the identified targets;
the alarm module is used for alarming the target with the identification abnormality and feeding back related target information to the manager in real time;
the performance optimization module is used for performing optimized compression on the connection performance of the management platform;
the data server is used for storing the data collected by the identification terminal for the manager to call and check.
As a further scheme of the invention, the image acquisition equipment specifically comprises a camera, a monitor and a video recorder; the control terminal specifically comprises a smart phone, a tablet personal computer, a desktop personal computer and a notebook personal computer.
As a further scheme of the invention, the analysis comparison module constructs a comparison map by the following specific steps:
step one: the analysis and comparison module selects data stored in the cloud as a knowledge range, then simultaneously acquires related knowledge data, extracts each group of characteristic keywords, and simultaneously converts each characteristic keyword into a word vector;
step two: constructing a TransD model to receive the related data, using a translation vector to represent the embedding of information as a vector from the origin of the space, and measuring the degree of preference for specific information through the translation and the Euclidean distance of the information;
step three: the TransD model constructs a topic text knowledge subgraph, entities and the relations between them are extracted from the subgraph, a knowledge graph embedding model is adopted for learning, the learned entity vectors are taken as the input of a CNN layer and the corresponding entity table and relation table are output, a Neo4j database is installed and configured, and the Neo4j service is started and the data imported to complete construction of the comparison map.
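The translation-and-distance scoring in step two can be sketched as follows. This is a minimal illustration of translation-based scoring (a TransE-style simplification; full TransD additionally projects entities with mapping vectors, which is omitted here): a triple is scored by the Euclidean distance between the translated head embedding and the tail embedding, and a smaller distance indicates a stronger preference. All vectors and names are hypothetical toy values, not taken from the patent.

```python
def euclidean(u, v):
    # Euclidean distance between two vectors of equal length.
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def trans_score(head, relation, tail):
    """Score a (head, relation, tail) triple: distance of head + relation to tail."""
    translated = [h + r for h, r in zip(head, relation)]
    return euclidean(translated, tail)

# Toy 3-dimensional embeddings (illustrative assumptions).
head = [0.2, 0.1, 0.4]
relation = [0.1, 0.3, -0.1]
tail_good = [0.3, 0.4, 0.3]   # consistent with head + relation
tail_bad = [0.9, -0.5, 0.8]

# A lower score means the information is preferred.
assert trans_score(head, relation, tail_good) < trans_score(head, relation, tail_bad)
```

In a real pipeline the embeddings would come from training on the extracted knowledge data before the entity vectors are passed on to the CNN layer.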
As a further scheme of the invention, the training and learning specific steps of the knowledge graph embedded model are as follows:
step (1): inputting the topic feature vector, the entity vector with the word-entity alignment and the related entity context vector into a knowledge graph embedding model, mapping the entity vector and the context entity vector from the entity space to the word vector space through a relation-entity alignment conversion function, and connecting the features together as input;
step (2): inputting the topic description text into a Softmax classifier, normalizing to obtain the output probability of target information on the kth topic, minimizing an objective function by adopting an adaptive moment estimation algorithm, and updating various parameters of each round of iterative process network through back propagation until the model meets the fitting requirement.
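The normalization in step (2) can be sketched as a Softmax over raw topic scores, yielding the output probability of the target information on each topic. The scores, topic count and variable names below are illustrative assumptions; the patent gives no concrete values, and the Adam optimization and back propagation steps are not shown.

```python
import math

def softmax(scores):
    """Normalize raw topic scores into a probability distribution."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]                 # hypothetical raw scores for k = 3 topics
probs = softmax(scores)

assert abs(sum(probs) - 1.0) < 1e-9      # a valid probability distribution
assert probs[0] == max(probs)            # the highest-scoring topic stays highest
```

During training, the negative log of the probability assigned to the correct topic would form the objective that the adaptive moment estimation (Adam) algorithm minimizes.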
As a further scheme of the invention, the comparison analysis module compares and identifies the specific steps as follows:
step I: the contrast analysis module is used for preprocessing the acquired image information to acquire corresponding picture information, then carrying out scale normalization processing on each picture information, carrying out feature extraction, and carrying out feature fusion on each extracted group of feature information;
step II: classifying and regressing the fusion result to output detection boxes and categories, collecting the target detection box information in the picture information, generating the corresponding detection box coordinates, cropping the related picture information with an enlarged margin, and filtering out, through an RPN (region proposal network), the easy negative samples belonging to the background in each group of cropped pictures;
step III: and comparing the cut target picture with the comparison map, if the target picture is consistent with the comparison map, feeding back the target related information to a manager, and if the target picture is not consistent with the comparison map, feeding back the target image data to the manager for manual verification, and receiving a verification result of the manager to update the comparison map.
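The comparison in step III can be sketched as matching a cropped target's feature vector against the entries of the comparison map: above a similarity threshold the match is fed back automatically, otherwise the target is routed to manual verification. The feature vectors, entry names and the 0.8 threshold are illustrative assumptions, not values from the patent.

```python
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def match_target(target_feat, comparison_map, threshold=0.8):
    """Return (best_label, needs_manual) for a cropped target feature vector."""
    best_label, best_sim = None, -1.0
    for label, feat in comparison_map.items():
        sim = cosine_similarity(target_feat, feat)
        if sim > best_sim:
            best_label, best_sim = label, sim
    if best_sim >= threshold:
        return best_label, False          # consistent: feed back to the manager
    return best_label, True               # inconsistent: manual verification

# Hypothetical map entries and a freshly cropped target feature.
comparison_map = {"person_A": [1.0, 0.0, 0.2], "person_B": [0.0, 1.0, 0.1]}
label, manual = match_target([0.9, 0.1, 0.2], comparison_map)
assert label == "person_A" and manual is False
```

When `needs_manual` is true, the manager's verification result would be written back into `comparison_map`, mirroring the map-update loop described above.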
As a further scheme of the invention, the cascade matching module cross-camera matching specifically comprises the following steps:
step (1): the cascade matching module calculates the interval time of the actual video frames of each image information, records the calculated interval time of the actual video frames, establishes a motion model through a Kalman filtering theory, and simultaneously acquires the motion state of a tracking target in real time through the established motion model;
step (2): allocating a unique number for each tracking target, defining the motion state of the tracking target in a video frame by a motion model according to the linear motion assumption of the tracking target, collecting the motion state of the tracking target in the current video frame, and constructing a prediction equation to estimate the motion state of each tracking target in the next video frame;
step (3): marking a set of motion states of the current video frames of the tracking targets of each image acquisition device as a characterization position and a characterization covariance matrix, and then collecting detection results of all targets in the current video frames of each image acquisition device calculated by a multi-target real-time detection algorithm;
step (4): calculating the cosine distance between the detection results and the tracking targets, filtering with a specified threshold value to obtain a cost matrix in cosine-distance form, filtering this cost matrix again according to the Mahalanobis distance matrix and the related constraint conditions, and then solving a bipartite matching on the cost matrix with the Hungarian algorithm to obtain the set of matching results;
step (5): for detection results that fail to match any existing track, a new undetermined tracking target is created; if an undetermined tracking target is successfully matched with a detection result over multiple consecutive groups of video frames, it is regarded as a newly added confirmed tracking target.
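Steps (4) and (5) can be sketched as follows: build a cosine-distance cost matrix between tracked targets and current detections, gate entries with a threshold, and solve the assignment. A full implementation would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) plus the Mahalanobis gating; here a greedy assignment stands in to keep the sketch dependency-free, and all vectors and thresholds are illustrative assumptions.

```python
INF = 1e9  # cost assigned to gated-out (over-threshold) pairs

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return 1.0 - dot / norm

def build_cost_matrix(tracks, detections, gate=0.5):
    """Cosine-distance cost matrix with threshold gating (step 4)."""
    matrix = []
    for t in tracks:
        row = []
        for d in detections:
            c = cosine_distance(t, d)
            row.append(c if c <= gate else INF)
        matrix.append(row)
    return matrix

def greedy_match(cost):
    """Greedy stand-in for the Hungarian bipartite matching solve."""
    pairs, used = [], set()
    for i, row in enumerate(cost):
        j = min((j for j in range(len(row)) if j not in used),
                key=lambda j: row[j], default=None)
        if j is not None and row[j] < INF:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Hypothetical appearance features; detections arrive in swapped order.
tracks = [[1.0, 0.0], [0.0, 1.0]]
detections = [[0.1, 0.9], [0.9, 0.1]]
assert greedy_match(build_cost_matrix(tracks, detections)) == [(0, 1), (1, 0)]
```

Detections left unpaired after the solve would seed the undetermined tracking targets of step (5).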
As a further aspect of the present invention, the motion state in step (2) is specifically defined as follows:
$$S = (x, y, w, h, \dot{x}, \dot{y}, \dot{w}, \dot{h})^{T}$$
where $S$ represents the motion state of the tracking target, $(x, y, w, h)$ represents the center point coordinates and the width and height of the bounding box of the tracking target, and $(\dot{x}, \dot{y}, \dot{w}, \dot{h})$ represent the corresponding tracking target velocity values;
the specific calculation formulas of the prediction equation in step (2) are as follows:
$$\hat{S}_{k+1,k} = A_{k+1}\hat{S}_{k,k}$$
$$P_{k+1,k} = A_{k+1}P_{k,k}A_{k+1}^{T} + Q_{k+1}$$
where $\hat{S}_{k+1,k}$ represents the mean value of the motion state of the tracking target in the next video frame as predicted by the linear motion model, $\hat{S}_{k,k}$ represents the best estimate of the motion state of the tracking target in the current video frame, $A_{k+1}$ represents the state transition matrix at time $k+1$, $P_{k+1,k}$ represents the predicted covariance matrix of the motion state of the tracking target in the next video frame, $P_{k,k}$ represents the covariance matrix of the motion state of the tracking target in the current video frame, and $Q_{k+1}$ represents the motion model noise matrix at time $k+1$.
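The prediction equations can be worked through numerically in a reduced 2-state form (position $x$ and velocity $\dot{x}$) so the matrix algebra stays readable; the patent's model uses the full 8-dimensional state. The time step, noise matrix and all numbers below are illustrative assumptions.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

dt = 1.0
A = [[1.0, dt], [0.0, 1.0]]              # state transition matrix A_{k+1}
Q = [[0.01, 0.0], [0.0, 0.01]]           # motion model noise Q_{k+1}
s = [[2.0], [0.5]]                       # best estimate: x = 2, x_dot = 0.5
P = [[1.0, 0.0], [0.0, 1.0]]             # current covariance P_{k,k}

s_pred = mat_mul(A, s)                   # S_{k+1,k} = A_{k+1} S_{k,k}
P_pred = mat_add(mat_mul(mat_mul(A, P), transpose(A)), Q)  # A P A^T + Q

assert s_pred == [[2.5], [0.5]]          # position advanced by velocity * dt
```

Under the linear motion assumption the predicted position is simply the current position plus velocity times the frame interval, which is exactly what the first equation encodes.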
As a further scheme of the invention, the specific steps of optimizing compression by the performance optimizing module are as follows:
the first step: the performance optimization module generates a start linked list for each group of application interfaces of the management platforms of the different systems, and links the start linked lists into an LRU linked list in ascending order of manager access counts;
the second step: according to the interaction information of each group of application interfaces, the data of each group of pages in each start linked list is updated in real time, the start linked list of the application interface with the fewest accesses is taken in turn from the head of the LRU linked list to select victim pages, and selection stops once enough victim pages have been reclaimed;
the third step: the selected victim pages are merged into a block and marked, the compression driver is woken to parse the marked block, the physical pages belonging to the block are obtained and copied into a buffer, and a compression algorithm is then called to compress the physical pages in the buffer into a compressed block, which is stored into the compression area of the performance optimization module.
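The three steps above can be sketched as follows: interfaces are ordered by access count (the LRU-style list), victim pages are taken from the least-accessed interface first, merged into one block, and compressed, with `zlib` standing in for the patent's unspecified compression algorithm. The interface names, access counts and page contents are illustrative assumptions.

```python
import zlib

# Start linked lists per application interface: name -> (access_count, pages).
interfaces = {
    "dashboard": (12, [b"page-a" * 100, b"page-b" * 100]),
    "reports":   (3,  [b"page-c" * 100, b"page-d" * 100]),
    "settings":  (7,  [b"page-e" * 100]),
}

def select_victims(interfaces, needed):
    """Take victim pages from interfaces in ascending access-count order."""
    victims = []
    for _, (_, pages) in sorted(interfaces.items(), key=lambda kv: kv[1][0]):
        for page in pages:
            if len(victims) >= needed:   # enough victim pages reclaimed
                return victims
            victims.append(page)
    return victims

victims = select_victims(interfaces, needed=2)
block = b"".join(victims)                # merge victims into one marked block
compressed = zlib.compress(block)        # the compression driver's output

assert victims == [b"page-c" * 100, b"page-d" * 100]  # least-accessed first
assert len(compressed) < len(block)      # large-granularity compression pays off
```

Compressing the merged block rather than individual pages is what gives the "large granularity" saving the patent claims for the management platform's memory.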
Compared with the prior art, the invention has the beneficial effects that:
1. Through the analysis and comparison module, the artificial intelligent recognition terminal selects the data stored in the cloud as the knowledge range, acquires the related knowledge data and processes it, builds a TransD model to receive the data and compute preferences, adopts a knowledge graph embedding model for learning, takes the learned entity vectors as the input of a CNN layer and outputs the corresponding entity table and relation table, and then installs and configures a Neo4j database, starts the Neo4j service and imports the data to build the comparison map. After target recognition is completed, the cascade matching module calculates and records the interval time of the actual video frames of each piece of image information, establishes a motion model through Kalman filtering theory and uses it to acquire the motion state of each tracking target in real time, collects the detection results of all targets in the current video frames of the image acquisition devices through a multi-target real-time detection algorithm, filters the data and solves a bipartite matching to obtain the set of matching results. In this way the targets to be identified are processed efficiently through the comparison map, the data-processing burden on managers is effectively reduced, full cross-camera matching of newly added confirmed tracking targets is ensured, and missed identifications are avoided.
2. According to the invention, the performance optimization module generates a start linked list for each group of application interfaces of the management platforms of the different systems and links them in ascending order of manager access counts. The data of each group of pages in each start linked list is updated in real time according to the interaction information of each group of application interfaces; the start linked list of the application interface with the fewest accesses is taken in turn from the head of the LRU linked list to select victim pages, stopping once enough victim pages have been reclaimed. The selected victim pages are merged into a block and marked, the compression driver is woken to parse the marked block, the physical pages belonging to the block are obtained and copied into a buffer, and a compression algorithm compresses them into a compressed block that is stored into the compression area of the performance optimization module. The memory of the management platform can thus be compressed at large granularity, the stability of the connection between the control terminal and the management platform is effectively ensured, the manager's operations remain smooth, and the use experience is improved.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a system block diagram of an artificial intelligent recognition terminal provided by the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Example 1
Referring to fig. 1, an artificial intelligent recognition terminal includes an image acquisition device, a control terminal, a management platform, an analysis and comparison module, a cascade matching module, an alarm module, a performance optimization module and a data server.
The image acquisition equipment is used for acquiring surrounding environment image information; the control terminal is used for issuing an operation instruction by a manager and carrying out retrieval and calling on related information; the management platform is used for verifying login manager information and feeding back related data to the manager for checking.
In this embodiment, the image capturing device specifically includes a camera, a monitor, and a video recorder; the control terminal specifically comprises a smart phone, a tablet personal computer, a desktop personal computer and a notebook personal computer.
The contrast analysis module is used for constructing a contrast map and carrying out contrast recognition on related targets according to the acquired image information.
Specifically, the analysis and comparison module selects the data stored in the cloud as the knowledge range, acquires the relevant knowledge data, extracts each group of feature keywords and converts each feature keyword into a word vector. It then constructs a TransD model to receive the related data, uses a translation vector to represent the embedding of information as a vector from the origin of the space, and measures the degree of preference for specific information through the translation and the Euclidean distance of the information. A topic text knowledge subgraph is constructed accordingly, entities and the relations between them are extracted from the subgraph, and a knowledge graph embedding model is adopted for learning; the learned entity vectors are taken as the input of a CNN layer and the corresponding entity table and relation table are output. A Neo4j database is then installed and configured, the Neo4j service is started and the data imported to complete construction of the comparison map.
Specifically, the analysis and comparison module preprocesses the collected image information to obtain the corresponding picture information, performs scale normalization on each piece of picture information and extracts features, and then fuses each extracted group of feature information. Classification and regression are performed on the fusion result to output detection boxes and categories; the target detection box information in the picture information is collected, the corresponding detection box coordinates are generated, the related picture information is cropped with an enlarged margin, and the easy negative samples belonging to the background in each group of cropped pictures are filtered out through an RPN. The cropped target picture is compared with the comparison map: if it is consistent, the related target information is fed back to the manager; if it is not, the target image data is fed back to the manager for manual verification, and the manager's verification result is received to update the comparison map.
It should be further described that the topic feature vector, the word-entity-aligned entity vector and the related entity context vectors are input into the knowledge graph embedding model; the entity vector and the context entity vectors are mapped from the entity space to the word vector space through the relation-entity alignment conversion function, and the features are concatenated as input. The topic description text is input into a Softmax classifier and normalized to obtain the output probability of the target information on the k-th topic; meanwhile, an adaptive moment estimation (Adam) algorithm is adopted to minimize the objective function, and the parameters of the network are updated in each iteration through back propagation until the model meets the fitting requirement.
Example 2
Referring to fig. 1, an artificial intelligent recognition terminal includes an image acquisition device, a control terminal, a management platform, an analysis and comparison module, a cascade matching module, an alarm module, a performance optimization module and a data server.
The cascade matching module is used for performing cross-camera matching on the identified targets.
Specifically, the cascade matching module calculates and records the interval time of the actual video frames of each piece of image information, establishes a motion model through Kalman filtering theory, and acquires the motion state of each tracking target in real time through the established model. A unique number is allocated to each tracking target; under the linear motion assumption, the motion model defines the motion state of the tracking target in a video frame, the motion state in the current video frame is collected, and a prediction equation is constructed to estimate the motion state of each tracking target in the next video frame. The set of motion states of the current video frames of the tracking targets of each image acquisition device is recorded as a characterization position and a characterization covariance matrix, and the detection results of all targets in the current video frames of each image acquisition device, calculated by a multi-target real-time detection algorithm, are collected. The cosine distance between the detection results and the tracking targets is calculated and filtered with a specified threshold to obtain a cost matrix in cosine-distance form; this matrix is filtered again according to the Mahalanobis distance matrix and the related constraint conditions, and a bipartite matching is then solved on it with the Hungarian algorithm to obtain the set of matching results. For detection results that fail to match any existing track, a new undetermined tracking target is created; if an undetermined tracking target is successfully matched with a detection result over multiple consecutive groups of video frames, it is regarded as a newly added confirmed tracking target.
It should be further noted that the motion state is specifically defined as follows:
$$S = (x, y, w, h, \dot{x}, \dot{y}, \dot{w}, \dot{h})^{T}$$
where $S$ represents the motion state of the tracking target, $(x, y, w, h)$ represents the center point coordinates and the width and height of the bounding box of the tracking target, and $(\dot{x}, \dot{y}, \dot{w}, \dot{h})$ represent the corresponding tracking target velocity values;
the specific calculation formulas of the prediction equation are as follows:
$$\hat{S}_{k+1,k} = A_{k+1}\hat{S}_{k,k}$$
$$P_{k+1,k} = A_{k+1}P_{k,k}A_{k+1}^{T} + Q_{k+1}$$
where $\hat{S}_{k+1,k}$ represents the mean value of the motion state of the tracking target in the next video frame as predicted by the linear motion model, $\hat{S}_{k,k}$ represents the best estimate of the motion state of the tracking target in the current video frame, $A_{k+1}$ represents the state transition matrix at time $k+1$, $P_{k+1,k}$ represents the predicted covariance matrix of the motion state of the tracking target in the next video frame, $P_{k,k}$ represents the covariance matrix of the motion state of the tracking target in the current video frame, and $Q_{k+1}$ represents the motion model noise matrix at time $k+1$.
The alarm module is used for alarming the target with the identification abnormality and feeding back related target information to the manager in real time; the data server is used for storing the data collected by the identification terminal for the manager to call and check;
the performance optimization module is used for performing optimized compression on the connection performance of the management platform.
Specifically, the performance optimization module generates a start linked list for each group of application interfaces of the management platforms of the different systems and links the start linked lists into an LRU linked list in ascending order of manager access counts. The data of each group of pages in each start linked list is updated in real time according to the interaction information of each group of application interfaces; the start linked list of the application interface with the fewest accesses is taken in turn from the head of the LRU linked list to select victim pages, stopping once enough victim pages have been reclaimed. The selected victim pages are merged into a block and marked, the compression driver is woken to parse the marked block, the physical pages belonging to the block are obtained and copied into a buffer, and a compression algorithm is then called to compress the physical pages in the buffer into a compressed block, which is stored into the compression area of the performance optimization module.

Claims (8)

1. The artificial intelligent identification terminal is characterized by comprising image acquisition equipment, a control terminal, a management platform, an analysis and comparison module, a cascade matching module, an alarm module, a performance optimization module and a data server;
the image acquisition equipment is used for acquiring surrounding environment image information;
the control terminal is used for issuing an operation instruction by a manager and carrying out retrieval and calling on related information;
the management platform is used for verifying login manager information and feeding back related data to managers for checking;
the analysis and comparison module is used for constructing a comparison map and carrying out comparison and identification of related targets according to the acquired image information;
the cascade matching module is used for performing cross-camera matching on the identified targets;
the alarm module is used for raising an alarm on targets whose identification is abnormal and feeding the related target information back to managers in real time;
the performance optimization module is used for performing optimized compression on the connection performance of the management platform;
the data server is used for storing the data collected by the identification terminal for the manager to call and check.
2. The artificial intelligence identification terminal according to claim 1, wherein the image acquisition device comprises a camera, a monitor and a video recorder; the control terminal specifically comprises a smartphone, a tablet computer, a desktop computer and a notebook computer.
3. The artificial intelligence identification terminal according to claim 1, wherein the analysis and comparison module constructs the comparison map through the following steps:
step one: the analysis and comparison module selects the data stored in the cloud as the knowledge range, acquires the related knowledge data, extracts each group of characteristic keywords, and converts each characteristic keyword into a word vector;
step two: constructing a TransD model to receive the related data, using a translation vector to represent the offset from the origin to an information embedding vector in the space, and measuring the degree of preference for specific information through the translation and the Euclidean distance of the information;
step three: the TransD model constructs a topic-text knowledge subgraph, extracts the entities and the relations between entities from the subgraph, learns them with a knowledge-graph embedding model, and takes the learned entity vectors as the input of a CNN layer to output the corresponding entity table and relation table; a Neo4j database is installed and configured, the Neo4j service is started, and the data are imported to complete construction of the comparison map.
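Step two's translation-and-Euclidean-distance preference measure can be sketched with the standard TransD scoring function, where a lower score indicates a more plausible (more preferred) triple. This is a sketch under assumptions: the function names are illustrative, and the zero projection vectors used in the test reduce the dynamic mapping to the identity.

```python
import numpy as np

def transd_project(e, e_p, r_p):
    """TransD dynamic projection: M = r_p * e_p^T + I, applied to entity vector e.
    e and e_p live in entity space (dim n); r_p lives in relation space (dim m)."""
    M = np.outer(r_p, e_p) + np.eye(len(r_p), len(e_p))
    return M @ e

def transd_score(h, h_p, t, t_p, r, r_p):
    """Euclidean distance of the translated projection: ||h_proj + r - t_proj||.
    A lower score means relation r is more plausible between head h and tail t."""
    h_proj = transd_project(h, h_p, r_p)
    t_proj = transd_project(t, t_p, r_p)
    return np.linalg.norm(h_proj + r - t_proj)
```

With zero projection vectors the score degenerates to the TransE-style distance ||h + r - t||, which is exactly zero for a triple where the translation carries the head onto the tail.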
4. The artificial intelligence identification terminal according to claim 3, wherein the training and learning of the knowledge-graph embedding model in step three comprises the following specific steps:
step (1): inputting the topic feature vector, the entity vector with the word-entity alignment and the related entity context vector into a knowledge graph embedding model, mapping the entity vector and the context entity vector from the entity space to the word vector space through a relation-entity alignment conversion function, and connecting the features together as input;
step (2): inputting the topic description text into a Softmax classifier, normalizing to obtain the output probability of the target information on the k-th topic, minimizing the objective function with an adaptive moment estimation algorithm, and updating the network parameters in each iteration through back propagation until the model meets the fitting requirement.
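The Softmax normalization and the adaptive moment estimation (Adam) parameter update in step (2) can be sketched as follows. The function names are illustrative, and the learning rate and decay constants are Adam's conventional defaults rather than values from the patent.

```python
import numpy as np

def softmax(z):
    """Normalize logits into output probabilities over topics."""
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One adaptive-moment-estimation update minimizing the objective.
    m, v are the running first and second moments; t is the 1-based step count."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In a training loop, `grad` would come from back propagation of the classifier's cross-entropy loss, and `adam_step` would be applied per parameter tensor each iteration.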
5. The artificial intelligence identification terminal according to claim 3, wherein the specific steps of comparison and identification by the analysis and comparison module are as follows:
step I: the analysis and comparison module preprocesses the acquired image information to obtain corresponding picture information, then performs scale normalization on each piece of picture information, extracts features, and fuses each group of extracted feature information;
step II: classifying and regressing the fusion result to output detection frames and categories, collecting the target detection frame information in the picture information, generating corresponding detection frame coordinates, enlarging and cropping the related picture information, and filtering out the simple negative samples belonging to the background in each group of cropped pictures through an RPN;
step III: and comparing the cut target picture with the comparison map, if the target picture is consistent with the comparison map, feeding back the target related information to a manager, and if the target picture is not consistent with the comparison map, feeding back the target image data to the manager for manual verification, and receiving a verification result of the manager to update the comparison map.
6. The artificial intelligence identification terminal according to claim 5, wherein the specific steps of cross-camera matching by the cascade matching module are as follows:
step (1): the cascade matching module calculates the interval time of the actual video frames of each image information, records the calculated interval time of the actual video frames, establishes a motion model through a Kalman filtering theory, and simultaneously acquires the motion state of a tracking target in real time through the established motion model;
step (2): allocating a unique number for each tracking target, defining the motion state of the tracking target in a video frame by a motion model according to the linear motion assumption of the tracking target, collecting the motion state of the tracking target in the current video frame, and constructing a prediction equation to estimate the motion state of each tracking target in the next video frame;
step (3): recording the set of motion states of the tracking targets in the current video frame of each image acquisition device as characterization positions and characterization covariance matrices, and then collecting the detection results of all targets in the current video frame of each image acquisition device computed by a multi-target real-time detection algorithm;
step (4): calculating the cosine distances between the detection results and the tracking targets, filtering with a specified threshold to obtain a cost matrix represented by cosine distances, filtering this cost matrix again according to the Mahalanobis distance matrix and related constraint conditions, and then carrying out bipartite matching on the cost matrix by the Hungarian algorithm to obtain a matching result set;
step (5): for detection results that are not successfully matched, creating pending tracking targets; if a pending tracking target is successfully matched with a detection result over multiple consecutive groups of video frames, it is regarded as a newly added confirmed tracking target.
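The cosine-distance cost matrix, threshold gating, and Hungarian bipartite solve of step (4) can be sketched with SciPy. This is a sketch under assumptions: it assumes `scipy` is available, the gate value of 0.4 and the function names are illustrative, and the large constant stands in for entries excluded by the Mahalanobis-distance constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm solver

INFEASIBLE = 1e5  # stand-in cost for gated-out (infeasible) pairs

def cosine_cost(track_feats, det_feats):
    """Cost matrix of cosine distances between tracked-target and detection features."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T

def match(track_feats, det_feats, gate=0.4):
    """Gate the cost matrix with a threshold, then solve the bipartite matching."""
    cost = cosine_cost(track_feats, det_feats)
    cost = np.where(cost > gate, INFEASIBLE, cost)   # threshold / distance gating
    rows, cols = linear_sum_assignment(cost)         # Hungarian bipartite solve
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < INFEASIBLE]
```

Unmatched tracks and detections (pairs dropped by the gate) would then feed step (5): unmatched detections spawn pending tracking targets.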
7. The artificial intelligence identification terminal according to claim 6, wherein the motion state in step (2) is specifically defined as follows:
S = (x, y, w, h, \dot{x}, \dot{y}, \dot{w}, \dot{h})^T, where S represents the motion state of the tracking target, (x, y, w, h) represent the center-point coordinates and the width and height of the tracking target's bounding box, and (\dot{x}, \dot{y}, \dot{w}, \dot{h}) represent the corresponding velocity values of the tracking target;
the specific calculation formula of the prediction equation in the step (2) is as follows:
\hat{x}_{k+1|k} = A_{k+1} \hat{x}_{k|k}, P_{k+1|k} = A_{k+1} P_{k|k} A_{k+1}^T + Q_{k+1}, where \hat{x}_{k+1|k} represents the mean value of the motion state of the tracking target predicted by the linear motion model in the next video frame, \hat{x}_{k|k} represents the best estimate of the motion state of the tracking target in the current video frame, A_{k+1} represents the state transition matrix at time k+1, P_{k+1|k} represents the predicted covariance matrix of the motion state of the tracking target in the next video frame, P_{k|k} represents the covariance matrix of the motion state of the tracking target in the current video frame, and Q_{k+1} represents the motion-model noise matrix at time k+1.
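The Kalman prediction step of claim 7 can be sketched in NumPy, assuming a constant-velocity (linear motion) model over the 8-dimensional state (x, y, w, h, ẋ, ẏ, ẇ, ḣ); the function names and the dt parameter are illustrative assumptions.

```python
import numpy as np

def kalman_predict(x, P, A, Q):
    """Prediction equations: x_pred = A x; P_pred = A P A^T + Q."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def constant_velocity_A(dt, n=4):
    """State transition matrix for the 2n-dim state (positions then velocities):
    each position component advances by dt times its velocity component."""
    A = np.eye(2 * n)
    A[:n, n:] = dt * np.eye(n)
    return A
```

For a target at the origin with velocity (1, 2) and a fixed box of 10x20, one prediction step with dt = 1 moves the box center to (1, 2) and leaves the size unchanged.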
8. The artificial intelligence identification terminal according to claim 1, wherein the performance optimization module performs the following steps:
step one: the performance optimization module generates a start linked list for each group of application interfaces of the management platforms of different systems, and links the start linked lists into an LRU linked list ordered by manager access count from least to most;
step two: updating the data of each group of pages in each start linked list in real time according to the interaction information of each group of application interfaces, selecting victim pages starting from the application-interface start linked list with the fewest accesses at the head of the LRU linked list, and stopping once enough victim pages have been reclaimed;
step three: merging the selected victim pages into a block and marking it, waking up a compression driver to parse the marked block, obtaining the physical pages belonging to the block, copying the physical pages into a buffer, then invoking a compression algorithm to compress the physical pages in the buffer into a compressed block, and storing the compressed block in the compression area of the performance optimization module.
CN202310664770.5A 2023-06-06 2023-06-06 Artificial intelligent identification terminal Pending CN116863368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310664770.5A CN116863368A (en) 2023-06-06 2023-06-06 Artificial intelligent identification terminal


Publications (1)

Publication Number Publication Date
CN116863368A true CN116863368A (en) 2023-10-10

Family

ID=88225797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310664770.5A Pending CN116863368A (en) 2023-06-06 2023-06-06 Artificial intelligent identification terminal

Country Status (1)

Country Link
CN (1) CN116863368A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519351A (en) * 2022-02-21 2022-05-20 国家计算机网络与信息安全管理中心上海分中心 Subject text rapid detection method based on user intention embedded map learning
WO2022227771A1 (en) * 2021-04-27 2022-11-03 北京百度网讯科技有限公司 Target tracking method and apparatus, device and medium
CN115719283A (en) * 2022-11-28 2023-02-28 西京学院 Intelligent accounting management system
CN115760523A (en) * 2022-11-18 2023-03-07 四川云泷生态科技有限公司 Animal management method and system based on cloud platform
CN116016869A (en) * 2022-12-29 2023-04-25 济南幼儿师范高等专科学校 Campus safety monitoring system based on artificial intelligence and Internet of things



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination