CN112306985A - Digital retina multi-modal feature combined accurate retrieval method - Google Patents
- Publication number: CN112306985A
- Application number: CN201910701647.XA
- Authority: CN (China)
- Prior art keywords: cluster, target vehicle, vehicle, data, image
- Prior art date: 2019-07-31
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/182 Distributed file systems
- G06F16/51 Indexing; data structures therefor; storage structures (information retrieval of still image data)
- G06F16/583 Retrieval characterised by using metadata automatically derived from the content (information retrieval of still image data)
- H04L67/1097 Protocols in which an application is distributed across nodes in the network, for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Abstract
A digital retina multi-modal feature combined accurate retrieval method comprises the following steps. Step 1: build a big data platform network comprising a production cluster and a test cluster, using at least two core switches. For the production cluster, a 10-gigabit optical interface of a service access switch and a gigabit interface of an out-of-band management access switch are attached in pairs below each aggregation switch; the 10-gigabit links carry service data and the gigabit links carry out-of-band management data. For the test cluster, the aggregation switch connects to the gigabit interface of a single service access switch. The service network uses 10 GE or faster network cards. Step 2: after the cluster and its components are set up, first format the system storage space holding the HDFS file system across the whole cluster, and then start the cluster. The method is implemented on the Hadoop cloud computing framework, which has been proven in extensive practice, and therefore offers good stability and is simple and easy to implement.
Description
Technical Field
The invention discloses a digital retina multi-modal feature combined accurate retrieval method. It relates to the fields of security monitoring and artificial intelligence, and more particularly to a digital retina multi-modal feature combined accurate retrieval method and to a method, based on it, for accurately retrieving monitored vehicles or persons.
Background
The video monitoring industry is developing rapidly toward networking, high definition, intelligence, and diversification. With the deepening application of artificial intelligence, cloud computing, big data, and unmanned aerial vehicle technology, diversified intelligent video analysis has become the most distinctive feature of the new generation of video monitoring systems. The intelligent video monitoring platform (hereinafter, the platform) is built on the digital retina end-to-end system developed by the national engineering laboratory for digital video coding and decoding at Peking University. It supports condensation transcoding, image analysis, feature retrieval, and application display of monitoring video; integrates frontier technologies such as visual content analysis, visual feature retrieval, big data analysis, cloud storage, and deep learning; and provides technologies such as parallel multi-channel video condensation transcoding, extraction of visual features of people and vehicles, fast retrieval over massive visual big data, dual-stream remote communication, a software-defined camera network, and real-time service application middleware. It handles both online cameras and offline video files, is suited to the efficient storage, fast retrieval, and intelligent application of city-scale monitoring video, and can provide users with a complete intelligent application solution for large-scale monitoring video. It can be widely applied to intelligent video processing of the checkpoint and micro-checkpoint public security surveillance scenes accessed by public security organs, and provides an effective technical means for comprehensive urban management and for case detection by public security agencies.
Processing the massive video monitoring data of modern society poses a great challenge to the traditional system architecture built on relational databases (Oracle, MySQL, SQL Server); at the data storage level alone, user requirements cannot be met. Moreover, owing to the lack of intelligent techniques for analyzing massive video, the utilization of this information is extremely low. To make full use of the information and safeguard public security, vehicle identification technology has been applied to intelligent video analysis to quickly confirm the identity of a suspect vehicle. However, in the face of massive vehicle image information, the search speed of vehicle identification cannot meet the application requirements of security departments, and a fast method for searching and comparing massive vehicle images is urgently needed.
Disclosure of Invention
The invention aims to provide a method for building an efficient index table for vehicle multi-modal feature vector data, ensuring the real-time performance and reliability of the spatial index of a vehicle multi-modal feature search engine.
A digital retina multi-modal feature combined accurate retrieval method comprises the following steps:
step 1, building a big data platform network comprising a production cluster and a test cluster, and adopting at least two core switches;
for the production cluster, hanging a 10-gigabit optical interface of a service access switch and a gigabit interface of an out-of-band management access switch in pairs below each aggregation switch, the 10-gigabit links carrying service data and the gigabit links carrying out-of-band management data; for the test cluster, connecting the aggregation switch to the gigabit interface of a single service access switch; the service network adopting 10 GE or faster network cards;
step 2, after the cluster and its components are set up, first formatting the system storage space holding the HDFS file system across the whole cluster, and then starting the cluster;
step 3, after starting Hadoop, starting the ZooKeeper component, then starting the HBase component, and starting the Flink component;
step 4, collecting data: acquiring the vehicle body region in a target vehicle image and extracting feature information of the target vehicle image from the vehicle body region, the feature information comprising the feature points of the target vehicle image and the scale, main direction, and relative position of those feature points; then, according to the feature information of the target vehicle image, querying the feature points of the sample images stored in the feature database, together with their scale, main direction, and relative position, and determining the images similar to the target vehicle image;
step 5, cleaning the data;
step 6, distributed cloud storage of data: unstructured data are stored in HDFS and structured data are stored in HBase;
step 7, preparing an input picture, where the search picture may be selected in the following ways:
(1) importing the target vehicle image from a local file;
(2) picture containing the target → capture of the target vehicle image;
(3) local video: video stream → picture containing the target → capture of the target vehicle image;
(4) real-time camera video stream: video stream → picture containing the target → capture of the target vehicle image;
(5) query of an imported feature library: semantic search → list of target vehicle images → multi-angle vehicle images of the selected target;
step 8, multi-modal feature combined accurate retrieval: the method first performs a coarse retrieval over the image database using image features extracted by traditional methods, and then performs a fine retrieval on the basis of an improved V-I deep network model; the original problem of finding a needle in a sea of massive images is thus turned into the achievable problem of finding a needle on a desktop, improving the efficiency of image-based investigation;
step 9, screening search results: pre-training a model for filtering vehicles, wherein the model comprises a vehicle type model, a color model, a sub-brand model, a license plate information model and a characteristic region classification model; the similarity calculation between the vehicle images is mainly used for grading the images according to the similarity of the attributes or the characteristics of the images, and judging the similarity of the whole content of the images according to the grades;
step 10, deploying control over the target vehicle:
if the retrieved picture is not the required vehicle picture, performing a second, third, or further retrieval according to the semantics or the query picture until the picture required for deployment and control is found;
step 11, alarming on the target vehicle;
and step 12, replaying the target vehicle track.
The invention has the following beneficial effects: the method does not require expensive high-performance workstations to build a massive vehicle recognition search engine; it is implemented on the Hadoop cloud computing framework, which has been proven in extensive practice, so it is stable, simple, and easy to implement. The invention also provides an efficient method for indexing vehicle feature vector group data, ensuring the real-time performance and reliability of the spatial index of the vehicle image recognition search engine.
Drawings
FIG. 1 is a diagram of a hardware framework to which the present invention relates;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a diagram of the accurate retrieval concept of the present invention;
FIG. 4 is a comparison of vehicle search in the prior art and in the present invention;
FIG. 5 is a vehicle trajectory playback diagram of the present invention;
fig. 6 is a vehicle position playback diagram of the present invention.
Detailed Description
On the basis of the prior art, a digital retina multi-modal feature combined accurate retrieval method is provided. The system uses three core technologies, namely video structuring, vehicle identification, and big data processing, to extract features of and perform label analysis on the "vehicle" elements involved in urban videos, and combines the structured massive data with the big data processing system to provide social functional departments such as public security with a comprehensive solution that meets practical operational and social-management goals.
To achieve this purpose, the invention provides a digital retina multi-modal feature combined accurate retrieval method.
The system comprises the following layers. First layer: the basic resource layer, which supports acquiring pictures for vehicle identification by interfacing with monitoring platforms, checkpoint systems, and the like, and supports pictures as the source to be analyzed. Second layer: the algorithm analysis layer, which extracts features of the targets in videos or checkpoint pictures by means of deep vehicle recognition technology; basic vehicle features such as license plate, color, model, and brand can be extracted. Third layer: the big data processing layer; the big data platform includes two core components, HDFS and HBase storage, which store the structured and unstructured data produced by identification and provide distributed computing resources. Fourth layer: the cloud computing layer, in which Flink computes in real time and supports retrieval, operation, and association over the data in storage, as illustrated by the sketch below. Fifth layer: the intelligent application layer, in which a series of intelligent applications are formed from the massive identified vehicle-passing data in combination with user requirements, including retrieval, data mining, and big data statistics applications. Sixth layer: the working portal layer, in which search results are iterated multiple times through the two search modes of person/vehicle semantics and search-by-image, and deployment-and-control operations are performed on the search results. The method can build a massive vehicle search engine on a group of inexpensive commodity servers; it is implemented on the Hadoop, HBase, and Flink cloud computing frameworks, which have been proven in extensive practice, and therefore offers good stability and reliability and fast retrieval.
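As an illustration of how the cloud computing layer can use Flink to operate on the stored data, the following is a minimal sketch, not part of the claimed method, that scores a set of candidate vehicle feature vectors against a query vector in parallel and keeps the strong matches. The class name, the use of the DataSet API, cosine similarity, and the 0.85 threshold are assumptions of the sketch rather than details fixed by this disclosure.

```java
// Minimal Flink sketch: parallel comparison of candidate feature vectors
// against a query vector. All names and thresholds are illustrative.
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

import java.util.List;

public class CandidateComparisonJob {

    public static void run(double[] queryVector,
                           List<Tuple2<String, double[]>> candidates) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<String, Double>> scored = env.fromCollection(candidates)
                .map(new MapFunction<Tuple2<String, double[]>, Tuple2<String, Double>>() {
                    @Override
                    public Tuple2<String, Double> map(Tuple2<String, double[]> candidate) {
                        // Score each candidate against the query vector in parallel.
                        return new Tuple2<>(candidate.f0, cosine(queryVector, candidate.f1));
                    }
                });

        // Keep only strong matches; the 0.85 threshold is an assumed tuning parameter.
        scored.filter(t -> t.f1 > 0.85).print();
    }

    private static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
    }
}
```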
The technical scheme of the invention is as follows:
The method is based on a cloud computing framework. A data reservoir is formed by distributed person or vehicle identity information tables and stores massive person or vehicle images, person or vehicle feature vectors, and the corresponding person or vehicle information. The cloud computing layer is composed of a person/vehicle feature vector cluster index table and cluster roster tables and is used to build and maintain the information index table. The outer layer receives tasks, computes vehicle feature vectors, and distributes tasks. The system stores the person or vehicle feature vectors of massive person or vehicle images, obtained with a person/vehicle feature extraction method, in an unstructured HBase database to obtain the person/vehicle identity information table, and, after performing cluster analysis on each dimension of the feature vectors in that table with the K-means clustering algorithm, builds an information index table comprising a person/vehicle feature vector cluster index table and several cluster roster tables.
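The cluster index table and cluster roster tables can be produced by a plain K-means pass over the stored feature vectors. The following is a minimal sketch under stated assumptions: dense feature vectors of equal length, Euclidean distance, and an operator-chosen K and iteration count; none of these choices, nor the class name, are fixed by this disclosure.

```java
// Minimal K-means sketch for building the cluster roster described above.
// Assumptions: dense double[] vectors of equal length, Euclidean distance,
// K and the iteration count chosen by the operator.
import java.util.*;

public class FeatureClusterIndexer {

    /** Assigns each feature vector to one of k clusters and returns
     *  clusterId -> list of row keys, i.e. the contents of the roster tables. */
    public static Map<Integer, List<String>> cluster(
            Map<String, double[]> vectorsByRowKey, int k, int iterations) {
        List<String> keys = new ArrayList<>(vectorsByRowKey.keySet());
        int dim = vectorsByRowKey.get(keys.get(0)).length;
        Random rnd = new Random(42);

        // Initialise centroids with k randomly chosen vectors.
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) {
            centroids[c] = vectorsByRowKey.get(keys.get(rnd.nextInt(keys.size()))).clone();
        }

        int[] assignment = new int[keys.size()];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: nearest centroid by squared Euclidean distance.
            for (int i = 0; i < keys.size(); i++) {
                double[] v = vectorsByRowKey.get(keys.get(i));
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = 0;
                    for (int j = 0; j < dim; j++) {
                        double diff = v[j] - centroids[c][j];
                        d += diff * diff;
                    }
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                assignment[i] = best;
            }
            // Update step: recompute centroids as cluster means.
            double[][] sums = new double[k][dim];
            int[] counts = new int[k];
            for (int i = 0; i < keys.size(); i++) {
                double[] v = vectorsByRowKey.get(keys.get(i));
                for (int j = 0; j < dim; j++) sums[assignment[i]][j] += v[j];
                counts[assignment[i]]++;
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] == 0) continue; // keep old centroid for an empty cluster
                for (int j = 0; j < dim; j++) centroids[c][j] = sums[c][j] / counts[c];
            }
        }

        // Build the roster: one member list per cluster id.
        Map<Integer, List<String>> roster = new HashMap<>();
        for (int i = 0; i < keys.size(); i++) {
            roster.computeIfAbsent(assignment[i], c -> new ArrayList<>()).add(keys.get(i));
        }
        return roster;
    }
}
```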
1. On this basis, the invention further provides a design method for a person/vehicle multi-modal feature combined accurate retrieval engine based on cloud computing technology, characterized in that the massive vehicle identification process is divided into two stages: massive data organization, and vehicle feature search and comparison. The massive data organization stage builds the efficient vehicle feature vector data index table: the feature vectors of massive vehicle images, computed with a feature extraction method, are stored in an unstructured HBase database to obtain the vehicle identity information table, and cluster analysis is performed on each dimension of the feature vectors in that table with the K-means clustering algorithm to build the information index table (comprising a vehicle feature vector cluster index table and several cluster roster tables). In the vehicle feature search and comparison stage, each dimension of the feature vector of the vehicle image to be compared is looked up in the information index table and the retrieved result information is merged, which greatly narrows the range of vehicle data that needs to be compared; the vehicle feature vector comparison is then computed in parallel within the Flink framework, improving computational efficiency and load balance.
2. On this basis, the invention further provides a method for inputting various images and provides condition input modes for combined accurate retrieval, in the following five ways:
(1) importing the target vehicle image from a local file;
(2) picture containing the target → capture of the target vehicle image;
(3) local video: video stream → picture containing the target → capture of the target vehicle image;
(4) real-time camera video stream: video stream → picture containing the target → capture of the target vehicle image;
(5) query of an imported feature library: semantic search → list of target vehicle images → multi-angle vehicle images of the selected target.
3. On this basis, the invention further provides a target library method: the high-confidence result pictures of each vehicle or person retrieval are stored in a target library and used as the condition input of a secondary retrieval, which improves the accuracy of the retrieval results.
4. On this basis, the invention further provides a massive data storage method, namely distributed cloud storage of data: unstructured data are stored in HDFS and structured data are stored in HBase, as sketched below.
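A minimal sketch of this storage split follows: the raw vehicle image is written to HDFS as an unstructured file, while the extracted attributes and the feature vector are written as a structured HBase row. The table name vehicle_info, the column family attr, the row-key layout, and the string serialization of the feature vector are illustrative assumptions, not values mandated by this disclosure.

```java
// Sketch: unstructured image bytes go to HDFS, structured attributes go to HBase.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class VehicleRecordStore {

    public static void store(byte[] imageBytes, String plate, long captureTime,
                             String color, double[] featureVector) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1. Unstructured data: write the image file into HDFS.
        String hdfsPath = "/vehicle/images/" + plate + "_" + captureTime + ".jpg";
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path(hdfsPath), true)) {
            out.write(imageBytes);
        }

        // 2. Structured data: write attributes and the feature vector into an HBase row.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("vehicle_info"))) {
            Put put = new Put(Bytes.toBytes(plate + "_" + captureTime));
            put.addColumn(Bytes.toBytes("attr"), Bytes.toBytes("color"), Bytes.toBytes(color));
            put.addColumn(Bytes.toBytes("attr"), Bytes.toBytes("image_path"), Bytes.toBytes(hdfsPath));
            // A production system would use a compact binary encoding for the vector.
            put.addColumn(Bytes.toBytes("attr"), Bytes.toBytes("feature"),
                    Bytes.toBytes(java.util.Arrays.toString(featureVector)));
            table.put(put);
        }
    }
}
```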
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is further described below with reference to the accompanying drawings and specific examples.
The hardware environment for implementation is as follows: the system server side runs in a Hadoop cluster of four servers. The cluster adopts a master-slave architecture, with one server as the Master node and the other three as Slave nodes, and all servers run in the same local area network.
TABLE 1
The invention uses a FastDFS cluster file system to store picture and video information, an HBase cluster to store structured feature information, a Flink cluster to perform person and vehicle retrieval, and ZooKeeper to provide service reliability. The roles are assigned as follows:
TABLE 2
The invention is implemented as follows:
Step 1, networking: the project big data platform comprises a production cluster and a test cluster, and two core (aggregation) switches are adopted.
For the production cluster, a service access switch (10-gigabit optical interface) and an out-of-band management access switch (gigabit interface) are hung in pairs below each aggregation switch; the 10-gigabit links carry service data and the gigabit links carry out-of-band management data (optional). For the test cluster, a single service access switch (gigabit interface) is connected below the aggregation switch. To prevent the switching bandwidth between nodes from becoming a bottleneck of system performance, the service network adopts 10 GE network cards.
Step 2, after the cluster and its components are set up, first format the system storage space holding the HDFS file system across the whole cluster, and then start the cluster.
Step 3, after starting Hadoop, start the ZooKeeper component, then start the HBase component, and start the Flink component.
Step 4, collecting data: the vehicle body region in a target vehicle image is obtained, and feature information of the target vehicle image is extracted from the vehicle body region; the feature information comprises the feature points of the target vehicle image and the scale, main direction, and relative position of those feature points. Then, according to the feature information of the target vehicle image, the feature points of the sample images stored in the feature database, together with their scale, main direction, and relative position, are queried, and the images similar to the target vehicle image are determined.
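A simplified sketch of the "find similar sample images" lookup is given below. It represents each feature point only by a descriptor vector and counts ratio-test matches per sample image; the data layout is an assumption, and a fuller implementation would also verify the scale, main direction, and relative position of the matched points as described above.

```java
// Sketch: count ratio-test descriptor matches per sample image and return the best one.
import java.util.*;

public class SampleImageMatcher {

    /** Returns the id of the sample image whose feature points best match the target's. */
    public static String bestMatch(List<double[]> targetDescriptors,
                                   Map<String, List<double[]>> sampleDescriptorsById) {
        String bestImage = null;
        int bestCount = -1;
        for (Map.Entry<String, List<double[]>> e : sampleDescriptorsById.entrySet()) {
            int count = 0;
            for (double[] d : targetDescriptors) {
                if (passesRatioTest(d, e.getValue())) count++;
            }
            if (count > bestCount) { bestCount = count; bestImage = e.getKey(); }
        }
        return bestImage;
    }

    /** Lowe-style ratio test: the nearest neighbour is clearly closer than the second nearest. */
    private static boolean passesRatioTest(double[] query, List<double[]> candidates) {
        double best = Double.MAX_VALUE, second = Double.MAX_VALUE;
        for (double[] c : candidates) {
            double d = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = query[i] - c[i];
                d += diff * diff;
            }
            if (d < best) { second = best; best = d; }
            else if (d < second) { second = d; }
        }
        // Squared distances, so compare against the squared ratio threshold (0.7^2).
        return second > 0 && best / second < 0.7 * 0.7;
    }
}
```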
Step 5, cleaning the data. Step 6, distributed cloud storage of data: unstructured data are stored in HDFS and structured data are stored in HBase.
Step 7, preparing an input picture, where the search picture may be selected in the following ways:
(1) importing the target vehicle image from a local file;
(2) picture containing the target → capture of the target vehicle image;
(3) local video: video stream → picture containing the target → capture of the target vehicle image;
(4) real-time camera video stream: video stream → picture containing the target → capture of the target vehicle image;
(5) query of an imported feature library: semantic search → list of target vehicle images → multi-angle vehicle images of the selected target.
Step 8, multi-modal feature combined accurate retrieval: the method first performs a coarse retrieval over the image database using image features extracted by traditional methods, and then performs a fine retrieval on the basis of the improved V-I deep network model. The original problem of finding a needle in a sea of massive images is thus turned into the achievable problem of finding a needle on a desktop, improving the efficiency of image-based investigation.
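The coarse-then-fine flow can be sketched as follows. The coarse stage filters candidates by cheap attributes (color and vehicle type stand in here for the traditional features), and the fine stage re-ranks the survivors by similarity of deep feature vectors. The improved V-I network itself is abstracted behind a pre-computed embedding field; that abstraction, the attribute choice, and the use of cosine similarity are assumptions of the sketch.

```java
// Sketch: coarse attribute filtering followed by fine re-ranking on deep embeddings.
import java.util.*;
import java.util.stream.Collectors;

public class CoarseToFineRetrieval {

    public static class VehicleRecord {
        public final String id;
        public final String color;
        public final String type;
        public final double[] embedding; // produced offline by the fine-stage deep model
        public VehicleRecord(String id, String color, String type, double[] embedding) {
            this.id = id; this.color = color; this.type = type; this.embedding = embedding;
        }
    }

    public static List<String> search(String queryColor, String queryType,
                                      double[] queryEmbedding,
                                      List<VehicleRecord> database, int topK) {
        return database.stream()
                // Coarse stage: discard records whose attributes cannot match.
                .filter(r -> r.color.equalsIgnoreCase(queryColor)
                          && r.type.equalsIgnoreCase(queryType))
                // Fine stage: rank the remaining records by embedding similarity.
                .sorted(Comparator.comparingDouble(
                        (VehicleRecord r) -> -cosine(queryEmbedding, r.embedding)))
                .limit(topK)
                .map(r -> r.id)
                .collect(Collectors.toList());
    }

    private static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
    }
}
```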
Step 9, screening the search results: models for filtering vehicles are pre-trained, including a vehicle type model, a color model, a sub-brand model, a license plate information model, and a characteristic-region classification model. The similarity calculation between vehicle images scores the images according to the similarity of their attributes or features, and the overall content similarity of the images is judged from the scores.
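One way to realize this screening score is a weighted combination of per-attribute agreements produced by the pre-trained models. The sketch below assumes each attribute model yields a binary or [0,1] agreement and uses illustrative weights; the disclosure does not fix a particular scoring formula.

```java
// Sketch: overall image similarity as a weighted sum of attribute agreements.
public class ResultScreening {

    public static double overallSimilarity(boolean sameType, boolean sameColor,
                                           boolean sameSubBrand, double plateSimilarity,
                                           double featureSimilarity) {
        double score = 0.0;
        score += 0.15 * (sameType ? 1.0 : 0.0);      // vehicle type model
        score += 0.15 * (sameColor ? 1.0 : 0.0);     // color model
        score += 0.15 * (sameSubBrand ? 1.0 : 0.0);  // sub-brand model
        score += 0.25 * plateSimilarity;             // license plate information model
        score += 0.30 * featureSimilarity;           // characteristic-region features
        return score;                                // in [0,1]; rank or threshold on this
    }

    /** Normalised plate agreement: 1 - editDistance/maxLength, clamped to [0,1]. */
    public static double plateSimilarity(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
        for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                                    dp[i - 1][j - 1] + sub);
            }
        }
        int maxLen = Math.max(a.length(), b.length());
        return maxLen == 0 ? 1.0 : 1.0 - (double) dp[a.length()][b.length()] / maxLen;
    }
}
```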
Step 10, deploying control over the target vehicle:
if the retrieved picture is not the required vehicle picture, a second, third, or further retrieval can be performed according to the semantics or the query picture until the picture required for deployment and control is found.
Step 11, alarming on the target vehicle.
Step 12, replaying the target vehicle track.
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
Claims (1)
1. A digital retina multi-modal feature combined accurate retrieval method comprises the following steps:
step 1, building a big data platform network comprising a production cluster and a test cluster, and adopting at least two core switches;
for the production cluster, hanging a 10-gigabit optical interface of a service access switch and a gigabit interface of an out-of-band management access switch in pairs below each aggregation switch, the 10-gigabit links carrying service data and the gigabit links carrying out-of-band management data; for the test cluster, connecting the aggregation switch to the gigabit interface of a single service access switch; the service network adopting 10 GE or faster network cards;
step 2, after the cluster and its components are set up, first formatting the system storage space holding the HDFS file system across the whole cluster, and then starting the cluster;
step 3, after starting Hadoop, starting the ZooKeeper component, then starting the HBase component, and starting the Flink component;
step 4, collecting data: acquiring the vehicle body region in a target vehicle image and extracting feature information of the target vehicle image from the vehicle body region, the feature information comprising the feature points of the target vehicle image and the scale, main direction, and relative position of those feature points; then, according to the feature information of the target vehicle image, querying the feature points of the sample images stored in the feature database, together with their scale, main direction, and relative position, and determining the images similar to the target vehicle image;
step 5, cleaning the data;
step 6, distributed cloud storage of data: unstructured data are stored in HDFS and structured data are stored in HBase;
step 7, preparing an input picture, where the search picture may be selected in the following ways:
(1) importing the target vehicle image from a local file;
(2) picture containing the target → capture of the target vehicle image;
(3) local video: video stream → picture containing the target → capture of the target vehicle image;
(4) real-time camera video stream: video stream → picture containing the target → capture of the target vehicle image;
(5) query of an imported feature library: semantic search → list of target vehicle images → multi-angle vehicle images of the selected target;
step 8, multi-modal feature combined accurate retrieval:
the method first performs a coarse retrieval over the image database using image features extracted by traditional methods, and then performs a fine retrieval on the basis of an improved V-I deep network model;
step 9, screening the search results: models for filtering vehicles are pre-trained, including a vehicle type model, a color model, a sub-brand model, a license plate information model, and a characteristic-region classification model; the similarity calculation between vehicle images scores the images according to the similarity of their attributes or features, and the overall content similarity of the images is judged from the scores;
step 10, deploying control over the target vehicle:
if the retrieved picture is not the required vehicle picture, performing a second, third, or further retrieval according to the semantics or the query picture until the picture required for deployment and control is found;
step 11, alarming on the target vehicle;
and step 12, replaying the target vehicle track.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910701647.XA CN112306985A (en) | 2019-07-31 | 2019-07-31 | Digital retina multi-modal feature combined accurate retrieval method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910701647.XA CN112306985A (en) | 2019-07-31 | 2019-07-31 | Digital retina multi-modal feature combined accurate retrieval method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112306985A true CN112306985A (en) | 2021-02-02 |
Family
ID=74485162
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910701647.XA Pending CN112306985A (en) | 2019-07-31 | 2019-07-31 | Digital retina multi-modal feature combined accurate retrieval method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112306985A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107566785A (en) * | 2017-08-02 | 2018-01-09 | 深圳微品时代网络技术有限公司 | A kind of video monitoring system and method towards big data |
CN108009489A (en) * | 2017-11-29 | 2018-05-08 | 合肥寰景信息技术有限公司 | Face for mass data is deployed to ensure effective monitoring and control of illegal activities analysis system |
CN108363771A (en) * | 2018-02-08 | 2018-08-03 | 杭州电子科技大学 | A kind of image search method towards public security investigation application |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114257817A (en) * | 2022-03-01 | 2022-03-29 | 浙江智慧视频安防创新中心有限公司 | Encoding method and decoding method of multitask digital retina characteristic stream |
CN114257817B (en) * | 2022-03-01 | 2022-09-02 | 浙江智慧视频安防创新中心有限公司 | Encoding method and decoding method of multi-task digital retina characteristic stream |
CN117437781A (en) * | 2023-11-13 | 2024-01-23 | 北京易华录信息技术股份有限公司 | Traffic intersection order management and control method based on digital retina and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |