CN116894103B - Data classification storage system for specific scene - Google Patents


Info

Publication number
CN116894103B
CN116894103B (Application CN202310906827.8A)
Authority
CN
China
Prior art keywords
monitored
sub
information
area
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310906827.8A
Other languages
Chinese (zh)
Other versions
CN116894103A (en)
Inventor
郑美惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Wintoo Information Technology Co ltd
Original Assignee
Anhui Wintoo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Wintoo Information Technology Co ltd filed Critical Anhui Wintoo Information Technology Co ltd
Priority to CN202310906827.8A priority Critical patent/CN116894103B/en
Publication of CN116894103A publication Critical patent/CN116894103A/en
Application granted granted Critical
Publication of CN116894103B publication Critical patent/CN116894103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/735 Querying video data: filtering based on additional data, e.g. user or group profiles
    • G06F16/783 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/7837 Retrieval using metadata automatically derived from objects detected or recognised in the video content
    • G06F3/061 Improving I/O performance (interfaces specially adapted for storage systems)
    • G06F3/062 Securing storage systems
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06V10/24 Aligning, centring, orientation detection or correction of the image (image preprocessing)
    • G06V10/40 Extraction of image or video features
    • G06V10/764 Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects (scene-specific elements)
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a data classification storage system for a specific scene, belonging to the technical field of data storage. The system analyses and processes historical data to obtain the internal relations among the areas monitored by the cameras of the image acquisition unit, and stores the image information acquired by the cameras in a targeted, distributed manner, so that a targeted retrieval analysis can be performed when a specific abnormal feature is retrieved, avoiding an exhaustive, full-coverage search of the image information acquired by all cameras.

Description

Data classification storage system for specific scene
Technical Field
The invention belongs to the technical field of data storage, and particularly relates to a data classification storage system for a specific scene.
Background
Acquiring the image information of a designated area with monitoring cameras is a common surveillance means in the prior art. With the development of storage technology, a large amount of image information can be stored, which ensures data integrity; at the same time, however, the sheer volume of data makes screening the data difficult.
Disclosure of Invention
The invention aims to provide a data classification storage system for a specific scene, which solves the prior-art problem that identifying abnormal features in massive image information requires a full-coverage search, greatly reducing retrieval efficiency.
The aim of the invention can be achieved by the following technical scheme:
a scene-specific data classification storage system comprising:
the image acquisition unit comprises cameras distributed in the monitored area, and each camera acquires the image information in the monitored area and transmits the image information to the classification unit;
the feature recognition unit is used for recognizing and analyzing the image information acquired by the image acquisition unit and recognizing abnormal features in the image information;
the classifying unit is used for classifying the image information acquired by each camera in the image acquisition unit and then sending the image information to the storage unit for storage;
the storage unit comprises a plurality of sub-storage units, and the image information acquired by the image acquisition unit is stored in a distributed mode through the plurality of sub-storage units;
the method for classifying the image information acquired by each camera in the image acquisition unit through the classification unit and then sending the image information to the storage unit for storage comprises the following steps:
s1, acquiring image information of each monitoring subarea of a monitored area through each camera in an image acquisition unit, and transmitting the image information to a classification unit;
marking a characteristic information set of a person as target characteristic information through a classification unit;
analyzing the image information acquired by the camera through the feature recognition unit to acquire target feature information;
s2, acquiring a monitored path of target characteristic information in a monitored area;
the monitored path of the target feature in the monitored area refers to the sequence of the target feature information in each monitored subarea;
s3, marking different feature information sets as target feature information through a classification unit, and acquiring all corresponding monitored paths in a monitored area within the past preset time t2 to form a monitored path set;
acquiring the number of occurrences ri of each monitored sub-area in the monitored path set;
taking one monitored sub-area as a centre point; when the ri value of every monitored sub-area directly reachable from the centre point is smaller than the ri value of the centre point, the centre point is regarded as a group core;
marking all monitored sub-areas which can be directly reached by the group cores as first-level associated sub-areas corresponding to the information groups;
acquiring a secondary association sub-area corresponding to each primary association sub-area in the information group;
when the association coefficient G between one monitored sub-area and a corresponding first-level association sub-area is larger than a preset value G1y, the corresponding monitored sub-area is considered to be a second-level association sub-area of the corresponding information group;
sequentially acquiring all levels of associated subregions corresponding to the information groups, and marking all levels of associated subregions and group cores as the same information group;
the fact that the target characteristic information can directly reach the corresponding monitored subarea without passing through other monitored subareas means that the target characteristic information can directly reach the corresponding monitored subarea;
s4, sequentially obtaining a plurality of information groups according to the method of the step S3, taking the rest monitored subareas which do not belong to any information group as at least one information group, and storing the monitoring image information of the monitoring subareas corresponding to each information group as a group in a sub-storage unit.
As a further aspect of the present invention, when the time difference between two successive occurrences of the target feature information in the monitored sub-areas is greater than or equal to a preset value t1, the two corresponding occurrences are considered, for that target feature information, to belong to different monitored paths.
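The path-splitting rule above — a new monitored path begins whenever two successive sightings are separated by at least the threshold t1 — can be sketched as follows. This is an illustrative sketch only; the function name and the (time, sub_area) data layout are assumptions, not part of the patented system:

```python
def split_into_paths(sightings, t1):
    """Split timestamped sightings (time, sub_area) of one target into
    separate monitored paths whenever the gap between two consecutive
    sightings reaches the threshold t1 (illustrative names)."""
    paths = []
    current = []
    last_time = None
    for time, area in sorted(sightings):
        if last_time is not None and time - last_time >= t1:
            paths.append(current)  # gap too large: close the current path
            current = []
        current.append(area)
        last_time = time
    if current:
        paths.append(current)
    return paths
```

For example, with t1 = 60, sightings at times 0, 5, and 100 would yield two monitored paths, since the 95-unit gap exceeds the threshold.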
As a further scheme of the invention, the calculation method of the association coefficient G between the two monitored subareas comprises the following steps:
marking the two monitored sub-areas respectively as a first object and a second object;
acquiring a monitored path set;
acquiring the number of times k1 that each feature information set departs from the first object, and the number of times k2 that the corresponding feature information set, having departed from the first object, goes to the second object;
acquiring the number of times k3 that each feature information set departs from the second object, and the number of times k4 that the corresponding feature information set, having departed from the second object, goes to the first object;
when either k2/k1 or k4/k3 is larger than a preset value k5, the larger of k2/k1 and k4/k3 is taken as the association coefficient G between the two corresponding monitored sub-areas.
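As a non-authoritative sketch of the association-coefficient rule, assuming monitored paths are given as lists of sub-area identifiers (the function name and return convention are illustrative assumptions):

```python
def association_coefficient(paths, first, second, k5):
    """Association coefficient G between two monitored sub-areas:
    k1 = departures from `first`, k2 = of those, moves direct to `second`;
    k3/k4 symmetrically from `second` back to `first`. If either ratio
    exceeds the preset value k5, G is the larger ratio; otherwise the
    pair is treated as unassociated (None here, by assumption)."""
    k1 = k2 = k3 = k4 = 0
    for path in paths:
        for a, b in zip(path, path[1:]):  # consecutive moves along a path
            if a == first:
                k1 += 1
                if b == second:
                    k2 += 1
            if a == second:
                k3 += 1
                if b == first:
                    k4 += 1
    ratios = [k / n for k, n in ((k2, k1), (k4, k3)) if n]
    best = max(ratios, default=0.0)
    return best if best > k5 else None
```

With paths A→B, A→C, A→B, B→A and k5 = 0.5, the forward ratio is 2/3 and the backward ratio is 1/1, so G would be 1.0.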
As a further scheme of the invention, the system further comprises a retrieval unit, through which a user inputs an abnormal feature and may also input the monitored sub-area in which the corresponding abnormal feature appears;
SS1, when a user specifies an abnormal feature, identifying the corresponding abnormal feature through the feature recognition unit, then acquiring the monitored sub-area in which the abnormal feature first appears, and acquiring the sub-storage unit to which that monitored sub-area belongs;
SS2, searching in the corresponding sub-storage unit, acquiring a motion path of the abnormal feature in the monitored sub-area corresponding to the corresponding sub-storage unit according to the time sequence of the abnormal feature in the monitored sub-area, and marking the path as a fragment path;
acquiring monitored subareas corresponding to the beginning and the end of the fragment path, and marking the corresponding monitored subareas as extension subareas;
SS3, acquiring the other monitored sub-areas directly reachable from the extension sub-area, excluding those in the same information group as the extension sub-area, and marking them as to-be-joined sub-areas;
acquiring the association coefficient G between the extension sub-area and each to-be-joined sub-area;
grouping the to-be-joined sub-areas by information group, and calculating the sum of the association coefficients of the to-be-joined sub-areas corresponding to each information group;
SS4, sorting the information groups in descending order of the sum of the association coefficients of their corresponding to-be-joined sub-areas;
the feature recognition unit sequentially carries out recognition analysis on the image information in the sub-storage units corresponding to the information groups according to the sequence;
and when the corresponding abnormal feature is found, repeating steps SS2 to SS4 until the corresponding abnormal feature can no longer be identified in any corresponding sub-storage unit.
As a further scheme of the present invention, when only the abnormal feature is input through the retrieval unit, the sub-storage units are searched sequentially in descending order of the ri value of the monitored sub-areas corresponding to each sub-storage unit, until the corresponding abnormal feature is found.
The invention has the beneficial effects that:
1. According to the invention, historical data are analysed and processed to obtain the internal relations among the areas monitored by the cameras of the image acquisition unit, and the image information acquired by the cameras is stored in a targeted, distributed manner. A targeted retrieval analysis can therefore be performed when a specific abnormal feature is retrieved, avoiding an exhaustive, full-coverage search of the image information acquired by all cameras. This reduces the amount of data that needs to be searched, improves the retrieval efficiency for the corresponding abnormal feature, lets a user quickly obtain the activity data of the corresponding abnormal feature in the monitored area, reduces the look-up frequency of rarely used stored information, and lowers the storage cost.
2. According to the invention, the image information acquired by the image acquisition unit is stored in a distributed manner, which effectively improves the safety of the image information acquired by the cameras.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a framework of a data sort storage system of a specific scenario of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
A data classification storage system for a specific scenario, as shown in fig. 1, includes:
the image acquisition unit comprises cameras distributed in the monitored area, and each camera acquires the image information in the monitored area and transmits the image information to the classification unit;
the feature recognition unit is used for recognizing and analyzing the image information acquired by the image acquisition unit and recognizing abnormal features in the image information;
the classifying unit is used for classifying the image information acquired by each camera in the image acquisition unit and then sending the image information to the storage unit for storage;
the storage unit comprises a plurality of sub-storage units, and the image information acquired by the image acquisition unit is stored in a distributed mode through the plurality of sub-storage units;
the searching unit is used for inputting the abnormal characteristics by a user and inputting the monitored subareas with the corresponding abnormal characteristics by the searching unit;
the method for classifying the image information acquired by each camera in the image acquisition unit through the classification unit and then sending the image information to the storage unit for storage comprises the following steps:
s1, acquiring image information of each monitoring subarea of a monitored area through each camera in an image acquisition unit, and transmitting the image information to a classification unit;
each camera is marked as C1, C2, … and Cn in sequence, wherein n is the number of cameras arranged in the monitored area;
marking the feature information set of one person as target feature information through a classification unit;
analyzing the image information acquired by the camera Ci through the feature recognition unit to acquire target feature information;
wherein 1 ≤ i ≤ n;
s2, acquiring a monitored path of target characteristic information in a monitored area;
the monitored path of the target feature in the monitored area refers to the sequence of the target feature information in each monitored subarea;
when the time difference between the two occurrences of the target feature information in the monitored sub-area is greater than or equal to a preset value t1, the two corresponding monitored sub-areas are considered to belong to different monitored paths for the target feature information;
s3, marking different feature information sets as target feature information through a classification unit, and acquiring all corresponding monitored paths in a monitored area within the past preset time t2 to form a monitored path set;
acquiring the number of occurrences ri of each monitored sub-area in the monitored path set;
taking one monitored sub-area as a centre point; when the ri value of every monitored sub-area directly reachable from the centre point is smaller than the ri value of the centre point, the centre point is regarded as a group core;
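The group-core condition above amounts to finding local maxima of the occurrence count ri over the direct-reachability graph. A minimal sketch, with the dict-based data structures being assumptions for illustration:

```python
def find_group_cores(counts, neighbours):
    """Return the monitored sub-areas that qualify as group cores:
    every directly reachable neighbour has a strictly smaller
    occurrence count ri than the candidate centre point (i.e. a local
    maximum on the adjacency graph). `counts` maps sub-area -> ri;
    `neighbours` maps sub-area -> set of directly reachable sub-areas."""
    return [
        area for area, r in counts.items()
        if all(counts[nb] < r for nb in neighbours.get(area, ()))
    ]
```

For instance, if sub-area A (ri = 5) directly reaches B (ri = 3) and C (ri = 4), A is a group core; B, C, and any sub-area reaching a higher-count neighbour are not.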
marking the group cores and the corresponding monitored sub-areas which can be reached directly as the same information group;
marking all monitored sub-areas which can be directly reached by the group cores as first-level associated sub-areas corresponding to the information groups;
acquiring a secondary association sub-area corresponding to each primary association sub-area in the information group;
when the association coefficient G between one monitored sub-area and a corresponding first-level association sub-area is larger than a preset value G1y, the corresponding monitored sub-area is considered to be a second-level association sub-area of the corresponding information group;
in one embodiment of the present invention, the method for calculating the correlation coefficient G between two monitored sub-areas is:
marking the two monitored sub-areas respectively as a first object and a second object;
acquiring a monitored path set;
acquiring the number of times k1 that each feature information set departs from the first object, and the number of times k2 that the corresponding feature information set, having departed from the first object, goes to the second object;
acquiring the number of times k3 that each feature information set departs from the second object, and the number of times k4 that the corresponding feature information set, having departed from the second object, goes to the first object;
when either k2/k1 or k4/k3 is larger than a preset value k5, the larger of k2/k1 and k4/k3 is taken as the association coefficient G between the two corresponding monitored sub-areas;
sequentially acquiring each level of associated sub-area corresponding to the information group according to the method, and marking each level of associated sub-area and the group core as the same information group;
wherein "directly reachable" means that the target feature information can reach the corresponding monitored sub-area without passing through any other monitored sub-area;
s4, sequentially obtaining a plurality of information groups according to the method of the step S3, taking the rest monitored subareas which do not belong to any information group as at least one information group, and storing the monitoring image information of the monitoring subareas corresponding to each information group as a group in a sub-storage unit;
s5, when the user determines the abnormal characteristics, identifying the corresponding abnormal characteristics through the characteristic identification unit, then acquiring a monitored subarea in which the abnormal characteristics appear, and acquiring a sub-storage unit to which the monitored subarea belongs;
searching in the corresponding sub-storage units, acquiring which monitored subareas the abnormal features appear in, acquiring the motion path of the abnormal features in the monitored subareas corresponding to the corresponding sub-storage units according to the time sequence, and marking the path as a fragment path;
acquiring monitored subareas corresponding to the beginning and the end of the fragment path, and marking the corresponding monitored subareas as extension subareas;
the beginning of the fragment path refers to the monitored subarea which is first involved when the abnormal feature enters the corresponding sub-storage unit from the other sub-storage units, and the end of the fragment path refers to the monitored subarea which is last involved when the abnormal feature enters the other sub-storage units from the corresponding sub-storage unit;
two or more fragment paths can be included in the monitoring range corresponding to one of the sub-storage units;
acquiring the other monitored sub-areas directly reachable from the extension sub-area, excluding those in the same information group as the extension sub-area, and marking them as to-be-joined sub-areas;
acquiring the association coefficient G between the extension sub-area and each to-be-joined sub-area;
grouping the to-be-joined sub-areas by information group, and calculating the sum of the association coefficients of the to-be-joined sub-areas corresponding to each information group;
sorting the information groups in descending order of the sum of the association coefficients of their corresponding to-be-joined sub-areas;
the feature recognition unit sequentially carries out recognition analysis on the image information in the sub-storage units corresponding to the information groups according to the sequence;
and when the corresponding abnormal feature is found, the above steps are repeated until the corresponding abnormal feature can no longer be identified in any corresponding sub-storage unit.
It should be noted that, when the feature recognition unit sequentially performs recognition analysis on the image information in the sub-storage units corresponding to each information group in sequence, a certain time range may be determined first, and the feature recognition unit performs recognition analysis only on the image information in the corresponding time range;
therefore, the processing amount of data can be greatly reduced, and the retrieval efficiency is improved.
In one embodiment of the invention, when only the abnormal feature is input through the retrieval unit, the sub-storage units are searched sequentially in descending order of the ri value of the monitored sub-areas corresponding to each sub-storage unit, until the corresponding abnormal feature is found, thereby determining where the abnormal feature occurred;
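The fallback ordering in this embodiment reduces to a simple sort by ri. A sketch under the assumption that each sub-storage unit can be summarised by a single ri value for its monitored sub-areas:

```python
def fallback_search_order(unit_ri):
    """When only the abnormal feature is given, search sub-storage
    units in descending order of the ri value of their monitored
    sub-areas. `unit_ri` maps sub-storage unit -> ri (assumed layout)."""
    return sorted(unit_ri, key=unit_ri.get, reverse=True)
```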
according to the method, the historical data are analyzed and processed to obtain the internal relation among the monitored areas of the cameras in the image acquisition unit, and the image information acquired by the cameras is distributed and stored in a targeted mode, so that the targeted retrieval analysis can be performed when specific abnormal features are retrieved, the image information acquired by all the cameras is prevented from being retrieved comprehensively and in a covering mode, on one hand, the retrieval efficiency of the corresponding abnormal features can be improved, on the other hand, the search frequency of the unusual stored information can be reduced, and the storage cost is reduced;
in addition, the invention stores the image information acquired by the image acquisition information in a distributed way, thereby effectively improving the safety of the image information acquired by the camera.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative and explanatory of the invention; various modifications, additions, and substitutions may be made to the described embodiments by those skilled in the art without departing from the scope of the invention as defined in the claims.

Claims (3)

1. A scene-specific data classification storage system, comprising:
the image acquisition unit comprises cameras distributed in the monitored area, and each camera acquires the image information in the monitored area and transmits the image information to the classification unit;
the feature recognition unit is used for recognizing and analyzing the image information acquired by the image acquisition unit and recognizing abnormal features in the image information;
the classifying unit is used for classifying the image information acquired by each camera in the image acquisition unit and then sending the image information to the storage unit for storage;
the storage unit comprises a plurality of sub-storage units, and the image information acquired by the image acquisition unit is stored in a distributed mode through the plurality of sub-storage units;
the method for classifying the image information acquired by each camera in the image acquisition unit through the classification unit and then sending the image information to the storage unit for storage comprises the following steps:
s1, acquiring image information of each monitoring subarea of a monitored area through each camera in an image acquisition unit, and transmitting the image information to a classification unit;
marking a characteristic information set of a person as target characteristic information through a classification unit;
analyzing the image information acquired by the camera through the feature recognition unit to acquire target feature information;
s2, acquiring a monitored path of target characteristic information in a monitored area;
the monitored path of the target characteristic information in the monitored area refers to the sequence of the target characteristic information in each monitored subarea;
s3, marking different feature information sets as target feature information through a classification unit, and acquiring all corresponding monitored paths in a monitored area within the past preset time t2 to form a monitored path set;
acquiring the number of occurrences ri of each monitored sub-area in the monitored path set;
taking one monitored sub-area as a centre point; when the ri value of every monitored sub-area directly reachable from the centre point is smaller than the ri value of the centre point, the centre point is regarded as a group core;
marking all monitored sub-areas which can be directly reached by the group cores as first-level associated sub-areas corresponding to the information groups;
acquiring a secondary association sub-area corresponding to each primary association sub-area in the information group;
when the association coefficient G between one monitored sub-area and a corresponding first-level association sub-area is larger than a preset value G1y, the corresponding monitored sub-area is considered to be a second-level association sub-area of the corresponding information group;
sequentially acquiring all levels of associated subregions corresponding to the information groups, and marking all levels of associated subregions and group cores as the same information group;
the fact that the target characteristic information can directly reach the corresponding monitored subarea without passing through other monitored subareas means that the target characteristic information can directly reach the corresponding monitored subarea;
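A minimal sketch of the visit counting and group-core selection in step S3, under the assumption that direct reachability is given as an adjacency map; the names and data shapes are illustrative, not taken from the claims:

```python
from collections import Counter

def visit_counts(paths):
    """ri: number of occurrences of each monitored sub-area
    across all monitored paths in the path set."""
    return Counter(area for path in paths for area in path)

def group_cores(adjacency, r):
    """A sub-area is a group core when the ri value of every directly
    reachable neighbour is strictly smaller than its own ri value."""
    return [a for a, neighbours in adjacency.items()
            if all(r[n] < r[a] for n in neighbours)]

paths = [['A', 'B'], ['B', 'C'], ['B']]
r = visit_counts(paths)                       # A:1, B:3, C:1
adj = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
print(group_cores(adj, r))                    # B dominates both neighbours
```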
s4, sequentially obtaining a plurality of information groups according to the method of step S3, grouping the remaining monitored sub-areas that do not belong to any information group into at least one information group, and storing the monitoring image information of the monitored sub-areas of each information group together in one sub-storage unit;
the association coefficient G between two monitored sub-areas is calculated as follows:
marking the two monitored sub-areas as the first object and the second object respectively;
acquiring the monitored path set;
acquiring the number of times k1 that a feature information set departs from the first object, and the number of times k2 that it departs from the first object and goes directly to the second object;
acquiring the number of times k3 that a feature information set departs from the second object, and the number of times k4 that it departs from the second object and goes directly to the first object;
when either k2/k1 or k4/k3 is larger than a preset value k5, the larger of k2/k1 and k4/k3 is taken as the association coefficient G between the two monitored sub-areas;
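The association-coefficient calculation above can be sketched as follows; the default threshold `k5 = 0.2` is an arbitrary assumption, since the claims leave the preset value open:

```python
def association_coefficient(paths, first, second, k5=0.2):
    """G between two monitored sub-areas: the larger of the transition
    ratios k2/k1 and k4/k3, kept only when it exceeds the preset k5."""
    k1 = k2 = k3 = k4 = 0
    for path in paths:
        for a, b in zip(path, path[1:]):
            if a == first:            # departure from the first object
                k1 += 1
                k2 += (b == second)   # ...going directly to the second
            elif a == second:         # departure from the second object
                k3 += 1
                k4 += (b == first)    # ...going directly to the first
    r12 = k2 / k1 if k1 else 0.0
    r21 = k4 / k3 if k3 else 0.0
    g = max(r12, r21)
    return g if g > k5 else 0.0

paths = [['A', 'B'], ['A', 'C'], ['B', 'A']]
print(association_coefficient(paths, 'A', 'B'))  # k4/k3 = 1/1 dominates
```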
the data classified storage system further comprises a retrieval unit; a user inputs an abnormal feature through the retrieval unit, and may also input the monitored sub-area in which the abnormal feature appears;
SS1, when the user specifies an abnormal feature, identifying the abnormal feature through the feature recognition unit, acquiring the monitored sub-area in which the abnormal feature first appears, and acquiring the sub-storage unit to which that monitored sub-area belongs;
SS2, searching within the corresponding sub-storage unit, acquiring the motion path of the abnormal feature through the monitored sub-areas of that sub-storage unit according to the time order in which the abnormal feature appears, and marking this path as a fragment path;
acquiring the monitored sub-areas at the beginning and end of the fragment path, and marking them as extension sub-areas;
SS3, acquiring the other monitored sub-areas directly reachable from each extension sub-area, excluding those in the same information group as the extension sub-area, and marking them as to-be-joined sub-areas;
acquiring the association coefficient G between the extension sub-area and each to-be-joined sub-area;
grouping the to-be-joined sub-areas by information group, and calculating the sum of the association coefficients of the to-be-joined sub-areas of each information group;
SS4, sorting the information groups in descending order of the sum of the association coefficients of their to-be-joined sub-areas;
the feature recognition unit then performs recognition analysis on the image information in the sub-storage units of the information groups in that order;
when the abnormal feature is found, steps SS2 to SS4 are repeated until the abnormal feature can no longer be identified in the corresponding sub-storage unit.
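The group ranking in steps SS3 and SS4 could be sketched as below; `group_of` (information-group membership) and `g_value` (the precomputed G between each to-be-joined sub-area and the extension sub-area) are hypothetical inputs, not structures named in the claims:

```python
def rank_groups(to_join, group_of, g_value):
    """Sum the association coefficients of the to-be-joined sub-areas
    per information group, then return the group ids ordered from the
    largest sum down (the search order for the sub-storage units)."""
    totals = {}
    for area in to_join:
        grp = group_of[area]
        totals[grp] = totals.get(grp, 0.0) + g_value[area]
    return sorted(totals, key=totals.get, reverse=True)

# Group 1 accumulates 0.3 + 0.4 and is searched before group 2 (0.5).
print(rank_groups(['X', 'Y', 'Z'],
                  {'X': 1, 'Y': 2, 'Z': 1},
                  {'X': 0.3, 'Y': 0.5, 'Z': 0.4}))
```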
2. The data classification storage system for a specific scene according to claim 1, wherein when the time difference between two occurrences of the target feature information in a monitored sub-area is equal to or greater than a preset value t1, the two corresponding occurrences are considered to belong to different monitored paths of the target feature information.
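The path-splitting rule of claim 2 might be sketched as follows, again assuming timestamped `(timestamp, sub_area_id)` sightings; the data shapes are illustrative:

```python
def split_paths(sightings, t1):
    """Split one time-ordered sighting sequence into separate monitored
    paths whenever the gap between consecutive sightings reaches t1."""
    paths, current, last_t = [], [], None
    for t, area in sorted(sightings):
        if last_t is not None and t - last_t >= t1:
            paths.append(current)   # gap >= t1 starts a new path
            current = []
        current.append(area)
        last_t = t
    if current:
        paths.append(current)
    return paths

# A 9-unit gap with t1 = 5 splits the record into two monitored paths.
print(split_paths([(0, 'A'), (1, 'B'), (10, 'C')], 5))
```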
3. The data classification storage system for a specific scene according to claim 1, wherein when only the abnormal feature is input through the retrieval unit, the sub-storage units are searched sequentially in descending order of the ri values of the monitored sub-areas corresponding to each sub-storage unit, until the corresponding abnormal feature is found.
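The descending-ri search order of claim 3 could be sketched as follows; aggregating the ri values of a sub-storage unit's monitored sub-areas by summation is an assumption, since the claim does not specify how multiple sub-areas per unit are combined:

```python
def search_order(sub_units, r):
    """Order sub-storage units for searching, most-visited first.
    `sub_units` maps a unit id to the monitored sub-areas it stores;
    `r` maps each sub-area to its ri occurrence count."""
    return sorted(sub_units,
                  key=lambda u: sum(r[a] for a in sub_units[u]),
                  reverse=True)

# Unit 1 holds the busiest sub-area (ri = 5) and is searched first.
print(search_order({1: ['A'], 2: ['B', 'C']},
                   {'A': 5, 'B': 1, 'C': 1}))
```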
CN202310906827.8A 2023-07-24 2023-07-24 Data classification storage system for specific scene Active CN116894103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310906827.8A CN116894103B (en) 2023-07-24 2023-07-24 Data classification storage system for specific scene


Publications (2)

Publication Number Publication Date
CN116894103A CN116894103A (en) 2023-10-17
CN116894103B true CN116894103B (en) 2024-02-09

Family

ID=88313368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310906827.8A Active CN116894103B (en) 2023-07-24 2023-07-24 Data classification storage system for specific scene

Country Status (1)

Country Link
CN (1) CN116894103B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101033238B1 (en) * 2010-08-26 2011-05-06 주식회사 대덕지에스 Video surveillance system and recording medium on which a video surveillance program is recorded
CN103049460A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Video surveillance scene information classifying and storing method and search method
CN106254494A (en) * 2016-08-17 2016-12-21 浙江诚名智能工程有限公司 People-flow monitoring system
WO2020103293A1 (en) * 2018-11-22 2020-05-28 深圳云天励飞技术有限公司 Method, device, and electronic device for presenting individual search information
CN115731515A (en) * 2022-11-21 2023-03-03 合肥辉煌杀虫服务有限公司 Positioning system for mouse moving area and moving track

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015122163A1 (en) * 2014-02-14 2015-08-20 日本電気株式会社 Video processing system


Also Published As

Publication number Publication date
CN116894103A (en) 2023-10-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant