CN116069976A - Regional video analysis method and system - Google Patents
Regional video analysis method and system
- Publication number: CN116069976A
- Application number: CN202310201769.9A
- Authority: CN (China)
- Prior art keywords: video, camera, database, meta, analysis
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/787 — Retrieval of video data characterised by using metadata, e.g. geographical or spatial information such as location (G06F — electric digital data processing; G06F16/00 — information retrieval; database and file system structures therefor)
- G06F16/73 — Querying, in information retrieval of video data
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of video detection and analysis, and in particular discloses a regional video analysis method and system. The method comprises: querying the position information of cameras and establishing databases according to the position information; acquiring meta-videos with preset cameras and storing the meta-videos into the databases according to the working parameters of the cameras; receiving an analysis period input by a user, extracting the meta-videos in the databases for that period, and stitching and fitting them to generate a regional video; and analyzing the regional video against a preset sample video library to generate an analysis report. In the method, meta-videos are acquired by the preset cameras and stitched according to the camera positions to obtain the regional video; a computer device calculates the correlation coefficient between the regional video and each preset sample video and determines the sample video that best matches the regional video, thereby generating the analysis report. The working pressure on monitoring personnel is thus greatly reduced.
Description
Technical Field
The invention relates to the technical field of video detection and analysis, in particular to a regional video analysis method and system.
Background
Production and management activities alike are confined to a certain area, and within this area a manager often needs monitoring, the purpose of which is to obtain the status of the area in real time.
Existing monitoring demands are mostly met by people working with cameras: the camera is the monitoring tool, while the person is the monitoring subject. Monitoring staff must watch the information fed back by the cameras in real time, yet most of that information is repetitive, uneventful video, making the work extremely tedious. How to reduce the amount of ineffective monitoring performed by staff is the technical problem addressed by the technical scheme of the invention.
Disclosure of Invention
The invention aims to provide a regional video analysis method and a regional video analysis system, which are used for solving the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a method of regional video analysis, the method comprising:
inquiring the position information of a camera, and establishing a database according to the position information;
acquiring a meta-video according to a preset camera, and storing the meta-video into the database according to working parameters of the camera;
receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and splicing and fitting to generate a regional video;
analyzing the regional video based on a preset sample video library to generate an analysis report; wherein the analysis report is one of a sample video library.
As a further scheme of the invention: the step of inquiring the position information of the camera and establishing a database according to the position information comprises the following steps:
inquiring the position information of a camera, and establishing a camera position diagram;
taking each camera as a center, acquiring the total number of cameras within a preset radius range, and determining a core camera according to the total number of cameras;
sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of cameras in the detection circles reaches a preset number threshold, and establishing a database connected with each camera in the detection circles;
circularly executing until all cameras are connected with the database; the mapping relation between the camera and the database is many-to-one.
As a further scheme of the invention: the step of obtaining the meta-video according to the preset camera and storing the meta-video to the database according to the working parameters of the camera comprises the following steps:
acquiring working parameters of the camera in real time, and counting the working parameters according to the time information to obtain a working parameter table;
determining detection areas of cameras according to the working parameters, and counting the detection areas of all cameras corresponding to the same database at the same moment to obtain a meta-image at the moment;
the meta-images are inserted into the database according to a time sequence.
As a further scheme of the invention: the step of receiving the analysis period input by the user, extracting the meta-videos in the database based on the analysis period, and generating the regional videos by splicing fitting comprises the following steps:
receiving analysis time periods input by a user, and intercepting the database based on the analysis time periods to obtain a library to be analyzed;
sequentially inquiring the position information of the cameras corresponding to the library to be analyzed, and determining a region to be filled in a preset region map according to the position information;
reading meta-images in the libraries to be analyzed, inserting the area images into the areas to be filled, and obtaining the area images at the moment when one traversal of all the libraries to be analyzed is completed;
arranging the regional images according to time sequence to obtain regional videos;
in the step of arranging the regional images in time order, each regional image is selected in turn as a reference image, and the missing parts of the reference image are filled in according to its adjacent images; a missing part is an area that was not captured at that moment.
As a further scheme of the invention: the step of analyzing the region video based on the preset sample video library and generating an analysis report comprises the following steps:
inputting the regional video into a preset conversion model, and outputting a first characteristic value group; wherein the feature value is determined by a region image in the region video; the sequence of the characteristic values in the characteristic value group and the sequence of the regional image are in a mapping relation;
reading the analysis period, calculating a proportion interval of the analysis period in a preset time period, and inquiring a preset sample video library according to the proportion interval;
sequentially extracting sample videos in the sample video library, inputting the conversion model, and outputting a second characteristic value set;
calculating correlation coefficients of the first characteristic value group and the second characteristic value group, and selecting a target sample video according to the correlation coefficients;
querying an analysis report of the target sample video in the sample video library.
As a further scheme of the invention: the calculation formula of the correlation coefficient is as follows:
$$\rho=\frac{1}{n-1}\sum_{i=1}^{n}\frac{(x_i-\bar{x})(y_i-\bar{y})}{s_x s_y}$$
where $x_i$ is the $i$-th feature value in the first feature value group, $\bar{x}$ is the mean of the feature values in the first feature value group, and $s_x$ is their sample standard deviation; $y_i$ is the $i$-th feature value in the second feature value group, $\bar{y}$ is the mean of the feature values in the second feature value group, and $s_y$ is their sample standard deviation.
The technical scheme of the invention also provides a regional video analysis system, which comprises:
the database establishing module is used for inquiring the position information of the camera and establishing a database according to the position information;
the meta-video storage module is used for acquiring a meta-video according to a preset camera and storing the meta-video to the database according to working parameters of the camera;
the splicing fitting module is used for receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and generating an area video by splicing fitting;
the video analysis module is used for analyzing the regional video based on a preset sample video library and generating an analysis report; wherein the analysis report is one of a sample video library.
As a further scheme of the invention: the database establishment module comprises:
the position map establishing unit is used for inquiring the position information of the camera and establishing a camera position map;
the core determining unit is used for taking each camera as a center, obtaining the total number of cameras within a preset radius range, and determining the core cameras according to the total number of the cameras;
the classifying unit is used for sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of the cameras in the detection circles reaches a preset number threshold value, and establishing a database connected with each camera in the detection circles;
the circulation execution unit is used for performing circulation until all cameras are connected with the database; the mapping relation between the camera and the database is many-to-one.
As a further scheme of the invention: the meta video storage module includes:
the working parameter statistics unit is used for acquiring working parameters of the camera in real time, and carrying out statistics on the working parameters according to the time information to obtain a working parameter table;
the meta-image generating unit is used for determining detection areas of the cameras according to the working parameters, counting the detection areas of all the cameras corresponding to the same database at the same moment, and obtaining a meta-image at the moment;
and the data inserting unit is used for inserting the meta-images into the database according to the time sequence.
As a further scheme of the invention: the splice fitting module comprises:
the database intercepting unit is used for receiving an analysis period input by a user and intercepting the database based on the analysis period to obtain a library to be analyzed;
the to-be-filled area determining unit is used for sequentially inquiring the position information of the cameras corresponding to the to-be-analyzed library and determining an area to be filled in a preset area map according to the position information;
the fitting execution unit is used for reading the meta-image in the to-be-analyzed library, inserting the area image into the to-be-filled area, and obtaining the area image at the moment when one traversal of all the to-be-analyzed libraries is completed;
the image arrangement unit is used for arranging the regional images according to the time sequence to obtain regional videos;
in the step of arranging the regional images in time order, each regional image is selected in turn as a reference image, and the missing parts of the reference image are filled in according to its adjacent images; a missing part is an area that was not captured at that moment.
Compared with the prior art, the invention has the following beneficial effects: meta-videos are acquired by the preset cameras and stitched according to the camera positions to obtain the regional video; a computer device calculates the correlation coefficient between the regional video and each preset sample video and determines the sample video that best matches the regional video, thereby generating the analysis report; the acquired regional video can in turn update the sample videos, ensuring the timeliness and accuracy of the matching process; the working pressure on monitoring personnel is greatly reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below; obviously, the drawings described below illustrate only some embodiments of the present invention.
Fig. 1 is a flow chart diagram of a method of regional video analysis.
Fig. 2 is a first sub-flowchart of a method of regional video analysis.
Fig. 3 is a second sub-flowchart of the regional video analysis method.
Fig. 4 is a third sub-flowchart of the regional video analysis method.
Fig. 5 is a fourth sub-flowchart of the regional video analysis method.
Fig. 6 is a block diagram showing the constitution of the regional video analysis system.
Description of the embodiments
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Fig. 1 is a flow chart of a regional video analysis method, and in an embodiment of the present invention, a regional video analysis method includes:
step S100: inquiring the position information of a camera, and establishing a database according to the position information;
The subject that acquires the regional video is the camera. The cameras are installed in advance by workers and their positions do not change; databases are established according to the positions of the different cameras and are used to acquire and store the video captured by each camera.
Step S200: acquiring a meta-video according to a preset camera, and storing the meta-video into the database according to working parameters of the camera;
For an area to be analyzed, it is difficult to capture an image of the whole area with a single camera; in most cases the area is divided into several small sub-areas, and video is captured for each sub-area by multiple cameras. The video captured by each individual camera is a meta-video. While a camera is working, its working parameters (mainly its working angle) determine which part of the area to be analyzed the captured meta-video corresponds to.
Step S300: receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and splicing and fitting to generate a regional video;
The analysis is performed within a certain time range, which is input by a worker. Once the user inputs an analysis period, the execution subject of the method queries and extracts the meta-videos in the databases and stitches all of them together, yielding the regional video of the whole area to be analyzed.
Step S400: analyzing the regional video based on a preset sample video library to generate an analysis report; wherein the analysis report is one of a sample video library.
An area tends to be stable within a period: in a given production shop, for example, the activity tracks of workers performing different production tasks are similar and repeat day after day, one day being one period.
Fig. 2 is a first sub-flowchart of a regional video analysis method, where the step of querying the position information of the camera and establishing a database according to the position information includes:
step S101: inquiring the position information of a camera, and establishing a camera position diagram;
in the installation process of the camera, position information can be recorded.
Step S102: taking each camera as a center, acquiring the total number of cameras within a preset radius range, and determining a core camera according to the total number of cameras;
Different sub-areas of the area to be analyzed have different monitoring values: sub-areas with low monitoring value have a lower camera density, and sub-areas with high monitoring value have a higher camera density. All cameras are traversed, and whether a camera is a core camera is judged from the number of cameras around it.
Step S103: sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of cameras in the detection circles reaches a preset number threshold, and establishing a database connected with each camera in the detection circles;
the method comprises the steps of sequentially clustering all cameras by using marked core cameras, wherein the clustering method comprises the steps of determining a continuously-enlarged detection circle, calculating the number of the cameras in the detection circle in real time, determining the radius of the detection circle when the number reaches a certain degree, and establishing a database connected with each camera in the detection circle; this is done in the sense that it can be ensured that the number of cameras corresponding to each database is similar.
Step S104: circularly executing until all cameras are connected with the database; the mapping relation between the camera and the database is many-to-one;
step S103 is repeated until all cameras are connected to the database.
It should be noted that isolated cameras may exist; databases can be established separately for these isolated cameras, and the number of cameras corresponding to each such database should be roughly the same as for the other databases.
From the foregoing, it can be seen that one database may correspond to a plurality of cameras, and one camera may correspond to only one database.
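The clustering of steps S101 to S104 can be sketched as follows. This is a minimal illustration under stated assumptions: cameras live in a flat 2-D coordinate system, and the names and the values of `core_radius`, `core_count`, and `num_threshold` are hypothetical parameters, since the source gives no concrete thresholds.

```python
import math

def count_within(center, cameras, r):
    """Number of cameras within radius r of center (inclusive)."""
    cx, cy = center
    return sum(1 for (x, y) in cameras if math.hypot(x - cx, y - cy) <= r)

def cluster_cameras(cameras, core_radius, core_count, num_threshold, step=1.0):
    """Assign every camera to exactly one database (many-to-one mapping).

    Core cameras are those with at least `core_count` cameras within
    `core_radius` (step S102).  Around each core camera a detection circle
    grows in increments of `step` until it holds `num_threshold` unassigned
    cameras, which are then connected to a new database (step S103).
    """
    unassigned = set(cameras)
    # Step S102: determine core cameras by local camera density.
    cores = [c for c in cameras
             if count_within(c, cameras, core_radius) >= core_count]
    databases = []
    for core in cores:
        if core not in unassigned:
            continue
        r = step
        # Step S103: enlarge the detection circle until enough cameras fall inside.
        while True:
            members = [c for c in unassigned
                       if math.hypot(c[0] - core[0], c[1] - core[1]) <= r]
            if len(members) >= num_threshold or len(members) == len(unassigned):
                break
            r += step
        databases.append(members)
        unassigned -= set(members)
    # Isolated cameras get a separately established database (step S104 note).
    if unassigned:
        databases.append(sorted(unassigned))
    return databases
```

Each returned list stands for one database; because a camera is removed from `unassigned` as soon as it is clustered, the camera-to-database mapping stays many-to-one, as the step requires.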
Fig. 3 is a second sub-flowchart of the regional video analysis method, where the step of obtaining a meta-video according to a preset camera and storing the meta-video to the database according to working parameters of the camera includes:
step S201: acquiring working parameters of the camera in real time, and counting the working parameters according to the time information to obtain a working parameter table;
The working parameters of the camera are acquired in real time to obtain the working parameter table; in most cases the working parameters of a camera do not change much, so the working parameter table may contain a large amount of repeated data.
Step S202: determining detection areas of cameras according to the working parameters, and counting the detection areas of all cameras corresponding to the same database at the same moment to obtain a meta-image at the moment;
The detection area of a camera can be determined from its working parameters (working height and working angle); the determination is not difficult and can be completed with simple calculation. Taking a database as the reference, the data acquired by all of its cameras at the same moment are combined into a meta-image; the meta-image is the union of the detection areas and is a subset of the image of the whole area.
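As a rough illustration of the "simple calculation" that turns a working height and working angle into a ground detection area, the sketch below models the camera as a downward-tilted pinhole; the vertical field-of-view parameter is an assumption, not a value from the source.

```python
import math

def detection_interval(height, tilt_deg, fov_deg):
    """Near and far ground distances covered by a downward-tilted camera.

    height   -- mounting height above the ground (working height)
    tilt_deg -- angle of the optical axis below the horizontal (working angle)
    fov_deg  -- vertical field of view of the lens (assumed parameter)
    """
    near_angle = math.radians(tilt_deg + fov_deg / 2.0)
    far_angle = math.radians(tilt_deg - fov_deg / 2.0)
    near = height / math.tan(near_angle)
    # If the upper edge of the view reaches the horizon, the far bound is open.
    far = math.inf if far_angle <= 0 else height / math.tan(far_angle)
    return near, far
```

For instance, with `height=3`, `tilt_deg=45`, `fov_deg=30`, the camera covers the ground from roughly 1.73 m to about 5.20 m in front of its mast; the union of such intervals over all cameras of one database forms the meta-image footprint.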
Step S203: inserting the meta-images into the database according to a temporal sequence;
the meta-images are inserted into a database and arranged according to a time sequence.
FIG. 4 is a third sub-flowchart of a method for analyzing regional videos, wherein the steps of receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and generating regional videos by stitching and fitting include:
step S301: receiving analysis time periods input by a user, and intercepting the database based on the analysis time periods to obtain a library to be analyzed;
and the analysis period is input by a user, and data in each database is extracted according to the analysis period to obtain a database to be analyzed, wherein one database corresponds to one database to be analyzed.
Step S302: sequentially inquiring the position information of the cameras corresponding to the library to be analyzed, and determining a region to be filled in a preset region map according to the position information;
As can be seen from step S103, the cameras corresponding to most libraries to be analyzed (subsets of the databases) are very close to one another, and when cameras are close their detection areas are close as well. An approximate area can therefore be predetermined in the region map from the cameras' position information, and this preprocessing greatly speeds up data filling.
Step S303: reading meta-images in the libraries to be analyzed, inserting the area images into the areas to be filled, and obtaining the area images at the moment when one traversal of all the libraries to be analyzed is completed;
each library to be analyzed corresponds to a region, and after the library to be analyzed is processed, a total region image at a certain moment can be obtained; the region image is a union of the regions to be filled.
Step S304: arranging the regional images according to time sequence to obtain regional videos;
and arranging the regional images according to the time sequence to obtain the regional video.
In the step of arranging the regional images in time order, each regional image is selected in turn as a reference image, and the missing parts of the reference image are filled in according to its adjacent images; a missing part is an area that was not captured at that moment.
It should be noted that, during filling, the obtained region image may contain missing parts, that is, areas captured by no camera at that moment. These could be represented by empty sets, but empty sets would disturb subsequent data processing, so they require simple filling.
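Steps S301 to S304 amount to pasting each library's meta-image into its pre-computed area to be filled and then repairing the empty cells from temporally adjacent frames. A minimal sketch, assuming images are 2-D grids with `None` marking an uncollected cell; the grid representation and function names are illustrative, not from the source.

```python
def paste(region_image, meta_image, top, left):
    """Insert a meta-image into its area-to-be-filled within the region image."""
    for r, row in enumerate(meta_image):
        for c, value in enumerate(row):
            region_image[top + r][left + c] = value
    return region_image

def fill_missing(frames):
    """Fill None cells of each frame from the temporally adjacent frames."""
    for i, frame in enumerate(frames):
        neighbours = [f for j, f in enumerate(frames) if abs(j - i) == 1]
        for r, row in enumerate(frame):
            for c, value in enumerate(row):
                if value is None:
                    # Take the first adjacent frame that did capture this cell.
                    for nb in neighbours:
                        if nb[r][c] is not None:
                            frame[r][c] = nb[r][c]
                            break
    return frames
```

Arranged in time order, the filled frames form the regional video; a cell that no camera and no neighbouring frame captured simply stays `None`.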
Fig. 5 is a fourth sub-flowchart of the regional video analysis method, where the step of analyzing the regional video based on a preset sample video library and generating an analysis report includes:
step S401: inputting the regional video into a preset conversion model, and outputting a first characteristic value group; wherein the feature value is determined by a region image in the region video; the sequence of the characteristic values in the characteristic value group and the sequence of the regional image are in a mapping relation;
A regional video is a collection of regional images. Using an existing image processing algorithm, a single numerical value (the feature value) is calculated for each regional image; the algorithm is a mapping from the colour values of the pixels to one number. One value corresponds to one image, and one video therefore corresponds to a sequence of values, which forms the first feature value group.
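The source only states that the conversion model maps each regional image to one value via its pixel colours. One of many possible stand-ins is a simple mean-intensity mapping, sketched below; this is an assumed placeholder, not the patent's actual conversion model.

```python
def image_feature(image):
    """Map an image (list of rows of grey values) to a single feature value."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def video_features(video):
    """One feature value per regional image, kept in frame order, so the
    value sequence maps one-to-one onto the regional image sequence."""
    return [image_feature(frame) for frame in video]
```

Applying `video_features` to the regional video yields the first feature value group; applying it to a sample video (step S403) yields a second feature value group of the same form.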
Step S402: reading the analysis period, calculating a proportion interval of the analysis period in a preset time period, and inquiring a preset sample video library according to the proportion interval;
The analysis period is read, and the proportion points of its two end times within the preset period are calculated; these two proportion points give the proportion interval, with which the pre-compiled sample video library is queried.
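Step S402's proportion interval can be read as the analysis period's start and end times expressed as fractions of the preset period; the sketch below assumes a one-day preset period and second-based timestamps, neither of which is fixed by the source.

```python
def proportion_interval(start_s, end_s, period_s=24 * 3600):
    """Fractions of the preset period at which the analysis period starts and ends."""
    return start_s % period_s / period_s, end_s % period_s / period_s
```

Querying the sample video library then means selecting the sample videos whose own proportion intervals overlap this one, so that videos are only compared against samples covering the same part of the period.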
Step S403: sequentially extracting sample videos in the sample video library, inputting the conversion model, and outputting a second characteristic value set;
the sample video is converted into the second feature value group in the same manner as in step S401.
Step S404: calculating correlation coefficients of the first characteristic value group and the second characteristic value group, and selecting a target sample video according to the correlation coefficients;
comparing the first set of eigenvalues to the second set of eigenvalues, the sample video that most closely matches the region video can be determined, referred to as the target sample video.
Step S405: querying an analysis report of a target sample video in the sample video library;
the analysis report is synchronously generated in the process of establishing the sample video library, and the analysis report is directly read after the target sample video is determined.
Specifically, the calculation formula of the correlation coefficient is as follows:
$$\rho=\frac{1}{n-1}\sum_{i=1}^{n}\frac{(x_i-\bar{x})(y_i-\bar{y})}{s_x s_y}$$
where $x_i$ is the $i$-th feature value in the first feature value group, $\bar{x}$ is the mean of the feature values in the first feature value group, and $s_x$ is their sample standard deviation; $y_i$ is the $i$-th feature value in the second feature value group, $\bar{y}$ is the mean of the feature values in the second feature value group, and $s_y$ is their sample standard deviation.
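The correlation coefficient described above, with group means and sample standard deviations, is the sample Pearson correlation of the two feature value groups; a direct transcription, assuming equal-length groups:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length value groups."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
```

The target sample video of step S404 is then the one whose second feature value group maximizes this coefficient against the first feature value group.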
Example 2
Fig. 6 is a block diagram of a regional video analysis system, and in an embodiment of the present invention, a regional video analysis system, the system 10 includes:
the database establishing module 11 is used for inquiring the position information of the camera and establishing a database according to the position information;
the meta-video storage module 12 is configured to obtain a meta-video according to a preset camera, and store the meta-video to the database according to working parameters of the camera;
the splicing fitting module 13 is used for receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and generating an area video by splicing fitting;
the video analysis module 14 is used for analyzing the region video based on a preset sample video library and generating an analysis report; wherein the analysis report is one of a sample video library.
The database creation module 11 includes:
the position map establishing unit is used for inquiring the position information of the camera and establishing a camera position map;
the core determining unit is used for taking each camera as a center, obtaining the total number of cameras within a preset radius range, and determining the core cameras according to the total number of the cameras;
the classifying unit is used for sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of the cameras in the detection circles reaches a preset number threshold value, and establishing a database connected with each camera in the detection circles;
the circulation execution unit is used for performing circulation until all cameras are connected with the database; the mapping relation between the camera and the database is many-to-one.
The meta video storage module 12 includes:
the working parameter statistics unit is used for acquiring working parameters of the camera in real time, and carrying out statistics on the working parameters according to the time information to obtain a working parameter table;
the meta-image generating unit is used for determining detection areas of the cameras according to the working parameters, counting the detection areas of all the cameras corresponding to the same database at the same moment, and obtaining a meta-image at the moment;
and the data inserting unit is used for inserting the meta-images into the database according to the time sequence.
The splice fitting module 13 includes:
the database intercepting unit is used for receiving an analysis period input by a user, intercepting the database based on the analysis period and obtaining a database to be analyzed;
the to-be-filled area determining unit is used for sequentially inquiring the position information of the cameras corresponding to the to-be-analyzed library and determining an area to be filled in a preset area map according to the position information;
the fitting execution unit is used for reading the meta-image in the to-be-analyzed library, inserting the area image into the to-be-filled area, and obtaining the area image at the moment when one traversal of all the to-be-analyzed libraries is completed;
the image arrangement unit is used for arranging the regional images according to the time sequence to obtain regional videos;
in the step of arranging the regional images in time sequence, each regional image is selected in turn as a reference image, and missing parts of the reference image are filled in from its adjacent images; a missing part is an area that was not captured at that moment.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (10)
1. A method of regional video analysis, the method comprising:
inquiring the position information of a camera, and establishing a database according to the position information;
acquiring a meta-video according to a preset camera, and storing the meta-video into the database according to working parameters of the camera;
receiving an analysis period input by a user, extracting meta-videos in a database based on the analysis period, and splicing and fitting to generate a regional video;
analyzing the regional video based on a preset sample video library and generating an analysis report; wherein the analysis report is one stored in the sample video library.
2. The regional video analysis method of claim 1, wherein the step of querying the position information of the cameras and establishing a database according to the position information comprises:
inquiring the position information of a camera, and establishing a camera position diagram;
taking each camera as a center, acquiring the total number of cameras within a preset radius range, and determining a core camera according to the total number of cameras;
sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of cameras in the detection circles reaches a preset number threshold, and establishing a database connected with each camera in the detection circles;
repeating the above steps until all cameras are connected to a database; the mapping relation between the cameras and the databases is many-to-one.
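The clustering flow of claim 2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the coordinate dictionary, and the use of a sorted-by-distance prefix to stand in for "growing a detection circle until the count threshold is reached" are all assumptions.

```python
import math

def cluster_cameras(cameras, core_radius, count_threshold):
    """Assign cameras to databases, many cameras to one database.

    cameras: {camera_id: (x, y)} -- illustrative shape.
    A core camera is the unassigned camera with the most neighbours
    within core_radius; its detection circle grows until it contains
    count_threshold cameras, which then share one database.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unassigned = dict(cameras)
    databases = []  # each entry: camera ids bound to one database
    while unassigned:
        # core camera: largest neighbour count within the preset radius
        core = max(unassigned, key=lambda c: sum(
            1 for p in unassigned.values()
            if dist(unassigned[c], p) <= core_radius))
        # sorting by distance and taking a prefix is equivalent to
        # enlarging the detection circle until the threshold is met
        ordered = sorted(unassigned,
                         key=lambda c: dist(unassigned[core], unassigned[c]))
        members = ordered[:count_threshold]
        databases.append(members)
        for cid in members:
            del unassigned[cid]
    return databases
```

Looping until `unassigned` is empty mirrors the claim's "repeat until all cameras are connected to a database".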
3. The regional video analysis method according to claim 1, wherein the step of acquiring the meta-video according to a preset camera and storing the meta-video in the database according to the working parameters of the camera comprises:
acquiring working parameters of the camera in real time, and counting the working parameters according to the time information to obtain a working parameter table;
determining detection areas of cameras according to the working parameters, and counting the detection areas of all cameras corresponding to the same database at the same moment to obtain a meta-image at the moment;
the meta-images are inserted into the database according to a time sequence.
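The meta-image step of claim 3 reduces to grouping per-camera detection areas by database and timestamp, then inserting them in time order. The record shape `(timestamp, database_id, camera_id, area)` and the function name below are assumptions made for this sketch:

```python
from collections import defaultdict

def build_meta_images(detections):
    """Merge detection areas into per-moment meta-images.

    detections: iterable of (timestamp, database_id, camera_id, area)
    records derived from each camera's working parameter table
    (an assumed shape, for illustration only).
    """
    meta = defaultdict(list)
    for ts, db_id, cam_id, area in detections:
        # detection areas of all cameras sharing a database at the
        # same moment form one meta-image for that moment
        meta[(db_id, ts)].append((cam_id, area))
    # hand the meta-images to the database in chronological order
    return sorted(meta.items(), key=lambda kv: kv[0][1])
```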
4. The method of claim 3, wherein the step of receiving an analysis period input by a user, extracting the meta-videos in the database based on the analysis period, and generating the regional video by splicing and fitting comprises:
receiving analysis time periods input by a user, and intercepting the database based on the analysis time periods to obtain a library to be analyzed;
sequentially inquiring the position information of the cameras corresponding to the library to be analyzed, and determining a region to be filled in a preset region map according to the position information;
reading the meta-images in the library to be analyzed and inserting them into the areas to be filled; when one traversal of all the libraries to be analyzed is completed, the regional image for that moment is obtained;
arranging the regional images according to time sequence to obtain regional videos;
in the step of arranging the regional images in time sequence, each regional image is selected in turn as a reference image, and missing parts of the reference image are filled in from its adjacent images; a missing part is an area that was not captured at that moment.
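The missing-part filling of claim 4 can be sketched by reducing each regional image to a dictionary of sub-area id to image content; each frame in turn serves as the reference image, and any sub-area absent at that moment is borrowed from a temporally adjacent frame. The dictionary representation is a simplification assumed for the sketch:

```python
def fill_missing(region_images):
    """Fill areas not captured at a given moment from adjacent images.

    region_images: list of time-ordered frames, each a dict mapping a
    sub-area id to its image content (an assumed simplification).
    """
    all_areas = {a for frame in region_images for a in frame}
    filled = []
    for i, frame in enumerate(region_images):
        ref = dict(frame)  # the reference image for this moment
        neighbours = [region_images[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(region_images)]
        for area in all_areas:
            if area not in ref:  # area not collected at this moment
                for n in neighbours:
                    if area in n:  # borrow from an adjacent image
                        ref[area] = n[area]
                        break
        filled.append(ref)
    return filled
```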
5. The regional video analysis method of claim 1, wherein the step of analyzing the regional video based on a preset sample video library, and generating an analysis report comprises:
inputting the regional video into a preset conversion model, and outputting a first feature value group; wherein each feature value is determined by a regional image in the regional video; the order of the feature values in the feature value group is in a one-to-one mapping with the order of the regional images;
reading the analysis period, calculating a proportion interval of the analysis period in a preset time period, and inquiring a preset sample video library according to the proportion interval;
sequentially extracting sample videos from the sample video library, inputting them into the conversion model, and outputting a second feature value group;
calculating a correlation coefficient between the first feature value group and the second feature value group, and selecting a target sample video according to the correlation coefficient;
querying an analysis report of the target sample video in the sample video library.
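The proportion-interval lookup in claim 5 normalizes the analysis period against the preset time period and retrieves samples by that interval. The tuple shapes, the `"interval"` key, and overlap-based matching are assumptions for this sketch; the patent does not specify how the sample library is indexed:

```python
def query_by_proportion(analysis_period, preset_period, sample_library):
    """Query the sample library by the analysis period's proportion interval.

    analysis_period, preset_period: (start, end) timestamps.
    sample_library: list of {"interval": (lo, hi), "video": ...} entries,
    an assumed indexing scheme for illustration.
    """
    total = preset_period[1] - preset_period[0]
    lo = (analysis_period[0] - preset_period[0]) / total
    hi = (analysis_period[1] - preset_period[0]) / total
    # keep the sample videos whose proportion interval overlaps the query
    return [s for s in sample_library
            if s["interval"][0] < hi and lo < s["interval"][1]]
```

For example, an analysis period of 06:00 to 12:00 within a 24-hour preset period yields the proportion interval (0.25, 0.5).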
6. The regional video analysis method of claim 5, wherein the correlation coefficient is calculated by the formula:
$$r=\frac{1}{n-1}\sum_{i=1}^{n}\frac{x_i-\bar{x}}{s_x}\cdot\frac{y_i-\bar{y}}{s_y}$$
wherein $x_i$ is the $i$-th feature value in the first feature value group, $\bar{x}$ is the mean of the feature values in the first feature value group, and $s_x$ is their sample standard deviation; $y_i$ is the $i$-th feature value in the second feature value group, $\bar{y}$ is the mean of the feature values in the second feature value group, and $s_y$ is their sample standard deviation.
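The variable definitions in claim 6 (per-group means and sample standard deviations) correspond to the standard Pearson correlation coefficient, which can be computed directly; the function name below is illustrative:

```python
import math

def correlation(first_group, second_group):
    """Pearson correlation of two equal-length feature value groups,
    using sample standard deviations as the claim describes."""
    n = len(first_group)
    mx = sum(first_group) / n
    my = sum(second_group) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in first_group) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in second_group) / (n - 1))
    return sum((x - mx) * (y - my)
               for x, y in zip(first_group, second_group)) / ((n - 1) * sx * sy)
```

The result lies in [-1, 1]; the target sample video would be the one whose second feature value group maximizes this coefficient against the first.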
7. A regional video analysis system, the system comprising:
the database establishing module is used for inquiring the position information of the camera and establishing a database according to the position information;
the meta-video storage module is used for acquiring a meta-video according to a preset camera and storing the meta-video to the database according to working parameters of the camera;
the splicing fitting module is used for receiving an analysis period input by a user, extracting the meta-videos in the database based on the analysis period, and generating a regional video by splicing and fitting;
the video analysis module is used for analyzing the regional video based on a preset sample video library and generating an analysis report; wherein the analysis report is one stored in the sample video library.
8. The regional video analysis system of claim 7, wherein the database creation module comprises:
the position map establishing unit is used for inquiring the position information of the camera and establishing a camera position map;
the core determining unit is used for taking each camera as a center, obtaining the total number of cameras within a preset radius range, and determining the core cameras according to the total number of the cameras;
the classifying unit is used for sequentially taking the core cameras as the centers, establishing incremental detection circles, determining detection radiuses when the number of the cameras in the detection circles reaches a preset number threshold value, and establishing a database connected with each camera in the detection circles;
the loop execution unit is used for repeating the above process until all cameras are connected to a database; the mapping relation between the cameras and the databases is many-to-one.
9. The regional video analysis system of claim 7, wherein the meta-video storage module comprises:
the working parameter statistics unit is used for acquiring working parameters of the camera in real time, and carrying out statistics on the working parameters according to the time information to obtain a working parameter table;
the meta-image generating unit is used for determining detection areas of the cameras according to the working parameters, counting the detection areas of all the cameras corresponding to the same database at the same moment, and obtaining a meta-image at the moment;
and the data inserting unit is used for inserting the meta-images into the database according to the time sequence.
10. The regional video analysis system of claim 9, wherein the splice fitting module comprises:
the database intercepting unit is used for receiving an analysis period input by a user, and intercepting the database based on the analysis period to obtain a library to be analyzed;
the to-be-filled area determining unit is used for sequentially inquiring the position information of the cameras corresponding to the to-be-analyzed library and determining an area to be filled in a preset area map according to the position information;
the fitting execution unit is used for reading the meta-images in the library to be analyzed and inserting them into the areas to be filled; when one traversal of all the libraries to be analyzed is completed, the regional image for that moment is obtained;
the image arrangement unit is used for arranging the regional images according to the time sequence to obtain regional videos;
in the step of arranging the regional images in time sequence, each regional image is selected in turn as a reference image, and missing parts of the reference image are filled in from its adjacent images; a missing part is an area that was not captured at that moment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310201769.9A CN116069976B (en) | 2023-03-06 | 2023-03-06 | Regional video analysis method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310201769.9A CN116069976B (en) | 2023-03-06 | 2023-03-06 | Regional video analysis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116069976A true CN116069976A (en) | 2023-05-05 |
CN116069976B CN116069976B (en) | 2023-09-12 |
Family
ID=86173282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310201769.9A Active CN116069976B (en) | 2023-03-06 | 2023-03-06 | Regional video analysis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116069976B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110231419A1 (en) * | 2010-03-17 | 2011-09-22 | Lighthaus Logic Inc. | Systems, methods and articles for video analysis reporting |
WO2012081319A1 (en) * | 2010-12-15 | 2012-06-21 | Hitachi, Ltd. | Video monitoring apparatus |
US20150065803A1 (en) * | 2013-09-05 | 2015-03-05 | Erik Scott DOUGLAS | Apparatuses and methods for mobile imaging and analysis |
US20160323535A1 (en) * | 2014-01-03 | 2016-11-03 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and apparatus for extracting surveillance recording videos |
CN109218755A (en) * | 2017-07-07 | 2019-01-15 | 华为技术有限公司 | A kind for the treatment of method and apparatus of media data |
CN110532923A (en) * | 2019-08-21 | 2019-12-03 | 深圳供电局有限公司 | A kind of personage's trajectory retrieval method and its system |
CN111062234A (en) * | 2018-10-17 | 2020-04-24 | 深圳市冠旭电子股份有限公司 | Monitoring method, intelligent terminal and computer readable storage medium |
CN111145545A (en) * | 2019-12-25 | 2020-05-12 | 西安交通大学 | Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning |
US20200288112A1 (en) * | 2019-03-07 | 2020-09-10 | Alibaba Group Holding Limited | Method, apparatus, medium, and device for processing multi-angle free-perspective video data |
CN114911239A (en) * | 2022-05-27 | 2022-08-16 | 上海伯镭智能科技有限公司 | Method and system for identifying abnormity of unmanned mine car |
CN115396622A (en) * | 2022-10-28 | 2022-11-25 | 广东电网有限责任公司中山供电局 | Electronic equipment for low-bit-rate video reconstruction |
CN115410418A (en) * | 2022-10-31 | 2022-11-29 | 北京千尧新能源科技开发有限公司 | Operation monitoring method and system for operation and maintenance boarding corridor bridge |
CN115409867A (en) * | 2022-08-15 | 2022-11-29 | 富成数字技术集团有限公司 | Track analysis method and system based on video processing technology |
CN115631449A (en) * | 2022-12-19 | 2023-01-20 | 南京和电科技有限公司 | Intelligent video identification management method and system |
Non-Patent Citations (2)
Title |
---|
ZHANG Ruilin; LI Jiawei; GAN Yu: "Detection of Y-type and X-type corners based on the FAST corner detection algorithm", Electronic Technology & Software Engineering, no. 10, pages 69-70 *
CHEN Xuetao; MU Chunyang; MA Xing: "Design of an embedded real-time video acquisition and stitching system", Science Technology and Engineering, no. 09, pages 221-225 *
Also Published As
Publication number | Publication date |
---|---|
CN116069976B (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110196892B (en) | Comprehensive protective land monitoring platform based on Internet of things and method thereof | |
CN101315631A (en) | News video story unit correlation method | |
CN111709361A (en) | Unmanned aerial vehicle inspection data processing method for power transmission line | |
CN111831856B (en) | Metadata-based automatic holographic digital power grid data storage system and method | |
CN116775750B (en) | Data management method and device based on metallurgical industry | |
CN113656477A (en) | Method for verifying and fusing multi-source heterogeneous data of homeland space | |
CN116069976B (en) | Regional video analysis method and system | |
CN111598874B (en) | Mangrove canopy density investigation method based on intelligent mobile terminal | |
CN117610972A (en) | Green building digital management system and method based on artificial intelligence | |
UA126999C2 (en) | Determining activity swath from machine-collected worked data | |
CN112783962B (en) | ETL technology-based time-space big data artificial intelligence analysis method and system | |
CN112766245B (en) | PDF format file-based visual instrument acquisition method and system | |
CN115310107A (en) | Internet of things data secure storage method and system based on cloud computing | |
KR20200007563A (en) | Machine Learning Data Set Preprocessing Method for Energy Consumption Analysis | |
CN114999644A (en) | Building personnel epidemic situation prevention and control visual management system and management method | |
CN114898347A (en) | Machine vision identification method for pointer instrument | |
CN111431978B (en) | Automatic collection system of instrument | |
WO2023095956A1 (en) | Method, apparatus, and system for searching for and providing shape relationship information about 3d model | |
CN115802013B (en) | Video monitoring method, device and equipment based on intelligent illumination and storage medium | |
CN117689271B (en) | Quality management method and device for product, terminal equipment and storage medium | |
CN117692610B (en) | AR workshop inspection system | |
CN116755388B (en) | High-precision control system and control method for universal milling head | |
CN118053007B (en) | Standard content comparison display method and system based on big data | |
CN117951210B (en) | Method and system for processing measurement data based on document page | |
CN112487966B (en) | Mobile vendor behavior recognition management system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240108 Address after: No. 19 Dongshan West Road, Yicheng Street, Yixing City, Wuxi City, Jiangsu Province, 214200 Patentee after: Yixing Public Security Bureau Patentee after: Nanjing Power Technology Co.,Ltd. Address before: Area C, 9th Floor, Building 2, No. 68, Aoti Street, Jianye District, Nanjing City, Jiangsu Province, 210000 Patentee before: Nanjing Power Technology Co.,Ltd. |