CN116383436A - Monitoring intelligent analysis method and device, electronic equipment and storage medium - Google Patents

Monitoring intelligent analysis method and device, electronic equipment and storage medium

Info

Publication number
CN116383436A
CN116383436A (Application No. CN202310177753.9A)
Authority
CN
China
Prior art keywords
monitoring
target
video
queried
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310177753.9A
Other languages
Chinese (zh)
Inventor
白志云
白志强
麻潇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Xiaoyun Electronic Technology Co ltd
Original Assignee
Shanxi Xiaoyun Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Xiaoyun Electronic Technology Co ltd
Priority to CN202310177753.9A
Publication of CN116383436A
Legal status: Pending

Classifications

    • G06F16/735 Information retrieval of video data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/7328 Information retrieval of video data; querying; query formulation; query by example, e.g. a complete video frame or video sequence
    • G06F16/7867 Information retrieval of video data; retrieval characterised by using metadata; using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F16/787 Information retrieval of video data; retrieval characterised by using metadata; using geographical or spatial information, e.g. location
    • G06V20/41 Scenes; scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/52 Scenes; context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application relates to a monitoring intelligent analysis method and device, an electronic device and a storage medium. The method manages the monitoring video collected by a monitoring system arranged in a target area and comprises the following steps: acquiring search conditions for a target to be queried; selecting the monitoring video in the monitoring system according to the search conditions to obtain a pending video; selecting, based on the pending video, all undetermined targets that meet the search conditions and obtaining pictures of the undetermined targets; determining a picture of the target to be queried from the pictures of the undetermined targets; determining, according to that picture, the video fragment of the pending video in which the target to be queried appears; analyzing the behavior characteristics of the target to be queried from the video fragment; determining, by combining the behavior characteristics with the distribution of the monitoring cameras in the monitoring system, all fragments of the target to be queried in the monitoring video that meet the search conditions; splicing all the fragments in chronological order to generate a target video; and drawing a route map of the action track of the target to be queried based on the target video.

Description

Monitoring intelligent analysis method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a method and apparatus for monitoring and intelligent analysis, an electronic device, and a storage medium.
Background
With continuing modernization, urban construction has become more and more complete, and a large number of monitoring systems are deployed in cities; these monitoring systems can effectively monitor road traffic flow and capture illegal driving.
However, because each monitoring system is relatively independent, jointly investigating the action track of a target across several monitoring systems requires a large amount of manpower and multiple screens to call up the video data of the systems at the same time, the video then has to be watched bit by bit to determine the segments in which the target appears, and the action track finally has to be summarized and drawn by hand, so the whole investigation wastes a great deal of time and labor cost.
Disclosure of Invention
The application provides a monitoring intelligent analysis method and device, an electronic device and a storage medium, aiming to solve the problems that, when a particular target is searched for in a large volume of monitoring video, a large amount of labor is spent viewing all of the video, the whole process is time-consuming, and part of the target's action track may be missed when the track is drawn.
In a first aspect, the present application provides a method for monitoring and intelligently analyzing, configured to manage a monitoring video collected by a monitoring system disposed in a target area, where the method includes:
acquiring search conditions of a target to be queried; the search conditions include: time conditions, place conditions, feature conditions, and search time spans;
selecting the monitoring video in the monitoring system according to the time condition and the place condition to obtain a pending video;
selecting all undetermined targets meeting the characteristic conditions based on the undetermined video, and obtaining pictures of the undetermined targets;
determining a picture of the target to be queried from the pictures of the target to be determined;
determining a video fragment of the target to be queried in a pending video according to the picture of the target to be queried;
analyzing the behavior characteristics of the target to be queried according to the video fragment;
determining, by combining the behavior characteristics with the distribution of the monitoring cameras in the monitoring system, all fragments of the target to be queried in the monitoring video that fall within the retrieval time span;
splicing all the fragments according to the time sequence to generate a target video;
and drawing a route map of the action track of the target to be queried based on the target video, and displaying the route map and the target video.
According to the scheme, the user inputs the search conditions of the target to be queried, pictures of the undetermined targets that meet the search conditions are generated, the picture of the target to be queried is then determined from those pictures, and the video clip of the target to be queried in the monitoring video is confirmed from that picture, so the search range is narrowed step by step using the search conditions. The action route of the target is then analyzed from the video clip, and the clips in other videos related to the target are searched continuously along that route, so that the complete monitoring video of the target is obtained. Finally, a route map of the action track of the target to be queried is generated from that monitoring video. The route map of the target's action track is thus obtained automatically from the search conditions, which reduces the time spent manually viewing video to search for the target and improves the efficiency of drawing the route map of the target's action track.
Optionally, the method further comprises: creating a monitoring camera information table;
storing the monitoring place information of the monitoring camera and the storage path of the generated monitoring video into the monitoring camera information table as monitoring camera information;
creating a monitoring camera database for each monitoring camera;
reading the monitoring video in segments according to a preset time period, and analyzing characteristic information of each monitoring target appearing in each video segment;
and correspondingly generating a monitoring video table in the monitoring camera database according to the preset time period and the characteristic information.
According to the scheme, a database is established for each monitoring camera, which avoids storing the information of all monitoring cameras in a single database or a single table; a separate table is then created for each video, and all information of one video is stored in its own table. The data are thus stored hierarchically and the data structure is more reasonable; at the same time, because not every access hits one table or one database, the loss is minimized when a table or database is damaged, and data security is improved.
Optionally, the selecting the monitoring video in the monitoring system according to the search condition to obtain the pending video includes:
selecting monitoring camera information meeting the place conditions from a monitoring camera information table;
entering a storage position corresponding to the monitoring camera to generate a monitoring video according to the storage path in the monitoring camera information;
Based on the time represented by the time condition, selecting a segment containing the time in the monitoring video monitoring time as the video to be determined;
selecting all undetermined targets meeting the search condition based on the undetermined video, and obtaining pictures of the undetermined targets, wherein the method comprises the following steps:
according to the monitoring camera information, connecting a corresponding monitoring camera database;
selecting a corresponding monitoring video table in the monitoring camera database according to the pending video;
according to the characteristic conditions, characteristic information conforming to the characteristic conditions is matched from the monitoring video table, and a monitoring target corresponding to each piece of characteristic information is the target to be determined;
aiming at each undetermined target, intercepting a video fragment of the undetermined target in the undetermined video;
and correspondingly extracting a frame of picture which contains the characteristic information and has the clearest image quality for each video segment, and taking the picture as the picture of the undetermined target.
According to the scheme, the monitoring cameras that cover the target site are first selected from the monitoring camera information table according to the place condition; the monitoring video for the target time is then selected from the database corresponding to each camera using the time condition; and finally the targets matching the features are selected from that video using the feature information. All of the search information is thus used during retrieval, and the hierarchical search reduces the load on the system, shortens the search time and improves search accuracy.
Optionally, the analyzing the characteristic information of each monitoring target appearing in each section of monitoring video includes:
extracting key frames from the monitoring video to obtain a plurality of key frames;
comparing all the key frames to establish a static environment model;
analyzing all key frames based on the static environment model, and judging whether the monitoring target appears in the key frames;
if the monitoring target appears, further analyzing the monitoring target;
and judging the type of the monitoring target, analyzing the stature information, the facial information and the clothing information of the monitoring target if the type of the monitoring target is human, and analyzing the license plate information, the type and the color of the vehicle of the monitoring target if the type of the monitoring target is vehicle.
According to the scheme, a static environment model is obtained by comparing the key frames of the monitoring video, and the key frames are then analyzed in reverse against the static environment model, so that all targets appearing in the monitoring video can be found quickly and their features analyzed, which improves the efficiency of analyzing the characteristic information of targets in the monitoring video.
Optionally, the obtaining the search condition of the target to be queried includes:
acquiring a sample picture of the target to be queried, and acquiring characteristic information in the sample picture;
And further selecting part of the characteristic information from the characteristic information as a retrieval condition of the target to be queried.
According to the scheme, the image characteristics are analyzed, the obtained characteristics are used as the retrieval conditions for retrieval, more input modes of the retrieval conditions are provided for the user, and the use experience of the user is improved.
Optionally, the behavior feature includes: travel speed, entry time, exit time, entry direction, exit direction;
the analyzing the behavior characteristics of the target to be queried according to the video segment comprises the following steps:
acquiring the moving distance and the moving duration of the target to be queried through the video segment, and further calculating the advancing speed of the target to be queried;
acquiring the time when the target to be queried first appears in the monitoring area through the video segment, and taking the time as the entering time; acquiring the time when a target to be queried finally leaves a monitoring area as the leaving time; acquiring a moving direction of a target to be queried appearing in a monitoring area for the first time, and taking the moving direction as the entering direction; acquiring a moving direction of a target to be queried, which finally leaves a monitoring area, as the leaving direction;
and determining all fragments of the target to be queried, which conform to the retrieval time span in the monitoring video, by combining the behavior characteristics and the distribution condition of the monitoring cameras in the monitoring system, wherein the method comprises the following steps:
Acquiring road section information monitored by each monitoring camera and the adjacent condition of each monitoring camera in the monitoring system;
determining a last monitoring camera according to the entering direction, the advancing speed and the adjacent condition; determining a next monitoring camera according to the leaving direction, the travelling speed and the adjacent condition;
obtaining a video segment of the monitoring video of the target to be queried on the last monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the last monitoring camera, the travelling speed and the entering time; obtaining a video segment of the monitoring video of the target to be queried at the next monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the next monitoring camera, the travelling speed and the departure time;
the above steps are repeated so that new video clips are continuously obtained; the loop is stopped if the target to be queried leaves the monitoring area of the monitoring system, or the monitoring time of the new video clip reaches the current time, or the target to be queried stops moving, or the monitoring time of the new video clip exceeds the retrieval time span; after the loop stops, all of the acquired video clips together form all fragments of the target to be queried in the monitoring video.
According to the scheme, the behavior characteristics of the target to be queried in the monitoring video, such as the travelling speed, travelling direction, entering time and leaving time, are analyzed and combined with the distribution of the monitoring cameras and the road section monitored by each camera, so that, starting from one monitored section, the monitoring video of the target captured by adjacent cameras is acquired in a chained, recursive manner, which improves the efficiency of retrieving the video segments of the target to be queried from the monitoring video.
Optionally, the drawing a route map of the action track of the target to be queried based on the target video includes:
acquiring information of each road section monitored in the target video;
marking each road section in a solid line in a map according to the information of each road section;
and according to the advancing direction of the target to be queried in the target video, connecting adjacent road sections marked by solid lines in the map by using a dotted line, and drawing a direction mark.
According to the scheme, the road section information of the target to be queried in each section of monitoring video is used to mark the corresponding route travelled by the target on the map, and all of the marked routes are then connected according to the target's travelling direction, which completes the drawing of the action track diagram of the target to be queried.
In a second aspect, the present application provides a monitoring intelligent analysis device, configured to manage the monitoring video collected by a monitoring system arranged in a target area, the device comprising:
the acquisition module is used for acquiring the retrieval conditions of the target to be queried; the search conditions include: time conditions, place conditions, feature conditions, and search time spans;
the first retrieval module is used for selecting the monitoring video in the monitoring system according to the time condition and the place condition to obtain a pending video; selecting all undetermined targets meeting the characteristic conditions based on the undetermined video, and obtaining pictures of the undetermined targets;
the reverse retrieval module is used for determining a picture of the target to be queried from the pictures of the undetermined targets, and determining the video fragment of the target to be queried in the pending video according to the picture of the target to be queried;
the second retrieval module is used for analyzing the behavior characteristics of the target to be queried according to the video clips; combining the behavior characteristics and the distribution condition of monitoring cameras in a monitoring system, and determining all fragments of a target to be queried, which accords with the retrieval time span, in the monitoring video;
The drawing module is used for splicing all the fragments according to the time sequence to generate a target video; and drawing a route map of the action track of the target to be queried based on the target video, and calling a display module to display the route map and the target video.
Optionally, the monitoring intelligent analysis device further comprises a database table-building module.
The database table-building module is used for creating a monitoring camera information table;
storing the monitoring place information of the monitoring camera and the storage path of the generated monitoring video into the monitoring camera information table as monitoring camera information;
creating a monitoring camera database for each monitoring camera;
reading the monitoring video in segments according to a preset time period, and analyzing characteristic information of each monitoring target appearing in each video segment;
and correspondingly generating a monitoring video table in the monitoring camera database according to the preset time period and the characteristic information.
Optionally, when selecting the monitoring video in the monitoring system according to the search conditions to obtain the pending video, the first retrieval module is specifically configured to:
selecting monitoring camera information meeting the place conditions from a monitoring camera information table;
accessing, according to the storage path in the monitoring camera information, the storage location of the monitoring video generated by the corresponding monitoring camera;
based on the time represented by the time condition, selecting, as the pending video, the segment of the monitoring video whose monitoring time contains that time;
the first retrieval module selects all pending targets meeting the retrieval conditions based on the pending video, and is specifically used for:
according to the monitoring camera information, connecting a corresponding monitoring camera database;
selecting a corresponding monitoring video table in the monitoring camera database according to the pending video;
according to the characteristic conditions, characteristic information conforming to the characteristic conditions is matched from the monitoring video table, and a monitoring target corresponding to each piece of characteristic information is the target to be determined;
aiming at each undetermined target, intercepting a video fragment of the undetermined target in the undetermined video;
and correspondingly extracting a frame of picture which contains the characteristic information and has the clearest image quality for each video segment, and taking the picture as the picture of the undetermined target.
Optionally, the monitoring intelligent analysis device further comprises a video analysis module;
the video analysis module is specifically configured to, when analyzing the feature information of each monitoring target appearing in each section of monitoring video:
extracting key frames from the monitoring video to obtain a plurality of key frames;
comparing all the key frames to establish a static environment model;
analyzing all key frames based on the static environment model, and judging whether the monitoring target appears in the key frames;
if the monitoring target appears, further analyzing the monitoring target;
and judging the type of the monitoring target, analyzing the stature information, the facial information and the clothing information of the monitoring target if the type of the monitoring target is human, and analyzing the license plate information, the type and the color of the vehicle of the monitoring target if the type of the monitoring target is vehicle.
Optionally, when the first search module obtains the search condition of the target to be queried, the first search module is specifically configured to:
acquiring a sample picture of the target to be queried, and acquiring characteristic information in the sample picture;
and further selecting part of the characteristic information from the characteristic information as a retrieval condition of the target to be queried.
Optionally, the behavior feature includes: travel speed, entry time, exit time, entry direction, exit direction;
The second retrieval module is specifically configured to, when analyzing the behavior characteristics of the target to be queried according to the video clip: acquiring the moving distance and the moving duration of the target to be queried through the video segment, and further calculating the advancing speed of the target to be queried;
acquiring the time when the target to be queried first appears in the monitoring area through the video segment, and taking the time as the entering time; acquiring the time when a target to be queried finally leaves a monitoring area as the leaving time; acquiring a moving direction of a target to be queried appearing in a monitoring area for the first time, and taking the moving direction as the entering direction; acquiring a moving direction of a target to be queried, which finally leaves a monitoring area, as the leaving direction;
the second retrieval module is specifically configured to, when determining that the target to be queried accords with all segments of the retrieval time span in the surveillance video by combining the behavior characteristics and the distribution condition of the surveillance cameras in the surveillance system: acquiring road section information monitored by each monitoring camera and the adjacent condition of each monitoring camera in the monitoring system;
determining a last monitoring camera according to the entering direction, the advancing speed and the adjacent condition; determining a next monitoring camera according to the leaving direction, the travelling speed and the adjacent condition;
Obtaining a video segment of the monitoring video of the target to be queried on the last monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the last monitoring camera, the travelling speed and the entering time; obtaining a video segment of the monitoring video of the target to be queried at the next monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the next monitoring camera, the travelling speed and the departure time;
the above steps are repeated so that new video clips are continuously obtained; the loop is stopped if the target to be queried leaves the monitoring area of the monitoring system, or the monitoring time of the new video clip reaches the current time, or the target to be queried stops moving, or the monitoring time of the new video clip exceeds the retrieval time span; after the loop stops, all of the acquired video clips together form all fragments of the target to be queried in the monitoring video.
Optionally, the drawing module is specifically configured to, when drawing the route map of the action track of the target to be queried based on the target video:
Acquiring information of each road section monitored in the target video;
marking each road section in a solid line in a map according to the information of each road section;
and according to the advancing direction of the target to be queried in the target video, connecting adjacent road sections marked by solid lines in the map by using a dotted line, and drawing a direction mark.
In a third aspect, the present application provides an electronic device, comprising: a memory and a processor, the memory having stored thereon a computer program capable of being loaded by the processor and performing the method of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program capable of being loaded by a processor and performing the method of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the prior art descriptions, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method for monitoring and intelligent analysis according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a monitoring intelligent analysis device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In addition, the term "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, unless otherwise specified, the term "/" generally indicates that the associated object is an "or" relationship.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
With the continuous development of modernization, a large number of monitoring systems have been set up in cities. However, because each monitoring video contains a large amount of target information and each monitoring system contains a large amount of monitoring video, investigating the action track of a target through the monitoring system requires a great deal of manpower to view all of the video in order to pick out every segment in which the target appears. The investigation process therefore wastes a lot of time and labor cost, and targets may be missed while the video is viewed manually, so the drawn action track is not very accurate.
Based on the above, the application provides a monitoring intelligent analysis method and device, an electronic device and a storage medium. By inputting search conditions for the target to be queried, the fragments of the monitoring system's video that contain the target are retrieved quickly and a movement track diagram of the target to be queried is drawn.
Fig. 1 is a schematic view of an application scenario provided in the present application. As shown in fig. 1, the method is specifically carried on a server in a software form, a user accesses the server through user equipment, and sends retrieval conditions to the server, and the server searches monitoring videos acquired by a monitoring system according to the retrieval conditions and returns target videos and route patterns which meet the retrieval conditions. The monitoring system consists of a certain number of monitoring cameras. Reference may be made to the following examples for specific implementation.
Fig. 2 is a flowchart of a method for monitoring and intelligent analysis according to an embodiment of the present application, where the method of the present embodiment may be applied to a server in the above scenario. As shown in fig. 2, the method includes:
step S201, obtaining search conditions of the target to be queried, wherein the search conditions comprise time conditions, place conditions, characteristic conditions and search time spans.
The time condition is a fixed time or a time period, such as 12:00 or 12:00-12:30; the place condition is a street name or a landmark building name; the feature condition is feature information of the target to be queried, such as a person's clothing, height or name, or a vehicle's license plate, model and body color; and the retrieval time span specifies how long a period of video about the target to be queried is needed, such as one day or two days.
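As an illustration only, the search conditions described above could be represented in software roughly as follows. This is a minimal Python sketch; the class name, field names and example values are assumptions of this description, not part of the claimed method.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

@dataclass
class SearchCondition:
    # Time condition: a fixed instant or a (start, end) period, e.g. 12:00 or 12:00-12:30.
    time_point: Optional[datetime] = None
    time_range: Optional[Tuple[datetime, datetime]] = None
    # Place condition: a street name or landmark building name.
    place: str = ""
    # Feature conditions: free-text descriptors such as "white", "brand a", "license plate b1".
    features: List[str] = field(default_factory=list)
    # Retrieval time span: how long a period of video about the target is wanted.
    span: timedelta = timedelta(days=1)

# Example: a white brand-a vehicle near "A road, B street" around 12:00, tracked for one day.
cond = SearchCondition(
    time_range=(datetime(2023, 2, 10, 12, 0), datetime(2023, 2, 10, 12, 30)),
    place="A road, B street",
    features=["white", "brand a", "license plate b1"],
    span=timedelta(days=1),
)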
The method is specifically integrated into software, a search page is provided, and a user can enter the search page provided by the software to input search conditions when accessing the search page.
In some specific implementations, the search page includes a search input box and a condition selection box. The search input box carries an identifier prompting the user to enter only the feature conditions of the target to be queried; if there are several feature conditions, they are separated by spaces. For example, if the target to be queried is a vehicle, "white brand a license plate number b1" is entered in the search input box.
In this embodiment, the license plate numbers are used only as examples to distinguish between vehicles and are unrelated to actual vehicle information.
The condition selection box is used for entering the time condition, the place condition and the retrieval time span. For example, clicking the place-condition selection box pops up a map view on the page, each street name is displayed on the map, and a street name is filled in automatically after it is clicked and selected.
If the feature condition is the name of the target to be queried, the name is automatically looked up in a database whose data are obtained by interfacing with the systems of the relevant departments; the picture corresponding to the name is acquired, the face information corresponding to the name is then obtained and used as the feature information for retrieval. The place condition, the time condition and the retrieval time span can also be entered through input boxes.
Step S202, selecting the monitoring video in the monitoring system according to the time condition and the place condition to obtain the undetermined video, selecting all undetermined targets meeting the characteristic condition based on the undetermined video, and obtaining pictures of the undetermined targets.
The undetermined video is a segment of a monitoring video which accords with time conditions and place conditions, and the monitoring video is generated by shooting by a monitoring camera.
Because the detail degree of the search conditions input by the user is different, the number of the undetermined targets conforming to the search conditions is also different.
In some specific implementation manners, the location condition in the search condition of the user is obtained, the monitoring cameras in the monitoring system are screened, the monitoring cameras in the location indicated by the location condition are selected, the time condition in the search condition is used, the monitoring video shot by the selected monitoring cameras is selected, and the video with the monitoring time including the time indicated by the time condition in the monitoring video is selected as the pending video.
The pending video is then read, the features of each target appearing in it are compared against the feature conditions in the search conditions, the targets that meet the feature conditions are selected, and a clear picture of each such target is captured.
For example, if the place condition is "A road, B street" and the time condition is 12:00-12:01, the pending video is the 12:00-12:01 segment shot by the monitoring camera at the intersection of A road and B street.
If the feature condition is "white brand a", the 12:00-12:01 pending video of the A road and B street intersection is read, the features of each vehicle are analyzed, and the following three targets that meet the feature condition, together with their corresponding pictures, are selected: white brand a license plate number b2, white brand a license plate number b3 and white brand a license plate number b1.
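The hierarchical selection just described (place condition, then time condition, then feature conditions) might be sketched as follows, assuming the camera information and per-video target records are available as simple in-memory structures; all names here are hypothetical stand-ins for the camera information table and monitoring video tables described later.

from datetime import datetime
from typing import Dict, List

def select_pending_videos(cameras: List[Dict], place: str,
                          t_start: datetime, t_end: datetime) -> List[Dict]:
    """Keep the cameras whose monitored place matches the place condition, then keep their
    video records whose monitoring time overlaps the time condition (the pending videos)."""
    pending = []
    for cam in cameras:                       # cam: {"place": str, "videos": [video, ...]}
        if place not in cam["place"]:
            continue
        for video in cam["videos"]:           # video: {"start": datetime, "end": datetime, "targets": [...]}
            if video["start"] <= t_end and video["end"] >= t_start:
                pending.append(video)
    return pending

def select_pending_targets(pending_videos: List[Dict], features: List[str]) -> List[Dict]:
    """Keep the monitored targets whose recorded feature text contains every feature condition
    (the undetermined targets), each carrying its extracted picture."""
    matches = []
    for video in pending_videos:
        for target in video["targets"]:       # target: {"features": "white brand a ...", "picture": ...}
            if all(f in target["features"] for f in features):
                matches.append(target)
    return matches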
Step S203, determining a picture of the target to be queried from the pictures of the target to be determined, and determining a video clip of the target to be queried in the video to be determined according to the picture of the target to be queried.
Specifically, after the pictures of the target to be determined are obtained, all the pictures are returned to the search page to be displayed, and the user further selects the target to be continuously queried.
Or, the user supplements some additional information and screens the targets to be determined further, if the user inputs that the targets to be queried are hit-and-run vehicles, whether the vehicles in all the targets to be determined are damaged or not can be judged, and then the targets to be queried are selected automatically.
And S204, analyzing the behavior characteristics of the target to be queried according to the video clips, and determining all clips of the target to be queried, which conform to the retrieval time span, in the monitoring video by combining the behavior characteristics and the distribution condition of the monitoring cameras in the monitoring system.
The behavior characteristics are characteristics of representing the action direction, action distance and the like of the target to be queried. The distribution condition is a condition representing the adjacent direction and the adjacent distance of each monitoring camera.
Specifically, the video segment is read and the travelling speed of the target to be queried is calculated from its moving time and moving distance in the segment; alternatively, the speed at which the target travels stably for the longest time is selected directly as its travelling speed.
Then, taking the monitoring camera that shot the video segment as the center, the adjacent monitoring camera in the travelling direction is found; the time at which the target enters or leaves the monitoring area of the adjacent camera is calculated from the travelling speed together with the entering and leaving times of the target to be queried; the period during which the target appears on the adjacent camera is calculated from the travelling speed and the length of the adjacent camera's monitoring area; and the monitoring video of the adjacent camera is clipped accordingly to obtain a new video segment.
And finally, repeating the steps on the new video segment, and recursively acquiring the new video segment so as to acquire all segments in the retrieval time span.
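A possible shape of this chained, recursive retrieval loop and its stop conditions is sketched below. The Segment fields and the next_camera and clip callbacks are hypothetical stand-ins for the camera adjacency data and the video clipping described in the text.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List, Optional, Tuple

@dataclass
class Segment:
    camera_id: str
    enter_time: datetime
    leave_time: datetime
    enter_direction: str       # e.g. "north"
    leave_direction: str
    speed_mps: float
    target_stopped: bool = False

def collect_segments(first: Segment,
                     next_camera: Callable[[str, str], Optional[Tuple[str, float]]],
                     clip: Callable[[str, datetime], Optional[Segment]],
                     search_span: timedelta,
                     now: datetime) -> List[Segment]:
    """From one confirmed segment, find the adjacent camera in the leaving direction, estimate
    the arrival time there from the travelling speed, and clip that camera's video; stop when
    the target leaves the monitored area, the current time or the retrieval time span is
    reached, or the target stops moving. next_camera returns (camera_id, distance_m) or None."""
    segments = [first]
    current = first
    while True:
        nxt = next_camera(current.camera_id, current.leave_direction)
        if nxt is None:                                   # target left the monitoring system's area
            break
        cam_id, distance_m = nxt
        arrival = current.leave_time + timedelta(seconds=distance_m / max(current.speed_mps, 0.1))
        if arrival >= now or (arrival - first.enter_time) > search_span:
            break                                         # reached current time or exceeded the span
        current = clip(cam_id, arrival)
        if current is None or current.target_stopped:     # not found, or stopped moving
            break
        segments.append(current)
    return segments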
Step S205, all the fragments are spliced according to the time sequence, a target video is generated, a route map of the action track of the target to be queried is drawn based on the target video, and the route map and the target video are displayed.
Wherein each fragment is a fragment containing only the target to be queried.
Specifically, splicing is performed according to the time of starting monitoring and the time of ending monitoring of each video clip, a target video is generated, and a route map of the action track of the target to be queried is drawn in a corresponding map based on the action track of the target to be queried in the target video.
In some specific implementations, the target video is read and the road surface that the target to be queried passes over is marked with color. When the target leaves a road section, the current frame is captured to generate a picture of the target's moving route on that section; the ground-color features of the picture are then analyzed and a route line segment is generated on the corresponding section of the map. These steps are repeated for all moving-route pictures, so that route line segments are generated on the map for the target's movement track in every monitored section, and the routes between monitored sections are connected according to the travelling direction of the target in the video to generate the route map.
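Under the same assumptions, the splicing and route-drawing step can be reduced to ordering the clips by start time and emitting one solid segment per monitored road section with dashed, direction-annotated connectors between adjacent sections. The sketch below only produces that ordered description; the color marking and map rendering described above are not reproduced, and the Clip fields are assumptions.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class Clip:
    start: datetime
    end: datetime
    road_section: str          # road section monitored by the camera that shot this clip
    travel_direction: str      # direction of the target when leaving this section

def build_route(clips: List[Clip]) -> Tuple[List[Clip], List[str]]:
    """Splice clips in chronological order (the target video) and derive the route description:
    each monitored road section becomes a solid line, and adjacent sections are joined by a
    dashed connector annotated with the travelling direction."""
    ordered = sorted(clips, key=lambda c: c.start)
    route = []
    for prev, nxt in zip(ordered, ordered[1:]):
        route.append(f"solid: {prev.road_section}")
        route.append(f"dashed ({prev.travel_direction}) to: {nxt.road_section}")
    if ordered:
        route.append(f"solid: {ordered[-1].road_section}")
    return ordered, route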
According to the scheme, the user inputs the search conditions of the target to be queried, pictures of the undetermined targets that meet the search conditions are generated, the picture of the target to be queried is then determined from those pictures, and the video clip of the target to be queried in the monitoring video is confirmed from that picture, so the search range is narrowed step by step using the search conditions. The action route of the target is then analyzed from the video clip, and the clips in other videos related to the target are searched continuously along that route, so that the complete monitoring video of the target is obtained. Finally, a route map of the action track of the target to be queried is generated from that monitoring video. The route map of the target's action track is thus obtained automatically from the search conditions, which reduces the time spent manually viewing video to search for the target and improves the efficiency of drawing the route map of the target's action track.
In some embodiments, the method further includes creating a monitoring camera information table, storing monitoring location information of the monitoring cameras as monitoring camera information in the monitoring camera information table, creating a monitoring camera database for each monitoring camera, reading the monitoring video in segments according to a preset time period, analyzing feature information of each monitoring target appearing in each monitoring video, and correspondingly generating a monitoring video table in the monitoring camera database according to the preset time period and the feature information.
The preset time period is a non-fixed time period, and may be set to 30 minutes, 1 hour, 2 hours, or the like.
The monitoring target is another target that is not an inherent target in the monitoring place, for example: a movable object such as a vehicle, a person, etc., while a ground, a tree, a fence, etc., are inherent objects.
Specifically, all the monitoring camera information in the monitoring system is recorded into a table of a database, and an index is created for each piece of monitoring camera information according to the camera's identifier. Each piece of monitoring camera information includes the monitoring location of the camera and the storage path of the monitoring video it generates; because the number of monitoring cameras in each monitoring system is not fixed, the table can be split horizontally according to that number.
A corresponding database is then established for each monitoring camera, used only to store the characteristic information of the targets monitored by that camera. The camera's monitoring video is read one preset time period at a time, the features of all monitoring targets are extracted, the characteristic information read from each video is stored in a monitoring video table, and the table is named after the time period the video covers. For example, if the preset time period is 1 hour and the monitoring video was generated by monitoring camera No. 2 on February 13, the video is read in the segments 00:00-01:00, 01:00-02:00, ..., 23:00-24:00, and 00:00-01:00, 01:00-02:00, ..., 23:00-24:00 are used as the names of the corresponding monitoring video tables.
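A minimal sketch of this table-building scheme is shown below, using SQLite as a stand-in for whatever database system is actually used; the table and column names are assumptions of this description.

import sqlite3

def init_camera_info_db(path: str = "camera_info.db") -> sqlite3.Connection:
    """Monitoring camera information table: one row per camera, indexed by its identifier."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS camera_info (
                        camera_id   TEXT PRIMARY KEY,
                        place       TEXT,   -- monitored place, e.g. 'A road and B street intersection'
                        video_path  TEXT)   -- storage path of the videos this camera generates
                 """)
    return conn

def create_period_table(camera_id: str, period: str) -> None:
    """One database file per camera; one table per preset time period of video,
    named after that period (e.g. '00:00-01:00')."""
    conn = sqlite3.connect(f"camera_{camera_id}.db")
    conn.execute(f"""CREATE TABLE IF NOT EXISTS "{period}" (
                         target_id  INTEGER PRIMARY KEY AUTOINCREMENT,
                         features   TEXT,   -- e.g. 'white brand a license plate b1'
                         appear_at  TEXT,   -- time the target first appears
                         vanish_at  TEXT)   -- time the target disappears
                  """)
    conn.commit()
    conn.close()

# Example: hourly tables for camera No. 2 on February 13.
for hour in range(24):
    create_period_table("2", f"{hour:02d}:00-{hour + 1:02d}:00")

Keeping one database file per camera isolates each camera's data, which matches the damage-containment goal described in the next paragraph.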
According to the scheme, a database is established for each monitoring camera, which avoids storing the information of all monitoring cameras in a single database or a single table; a separate table is then created for each video and all information of one video is stored in its own table, so the data are stored hierarchically and full-text retrieval can be avoided in subsequent queries; at the same time, because not every access hits one table or one database, the loss is minimized when a table or database is damaged.
In some embodiments, according to the search condition, selecting the monitoring video in the monitoring system to obtain the pending video, including: selecting monitoring camera information meeting the place condition from a monitoring camera information table, entering a storage position corresponding to the monitoring camera to generate a monitoring video according to a storage path in the monitoring camera information, and selecting a segment containing the time in the monitoring video monitoring time based on the time represented by the time condition as the video to be determined. Correspondingly, the selecting all the undetermined targets meeting the search condition based on the undetermined video, and obtaining the pictures of the undetermined targets comprises the following steps: according to the monitoring camera information, a corresponding monitoring camera database is connected, a corresponding monitoring video table in the monitoring camera database is selected according to the to-be-determined video, characteristic information meeting the characteristic conditions is matched in the monitoring video table according to the characteristic conditions, each monitoring target corresponding to the characteristic information is the to-be-determined target, video fragments of the to-be-determined targets in the to-be-determined video are intercepted for each to-be-determined target, a frame of picture which contains the characteristic information and has the clearest picture quality is extracted for each video fragment, and the picture is taken as a picture of the to-be-determined target.
Specifically, after the search conditions are obtained, the place condition is processed first: the monitoring-location field stored in the monitoring camera information table is matched against the place condition, and the successfully matched monitoring location information is selected, which gives the corresponding monitoring camera information and the storage path of the monitoring video it generates.
Next, the monitoring videos under that storage path are screened using the time condition, and the monitoring segments whose time period contains the time point represented by the time condition are selected. For example, if the place condition is "A road" and the time condition is 12:00 on February 10, all entries containing "A road" are first selected from the monitoring camera information table, giving three monitoring cameras: the A road and B street intersection, the A road and C street intersection, and the A road and D street intersection; the storage locations of the monitoring videos of these three cameras are then accessed, and the 11:30-12:30 monitoring segment of February 10 is saved as the pending video.
The information of each monitoring camera in the monitoring camera information table also stores the connection information of that camera's database. The database connection is made based on the connection information of the successfully matched cameras, and the monitoring video tables whose time matches the monitoring time of the pending video are then selected. For example, if the monitoring time of the pending video is 11:30-12:30, the tables named 11:00-12:00 and 12:00-13:00 both match that monitoring time, and both tables are selected.
After the monitoring video tables are selected, the feature information in them is screened using the feature conditions, and the matching feature information is selected. Because each piece of feature information describes one target, each selected piece of feature information corresponds to one monitoring target. Each piece of feature information also records the time points at which the target appears and disappears, so the pending video is clipped according to those time points to obtain a video fragment that contains only that monitoring target; the number of the target's features visible in each frame is then analyzed, and the frame that contains the most features of the target and has the clearest picture is selected as the picture of that monitoring target.
For example, if the feature condition is "180 men's red coat black shorts white sneakers", all feature information containing that condition is selected; if "180 men's red coat black shorts white sneakers black cap 11:20-11:21" and "180 men's red coat black shorts white sneakers 12:20-12:21" both match, both are selected, and the corresponding segments 11:20-11:21 and 12:20-12:21 are clipped from the pending video.
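Selecting "the frame that contains the feature information and has the clearest picture quality" is not specified further in the text; one common way to approximate image clarity is the variance of the Laplacian, as in the OpenCV sketch below. The sharpness measure, and the omission of per-frame feature counting, are assumptions of this sketch.

import cv2  # OpenCV; the Laplacian-variance sharpness measure is an assumption, not from the source

def pick_target_picture(video_path: str, start_s: float, end_s: float):
    """Walk the frames of the clipped fragment and keep the sharpest frame, as a stand-in for
    'the frame with the most visible features and the clearest picture'."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * fps))
    best_frame, best_score = None, -1.0
    for _ in range(int((end_s - start_s) * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()   # higher variance, sharper image
        if score > best_score:
            best_frame, best_score = frame, score
    cap.release()
    return best_frame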
According to the scheme, the monitoring cameras that cover the target site are first selected from the monitoring camera information table according to the place condition; the monitoring video for the target time is then selected from the database corresponding to each camera using the time condition; and finally the targets matching the features are selected from that video using the feature information. All of the search information is thus used during retrieval, and the hierarchical search reduces the load on the system, shortens the search time and improves search accuracy.
In some embodiments, analyzing the characteristic information of each monitoring target appearing in the monitoring video includes: extracting key frames from the monitoring video to obtain a plurality of key frames; comparing all the key frames to establish a static environment model; analyzing all the key frames based on the static environment model and judging whether a monitoring target appears in a key frame; if a monitoring target appears, analyzing it further; and judging the type of the monitoring target: if it is judged to be a human, the stature information, facial information and clothing information of the target are analyzed, and if it is judged to be a vehicle, the license plate information, vehicle type and vehicle color are analyzed.
The key frame records all information of the picture in the current time, and the static environment model is a model picture which has no monitoring target in the monitoring place and only contains the inherent environment characteristics of the monitoring place.
Specifically, all key frame pictures in the monitoring video are extracted, then the key frame pictures are compared one by one, when a certain static target appears in more than half key frames, the static target can be determined to be in a monitoring place, then the static target is selected and added into a static environment model, and then a picture of the static environment model is generated.
It should be noted that when a static object appears in more than half of the key frames, it can be determined that the static object is a fixed scene in the monitored location.
And comparing the picture of the static environment model with the key frame again, and in the comparison, if a target which is not in the picture of the static environment model appears in the key frame, setting the target as a monitoring target and carrying out image preliminary analysis on the monitoring target.
If the monitoring target is a human being, picture recognition technology is further used to analyze the person's upper-garment color and type, lower-garment color and type, shoe color, cap color, height information and face information. If the monitoring target is a vehicle, picture recognition technology is further used to analyze the vehicle's license plate information, color and type.
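One simple way to realize the "appears in more than half of the key frames" rule and the subsequent comparison against the static environment model is a pixel-wise median background with frame differencing, sketched below with OpenCV. The patent does not name a specific algorithm, so this is only an assumed illustration; parameter values such as the threshold and minimum area are likewise assumptions.

import cv2
import numpy as np
from typing import List, Tuple

def static_environment_model(key_frames: List[np.ndarray]) -> np.ndarray:
    """Pixel-wise median over the key frames: content present in more than half of the frames
    survives, approximating the 'appears in more than half of the key frames' rule."""
    return np.median(np.stack(key_frames), axis=0).astype(np.uint8)

def detect_monitoring_targets(frame: np.ndarray, background: np.ndarray,
                              min_area: int = 500) -> List[Tuple[int, int, int, int]]:
    """Compare a key frame against the static environment model and return bounding boxes of
    regions that differ enough to be treated as monitoring targets (OpenCV 4.x API assumed)."""
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]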
According to the scheme, a static environment model is obtained by comparing the key frames of the monitoring video, and the key frames are then analyzed in reverse against the static environment model, so that all targets appearing in the monitoring video can be found quickly and their features analyzed, which improves the efficiency of analyzing the characteristic information of targets in the monitoring video.
In some embodiments, obtaining the search condition of the target to be queried includes: and acquiring a sample picture of the target to be queried, acquiring characteristic information in the sample picture, and further selecting part of the characteristic information from the characteristic information as a retrieval condition of the target to be queried.
The sample picture is a picture that contains features of the target to be queried which can serve as retrieval conditions.
Specifically, after the user uploads a sample picture of the target to be queried on the search page, the sample picture is recognized using image recognition technology, all features in the picture are extracted and returned to the search page for display, and the user selects one or more of the candidate features as retrieval conditions. For example, if the user only has a picture of a place but does not know its name, the place features are extracted automatically after the picture is uploaded, and the user selects those features as the search condition.
In another implementation of this embodiment, after all features in the picture have been extracted, the user may additionally input supplementary feature information, and the supplementary feature information and the feature information from the picture are then used together as the retrieval condition.
With this scheme, the picture features are analyzed and the obtained features are used as search conditions, which gives the user more ways to enter search conditions and improves the user experience.
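For illustration, the flow from a sample picture to retrieval conditions might look like the sketch below; recognize_picture is a placeholder for whatever image recognition technology is used, and the feature names and values are made up.

```python
def recognize_picture(picture_path):
    """Placeholder for the image recognition step: in a real system this would
    run detection/attribute models over the uploaded sample picture."""
    # Hypothetical output for a picture of a person at a recognizable place.
    return {"coat color": "red", "trouser color": "black",
            "place": "A road / B street intersection"}

def build_retrieval_condition(picture_path, selected_keys, supplementary=None):
    """Extract all features, keep only the ones the user selected on the search
    page, and merge any supplementary features the user typed in."""
    features = recognize_picture(picture_path)
    condition = {k: v for k, v in features.items() if k in selected_keys}
    if supplementary:
        condition.update(supplementary)  # user-supplied extra feature information
    return condition

# Example: the user keeps only the clothing features and adds a time condition by hand.
condition = build_retrieval_condition(
    "sample.jpg",
    selected_keys={"coat color", "trouser color"},
    supplementary={"time condition": "2023-02-10 11:00-12:00"},
)
```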
In some embodiments, the behavior characteristics include a travelling speed, an entering time, a leaving time, an entering direction and a leaving direction. Analyzing the behavior characteristics of the target to be queried according to the video segment includes: obtaining the moving distance and moving duration of the target to be queried from the video segment and calculating its travelling speed; obtaining, from the video segment, the time at which the target to be queried first appears in the monitoring area as the entering time and the time at which it finally leaves the monitoring area as the leaving time; obtaining the moving direction when the target to be queried first appears in the monitoring area as the entering direction; and obtaining the moving direction when the target to be queried finally leaves the monitoring area as the leaving direction. Determining, in combination with the behavior characteristics and the distribution of the monitoring cameras in the monitoring system, all segments of the target to be queried in the monitoring video that conform to the retrieval time span includes: acquiring the road section information monitored by each monitoring camera and the adjacency of the monitoring cameras in the monitoring system; determining the last monitoring camera according to the entering direction, the travelling speed and the adjacency; determining the next monitoring camera according to the leaving direction, the travelling speed and the adjacency; obtaining the video segment of the target to be queried in the monitoring video of the last monitoring camera according to the road section information monitored by the video segment, the road section information monitored by the last monitoring camera, the travelling speed and the entering time; obtaining the video segment of the target to be queried in the monitoring video of the next monitoring camera according to the road section information monitored by the video segment, the road section information monitored by the next monitoring camera, the travelling speed and the leaving time; and repeating these steps to continuously obtain new video segments, stopping when the target to be queried leaves the monitoring area of the monitoring system, the monitoring time of the new video segment reaches the current time, the target to be queried stops moving, or the monitoring time of the new video segment exceeds the retrieval time span. After the loop stops, all the acquired video segments are all the segments of the target to be queried in the monitoring video.
In this implementation, since the monitoring range of each monitoring camera is fixed, the real-world moving distance of the target to be queried can be calculated from its moving distance in the video segment, and its travelling speed can then be calculated from its moving duration in the video segment.
The time at which the target to be queried appears in the monitoring area and the time at which it leaves are recorded as its entering time and leaving time respectively; the moving direction when it appears in the monitoring area and the moving direction when it leaves are recorded as its entering direction and leaving direction respectively.
Based on the travelling direction of the target to be queried in the video segment and the distribution of the monitoring cameras in the monitoring system, the monitoring camera adjacent to the current one in the leaving direction of the target to be queried is determined as the next monitoring camera.
The distance between the monitored locations of the two monitoring cameras is determined from the road section information of the monitored locations of the current and next monitoring cameras. Dividing this distance by the travelling speed of the target to be queried gives the travel time; adding the travel time to the leaving time of the target to be queried gives its entering time in the monitoring area of the next monitoring camera. The leaving time of the target to be queried from the monitoring area of the next monitoring camera is then calculated from this entering time, the travelling speed and the road section information of that monitoring area; the corresponding monitoring video of the next monitoring camera is intercepted between this entering time and leaving time to generate a new video segment. In the same way, the video segment of the target to be queried at the monitoring camera in the entering direction can also be obtained.
The above steps are repeated: new video segments are continuously obtained according to the behavior characteristics of the target to be queried in each new video segment, the distribution of the monitoring cameras in the monitoring system and the road section information they monitor, until the target to be queried leaves the monitoring area of the monitoring system in a new video segment, the monitoring time of the new video segment reaches the current time, the target to be queried stops moving, or the monitoring time of the new video segment exceeds the retrieval time span, at which point the loop stops.
For example, a monitoring camera monitors a road section of road A that is 20 meters long. At 11:30:40 on 10 February it captures a person walking into the monitoring area from east to west, and at 11:31:00 it captures the person leaving the monitoring area, still heading from east to west. The person's travelling speed is therefore 60 meters per minute, the entering time is 11:30:40, the leaving time is 11:31:00, and both the entering direction and the leaving direction are from east to west.
The road section monitored by the current monitoring camera is the A road and B street intersection, and the road section monitored by the adjacent monitoring camera is the A road and C street intersection. The distance between the east side of the A road and B street intersection and the west side of the A road and C street intersection is 1200 meters, so the distance between the monitored locations of the two monitoring cameras is also 1200 meters.
The person therefore reaches the A road and C street intersection at 11:51:00; since the road section at that intersection is 20 meters long, the person leaves it at 11:51:20. Accordingly, the segment from 11:51:00 to 11:51:20 is intercepted from the monitoring video of the camera at the A road and C street intersection.
For clarity of explanation, the times in this example are exact; in practice, the interception times are adjusted accordingly.
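A minimal sketch of the timing arithmetic in this example is given below; the function name, the assumed year and the use of Python's datetime module are illustrative choices, not part of the patent.

```python
from datetime import datetime, timedelta

def next_camera_window(leave_time, speed_m_per_min, distance_to_next_m, next_section_length_m):
    """Entering/leaving times at the next camera, assuming a constant travelling speed."""
    travel_minutes = distance_to_next_m / speed_m_per_min
    enter_next = leave_time + timedelta(minutes=travel_minutes)
    leave_next = enter_next + timedelta(minutes=next_section_length_m / speed_m_per_min)
    return enter_next, leave_next

# Values from the example above: the person leaves the A road / B street section at 11:31:00,
# walks 1200 m at 60 m/min, and the A road / C street section is 20 m long.
# The year 2023 is assumed only to build a complete timestamp.
enter, leave = next_camera_window(datetime(2023, 2, 10, 11, 31, 0), 60, 1200, 20)
print(enter.time(), leave.time())  # 11:51:00 11:51:20
```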
In other implementations, when determining adjacent cameras, all monitoring cameras within a certain range in the leaving direction of the target to be queried may be considered.
With this scheme, behavior characteristics of the target to be queried in the monitoring video, such as its travelling speed, travelling direction, entering time and leaving time, are analyzed and combined with the distribution of the monitoring cameras and the road section information each camera monitors; the monitoring videos of the target to be queried captured by adjacent cameras are then acquired segment by segment in a chained, recursive manner, which improves the efficiency of retrieving the video segments of the target to be queried from the monitoring video.
In some embodiments, drawing a route map of the action track of the target to be queried based on the target video includes: acquiring the information of each road section monitored in the target video; marking each road section with a solid line in the map according to that road section information; and, according to the travelling direction of the target to be queried in the target video, connecting the adjacent solid-line road sections in the map with a dotted line and drawing direction marks.
Specifically, since the target video is composed of the video segments in which the target to be queried appears, the road sections monitored in each video segment are road sections the target to be queried passed through. After the target video has been read in full, the road section information represented by each of its constituent video segments is marked in the map; then, according to the travelling direction of the target to be queried in each video segment, the adjacent road sections in the map are connected with dotted lines and the direction information is marked on the dotted lines, completing the action track diagram of the target to be queried.
With this scheme, the road section information of the target to be queried in each monitoring video segment is used to mark the routes it travelled on the map, and the travelled routes are then connected according to its travelling direction, completing the drawing of the action track diagram of the target to be queried.
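Purely as a sketch, the solid road sections and the dotted, direction-marked connections could be drawn with matplotlib roughly as follows; the coordinates and section names are made up for illustration.

```python
import matplotlib.pyplot as plt

# Each monitored road section the target passed through: a name and the two
# endpoints of the section in map coordinates (illustrative values only).
sections = [
    ("A road / B street", (0, 0), (20, 0)),
    ("A road / C street", (1220, 0), (1240, 0)),
]

fig, ax = plt.subplots()
for name, start, end in sections:
    ax.plot([start[0], end[0]], [start[1], end[1]], "k-", linewidth=3)  # solid line for the section
    ax.annotate(name, start)

# Connect adjacent sections with a dotted arrow following the target's
# travelling direction between the two sections.
for (_, _, prev_end), (_, next_start, _) in zip(sections, sections[1:]):
    ax.annotate("", xy=next_start, xytext=prev_end,
                arrowprops=dict(arrowstyle="->", linestyle="dotted"))

ax.set_title("Action track of the target to be queried (illustrative)")
plt.show()
```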
Fig. 3 is a schematic structural diagram of a monitoring intelligent analysis device according to an embodiment of the present application, and as shown in fig. 3, the monitoring intelligent analysis device 300 is configured to manage a monitoring video collected by a monitoring system disposed in a target area, and includes:
an obtaining module 301, configured to obtain a search condition of a target to be queried; the search conditions include: time conditions, place conditions, feature conditions, and search time spans;
The first search module 302 is configured to segment the monitoring video in the monitoring system according to the time condition and the place condition to obtain a pending video; selecting all undetermined targets meeting the characteristic conditions based on the undetermined video, and obtaining pictures of the undetermined targets;
the reverse retrieval module 303 determines a picture of the target to be queried from the pictures of the target to be determined; determining a video fragment of the target to be queried in a pending video according to the picture of the target to be queried;
the second retrieval module 304 is configured to analyze, according to the video clip, behavior characteristics of the target to be queried; combining the behavior characteristics and the distribution condition of monitoring cameras in a monitoring system, and determining all fragments of a target to be queried, which accords with the retrieval time span, in the monitoring video;
the drawing module 305 is configured to splice all the segments in time order to generate a target video, draw a route map of the action track of the target to be queried based on the target video, and call a display module to display the route map and the target video.
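Purely for illustration, the composition of the device 300 into these modules could be pictured in code as below; the class and method names are assumptions made for this sketch and do not come from the patent.

```python
from dataclasses import dataclass, field

class ObtainingModule:
    def get_retrieval_conditions(self):
        """Return the time condition, place condition, feature conditions and retrieval time span."""
        raise NotImplementedError

class FirstRetrievalModule:
    def select_pending_video(self, time_condition, place_condition): ...
    def select_pending_targets(self, pending_video, feature_conditions): ...

class ReverseRetrievalModule:
    def pick_target_picture(self, pending_target_pictures): ...
    def locate_video_segment(self, target_picture, pending_video): ...

class SecondRetrievalModule:
    def analyze_behavior(self, video_segment): ...
    def collect_all_segments(self, behavior, camera_distribution, time_span): ...

class DrawingModule:
    def splice_target_video(self, segments): ...
    def draw_route_map(self, target_video): ...

@dataclass
class MonitoringAnalysisDevice:
    """Rough structural analogue of device 300: one attribute per module."""
    obtaining: ObtainingModule = field(default_factory=ObtainingModule)
    first_retrieval: FirstRetrievalModule = field(default_factory=FirstRetrievalModule)
    reverse_retrieval: ReverseRetrievalModule = field(default_factory=ReverseRetrievalModule)
    second_retrieval: SecondRetrievalModule = field(default_factory=SecondRetrievalModule)
    drawing: DrawingModule = field(default_factory=DrawingModule)
```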
Optionally, the monitoring intelligent analysis device 300 further comprises a database table creation module 306.
The database table creation module 306 is configured to create a monitoring camera information table;
storing monitoring place information of a monitoring camera and a storage path for generating a monitoring video into a monitoring camera information table as monitoring camera information;
creating a monitoring camera database for each monitoring camera;
the method comprises the steps of reading monitoring videos in a segmented mode according to a preset time period, and analyzing characteristic information of each monitoring target in each monitoring video;
and correspondingly generating a monitoring video table in the monitoring camera database according to the preset time period and the characteristic information.
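As an illustration of the tables described above, a minimal sketch using SQLite is given below; the table and column names are assumptions, not the schema actually used.

```python
import sqlite3

def create_camera_info_table(conn):
    """Monitoring camera information table: monitored location plus the storage
    path under which that camera's monitoring videos are written (schema is illustrative)."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS camera_info (
            camera_id    TEXT PRIMARY KEY,
            location     TEXT,          -- monitored road section / place
            storage_path TEXT           -- where this camera's monitoring videos are stored
        )""")
    conn.commit()

def create_camera_database(camera_id):
    """One database per monitoring camera; each preset time period gets a row in
    the monitoring video table together with the analysed feature information."""
    conn = sqlite3.connect(f"{camera_id}.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS monitoring_video (
            period_start TEXT,
            period_end   TEXT,
            video_path   TEXT,
            target_type  TEXT,          -- 'human' or 'vehicle'
            feature_info TEXT           -- e.g. clothing/stature/face or plate/colour/type
        )""")
    conn.commit()
    return conn
```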
Optionally, when selecting segments of the monitoring video in the monitoring system according to the search conditions, the first search module 302 is specifically configured to:
selecting monitoring camera information meeting the place conditions from a monitoring camera information table;
entering a storage position corresponding to the monitoring camera to generate a monitoring video according to the storage path in the monitoring camera information;
based on the time represented by the time condition, selecting as the pending video the segment of the monitoring video whose monitoring time contains that time;
The first search module 302 is specifically configured to, when selecting all pending targets that meet the search condition based on the pending video and obtaining a picture of the pending targets:
according to the monitoring camera information, connecting a corresponding monitoring camera database;
selecting a corresponding monitoring video table in the monitoring camera database according to the pending video;
according to the characteristic conditions, characteristic information conforming to the characteristic conditions is matched from the monitoring video table, and a monitoring target corresponding to each piece of characteristic information is the target to be determined;
aiming at each undetermined target, intercepting a video fragment of the undetermined target in the undetermined video;
and correspondingly extracting a frame of picture which contains the characteristic information and has the clearest image quality for each video segment, and taking the picture as the picture of the undetermined target.
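Continuing the illustrative SQLite schema sketched earlier, the feature matching performed by the first search module might look roughly like the following; the substring match stands in for whatever feature comparison the system actually performs, and the extraction of the clearest frame is omitted.

```python
def find_pending_targets(camera_conn, pending_video_period, feature_conditions):
    """Match the feature conditions against the monitoring video table of one
    camera database and return the rows whose feature information satisfies them."""
    start, end = pending_video_period
    rows = camera_conn.execute(
        "SELECT video_path, feature_info FROM monitoring_video "
        "WHERE period_start >= ? AND period_end <= ?", (start, end))
    pending = []
    for video_path, feature_info in rows:
        # Keep the row only if every required feature value appears in its feature information.
        if all(value in feature_info for value in feature_conditions.values()):
            pending.append((video_path, feature_info))
    return pending
```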
Optionally, the monitoring intelligent analysis device 300 further includes a video analysis module 307;
the video analysis module 307 is specifically configured to, when analyzing the feature information of each monitoring target appearing in each section of monitoring video:
extracting key frames from the monitoring video to obtain a plurality of key frames;
Comparing all the key frames to establish a static environment model;
analyzing all key frames based on the static environment model, and judging whether the monitoring target appears in the key frames;
if the monitoring target appears, further analyzing the monitoring target;
and judging the type of the monitoring target, analyzing the stature information, the facial information and the clothing information of the monitoring target if the type of the monitoring target is human, and analyzing the license plate information, the type and the color of the vehicle of the monitoring target if the type of the monitoring target is vehicle.
Optionally, when obtaining the retrieval conditions of the target to be queried, the obtaining module 301 is specifically configured to:
acquiring a sample picture of the target to be queried, and acquiring characteristic information in the sample picture;
and further selecting part of the characteristic information from the characteristic information as a retrieval condition of the target to be queried.
Optionally, the behavior feature includes: travel speed, entry time, exit time, entry direction, exit direction;
the second search module 304 is specifically configured to, when analyzing the behavior characteristics of the target to be queried according to the video clip: acquiring the moving distance and the moving duration of the target to be queried through the video clip, and further calculating the travelling speed of the target to be queried;
Acquiring the time when the target to be queried first appears in the monitoring area through the video clip, and taking the time as the entering time; acquiring the time when the target to be queried finally leaves a monitoring area as leaving time; acquiring a moving direction of a target to be queried appearing in a monitoring area for the first time, and taking the moving direction as an entering direction; acquiring a moving direction of a target to be queried leaving a monitoring area finally as a leaving direction;
the second search module 304 is specifically configured to, when determining that the target to be queried accords with all segments of the search time span in the surveillance video in combination with the behavior characteristics and distribution conditions of surveillance cameras in the surveillance system: acquiring road section information monitored by each monitoring camera and the adjacent condition of each monitoring camera in the monitoring system;
determining a last monitoring camera according to the entering direction, the advancing speed and the adjacent condition; determining a next monitoring camera according to the leaving direction, the travelling speed and the adjacent condition;
obtaining a video segment of the monitoring video of the target to be queried on the last monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the last monitoring camera, the travelling speed and the entering time; obtaining a video segment of the monitoring video of the target to be queried at the next monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the next monitoring camera, the travelling speed and the departure time;
The above steps are repeated to continuously obtain new video segments; the loop is stopped if the target to be queried leaves the monitoring area of the monitoring system, or the monitoring time of the new video segment reaches the current time, or the target to be queried stops moving, or the monitoring time of the new video segment exceeds the retrieval time span; after the loop stops, all the acquired video segments are all the segments of the target to be queried in the monitoring video.
Optionally, the drawing module 305 is specifically configured to, when drawing a route map of the action track of the target to be queried based on the target video:
acquiring information of each road section monitored in a target video;
according to the information of each road section, marking each road section in a solid line in a map;
and according to the advancing direction of the target to be queried in the target video, connecting adjacent road sections marked by solid lines in the map by using a dotted line, and drawing a direction mark.
The apparatus of this embodiment may be used to perform the method of any of the foregoing embodiments, and its implementation principle and technical effects are similar, and will not be described herein again.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 4, an electronic device 400 according to the present embodiment may include: a memory 401 and a processor 402.
The memory 401 has stored thereon a computer program that can be loaded by the processor 402 and that performs the methods of the above-described embodiments.
Wherein the processor 402 is coupled to the memory 401, e.g. via a bus.
Optionally, the electronic device 400 may also include a transceiver. It should be noted that, in practical applications, there may be more than one transceiver, and the structure of the electronic device 400 does not limit the embodiments of the present application.
The processor 402 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 402 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
A bus may include a path that carries information between the components. The bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The memory 401 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 401 is used for storing application program codes for executing the present application and is controlled to be executed by the processor 402. The processor 402 is configured to execute the application code stored in the memory 401 to implement what is shown in the foregoing method embodiment.
Electronic devices include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), and stationary terminals such as digital TVs and desktop computers; the electronic device may also be a server or the like. The electronic device shown in Fig. 4 is only an example and should not limit the functionality and scope of use of the embodiments herein.
The electronic device of the present embodiment may be used to execute the method of any of the foregoing embodiments, and its implementation principle and technical effects are similar, and will not be described herein.
The present application also provides a computer-readable storage medium storing a computer program capable of being loaded by a processor and executing the method in the above embodiments.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.

Claims (10)

1. A monitoring intelligent analysis method for managing a monitoring video collected by a monitoring system provided in a target area, the method comprising:
acquiring search conditions of a target to be queried; the search conditions include: time conditions, place conditions, feature conditions, and search time spans;
selecting the monitoring video in the monitoring system according to the time condition and the place condition to obtain a pending video;
Selecting all undetermined targets meeting the characteristic conditions based on the undetermined video, and obtaining pictures of the undetermined targets;
determining a picture of the target to be queried from the pictures of the target to be determined;
determining a video fragment of the target to be queried in a pending video according to the picture of the target to be queried;
analyzing the behavior characteristics of the target to be queried according to the video fragment;
combining the behavior characteristics and the distribution condition of monitoring cameras in a monitoring system, and determining all fragments of a target to be queried, which accords with the retrieval time span, in the monitoring video;
splicing all the fragments according to the time sequence to generate a target video;
and drawing a route map of the action track of the target to be queried based on the target video, and displaying the route map and the target video.
2. The method as recited in claim 1, further comprising:
creating a monitoring camera information table;
storing the monitoring place information of the monitoring camera and the storage path of the generated monitoring video into the monitoring camera information table as monitoring camera information;
creating a monitoring camera database for each monitoring camera;
The method comprises the steps of reading monitoring videos in a segmented mode according to a preset time period, and analyzing characteristic information of each monitoring target in each monitoring video;
and correspondingly generating a monitoring video table in the monitoring camera database according to the preset time period and the characteristic information.
3. The method according to claim 2, wherein selecting the monitoring video in the monitoring system according to the search conditions to obtain the pending video includes:
selecting monitoring camera information meeting the place conditions from a monitoring camera information table;
entering a storage position corresponding to the monitoring camera to generate a monitoring video according to the storage path in the monitoring camera information;
based on the time represented by the time condition, selecting as the pending video the segment of the monitoring video whose monitoring time contains that time;
selecting all undetermined targets meeting the search condition based on the undetermined video, and obtaining pictures of the undetermined targets, wherein the method comprises the following steps:
according to the monitoring camera information, connecting a corresponding monitoring camera database;
selecting a corresponding monitoring video table in the monitoring camera database according to the pending video;
According to the characteristic conditions, characteristic information conforming to the characteristic conditions is matched from the monitoring video table, and a monitoring target corresponding to each piece of characteristic information is the target to be determined;
aiming at each undetermined target, intercepting a video fragment of the undetermined target in the undetermined video;
and correspondingly extracting a frame of picture which contains the characteristic information and has the clearest image quality for each video segment, and taking the picture as the picture of the undetermined target.
4. The method according to claim 2, wherein analyzing the characteristic information of each monitoring target appearing in each section of the monitoring video includes:
extracting key frames from the monitoring video to obtain a plurality of key frames;
comparing all the key frames to establish a static environment model;
analyzing all key frames based on the static environment model, and judging whether the monitoring target appears in the key frames;
if the monitoring target appears, further analyzing the monitoring target;
and judging the type of the monitoring target, analyzing the stature information, the facial information and the clothing information of the monitoring target if the type of the monitoring target is human, and analyzing the license plate information, the type and the color of the vehicle of the monitoring target if the type of the monitoring target is vehicle.
5. The method according to claim 1, wherein the obtaining the search condition of the target to be queried includes:
acquiring a sample picture of the target to be queried, and acquiring characteristic information in the sample picture;
and further selecting part of the characteristic information from the characteristic information as a retrieval condition of the target to be queried.
6. The method of any one of claims 1-5, wherein the behavioral characteristics include: travel speed, entry time, exit time, entry direction, exit direction;
the analyzing the behavior characteristics of the target to be queried according to the video segment comprises the following steps:
acquiring the moving distance and the moving duration of the target to be queried through the video segment, and further calculating the advancing speed of the target to be queried;
acquiring the time when the target to be queried first appears in the monitoring area through the video segment, and taking the time as the entering time; acquiring the time when a target to be queried finally leaves a monitoring area as the leaving time; acquiring a moving direction of a target to be queried appearing in a monitoring area for the first time, and taking the moving direction as the entering direction; acquiring a moving direction of a target to be queried, which finally leaves a monitoring area, as the leaving direction;
And determining all fragments of the target to be queried, which conform to the retrieval time span in the monitoring video, by combining the behavior characteristics and the distribution condition of the monitoring cameras in the monitoring system, wherein the method comprises the following steps:
acquiring road section information monitored by each monitoring camera and the adjacent condition of each monitoring camera in the monitoring system;
determining a last monitoring camera according to the entering direction, the advancing speed and the adjacent condition; determining a next monitoring camera according to the leaving direction, the travelling speed and the adjacent condition;
obtaining a video segment of the monitoring video of the target to be queried on the last monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the last monitoring camera, the travelling speed and the entering time; obtaining a video segment of the monitoring video of the target to be queried at the next monitoring camera according to the road segment information monitored by the video segment, the road segment information monitored by the next monitoring camera, the travelling speed and the departure time;
the steps are circulated, new video clips are continuously obtained, and if the target to be queried leaves the monitoring area of the monitoring system, or the monitoring time of the new video clips reaches the current time, or the target to be queried stops moving, or the monitoring time of the new video clips exceeds the retrieval time span, circulation is stopped; after the circulation is stopped, all the acquired video fragments are all the fragments of the target to be queried in the monitoring video.
7. The method of claim 6, wherein the mapping the action trajectory of the object to be queried based on the object video comprises:
acquiring information of each road section monitored in the target video;
marking each road section in a solid line in a map according to the information of each road section;
and according to the advancing direction of the target to be queried in the target video, connecting adjacent road sections marked by solid lines in the map by using a dotted line, and drawing a direction mark.
8. A monitoring intelligent analysis device for managing a monitoring video collected by a monitoring system provided in a target area, comprising:
the acquisition module is used for acquiring the retrieval conditions of the target to be queried; the search conditions include: time conditions, place conditions, feature conditions, and search time spans;
the first retrieval module is used for selecting the monitoring video in the monitoring system according to the time condition and the place condition to obtain a pending video; selecting all undetermined targets meeting the characteristic conditions based on the undetermined video, and obtaining pictures of the undetermined targets;
the reverse retrieval module determines a picture of the target to be queried from the pictures of the target to be determined; determining a video fragment of the target to be queried in a pending video according to the picture of the target to be queried;
The second retrieval module is used for analyzing the behavior characteristics of the target to be queried according to the video clips; combining the behavior characteristics and the distribution condition of monitoring cameras in a monitoring system, and determining all fragments of a target to be queried, which accords with the retrieval time span, in the monitoring video;
the drawing module is used for splicing all the fragments according to the time sequence to generate a target video; and drawing a route map of the action track of the target to be queried based on the target video, and calling a display module to display the route map and the target video.
9. An electronic device, comprising: a memory and a processor;
the memory is used for storing program instructions;
the processor is configured to invoke and execute program instructions in the memory to perform the monitoring intelligent analysis method according to any of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium has a computer program stored therein; the computer program, when executed by a processor, implements the monitoring intelligent analysis method according to any of claims 1-7.
CN202310177753.9A 2023-02-28 2023-02-28 Monitoring intelligent analysis method and device, electronic equipment and storage medium Pending CN116383436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310177753.9A CN116383436A (en) 2023-02-28 2023-02-28 Monitoring intelligent analysis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310177753.9A CN116383436A (en) 2023-02-28 2023-02-28 Monitoring intelligent analysis method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116383436A true CN116383436A (en) 2023-07-04

Family

ID=86962384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310177753.9A Pending CN116383436A (en) 2023-02-28 2023-02-28 Monitoring intelligent analysis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116383436A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116866534A (en) * 2023-09-05 2023-10-10 南京隆精微电子技术有限公司 Processing method and device for digital video monitoring system
CN116866534B (en) * 2023-09-05 2023-11-28 南京隆精微电子技术有限公司 Processing method and device for digital video monitoring system
CN117830911A (en) * 2024-03-06 2024-04-05 一脉通(深圳)智能科技有限公司 Intelligent identification method and device for intelligent camera, electronic equipment and medium
CN117830911B (en) * 2024-03-06 2024-05-28 一脉通(深圳)智能科技有限公司 Intelligent identification method and device for intelligent camera, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN116383436A (en) Monitoring intelligent analysis method and device, electronic equipment and storage medium
KR101398700B1 (en) Annotation system and method for video data
CN109740420A (en) Vehicle illegal recognition methods and Related product
US8639023B2 (en) Method and system for hierarchically matching images of buildings, and computer-readable recording medium
US11657623B2 (en) Traffic information providing method and device, and computer program stored in medium in order to execute method
CN108197619A (en) A kind of localization method based on signboard image, device, equipment and storage medium
US11907291B2 (en) System for integral analysis and management of video data
CN110136091A (en) Image processing method and Related product
CN106331606A (en) Multi-screen display system and method for video detection system
KR20190124436A (en) Method for searching building based on image and apparatus for the same
US9135338B2 (en) Systems and methods for efficient feature based image and video analysis
JP2023129429A (en) Information processing device, information processing method, and program
RU2710308C1 (en) System and method for processing video data from archive
Jiao et al. Traffic behavior recognition from traffic videos under occlusion condition: a Kalman filter approach
CN112766670B (en) Evaluation method and device based on high-precision map data crowdsourcing update system
CN112464757A (en) High-definition video-based target real-time positioning and track reconstruction method
CN115424465B (en) Method and device for constructing parking lot map and storage medium
CN106777078A (en) A kind of video retrieval method and system based on information database
JP7389955B2 (en) Information processing device, information processing method and program
CN104683760A (en) Video processing method and system
CN114677627A (en) Target clue finding method, device, equipment and medium
CN110781797B (en) Labeling method and device and electronic equipment
KR102030352B1 (en) Real time tracking system of vehicle base a cognition vehicles number
CN108320075A (en) A kind of visualized presence monitoring system and method having photo management function
CN113569645A (en) Track generation method, device and system based on image detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination