CN116382813B - Video real-time processing AI engine system for smart city management - Google Patents
- Publication number: CN116382813B
- Application number: CN202310252918.4A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F9/4482: Execution paradigms; procedural
- G06F9/44526: Program loading or initiating; plug-ins, add-ons
- G06F9/4806: Multiprogramming arrangements; task transfer initiation or dispatching
- G06Q50/26: ICT specially adapted for government or public services
- G06V10/82: Image or video recognition or understanding using neural networks
- G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes
- H04N7/18: Closed-circuit television [CCTV] systems
- Y02A30/60: Planning or developing urban green infrastructure
Abstract
The invention discloses a video real-time processing AI engine system for smart city management, comprising: a video stream scheduling module, configured to receive task information and assign scheduling parameters to the application service module according to the task information; an application service module, configured to assemble and initialize a functional plug-in group for video stream processing and an event plug-in group for behavior event analysis according to the scheduling parameters, to process the video stream data, and to output behavior event data to the notification module; and a notification module, configured to transmit the behavior event data to an external receiving end. The AI engine system provided by the invention has a simple architecture, provides rapid assembly capability based on a workflow mode, and is suited to customizing AI engines for complex video stream scenes. It realizes efficient, low-cost, and diversified AI-enabled video stream processing, reduces the cost and lead time of bringing AI-enabled video streams online, and ensures the practicality of the AI intelligent analysis platform.
Description
Technical Field
The invention relates to the technical field of information-based city management, and in particular to a video real-time processing AI engine system for smart city management.
Background
With the development of society, factors such as urban innovation, economic growth, and public safety make the progress of smart city construction increasingly important. In this construction, video surveillance coverage exceeds 90% in many cities, and the number of cameras in many cities runs into the millions. Part of the video streams acquired by these cameras are handled in a "passive monitoring" mode and stored for subsequent manual review; others use an "active monitoring" mode to analyze, in real time, the various behaviors occurring in the video stream. The active monitoring mode requires an AI intelligent analysis platform to analyze the video stream, and the AI engine system is the brain of that platform. Prior-art AI engines have complex architectures and are difficult to customize per scene, and each behavior analysis event requires its own research and development investment, which greatly increases the cost and lead time of bringing the system online.
Disclosure of Invention
To solve the above technical problems, the invention aims to provide a video real-time processing AI engine system for smart city management that can be assembled and run quickly, analyzes behavior events with high accuracy, and is low in cost and versatile in use.
To achieve this technical purpose, the technical solution provided by the invention is as follows:
A video real-time processing AI engine system for smart city management, comprising, connected in sequence:
a video stream scheduling module, configured to receive task information and assign scheduling parameters to the application service module according to the task information;
an application service module, configured to assemble and initialize a functional plug-in group for video stream processing and an event plug-in group for behavior event analysis according to the scheduling parameters, to process the video stream data, and to output behavior event data to the notification module;
a notification module, configured to transmit the behavior event data to an external receiving end.
The scheduling parameters include: camera parameters characterizing basic camera information; task parameters characterizing basic task information; and behavior event target parameters characterizing the behavior event analysis capability requirements.
In some preferred embodiments, the system further comprises a common service module connected to the application service module.
The application service module determines, according to the scheduling parameters, whether cross-video-stream processing is required; if so, it assembles and initializes, for each branch video stream, a branch functional plug-in group and a branch event plug-in group, as well as a summary event plug-in group for summarized behavior event analysis, and starts the common service module.
The common service module is configured to: aggregate the output data of each branch event plug-in group and forward it to the summary event plug-in group.
In some preferred embodiments, the system further comprises a data packet processing module that is connected to the video stream scheduling module, receives video stream data, unpacks it, and returns the unpacked data; and is connected to the notification module, receives behavior event data, packs it, and returns the packed data.
In some preferred embodiments, the application service module has built in a plurality of pull plug-ins supporting different protocols and a plurality of decoding plug-ins supporting different decoding modes.
The application service module is configured to: select an adapted pull plug-in according to the camera parameters and the task parameters to pull the video stream data; select an adapted decoding plug-in to decode the video stream data; assemble the adapted pull plug-in and decoding plug-in into a functional plug-in group; and transmit the video stream data processed by the functional plug-in group to the event plug-in group.
In some preferred embodiments, the application service module has built in:
a process plug-in library, configured to contain a plurality of process plug-ins implementing logic functions;
an endpoint plug-in library, configured to contain a plurality of endpoint plug-ins implementing behavior event alarm functions.
The application service module is configured to: select adapted process plug-ins according to the behavior event target parameters to perform logic analysis on the video stream data, so as to identify the behavior events contained in the video stream; select an adapted endpoint plug-in according to the identified behavior event to raise an alarm for that event; assemble the adapted process plug-ins and the endpoint plug-in, in order, into an event plug-in group; and transmit the behavior event data processed by the event plug-in group to the notification module.
In some preferred embodiments, the process plug-in library further comprises a neural network model plug-in, connected to the functional plug-in group, for identifying targets in the video image; the neural network model plug-in is configured with a plurality of trained image target recognition neural network models built in.
In some preferred embodiments, the process plug-in library also contains tracking plug-ins for extracting the moving or stationary attributes of a target.
In some preferred embodiments, the application service module is configured to: name each process plug-in that is used multiple times in the event plug-in group by a combination of plug-in name and alias, according to its function type, so as to guarantee the uniqueness of each use of the process plug-in.
Advantageous effects
The AI engine system provided by the invention has a simple architecture, provides rapid assembly capability based on a workflow mode, and is suited to customizing AI engines for complex video stream scenes. It realizes efficient, low-cost, and diversified AI-enabled video stream processing, reduces the cost and lead time of bringing AI-enabled video streams online, and ensures the practicality of the AI intelligent analysis platform.
Drawings
FIG. 1 is a schematic diagram of a video real-time processing AI engine system for smart city management in accordance with a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of a functional plug-in group of an application service module in another preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the composition of the process plug-in library and the endpoint plug-in library of an application service module in another preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of the event plug-in group architecture of the AI engine for a suspected smoke-and-fire behavior analysis task in another preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of the event plug-in group of the AI engine for a behavior analysis task of suspected illegal parking for more than 60 seconds in another preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of the common service module and event plug-in groups for a cross-video-stream analysis task in another preferred embodiment of the present invention;
FIG. 7 is a diagram of the naming method for a process plug-in used multiple times in the same task in another preferred embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention more apparent, the present invention will be further described with reference to the accompanying drawings. In the description of the present invention, it should be understood that terms such as "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," and "outer" indicate orientations or positional relationships based on those shown in the drawings, and are used merely to facilitate and simplify the description; they do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Example 1
As shown in fig. 1, the present embodiment discloses a video real-time processing AI engine system for smart city management, which includes:
the video stream scheduling module, configured to receive task information and assign scheduling parameters to the application service module according to the task information, wherein the scheduling parameters include: camera parameters characterizing basic camera information; task parameters characterizing basic task information; and behavior event target parameters characterizing the behavior event analysis capability requirements;
the application service module, configured to assemble and initialize a functional plug-in group for video stream processing and an event plug-in group for behavior event analysis according to the scheduling parameters, to process the video stream data, and to output behavior event data to the notification module;
and the notification module, configured to transmit the behavior event data to an external receiving end.
It should be understood that the task information is determined by a person skilled in the art according to the needs of the smart city AI engine system to be built. Specifically, a person skilled in the art may, through an HTTP client, send a signal to the video stream scheduling module to start video stream behavior event analysis and pass along the task information. The relationship between task information and scheduling parameters is as follows:
For a specific smart city management task, the task information includes: calling a camera of a certain number or address to acquire and process video images of the target area, so as to analyze the specific behavior events contained therein. It should be understood that a behavior event is inferred from a target's behavior (Behavior), where a behavior comprises a number of specific actions (Action). By analyzing and extracting the specific actions of targets in the video images, the behavior represented by those actions can be obtained, and the event that is occurring or has occurred can then be inferred from that behavior.
For example, a prolonged stop in a no-parking area characterizes a target's illegal-parking behavior, from which it can be inferred that an illegal-parking event is occurring. For such a task, the task information includes: calling at least one camera facing the no-parking area to acquire video streams of that area, and analyzing whether any vehicle in the video streams meets the illegal-parking condition. From this task information, the following requirements can be determined:
basic camera information (camera number, camera address, video stream format (file stream, RTSP stream, RTMP stream, etc.)); basic task information (whether I-frame decoding is enabled, decoding mode, frame-skip analysis interval, whether rendering is enabled, the polygonal or mask region of the video stream on which analysis focuses, alarm callback address, etc.); and behavior event analysis capability requirement information (capability name, detection sensitivity, the polygonal region the capability focuses on, target size, alarm reporting period interval, capability policy parameters, etc.).
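The three parameter groups described above can be sketched as plain data structures. This is an illustrative sketch only: the field names, defaults, and types are assumptions for exposition, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical layout of the scheduling parameters; names are illustrative.

@dataclass
class CameraParams:
    camera_id: str
    address: str
    stream_format: str               # e.g. "file", "rtsp", "rtmp"

@dataclass
class TaskParams:
    i_frame_only: bool = False       # whether I-frame decoding is enabled
    decode_mode: str = "cpu"         # "cpu", "hw", or "hybrid"
    skip_frame_interval: int = 0     # frame-skip analysis interval
    enable_render: bool = False
    focus_region: list = field(default_factory=list)  # polygon vertices
    alarm_callback_url: str = ""

@dataclass
class BehaviorEventTarget:
    capability_name: str
    sensitivity: float = 0.5         # detection sensitivity
    focus_region: list = field(default_factory=list)
    target_size: tuple = (0, 0)
    report_interval_s: int = 60      # alarm reporting period interval
    policy: dict = field(default_factory=dict)  # capability policy parameters

@dataclass
class SchedulingParams:
    camera: CameraParams
    task: TaskParams
    events: list                     # list of BehaviorEventTarget

params = SchedulingParams(
    camera=CameraParams("cam-01", "rtsp://10.0.0.5/stream1", "rtsp"),
    task=TaskParams(decode_mode="hw", alarm_callback_url="http://hub/alarm"),
    events=[BehaviorEventTarget("illegal_parking", sensitivity=0.7)],
)
```

A structure of this shape is what the video stream scheduling module would hand to the application service module for plug-in selection.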
The component logic of the AI engine system of the present application is as follows: the AI engine system has a static state and a running state. The static state contains the configurations and plug-ins of all services the engine system depends on, together with all combination and binding relationships among those configurations and plug-ins. The running state refers to the configurations and plug-ins the engine system selectively activates to realize the task objective, i.e., to perform video stream processing and AI capability analysis: the AI engine system selects the relevant plug-in combinations and binding relationships from the static state and initializes the plug-in parameters through configuration files. Realizing the task objective depends on the task information, but the task information is not in a data format the machine and the system can understand directly, so it must be converted; the result of that conversion is the scheduling parameters. The AI engine system performs service configuration and plug-in selection according to the scheduling parameters.
In some preferred embodiments, realizing the target task relies on processing data across video streams: for example, some video streams with a wider field of view are needed to observe a target's behavior state, while other video streams with a narrower field of view and a clear view of the target are needed to collect detail data to complete the behavior event analysis. The prior art, however, only supports AI capability analysis of a single video stream; its support for multiple video streams is inadequate. For this reason, the present embodiment solves the problem of cross-video-stream data processing by introducing a common service module, specifically as follows:
The common service module is connected to the application service module. The application service module determines, according to the scheduling parameters, whether cross-video-stream processing is required; if so, it assembles and initializes, for each branch video stream, a branch functional plug-in group and a branch event plug-in group, as well as a summary event plug-in group for summarized behavior event analysis, and starts the common service module.
The common service module is configured to: aggregate the output data of each branch event plug-in group and forward it to the summary event plug-in group.
In other preferred embodiments, in order to transmit larger data while maintaining structural consistency during transmission, a packet processing module is introduced. The packet processing module is connected to the video stream scheduling module, receives video stream data, unpacks it, and returns the unpacked data; it is also connected to the notification module, receives behavior event data, packs it, and returns the packed data. It should be understood that the specific methods of unpacking and packing data may be selected and adapted by those skilled in the art according to actual needs, and the present invention is not limited in this respect.
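Since the patent leaves the concrete packing method open, a minimal sketch of the pack/unpack pair might use length-prefixed JSON framing, which keeps the data structure consistent across the wire. The framing choice and field names here are assumptions for illustration.

```python
import json
import struct

def pack(event: dict) -> bytes:
    """Serialize a behavior event and prefix it with a 4-byte big-endian length."""
    payload = json.dumps(event).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def unpack(packet: bytes) -> dict:
    """Reverse of pack(): read the length prefix, then decode the JSON payload."""
    (length,) = struct.unpack(">I", packet[:4])
    return json.loads(packet[4:4 + length].decode("utf-8"))

# Round-trip check on an illustrative behavior event record.
event = {"capability": "illegal_parking", "camera": "cam-01", "duration_s": 75}
assert unpack(pack(event)) == event
```

The length prefix lets a receiver split a byte stream back into whole packets, which is why framing of some kind is needed for "larger data" transmission.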
Example 2
This embodiment builds on Embodiment 1 above. As shown in fig. 2, it provides a specific structural example of an application service module.
The application service module has built in a plurality of pull plug-ins supporting different protocols and a plurality of decoding plug-ins supporting different decoding modes.
The application service module is configured to: select an adapted pull plug-in according to the camera parameters and the task parameters to pull the video stream data; select an adapted decoding plug-in to decode the video stream data; assemble the adapted pull plug-in and decoding plug-in into a functional plug-in group; and transmit the video stream data processed by the functional plug-in group to the event plug-in group. In some preferred embodiments, the pull plug-ins include an RTSP protocol stream pull plug-in, an RTMP protocol stream pull plug-in, a file video stream pull plug-in, and the like; the decoding plug-ins include a CPU (software) decoding plug-in, a hardware decoding plug-in, a hybrid soft/hard decoding plug-in, and the like. Clearly, the choice of pull plug-in is tied to the camera parameters, specifically a mapping from the video stream format produced by the camera; the choice of decoding plug-in is tied to the task parameters, specifically the decoding mode. The output data structure of the pull plug-in is consistent with the input data structure of the decoding plug-in.
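The two mappings just described (stream format to pull plug-in, decoding mode to decoding plug-in) can be sketched as simple lookup tables. The registry contents and plug-in names below are assumptions standing in for the real plug-in implementations.

```python
# Illustrative mapping tables; plug-in names are placeholders, not real classes.
PULL_PLUGINS = {
    "rtsp": "RtspPullPlugin",
    "rtmp": "RtmpPullPlugin",
    "file": "FilePullPlugin",
}
DECODE_PLUGINS = {
    "cpu": "CpuDecodePlugin",
    "hw": "HwDecodePlugin",
    "hybrid": "HybridDecodePlugin",
}

def assemble_functional_group(camera_params: dict, task_params: dict) -> list:
    """Select pull plug-in from the camera's stream format and decode plug-in
    from the task's decoding mode; order matters, since the pull plug-in's
    output feeds the decode plug-in's input."""
    pull = PULL_PLUGINS[camera_params["stream_format"]]
    decode = DECODE_PLUGINS[task_params["decode_mode"]]
    return [pull, decode]

group = assemble_functional_group(
    {"stream_format": "rtsp"}, {"decode_mode": "hw"}
)
```

The returned list is the functional plug-in group in pipeline order, ready to be initialized and wired to the event plug-in group.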
Example 3
This embodiment builds on Embodiment 1 above. As shown in fig. 3, it gives a specific structural example of another application service module.
The application service module has built in:
a process plug-in library, configured to contain a plurality of process plug-ins implementing logic functions. The process plug-ins are divided into single-data-source function plug-ins and multi-data-source function plug-ins: the single-data-source process plug-ins include a tag/size filtering plug-in, a position filtering plug-in, a time filtering plug-in, and the like; the multi-data-source process plug-ins include a vehicle structuring plug-in, a personnel structuring plug-in, and the like;
an endpoint plug-in library, configured to contain a plurality of endpoint plug-ins implementing behavior event alarm functions. The endpoint plug-ins include a vehicle structured-alarm plug-in, a personnel structured-alarm plug-in, a target object alarm plug-in, and the like.
The application service module is configured to: select adapted process plug-ins according to the behavior event target parameters to perform logic analysis on the video stream data, so as to identify the behavior events contained in the video stream; select an adapted endpoint plug-in according to the identified behavior event to raise an alarm for that event; assemble the adapted process plug-ins and the endpoint plug-in, in order, into an event plug-in group; and transmit the behavior event data processed by the event plug-in group to the notification module.
A process plug-in must be associated with other plug-ins before and after it, i.e., with other process plug-ins or an endpoint plug-in; an endpoint plug-in, by contrast, is not followed by any other process or endpoint plug-in, and its data is passed directly to the notification module.
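The chaining rule above (process plug-ins in sequence, terminated by exactly one endpoint plug-in) can be sketched as a linear pipeline. The filter functions and record fields below are illustrative assumptions, not the patent's plug-in interfaces.

```python
# Two illustrative process plug-ins: each transforms the target list and
# passes it downstream.
def size_filter(targets):
    """Drop targets whose bounding box is smaller than a minimum area."""
    return [t for t in targets if t["w"] * t["h"] >= 100]

def region_filter(targets):
    """Keep only targets inside the capability's focus region."""
    return [t for t in targets if t["in_region"]]

def alarm_endpoint(targets):
    """Endpoint plug-in: terminates the chain and produces the behavior
    event data handed to the notification module."""
    return {"alarm": len(targets) > 0, "targets": targets}

def run_event_group(process_plugins, endpoint, data):
    """Run the event plug-in group: process plug-ins in order, endpoint last."""
    for plugin in process_plugins:
        data = plugin(data)
    return endpoint(data)

result = run_event_group(
    [size_filter, region_filter],
    alarm_endpoint,
    [{"w": 20, "h": 10, "in_region": True}, {"w": 5, "h": 5, "in_region": True}],
)
```

Note that only the endpoint returns alarm-shaped data; every process plug-in both consumes and produces a target list, which is what makes the plug-ins freely composable.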
FIG. 4 shows a schematic diagram of the event plug-in group of an AI engine for a suspected smoke-and-fire behavior analysis task in another preferred embodiment.
In other preferred embodiments, the process plug-in library further comprises tracking plug-ins for extracting the moving or stationary attributes of a target. The tracking plug-ins comprise a static tracking plug-in and a moving tracking plug-in.
The input data structure of the static tracking plug-in is a list of image target information objects; its functional parameters include a target-box matching IoU threshold and the like; and its output data structure is a target information list updated with target IDs and target durations. Its function is to extract target information objects that remain relatively static over a period of time, to be used as input data for subsequent function plug-ins.
The input data structure of the moving tracking plug-in is a list of image target information objects; its functional parameters include a target-loss time threshold and the like; and its output data structure is a target information list carrying target IDs and target durations. Its function is to extract target information objects with a movement attribute, to be used as input data for subsequent plug-ins.
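The core step shared by both tracking plug-ins (match current detections to existing tracks by IoU, carry over the target ID, and grow the duration) can be sketched as follows. The threshold value, the frame-count timebase, and the record shape are assumptions for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, iou_thresh=0.5):
    """Attach IDs and durations: a detection overlapping an existing track
    inherits its ID and extends its duration; otherwise a new ID is issued."""
    out = []
    next_id = max([t["id"] for t in tracks], default=0) + 1
    for det in detections:
        match = next((t for t in tracks if iou(t["box"], det) >= iou_thresh), None)
        if match:
            out.append({"id": match["id"], "box": det, "frames": match["frames"] + 1})
        else:
            out.append({"id": next_id, "box": det, "frames": 1})
            next_id += 1
    return out

# A target that has barely moved keeps its ID and accumulates duration:
tracks = [{"id": 1, "box": (10, 10, 40, 40), "frames": 59}]
updated = update_tracks(tracks, [(12, 11, 40, 40)])
```

A downstream plug-in can then read the accumulated duration off each record, which is exactly what a duration-based behavior event (such as prolonged stopping) needs.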
Further, whether a tracking plug-in is selected is determined by the behavior event target parameters in the scheduling parameters. Generally, a tracking plug-in must be invoked when the task requires a target's behavior to persist for a certain duration: for example, in an urban road illegal-parking monitoring task, a target vehicle is treated as a suspected illegal-parking event only after it is observed to remain stopped in the no-parking area for a certain time. FIG. 5 is a schematic diagram of the structure of the event plug-in group of the AI engine for the behavior analysis task of suspected illegal parking for more than 60 seconds in this preferred embodiment.
Example 4
This embodiment builds on Embodiment 3 above. To better identify and process target information in video images, this embodiment uses a neural network model to recognize image targets, specifically:
The process plug-in library also comprises a neural network model plug-in, connected to the functional plug-in group, for identifying targets in video images; the neural network model plug-in has a plurality of trained image target recognition neural network models built in. Each neural network model comprises a trained network structure and network weights, and the specific workflow is as follows:
determine the required functional attributes of the neural network model according to the behavior event target parameters, such as the selected model name, the path where the model is stored, the model's initialization threshold, and the like;
input data structure: the input data is a BGR image frame;
output data structure: the output data is a list of target information in the image.
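The three workflow steps above can be sketched as a thin plug-in wrapper: configuration from the behavior event target parameters, a BGR frame in, a target list out. The inner `_infer` here is a stub standing in for a real trained network, and all names and values are illustrative assumptions.

```python
class ModelPlugin:
    """Hypothetical wrapper for the neural network model plug-in's workflow."""

    def __init__(self, name, path, threshold):
        # Attributes determined from the behavior event target parameters.
        self.name, self.path, self.threshold = name, path, threshold
        # A real implementation would load the network structure and weights
        # from `path` here.

    def _infer(self, frame_bgr):
        # Stub inference; a trained detector would run on the frame instead.
        return [{"label": "vehicle", "score": 0.91, "box": (4, 4, 8, 8)},
                {"label": "person", "score": 0.30, "box": (0, 0, 2, 2)}]

    def process(self, frame_bgr):
        """Input: BGR image frame; output: target information list,
        filtered by the model's initialization threshold."""
        return [t for t in self._infer(frame_bgr) if t["score"] >= self.threshold]

plugin = ModelPlugin("vehicle_det", "/models/vehicle.onnx", threshold=0.5)
targets = plugin.process(frame_bgr=[[0, 0, 0]])
```

Because the plug-in exposes only the frame-in, target-list-out contract, any trained model with any structure can sit behind it, which is why the embodiment leaves the model choice to the implementer.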
It should be understood that the specific structure and type of the neural network model used in this embodiment can be selected by those skilled in the art according to actual needs, and the present invention is not limited in this respect.
Example 5
This embodiment builds on Embodiment 1 above and gives a specific working example of the common service module.
Some cross-video-stream AI capabilities often require video streams with a wider field of view to observe a target's behavior state and video streams with a narrower field of view and a clear view of the target to acquire detail data; a single video stream cannot complete such AI capability analysis. For example, in a task of capturing the license plate of an overspeeding vehicle in a target area, the vehicle's driving track must be captured for at least one second in a wider view, while license plate recognition requires a narrower view in which the plate is clear; a single video stream cannot satisfy both requirements.
As shown in fig. 6, this embodiment combines the video streams of the individual cameras: event plug-in groups are constructed to identify the driving track and the license plate respectively; the common service module then aggregates the driving-track and license-plate behavior event recognition results and passes them to the vehicle structuring process plug-in, and finally an alarm is raised through the endpoint plug-in. It should be noted that when cross-video-stream processing is required, the behavior event target parameters of the scheduling parameters include: the capability name; master camera task information (detection sensitivity, the polygonal region the capability focuses on, target size, alarm reporting period interval, capability policy parameters); and sub-camera task information (basic sub-camera task information and sub-AI capabilities (capability name, camera address, detection sensitivity, the polygonal region the capability focuses on, target size, alarm reporting period interval, capability policy parameters, etc.)). It is particularly pointed out that sub-AI capabilities do not contain endpoint plug-ins; their data flows into the common service module.
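The aggregation role of the common service module in the overspeed-capture example can be sketched as a merge over branch outputs. The merge key (`obs_id`) and the field names are assumptions introduced for illustration; the patent does not specify how branch results are correlated.

```python
def common_service(branch_outputs):
    """Aggregate the outputs of the branch event plug-in groups that refer
    to the same observation, producing input for the summary plug-in group."""
    merged = {}
    for out in branch_outputs:
        merged.setdefault(out["obs_id"], {}).update(out["data"])
    return list(merged.values())

# Two branches observing the same vehicle: a wide-view track branch and a
# narrow-view plate branch (illustrative values).
branches = [
    {"obs_id": 7, "data": {"track": [(0, 0), (5, 2), (11, 4)], "speed_kmh": 82}},
    {"obs_id": 7, "data": {"plate": "沪A12345"}},
]
summary_input = common_service(branches)
```

The merged record, carrying both the track and the plate, is what flows into the vehicle structuring process plug-in of the summary event plug-in group.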
In this embodiment and other preferred embodiments, as can be seen from fig. 7, the same process plug-in may be used multiple times within one task. In the prior art this typically requires creating differently named projects for the plug-in, which hampers maintenance (modifying the plug-in means modifying several projects at once) and readability (the same function appears under different project names). To guarantee the uniqueness of each process plug-in within the AI engine, an alias mechanism is designed: when a process plug-in is instantiated, it is named by the combination of plug-in name and alias according to its function type within the event plug-in group, and this alias-qualified name is always used as the binding relation when plug-ins are associated with one another.
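The alias mechanism described above can be sketched as a small registry keyed by the alias-qualified name. The class and method names below are assumptions for illustration, not the patent's implementation.

```python
# Minimal sketch of alias-based uniqueness for process plug-ins: one
# plug-in implementation reused several times in a task is registered
# under "<plugin_name>.<alias>", so each instance stays unique without
# maintaining duplicate projects. All names here are hypothetical.

class ProcessPluginRegistry:
    def __init__(self):
        self._instances = {}

    def instantiate(self, plugin_name, alias, factory):
        """Create exactly one instance per (plugin_name, alias) pair."""
        key = f"{plugin_name}.{alias}"
        if key in self._instances:
            raise ValueError(f"duplicate process plug-in instance: {key}")
        self._instances[key] = factory()
        return key

    def bind(self, key):
        """Plug-in associations always use the alias-qualified key."""
        return self._instances[key]

registry = ProcessPluginRegistry()
# The same tracker plug-in used twice in one task, distinguished by alias:
k1 = registry.instantiate("tracker", "vehicle_trajectory", dict)
k2 = registry.instantiate("tracker", "plate_follow", dict)
```

Binding by `tracker.vehicle_trajectory` rather than by a per-use project name keeps a single plug-in project as the source of truth while still allowing multiple instances in one task.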
The foregoing has shown and described the basic principles, principal features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and descriptions merely illustrate its principles, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.
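The overall plug-in pipeline of the engine (pull and decode plug-ins assembled into a functional plug-in group, process and endpoint plug-ins into an event plug-in group) can be sketched as follows. This is an illustrative toy, not the patent's implementation; the stand-in plug-ins and their names are assumptions.

```python
# Illustrative sketch of the engine's plug-in pipeline: a functional
# plug-in group (pull + decode) feeds an event plug-in group
# (process + endpoint). The plug-ins here are hypothetical stand-ins.

class Plugin:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, data):
        return self.fn(data)

class PluginGroup:
    """Runs its plug-ins in assembly order, piping output to input."""
    def __init__(self, plugins):
        self.plugins = plugins

    def process(self, data):
        for plugin in self.plugins:
            data = plugin(data)
        return data

# Stand-ins for adapted pull / decode / process / endpoint plug-ins.
pull = Plugin("rtsp_pull", lambda url: {"packets": [url]})
decode = Plugin("h264_decode", lambda s: {"frames": s["packets"]})
process = Plugin("trajectory", lambda s: {"event": "overspeed", **s})
endpoint = Plugin("alarm", lambda s: {"alarm": s["event"]})

functional_group = PluginGroup([pull, decode])   # video stream processing
event_group = PluginGroup([process, endpoint])   # behavior event analysis

frames = functional_group.process("rtsp://camera/stream")
behavior_event = event_group.process(frames)     # -> alarm data for notification
```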
Claims (6)
1. A video real-time processing AI engine system for smart city management, characterized by comprising, connected in sequence:
a video stream scheduling module, used for receiving task information and distributing scheduling parameters to the application service module according to the task information;
an application service module, used for assembling and initializing a functional plug-in group for video stream processing and an event plug-in group for behavior event analysis according to the scheduling parameters, processing the video stream data, and outputting behavior event data to the notification module;
a notification module, used for transmitting the behavior event data to an external receiving end;
wherein the scheduling parameters include: camera parameters characterizing basic camera information; task parameters characterizing basic task information; and behavioral event target parameters characterizing behavior event analysis capability requirements;
the application service module is internally provided with a plurality of pull plug-ins supporting different protocols and a plurality of decoding plug-ins supporting different decoding modes;
the application service module is configured to: select an adapted pull plug-in according to the camera parameters and the task parameters to pull the video stream data; select an adapted decoding plug-in to decode the video stream data; assemble the adapted pull plug-in and decoding plug-in into a functional plug-in group; and transmit the video stream data processed by the functional plug-in group to the event plug-in group;
The application service module is internally provided with:
a process plug-in library containing a plurality of process plug-ins for realizing logic functions;
an endpoint plug-in library containing a plurality of endpoint plug-ins for realizing the behavior event alarm function;
The application service module is configured to: selecting an adaptive process plug-in to perform logic analysis processing on video stream data according to the behavior event target parameters so as to realize identification of behavior events contained in the video stream; selecting an adaptive endpoint plug-in according to the identified behavior event to realize alarming of the behavior event; sequentially assembling the adaptive process plug-ins and the endpoint plug-ins into event plug-in groups; and transmitting the behavior event data processed by the event plug-in group to a notification module.
2. The video real-time processing AI engine system for smart city management of claim 1, further comprising a public service module connected to the application service module;
wherein the application service module determines whether cross-video-stream processing is required according to the scheduling parameters; if so, it assembles and initializes, for each branch video stream, a branch functional plug-in group and a branch event plug-in group, together with a summary event plug-in group for summarized behavior event analysis, and starts the public service module;
the public service module is configured to summarize the output data of each branch event plug-in group and forward it to the summary event plug-in group.
3. The video real-time processing AI engine system for smart city management of claim 1 or 2, further comprising a packet processing module, which is connected with the video stream scheduling module to receive video stream data, unpack it, and return the unpacked data; and is connected with the notification module to receive the behavior event data, pack it, and return the packed data.
4. The video real-time processing AI engine system for smart city management of claim 1, wherein: the process plug-in library further comprises a neural network model plug-in, connected with the functional plug-in group, for identifying targets in video images; the neural network model plug-in is provided with a plurality of trained image target recognition neural network models.
5. The video real-time processing AI engine system for smart city management of claim 1 wherein: the process plug-in library also contains tracking plug-ins for extracting moving or stationary attributes of the target.
6. The video real-time processing AI engine system for smart city management of claim 1, wherein the application service module is configured to: name the same process plug-in, when it is used multiple times, by plug-in name plus alias according to the function type of the process plug-in within the event plug-in group, so as to ensure the uniqueness of the process plug-in's use.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310252918.4A CN116382813B (en) | 2023-03-16 | 2023-03-16 | Video real-time processing AI engine system for smart city management |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116382813A (en) | 2023-07-04
CN116382813B (en) | 2024-04-19
Family
ID=86970406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310252918.4A Active CN116382813B (en) | 2023-03-16 | 2023-03-16 | Video real-time processing AI engine system for smart city management |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116382813B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012047757A1 (en) * | 2010-10-04 | 2012-04-12 | Avocent | System and method for monitoring and managing data center resources in real time incorporating manageability subsystem |
CN109491718A (en) * | 2018-09-13 | 2019-03-19 | 北京米文动力科技有限公司 | A kind of plug-in loading method and equipment |
CN110286892A (en) * | 2019-06-26 | 2019-09-27 | 成都九洲电子信息系统股份有限公司 | A kind of quick exploitation automotive engine system based on business Process Design |
CN111158779A (en) * | 2019-12-24 | 2020-05-15 | 深圳云天励飞技术有限公司 | Data processing method and related equipment |
CN111639859A (en) * | 2020-06-01 | 2020-09-08 | 腾讯科技(深圳)有限公司 | Template generation method and device for artificial intelligence AI solution and storage medium |
CN113516102A (en) * | 2021-08-06 | 2021-10-19 | 上海中通吉网络技术有限公司 | Deep learning parabolic behavior detection method based on video |
CN114187541A (en) * | 2021-10-27 | 2022-03-15 | 福建亿榕信息技术有限公司 | Intelligent video analysis method and storage device for user-defined service scene |
CN114691094A (en) * | 2020-12-31 | 2022-07-01 | 深圳云天励飞技术股份有限公司 | Video structuring system design engine, method, computer device and medium |
CN114691112A (en) * | 2020-12-29 | 2022-07-01 | 网联清算有限公司 | Data processing method and device and data processing server |
CN115576677A (en) * | 2022-12-08 | 2023-01-06 | 中国科学院空天信息创新研究院 | Task flow scheduling management system and method for rapidly processing batch remote sensing data |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008072249A2 (en) * | 2006-12-15 | 2008-06-19 | Mango D.S.P. Ltd | System, apparatus and method for flexible modular programming for video processors |
US20100118147A1 (en) * | 2008-11-11 | 2010-05-13 | Honeywell International Inc. | Methods and apparatus for adaptively streaming video data based on a triggering event |
US20220327006A1 (en) * | 2021-04-09 | 2022-10-13 | Nb Ventures, Inc. Dba Gep | Process orchestration in enterprise application of codeless platform |
- 2023-03-16: application CN202310252918.4A granted as patent CN116382813B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN116382813A (en) | 2023-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2688296B1 (en) | Video monitoring system and method | |
AU2009243916B2 (en) | A system and method for electronic surveillance | |
CN112969049B (en) | Intelligent detection system for ship violation behaviors | |
CN101702771B (en) | Network video intelligent monitoring system and method | |
CN100551050C (en) | Video monitoring system based on the built-in smart video processing device of serial ports | |
US10990840B2 (en) | Configuring data pipelines with image understanding | |
CN109656792A (en) | Applied performance analysis method, apparatus, computer equipment and storage medium based on network call log | |
CN109905423B (en) | Intelligent management system | |
US11990031B2 (en) | Network operating center (NOC) workspace interoperability | |
CN114679607B (en) | Video frame rate control method and device, electronic equipment and storage medium | |
CN111090773B (en) | Digital retina system structure and software architecture method and system | |
CN103167265A (en) | Video processing method and video processing system based on intelligent image identification | |
CN112632637A (en) | Tamper-proof evidence obtaining method, system, device, storage medium and electronic equipment | |
CN108391092A (en) | Danger identifying system based on deep learning | |
US11532158B2 (en) | Methods and systems for customized image and video analysis | |
CN115729683A (en) | Task processing method, device, system, computer equipment and storage medium | |
CN116382813B (en) | Video real-time processing AI engine system for smart city management | |
US20140068777A1 (en) | Method and system for detecting anamolies within voluminous private data | |
CN103383814A (en) | Method for capturing violation of regulations | |
CN113660540B (en) | Image information processing method, system, display method, device and storage medium | |
US11770538B2 (en) | Method for providing prunable video | |
TW202303399A (en) | Equipment linkage method, equipment and computer-readable storage medium | |
CN116132623A (en) | Intelligent analysis method, system and equipment based on video monitoring | |
CN112347996A (en) | Scene state judgment method, device, equipment and storage medium | |
CN112202786B (en) | Illegal data identification method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||