CN116778370A - Event processing method, device, equipment, storage medium and program product

Event processing method, device, equipment, storage medium and program product

Info

Publication number
CN116778370A
CN116778370A (application CN202210220854.5A)
Authority
CN
China
Prior art keywords
event
information
scene
service scene
real
Prior art date
Legal status
Pending
Application number
CN202210220854.5A
Other languages
Chinese (zh)
Inventor
Li Bin (李斌)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210220854.5A
Publication of CN116778370A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs

Abstract

The application provides an event processing method, device, equipment, storage medium and program product; the embodiments of the application can be applied to various scenes such as cloud technology, artificial intelligence, intelligent traffic and vehicle-mounted scenes, and relate to cloud computing; the method comprises the following steps: receiving a real-time video stream uploaded by an image acquisition device arranged in a service scene; carrying out event identification on the real-time video stream through an event identification model issued by a cloud device, and determining monitoring information corresponding to the service scene, the monitoring information indicating whether a target event occurs in the service scene; when the monitoring information indicates that a target event occurs in the service scene, generating alarm control information in response to the target event, and sending the alarm control information to an alarm device arranged in the service scene; and capturing an event picture corresponding to the target event from the real-time video stream, and uploading the event picture and event information corresponding to the target event to the cloud device. The application can improve event processing efficiency.

Description

Event processing method, device, equipment, storage medium and program product
Technical Field
The present application relates to cloud computing technologies, and in particular, to an event processing method, apparatus, device, storage medium, and program product.
Background
Event processing means monitoring various data in a service scene, determining whether an event that requires handling has occurred, and performing corresponding processing when it is determined that such an event has occurred, for example, whether the production environment in a production workshop requires an alarm, or whether an alarm should be raised for the road conditions of a traveling vehicle.
In the related art, event processing is generally implemented by performing video recognition on a cloud device. However, because the video must first be transmitted to the cloud, video recognition incurs a certain network delay, so that event processing is inefficient.
Disclosure of Invention
The embodiment of the application provides an event processing method, an event processing device, event processing equipment, a computer readable storage medium and a program product, which can improve the event processing efficiency.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an event processing method, which is executed by edge equipment and comprises the following steps:
receiving a real-time video stream uploaded by image acquisition equipment arranged in a service scene;
carrying out event identification on the real-time video stream through an event identification model issued by a cloud device, and determining monitoring information corresponding to the service scene; the monitoring information indicates whether a target event occurs in the service scene;
when the monitoring information indicates that the target event occurs in the service scene, generating alarm control information in response to the target event, and sending the alarm control information to an alarm device arranged in the service scene; the alarm control information is used for controlling the alarm device to raise an alarm for the target event;
and capturing an event picture corresponding to the target event from the real-time video stream, and uploading the event picture and event information corresponding to the target event to the cloud device.
The embodiment of the application provides an event processing method, which is executed by cloud equipment and comprises the following steps:
receiving an event picture corresponding to a target event reported by an edge device and event information corresponding to the target event; the event picture is captured from the real-time video stream by the edge device, and the target event is determined by the edge device by identifying, through an event identification model issued by the cloud device, the real-time video stream uploaded by the image acquisition device;
Based on the event picture and the event information, integrating to obtain scene information corresponding to a service scene; the scene information includes: at least one of an event list of the service scene, event early warning information of the service scene, and detail information of the target event;
receiving an information viewing request sent by client equipment;
and responding to the information viewing request, and transmitting the scene information to the client equipment.
An embodiment of the present application provides an event processing method, which is executed by a client device and includes:
responding to the triggering operation detected on the information viewing identifier of the displayed information viewing interface, generating an information viewing request, and sending the information viewing request to the cloud device;
receiving scene information returned by the cloud device aiming at the information viewing request;
displaying the scene information in an information viewing area of the information viewing interface;
the scene information includes: at least one of an event list of a service scene, event early warning information of the service scene and detail information of a target event.
The embodiment of the application provides an event processing device, which comprises:
The first receiving module is used for receiving the real-time video stream uploaded by the image acquisition equipment arranged in the service scene;
the event identification module is used for carrying out event identification on the real-time video stream through an event identification model issued by the cloud device and determining monitoring information corresponding to the service scene; the monitoring information indicates whether a target event occurs in the service scene;
the alarm control module is used for generating alarm control information in response to the target event when the monitoring information indicates that the target event occurs in the service scene, and sending the alarm control information to an alarm device arranged in the service scene; the alarm control information is used for controlling the alarm device to raise an alarm for the target event;
the picture intercepting module is used for intercepting an event picture corresponding to the target event from the real-time video stream;
the first sending module is used for uploading the event information corresponding to the event picture and the target event to the cloud device.
In some embodiments of the present application, the first receiving module is further configured to receive a node deployment instruction and the trained event recognition model sent by the cloud device; the node deployment instruction is used for instructing the edge device to perform event processing for the service scene;
The first receiving module is further configured to receive a real-time video stream uploaded by the image capturing device in the service scene in response to the node deployment instruction.
In some embodiments of the application, the recognition result includes: the occurrence probability of the target event in the service scene; the first receiving module is further configured to receive a corresponding reference judgment threshold value of the target event issued by the cloud device;
the event recognition module is further configured to perform event recognition on the real-time video stream through the event recognition model, so as to obtain the occurrence probability of the target event in the service scene; and determining the monitoring information corresponding to the service scene according to the magnitude relation between the occurrence probability and the reference judgment threshold.
In some embodiments of the present application, the first sending module is further configured to upload the real-time video stream and the event information corresponding to the target event to the cloud device.
In some embodiments of the present application, the first sending module is further configured to write the event frame into a local folder, and write the event information corresponding to the target event into an information database; the information database is queried regularly through a message channel service to obtain the event information, and the event picture is searched in the local folder according to the event information; and publishing the event picture and the event information to a message queue through the message channel service so that the cloud device can acquire the event picture and the event information from the message queue.
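A minimal sketch of this reporting path is given below; the file and table names (an event_frames/ folder, an events table) and a Redis list standing in for the message queue are hypothetical, since the application does not specify the message channel service at this level of detail.

```python
import json
import sqlite3
import time
from pathlib import Path

import redis  # assumed broker client; any message queue would serve

FRAME_DIR = Path("event_frames")   # hypothetical local folder for event pictures
DB_PATH = "event_info.db"          # hypothetical information database
QUEUE_KEY = "event_queue"          # hypothetical message queue key the cloud consumes

def poll_and_publish(interval_s: float = 5.0) -> None:
    """Periodically query the information database, look up the matching
    event picture in the local folder, and publish both to the queue."""
    broker = redis.Redis()
    db = sqlite3.connect(DB_PATH)
    while True:
        rows = db.execute(
            "SELECT event_id, info_json FROM events WHERE published = 0"
        ).fetchall()
        for event_id, info_json in rows:
            frame_path = FRAME_DIR / f"{event_id}.jpg"  # picture found via event info
            if not frame_path.exists():
                continue  # picture not written yet; retry on the next poll
            message = {
                "event_info": json.loads(info_json),
                "event_picture": frame_path.read_bytes().hex(),
            }
            broker.rpush(QUEUE_KEY, json.dumps(message))  # cloud device reads this key
            db.execute(
                "UPDATE events SET published = 1 WHERE event_id = ?", (event_id,)
            )
        db.commit()
        time.sleep(interval_s)
```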
The embodiment of the application provides an event processing device, which comprises:
the second receiving module is used for receiving an event picture corresponding to a target event reported by the edge device and event information corresponding to the target event; the event picture is captured from the real-time video stream by the edge device, and the target event is determined by the edge device by identifying, through an event identification model issued by the cloud device, the real-time video stream uploaded by the image acquisition device;
the information integration module is used for integrating scene information corresponding to a service scene based on the event picture and the event information; the scene information includes: at least one of an event list of the service scene, event early warning information of the service scene, and detail information of the target event;
the second receiving module is further used for receiving an information viewing request sent by the client device;
and the second sending module is used for responding to the information viewing request and sending the scene information to the client equipment.
In some embodiments of the application, the event processing apparatus further comprises: the system comprises a data acquisition module, a model training module and a node configuration module;
The data acquisition module is used for acquiring an initial recognition model and training data;
the model training module is used for carrying out event recognition on the training data by using the initial recognition model to obtain an initial recognition result, and continuously adjusting the parameters of the initial recognition model according to a loss value between the initial recognition result and the label information corresponding to the training data until a training end condition is reached, thereby obtaining the event recognition model, as sketched after this module list;
the node configuration module is used for generating a node deployment instruction for the edge device; the node deployment instruction is used for instructing the edge device to perform event processing for the service scene;
the second sending module is further configured to send the node deployment instruction and the event identification model to the edge device.
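As a concrete illustration of the model training module's loop above (compute a recognition result, take the loss against the label information, adjust parameters until a training end condition is reached), the following PyTorch-style sketch may help; the optimizer, loss function and stopping threshold are illustrative assumptions not specified by the application.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_event_recognition_model(
    model: nn.Module,
    train_loader: DataLoader,
    max_epochs: int = 50,
    target_loss: float = 0.01,  # assumed training-end condition
) -> nn.Module:
    """Adjust model parameters from the loss between the recognition
    result and the label information of the training data."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
    criterion = nn.CrossEntropyLoss()  # assumed loss; the application does not fix one
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for frames, labels in train_loader:  # training data and label information
            optimizer.zero_grad()
            result = model(frames)           # initial recognition result
            loss = criterion(result, labels)
            loss.backward()                  # propagate loss to adjust parameters
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(train_loader) < target_loss:
            break                            # training-end condition reached
    return model
```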
In some embodiments of the present application, the node configuration module is further configured to determine, for the target event, a corresponding reference judgment threshold; the reference judgment threshold value is used for determining whether the business scene has the target event or not;
the second sending module is further configured to send the reference judgment threshold to the edge device.
In some embodiments of the present application, the second receiving module is further configured to receive a real-time video stream uploaded by the edge device, and the event information corresponding to the target event;
the information integration module is further configured to integrate the scene information corresponding to the service scene based on the real-time video stream and the event information.
The embodiment of the application provides an event processing device, which comprises:
the operation response module is used for responding to the triggering operation detected on the information viewing identifier of the displayed information viewing interface and generating an information viewing request;
the third sending module is used for sending the information viewing request to the cloud device;
the third receiving module is used for receiving scene information returned by the cloud device aiming at the information viewing request;
the information display module is used for displaying the scene information in an information viewing area of the information viewing interface; the scene information includes: at least one of an event list of a service scene, event early warning information of the service scene and detail information of a target event.
The embodiment of the application provides an event processing device, which comprises:
A first memory for storing executable instructions;
the first processor is configured to implement the event processing method at the edge device side according to the embodiment of the present application when executing the executable instructions stored in the first memory.
The embodiment of the application provides an event processing device, which comprises:
a second memory for storing executable instructions;
and the second processor is used for realizing the event processing method at the cloud equipment side when executing the executable instructions stored in the second memory.
The embodiment of the application provides an event processing device, which comprises:
a third memory for storing executable instructions;
and the third processor is used for realizing the event processing method at the client equipment side when executing the executable instructions stored in the third memory.
The embodiment of the application provides a computer readable storage medium, which stores executable instructions for realizing the event processing method at the edge device side provided by the embodiment of the application when a first processor is caused to execute, or for realizing the event processing method at the cloud device side provided by the embodiment of the application when a second processor is caused to execute, or for realizing the event processing method at the client device side provided by the embodiment of the application when a third processor is caused to execute.
The embodiment of the application provides a computer program product, which comprises a computer program or an instruction, wherein the computer program or the instruction realizes an event processing method at an edge device side when being executed by a first processor, or realizes an event processing method at a cloud device side when being executed by a second processor, or realizes an event processing method at a client device side when being executed by a third processor.
The embodiment of the application has the following beneficial effects: the edge device receives the real-time video stream uploaded by the image acquisition device and obtains the capability of processing the real-time video stream through the event recognition model issued by the cloud device, so that event recognition is performed on the real-time video stream immediately. This reduces the network delay caused by video transmission, so that monitoring information can be obtained in a short time. When the monitoring information indicates that a target event occurs in the service scene, the edge device can directly control the alarm device in the service scene to raise an alarm for the target event, and finally reports the related information of the target event directly to the cloud device, without spending time transmitting the video to the cloud device for event recognition and response. The efficiency of event recognition and response in the service scene is thereby improved, and so is the efficiency of event processing.
Drawings
FIG. 1 is a schematic diagram of an event processing system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an edge device implemented as an intelligent gateway according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cloud device implemented as a server according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a client device according to an embodiment of the present application when the client device is implemented as a terminal;
FIG. 5 is a schematic flow chart of an event processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an information viewing interface provided by an embodiment of the present application;
fig. 7 is a schematic illustration showing scene information provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of another process of event processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a system for monitoring a production plant provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a node management platform provided by an embodiment of the present application;
fig. 11 is a schematic diagram of uplink and downlink data transmission according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present application more apparent. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second" and "third" are merely used to distinguish similar objects and do not denote a particular ordering of the objects; it is understood that, where permitted, "first", "second" and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) The Internet of Things (IoT) refers to collecting, in real time and through various devices and technologies such as information sensors, radio frequency identification, global positioning systems, infrared sensors and laser scanners, any object or process that needs to be monitored, connected or interacted with. All kinds of needed information, such as sound, light, heat, electricity, mechanics, chemistry, biology and position, is collected, and the connection between things and between things and people is realized through all possible network accesses, so as to achieve intelligent sensing, identification and management of objects and processes.
2) The Intelligent Transportation System (ITS), also called intelligent traffic system, applies advanced technologies (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence, etc.) to transportation, service control and vehicle manufacturing in an effective and comprehensive manner, and strengthens the connection among vehicles, roads and users, thereby forming a comprehensive transportation system that is safe, efficient, environment-improving and energy-saving.
3) The Intelligent Vehicle Infrastructure Cooperative System (IVICS), called the vehicle-road cooperative system for short, is one development direction of the Intelligent Transportation System (ITS). The vehicle-road cooperative system adopts advanced wireless communication, new-generation internet and other technologies, carries out dynamic real-time vehicle-to-vehicle and vehicle-to-road information interaction in all directions, and develops active vehicle safety control and road cooperative management on the basis of full time-space dynamic traffic information acquisition and fusion, thereby fully realizing effective cooperation of people, vehicles and roads, ensuring traffic safety and improving traffic efficiency, and thus forming a safe, efficient and environment-friendly road traffic system.
4) Cloud Computing (Cloud Computing) refers to the delivery and usage model of an IT infrastructure, meaning that the required resources are obtained in an on-demand, scalable manner over a network. Generalized cloud computing refers to the delivery and usage patterns of services, meaning on-demand services over a network. Such services may be IT, software, internet related, or other services. Cloud Computing is a product of fusion of traditional computer and network technology developments such as Grid Computing (Grid Computing), distributed Computing (Distributed Computing), parallel Computing (Parallel Computing), utility Computing (Utility Computing), grid storage (Network Storage Technologies), virtualization (Virtualization), load balancing (Load balancing), and the like.
5) Cloud computing platforms refer to platforms that provide computing, networking, and storage capabilities based on services of hardware resources and software resources. The cloud computing platform has very powerful computing function and can provide various services facing business scenes.
6) Edge computing: the cloud computing platform pushes the processing of data, the running of applications and even the realization of some functions down to nodes at the network edge, that is, computing and analysis are performed locally on the side close to the terminal, so that computation has better real-time performance.
The edge calculation can be applied to the application of the Internet of things, the intelligent traffic system, the intelligent vehicle road system and the like. Taking unmanned as an example, the number of sensors on the vehicle is large, and data can be continuously collected and uploaded to the cloud computing platform when the vehicle runs. However, some data of the vehicle needs to be responded in real time or filtered in real time, if the data is transmitted to the cloud computing platform and the computing result is fed back by the cloud computing platform, higher time delay may be caused, and the time delay may be reduced by using edge computing.
7) An edge computing platform for creating, on local computing hardware, local edge computing nodes that can connect terminals (e.g., ioT devices) and can forward, store, and analyze data of the terminals, and extending computing power of cloud computing platforms such as cloud storage, big data, artificial intelligence, security, etc., to a platform of edge nodes closest to the data source.
8) The edge gateway is an intelligent gateway with edge computing capability, and can share computing resources deployed on the cloud computing platform so as to realize data processing, real-time response and the like of the terminal.
9) Artificial intelligence (Artificial Intelligence, AI) is a new scientific technology to study, develop theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. Artificial intelligence has many fields of application, including visual recognition based on artificial intelligence, data analysis, and the like.
10) Visual recognition based on artificial intelligence: relying on video acquisition, video images are captured, processed by an artificial intelligence visual image algorithm, and the service is handled according to the processing result.
11) Audible and visual alarm (Audible and Visual Alarm): a device that emits audible and visual alarm signals simultaneously. Audible and visual alarms are widely used, for example in industrial production settings such as machine production workshops, and in transportation settings such as signal lights.
12) In response to: indicates the condition or state on which a performed operation depends; when the dependent condition or state is satisfied, the one or more operations performed may be in real time or with a set delay; unless otherwise specified, there is no limitation on the order in which the operations are performed.
With the development of cloud computing, more and more applications are realized through cloud computing. For example, the internet of things, the intelligent transportation system or the intelligent vehicle-road cooperative system all need to collect data generated by devices such as sensors by the client device, then transmit the data to the cloud device for operation, and respond to a control instruction determined by the cloud device according to the operation to realize a corresponding function, for example, send out an audible and visual alarm to a production workshop or control vehicle braking and the like.
Event processing means monitoring various data in a service scene, determining whether an event that requires handling has occurred, and performing corresponding processing when it is determined that such an event has occurred, for example, whether the production environment in a production workshop requires an alarm, or whether an alarm should be raised for the road conditions of a traveling vehicle.
In the related art, data acquired by devices such as sensors in a service scene is transmitted to an edge device close to the service scene for processing. However, the edge device in the related art does not have the capability of processing video of large data volume and complex content, so video recognition for a service scene is realized in the related art in the following several ways: in one way, the camera sends the collected video to the cloud device, and the cloud device performs video recognition to implement event processing; in another way, a recognition algorithm is integrated in the camera, and the camera both shoots and completes video recognition to implement event processing; in still another way, a server is deployed locally, and the local server performs video recognition on the video acquired by the camera to implement event processing.
However, the recognition algorithm of the camera is single and its expansion capability is poor, which is inconvenient for event processing; meanwhile, the accuracy of event processing is low and the scheme is difficult to extend to other scenes. When a local server is used for video recognition, the server performance and the reporting service are uncontrollable, and updating, operation and maintenance of the recognition algorithm are inconvenient. Therefore, in the related art, event processing is generally realized by performing video recognition on a cloud device.
However, when event processing is implemented by video recognition on the cloud device, because the video must first be transmitted, video recognition incurs a certain network delay, so that event processing is inefficient.
The embodiment of the application provides an event processing method, an event processing device, event processing equipment and a computer readable storage medium, which can improve the event processing efficiency. In the following, exemplary applications of the edge device, the cloud device, and the client device provided by the embodiments of the present application are described, where the edge device provided by the embodiments of the present application may be implemented as an intelligent gateway, the client device may be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and the cloud device may be implemented as a server. In the following, an exemplary application of the event processing system will be described.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of an event processing system according to an embodiment of the present application. To support an event processing application, in the event processing system 100, the intelligent gateway 400 (edge device) is connected to the server 200 (cloud device) through the network 300, and the server 200 is connected to the terminal 500 (client device) through the network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
The intelligent gateway 400 is used for receiving the real-time video stream uploaded by the camera 400-1 (image acquisition device) in the service scene; performing event identification on the real-time video stream through an event identification model issued by the server 200, and determining monitoring information corresponding to the service scene, the monitoring information indicating whether a target event occurs in the service scene; when the monitoring information indicates that a target event occurs in the service scene, generating alarm control information in response to the target event, and sending the alarm control information to the alarm device 400-2 arranged in the service scene, the alarm control information being used for controlling the alarm device to raise an alarm for the target event; and capturing an event picture corresponding to the target event from the real-time video stream, and uploading the event picture and event information corresponding to the target event to the server 200.
The server 200 is configured to receive an event picture corresponding to a target event reported by the intelligent gateway 400 and event information corresponding to the target event; the event picture is captured from the real-time video stream by the intelligent gateway 400, and the target event is determined by the intelligent gateway 400 by identifying, through the event identification model issued by the server 200, the real-time video stream uploaded by the camera 400-1; based on the event picture and the event information, scene information corresponding to the service scene is obtained by integration; the scene information includes: at least one of an event list of the service scene, event early-warning information of the service scene, and detail information of the target event; and in response to an information viewing request sent by the terminal 500, the scene information is issued to the terminal 500.
The terminal 500 is configured to generate an information viewing request in response to a triggering operation detected on an information viewing identifier of the information viewing interface presented in the graphical interface 510; send the information viewing request to the server 200; receive the scene information returned by the server 200 for the information viewing request, and display the scene information in the information viewing area of the information viewing interface.
In some embodiments, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 500 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart home appliance, a vehicle-mounted terminal, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an edge device implemented as an intelligent gateway according to an embodiment of the present application, and the intelligent gateway 400 shown in fig. 2 includes: at least one first processor 410, a first memory 450, at least one first network interface 420, and a first user interface 430. The various components in the intelligent gateway 400 are coupled together by a first bus system 440. It is appreciated that the first bus system 440 is used to enable connected communication between these components. The first bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as first bus system 440 in fig. 2.
The first processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, where the general purpose processor may be a microprocessor or any conventional processor.
The first user interface 430 includes one or more first output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of media content. The first user interface 430 also includes one or more first input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The first memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. The first memory 450 optionally includes one or more storage devices physically remote from the first processor 410.
The first memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The first memory 450 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, the first memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
A first operating system 451 including system programs, such as a framework layer, a core library layer, a driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
A first network communication module 452 for reaching other computing devices via one or more (wired or wireless) first network interfaces 420, the exemplary first network interface 420 comprising: bluetooth, wireless compatibility authentication (Wi-Fi), universal serial bus (USB, universal Serial Bus), and the like;
a first presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more first output devices 431 (e.g., a display screen, a speaker, etc.) associated with the first user interface 430;
a first input processing module 454 for detecting one or more user inputs or interactions from one of the one or more first input devices 432 and translating the detected inputs or interactions.
In some embodiments, the event processing device provided in the embodiments of the present application may be implemented in software, and fig. 2 shows the event processing device 455 stored in the first memory 450, which may be software in the form of a program and a plug-in, and includes the following software modules: the first receiving module 4551, the event recognition module 4552, the alarm control module 4553, the screen capturing module 4554 and the first transmitting module 4555 are logical, so that any combination or further splitting may be performed according to the implemented functions. The functions of the respective modules will be described hereinafter.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a cloud device implemented as a server according to an embodiment of the present application, and the server 200 shown in fig. 3 includes: at least one second processor 210, a second memory 250, at least one second network interface 220, and a second user interface 230. The various components in server 200 are coupled together by a second bus system 240. It is appreciated that the second bus system 240 is used to enable connected communications between these components. The second bus system 240 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 3 as the second bus system 240.
The second processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, where the general purpose processor may be a microprocessor or any conventional processor.
The second user interface 230 includes one or more second output devices 231, including one or more speakers and/or one or more visual displays, that enable presentation of media content. The second user interface 230 also includes one or more second input devices 232 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The second memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. The second memory 250 optionally includes one or more storage devices physically remote from the second processor 210.
The second memory 250 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The second memory 250 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, the second memory 250 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
A second operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
A second network communication module 252 for reaching other computing devices via one or more (wired or wireless) second network interfaces 220, the exemplary second network interface 220 comprising: bluetooth, wireless compatibility authentication (Wi-Fi), universal serial bus (USB, universal Serial Bus), and the like;
a second rendering module 253 for enabling the rendering of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more second output devices 231 (e.g., a display screen, a speaker, etc.) associated with the second user interface 230;
a second input processing module 254 for detecting one or more user inputs or interactions from one of the one or more second input devices 232 and translating the detected inputs or interactions.
In some embodiments, the event processing device provided in the embodiments of the present application may be implemented in software, and fig. 3 shows the event processing device 255 stored in the second memory 250, which may be software in the form of a program and a plug-in, and includes the following software modules: the second receiving module 2551, the information integrating module 2552, the second transmitting module 2553, the data obtaining module 2554, the model training module 2555 and the node configuration module 2556 are logical, and thus may be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be described hereinafter.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a client device according to an embodiment of the present application when implemented as a terminal, and the terminal 500 shown in fig. 4 includes: at least one third processor 510, a third memory 550, at least one third network interface 520, and a third user interface 530. The various components in terminal 500 are coupled together by a third bus system 540. It is appreciated that the third bus system 540 is used to enable connected communications between these components. The third bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled as a third bus system 540 in fig. 4.
The third processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, where the general purpose processor may be a microprocessor or any conventional processor.
The third user interface 530 includes one or more third output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The third user interface 530 also includes one or more third input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The third memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. The third memory 550 may optionally include one or more storage devices physically remote from the third processor 510.
The third memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The third memory 550 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, the third memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
A third operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
A third network communication module 552 for reaching other computing devices via one or more (wired or wireless) third network interfaces 520, the exemplary third network interface 520 comprising: bluetooth, wireless compatibility authentication (Wi-Fi), universal serial bus (USB, universal Serial Bus), and the like;
a third rendering module 553 for enabling the rendering of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more third output devices 531 (e.g., a display screen, speakers, etc.) associated with a third user interface 530;
a third input processing module 554 for detecting one or more user inputs or interactions from one of the one or more third input devices 532 and translating the detected inputs or interactions.
In some embodiments, the event processing device provided in the embodiments of the present application may be implemented in software, and fig. 4 shows the event processing device 555 stored in the third memory 550, which may be software in the form of a program, a plug-in, or the like, including the following software modules: the operation response module 5551, the third transmission module 5552, the third reception module 5553, and the information presentation module 5554 are logical, and thus may be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be described hereinafter.
In some embodiments, the intelligent gateway (edge device), the terminal (client device) or the server (cloud device) may implement the event processing method provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; may be a Native Application (APP), i.e. a program that needs to be installed in an operating system to be run, such as an event handling APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The embodiment of the application can be applied to various scenes such as cloud technology, artificial intelligence, intelligent traffic, vehicle-mounted and the like. The event processing method provided by the embodiment of the present application will be described below in connection with exemplary applications and implementations of the edge device, the cloud device, and the client device provided by the embodiment of the present application.
Referring to fig. 5, fig. 5 is a schematic flow chart of an event processing method according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 5.
S101, the edge equipment receives a real-time video stream uploaded by the image acquisition equipment arranged in the service scene.
The embodiment of the application is realized under the condition of carrying out event processing on a business scene, for example, monitoring abnormal events of a production workshop, monitoring traffic flow in an urban road and the like. The image acquisition device is arranged in the service scene, and can acquire pictures of the service scene in real time, so that a real-time video stream of the service scene is obtained, and the real-time video stream is uploaded to the edge device. The edge equipment receives the real-time video stream uploaded by the image acquisition equipment so as to facilitate real-time event processing based on the real-time video stream.
It will be appreciated that the image acquisition device may be a color camera or an infrared camera; it may also be a monocular camera, a binocular camera, a fisheye camera, or the like; the present application is not limited herein.
The image acquisition device may be fixed in a business scenario, for example, a color camera is fixed on a wall surface of a production shop; the image acquisition device can also move in the business scene, for example, the image acquisition device is implemented as a small-sized camera which can be held by hand, and the image acquisition device acquires images of different angles of the business scene along with the movement of a worker holding the image acquisition device, so that a real-time video stream is obtained.
In the embodiment of the application, the edge device is a device between the cloud device and the image acquisition device and the alarm device arranged in the service scene. In other words, the edge device may be equivalent to a gateway node when the image capturing device and the alarm device access the cloud device, that is, the image capturing device and the alarm device are not directly in communication with the cloud device any more, but access the gateway node for processing, and generate a specific response.
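By way of illustration only, the following sketch shows one common way for an edge gateway to ingest a camera's real-time video stream, using OpenCV and a hypothetical RTSP address; the application does not prescribe a particular transport.

```python
import cv2  # OpenCV, a common choice for ingesting camera streams on a gateway

# Hypothetical RTSP address of the image acquisition device in the service scene.
STREAM_URL = "rtsp://192.168.1.64:554/live"

def read_realtime_stream(url: str = STREAM_URL):
    """Yield frames of the real-time video stream as they arrive."""
    capture = cv2.VideoCapture(url)
    if not capture.isOpened():
        raise RuntimeError(f"cannot open stream {url}")
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # stream dropped; a real gateway would reconnect here
                break
            yield frame
    finally:
        capture.release()
```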
S102, the edge device carries out event identification on the real-time video stream through an event identification model issued by the cloud device, and monitoring information corresponding to the service scene is determined.
The edge device can recognize the real-time video stream by calling the event identification model to obtain an identification result, and then determine the monitoring information corresponding to the service scene based on the identification result, so as to determine whether a target event requiring an alarm occurs in the real-time video stream and generate corresponding monitoring information for the service scene. That is, the monitoring information indicates whether a target event occurs in the service scene.
In some embodiments, the event recognition model is generated and trained by the cloud device; of course, in other embodiments, the edge device may also self-train or fine tune the model that is ultimately used to identify the real-time video stream for the event identification model issued by the cloud device.
The event recognition model may be a convolutional neural network (Convolutional Neural Networks, CNN) model, a Long Short-Term Memory artificial neural network (LSTM) model, or the like, and the embodiment of the application is not limited herein.
It can be appreciated that in the embodiment of the present application, because the edge device is closer to the service scene than the cloud device, the network delay required for transmitting the real-time video stream is less, and thus, the time required for event recognition is reduced.
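Combining this step with the reference judgment threshold described earlier, a minimal sketch of deriving the monitoring information from one frame might look as follows; the preprocessing, input size and Keras-style model.predict interface are assumptions, not taken from the application.

```python
import cv2
import numpy as np

def monitor_frame(frame, model, reference_threshold: float) -> dict:
    """Run the event recognition model on one frame and derive the monitoring
    information by comparing the occurrence probability of the target event
    with the reference judgment threshold."""
    # Assumed preprocessing; the real input shape depends on the deployed model.
    batch = np.expand_dims(cv2.resize(frame, (224, 224)) / 255.0, axis=0)
    probability = float(model.predict(batch)[0])  # occurrence probability
    return {
        "occurrence_probability": probability,
        # Monitoring information: whether the target event occurs in the scene.
        "target_event_occurred": probability >= reference_threshold,
    }
```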
S103, when the monitoring information indicates that a target event occurs in the service scene, the edge device responds to the target event, generates alarm control information, and sends the alarm control information to the alarm device arranged in the service scene.
The edge device reads the generated monitoring information, determines that an event requiring alarm or human intervention occurs in the service scene when the monitoring information represents the occurrence of a target event in the service scene, and accordingly responds to the target event, generates alarm control information and sends the alarm control information to alarm devices arranged in the service scene so as to control the alarm devices to alarm against the target event occurring in the service scene. That is, in the embodiment of the present application, the alarm control information is used to control the alarm device to alarm against the target event.
It will be appreciated that the target event may be that the vehicle flow exceeds a flow threshold, that the time the vehicle is stationary on the road exceeds a time threshold, or that an abnormality in production in the production plant has occurred, etc. Specific target events may be set according to actual situations, and embodiments of the present application are not limited herein.
In some embodiments, the alarm device may be an audible and visual alarm, where the alarm control information may be to turn on the audible and visual alarm to alarm a service scenario; in other embodiments, the alarm device may also be an alarm display, where the alarm control information may invoke displaying a preset prompt screen to implement an alarm for a service scenario.
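A minimal sketch of generating and sending the alarm control information is given below, assuming the alarm device accepts JSON over a plain TCP connection at a hypothetical address; the actual control protocol of the audible and visual alarm is not specified by the application.

```python
import json
import socket

ALARM_ADDR = ("192.168.1.50", 9000)  # hypothetical address of the alarm device

def send_alarm_control(event_name: str) -> None:
    """Generate alarm control information for the target event and send it to
    the alarm device arranged in the service scene (transport is assumed)."""
    control_info = {
        "action": "start_alarm",  # e.g. switch on an audible and visual alarm
        "event": event_name,
    }
    with socket.create_connection(ALARM_ADDR, timeout=5) as conn:
        conn.sendall(json.dumps(control_info).encode("utf-8"))
```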
S104, the edge device intercepts an event picture corresponding to the target event from the real-time video stream, and uploads the event picture and event information corresponding to the target event to the cloud device.
After the edge device completes event identification on the real-time video stream, it can capture an event picture corresponding to the target event from the real-time video stream, generate corresponding event information for the target event, and upload the captured event picture and the event information to the cloud device through the network. The cloud device receives the event picture corresponding to the target event reported by the edge device and the event information corresponding to the target event; from the cloud device's perspective, the event picture is captured from the real-time video stream by the edge device, and the target event is determined by the edge device by identifying, through the event identification model issued by the cloud device, the real-time video stream uploaded by the image acquisition device.
It may be understood that the event frame may refer to a video clip that is cut from a real-time video stream, or may be a video frame that is cut from a real-time video stream, which is not limited in this embodiment of the present application.
In some embodiments, the edge device may clip out the video clip between the start time stamp and the end time stamp as an event frame corresponding to the target event according to the start time stamp and the end time stamp of the target event.
In other embodiments, the edge device may further perform image matching on a preset frame corresponding to the target event and each video frame of the real-time video stream, and determine the video frame matched with the preset frame as an event frame of the target event.
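For the timestamp-based embodiment above, a rolling-buffer sketch such as the following could capture the event picture; the buffer window and data layout are illustrative assumptions.

```python
import time
from collections import deque

class EventClipper:
    """Keep a short rolling buffer of the real-time stream and cut out the
    segment between a start timestamp and an end timestamp."""

    def __init__(self, max_seconds: float = 60.0):
        self.buffer: deque = deque()  # (timestamp, frame) pairs
        self.max_seconds = max_seconds

    def push(self, frame) -> None:
        now = time.time()
        self.buffer.append((now, frame))
        while self.buffer and now - self.buffer[0][0] > self.max_seconds:
            self.buffer.popleft()  # drop frames older than the window

    def clip(self, start_ts: float, end_ts: float) -> list:
        """Event picture as the video clip between the two timestamps;
        end_ts - start_ts also gives the duration of the target event."""
        return [f for ts, f in self.buffer if start_ts <= ts <= end_ts]
```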
It should be noted that the event information corresponding to the target event may include information such as the occurrence time and duration of the target event, may include information such as the event type and event name of the target event, and may also include information such as the processing department and responsible person corresponding to the target event.
In some embodiments, when the edge device identifies that a target event occurs in the service scene, it may immediately acquire the current time and determine the current time as the occurrence time of the target event; or it may acquire the start video frame and the end video frame of the target event, obtain the timestamps corresponding to each, thereby yielding a start timestamp and an end timestamp, and then determine the time difference between the start timestamp and the end timestamp as the duration of the target event.
In other embodiments, the edge device may obtain information such as an event category and an event name corresponding to the target event from a type library or a name library of the event, and then find out a processing department and a responsible person corresponding to the target event from a database storing the processing department and the responsible person according to the event category or the event name.
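A minimal sketch of assembling such event information is shown below; the lookup tables, field names, and placeholder values stand in for the type/name library and the department database mentioned above and are assumptions for illustration.

```python
from datetime import datetime

# Illustrative stand-ins for the event type/name library and the
# department/responsible-person database mentioned above.
EVENT_NAMES = {"intrusion": "Unknown personnel entering the workshop"}
RESPONSIBLES = {"intrusion": ("Safety Department", "On-duty supervisor")}

def build_event_info(event_type, start_ts, end_ts):
    """Assemble the event information that is reported together with the event frame."""
    department, person = RESPONSIBLES.get(event_type, ("Unassigned", "Unassigned"))
    return {
        "event_type": event_type,
        "event_name": EVENT_NAMES.get(event_type, event_type),
        "occurrence_time": datetime.fromtimestamp(start_ts).isoformat(),
        "duration_seconds": end_ts - start_ts,  # difference of the two timestamps
        "department": department,
        "responsible_person": person,
    }
```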
S105, the cloud device integrates scene information corresponding to the service scene based on the event picture and the event information.
After receiving the event picture and the event information corresponding to the target event, the cloud device analyzes the situation of the service scene using both. For example, it may classify and count the events that have occurred in the service scene to form an event list; or, by combining the current event picture and event information, predict the time at which the service scene is likely to produce a target event in the future and generate event early-warning information for the service scene according to the predicted time, or predict the occurrence probability of a subsequent event caused by the target event, so as to obtain the event early-warning information; or generate a text description from the event picture and the event information, thereby obtaining the detail information of the target event. It can be seen that, in the embodiment of the present application, the scene information includes at least one of: a list of events occurring in the business scenario, event early-warning information of the business scenario, and detail information of the target event.
It will be appreciated that the event list may include, in addition to the target events identified by the present run, target events identified during the historical time, or other events identified during the historical time. The details of the target event may include time information, cause analysis, development change, and handling measures of the target event, which are not limited herein.
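As a sketch of the integration step, the cloud device might aggregate reported events as follows; the recurrence threshold and warning text are illustrative assumptions, not part of this embodiment.

```python
from collections import Counter

def integrate_scene_info(events):
    """Integrate reported events into scene information: an event list with
    per-type counts, plus naive early-warning text for recurring event types."""
    counts = Counter(e["event_type"] for e in events)
    warnings = [
        f"Event '{etype}' occurred {n} times; subsequent events may follow, "
        "please handle in time."
        for etype, n in counts.items()
        if n >= 3  # illustrative recurrence threshold
    ]
    return {
        "event_list": events,          # list of events occurring in the scene
        "event_counts": dict(counts),  # classification statistics
        "early_warnings": warnings,    # event early-warning information
    }
```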
S106, the client device responds to the triggering operation detected on the information viewing identifier of the displayed information viewing interface, generates an information viewing request and sends the information viewing request to the cloud device.
Scene information generated by the cloud device can be viewed by staff through the client device. When a worker operates the client device, enters the information viewing interface and performs triggering operation on an information viewing identifier of the information viewing interface, the client device can determine that the worker has a requirement for viewing scene information, so that an information viewing request is generated, the information viewing request is sent to the cloud device through a network, and the cloud device receives the information viewing request sent by the client device.
It will be appreciated that the information viewing interface may be entered when a worker manipulates a trigger icon on the main menu of the client device, or when the worker speaks a statement "enter interface to view information".
The triggering operation for the information viewing identifier may be a click operation, a double click operation, a long press operation, or a sliding operation, which is not limited herein. The size and the position of the information viewing identifier can be set according to actual requirements, and the embodiment of the application is not limited herein.
Fig. 6 is a schematic diagram of an information viewing interface according to an embodiment of the present application. In the upper half of the information viewing interface 6-1, basic information 6-11 of the business scenario is shown, such as the number of the production workshop: AXX 6-111, and the geographic location of the production workshop: No. XX, XX Road 6-112. In the middle part of the information viewing interface 6-1, an information viewing identifier 6-12 is set; when a worker clicks the information viewing identifier 6-12, the client device sends an information viewing request to the cloud device.
S107, the cloud device responds to the information viewing request sent by the client device and sends the scene information to the client device.
After receiving the information viewing request sent by the client device, the cloud device acquires the scene information corresponding to the service scene and sends the scene information to the client device through the network. The client device receives the scene information returned by the cloud device for the information viewing request.
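A minimal sketch of serving scene information in response to an information viewing request is shown below, using Flask as an assumed web framework on the cloud device; the route, the in-memory store, and the scene identifiers are hypothetical.

```python
from flask import Flask, jsonify  # illustrative web framework for the cloud device

app = Flask(__name__)

# Hypothetical store of integrated scene information, keyed by service scene ID.
SCENE_INFO = {"workshop-1": {"event_list": [], "early_warnings": []}}

@app.route("/scene_info/<scene_id>")
def get_scene_info(scene_id):
    """Respond to an information viewing request with the scene information."""
    return jsonify(SCENE_INFO.get(scene_id, {}))

if __name__ == "__main__":
    app.run(port=8000)
```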
In other words, in the embodiment of the application, the edge device monitors and responds to target events "nearby", against the real-time video stream of the service scene acquired by the image acquisition device; the edge device then uploads the related information of the target event to the cloud device; the cloud device further processes this information into scene information suitable for staff to view; and finally the scene information is sent to the client device whenever a staff member needs to view it, which is convenient for the staff.
S108, the client device displays the scene information in an information viewing area of the information viewing interface.
After the client device receives the scene information of the service scene, the received scene information is displayed in an information viewing area of the information viewing interface, so that a worker can conveniently view the scene information from the information viewing area.
It will be appreciated that the size and location of the information viewing area may be set according to practical situations, and embodiments of the present application are not limited herein.
Exemplarily, based on fig. 6, referring to fig. 7, fig. 7 is a schematic illustration of scene information provided by an embodiment of the present application. In the lower half of the information viewing interface 6-1, an information viewing area 7-1 is provided. After the worker clicks the information viewing identifier 6-12, the client device displays the returned scene information in the information viewing area 7-1. The scene information includes the cause of the target event: unknown personnel entered the workshop 7-11; and event early-warning information of the business scene: subsequent events that interfere with production may occur, please handle in time 7-12. In this way, the worker can respond quickly.
It can be understood that, in the related art, since the edge device does not have video processing capability, video recognition is mostly performed by the cloud device to realize video processing. In the embodiment of the application, by contrast, the edge device receives the real-time video stream uploaded by the image acquisition device and obtains the capability of processing the real-time video stream through the event recognition model issued by the cloud device, so that it can instantly perform event recognition on the real-time video stream through the event recognition model. This reduces the network delay caused by video transmission, so the monitoring information is obtained in a shorter time. When the monitoring information characterizes that a target event occurs in the service scene, the edge device can directly control the alarm device in the service scene to alarm against the target event, and finally report the related information of the target event directly to the cloud device, without consuming time to transmit the video to the cloud device for event recognition and response. Thus the efficiency of event recognition and response in the service scene is improved, and so is the efficiency of event processing. In addition, in the embodiment of the application, after the edge device uploads the related information of the target event to the cloud device, the cloud device can further process this information to generate scene information that is more convenient to view; when a viewing request sent by the client device is received, the scene information is returned to the worker using the client device, thereby providing the worker with an interface for viewing information about the target event and facilitating its use.
It should be noted that, in other embodiments of the present application, when the monitoring information characterizes that a target event occurs in the service scene, the edge device may instead first intercept the event picture of the target event from the real-time video stream and upload the event picture and the event information corresponding to the target event to the cloud device, and then respond to the target event by generating alarm control information and sending it to the alarm device arranged in the service scene to control the alarm device to alarm.
In addition, when the monitoring information characterizes that the service scene does not have the target event, the edge device can continuously receive the video stream sent by the image acquisition device and continuously perform event identification on the received video stream.
Referring to fig. 8, fig. 8 is another flow chart of the event processing method according to the embodiment of the application. In some embodiments of the present application, before the edge device receives the real-time video stream uploaded by the image capturing device disposed in the service scene, i.e. before S101, the method may further include: S109-S112, as follows:
S109, the cloud device acquires an initial recognition model and training data.
The cloud device acquires an initial recognition model and training data from a storage space of the cloud device or a network. It will be appreciated that the initial recognition model may be an untrained model, i.e., the parameters in the initial recognition model are derived by random initialization. The initial recognition model may also be an unsupervised pre-trained model, and embodiments of the present application are not limited in this regard.
S110, the cloud device performs event recognition on the training data by using the initial recognition model to obtain an initial recognition result, and continuously adjusts parameters of the initial recognition model according to the loss value between the initial recognition result and the label information corresponding to the training data until the training end condition is reached, so as to obtain the event recognition model.
After obtaining the initial recognition model and the training data, the cloud device inputs the training data into the initial recognition model for forward inference; the obtained inference result is the initial recognition result of the training data. The cloud device then calculates a loss value between the initial recognition result and the label information corresponding to the training data, back-propagates the obtained loss value to determine update components for the parameters of the initial recognition model, and adjusts the parameters with these update components, completing one iteration. This process is repeated until the training end condition is reached; the model then held by the cloud device is the event recognition model.
It is to be understood that the training ending condition may be set to reach the threshold number of iterations, for example, 10000 times, or may be set to reach the threshold accuracy, for example, 99.9% or the like, which is not limited herein.
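A minimal sketch of this training loop, written with PyTorch as an assumed framework, is shown below; the loss function, optimizer, and iteration threshold are illustrative choices, not mandated by this embodiment.

```python
import torch
from torch import nn

def train_event_recognition(model, loader, max_iters=10000, lr=1e-3):
    """Adjust the initial recognition model's parameters from the loss between its
    predictions and the label information, until the iteration threshold is reached."""
    criterion = nn.CrossEntropyLoss()                        # loss vs. label information
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    it = 0
    while it < max_iters:  # training-end condition: iteration-count threshold
        for clips, labels in loader:
            logits = model(clips)             # forward inference -> initial result
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()                   # back-propagate to get update components
            optimizer.step()                  # adjust the parameters; one iteration done
            it += 1
            if it >= max_iters:
                break
    return model  # the trained event recognition model
```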
S111, aiming at the edge equipment, the cloud equipment generates a node deployment instruction.
Before the edge device processes the event for the service scene, the edge device is first deployed and authorized by the cloud device. When the cloud device is to deploy the edge device as the edge device of the service scene, a node deployment instruction is generated for the edge device, so that the edge device is instructed to process the event of the service scene through the node deployment instruction. That is, in the embodiment of the present application, the node deployment instruction is used to instruct the edge device to perform event processing for the service scenario.
S112, the cloud device transmits the node deployment instruction and the event identification model to the edge device.
The cloud device sends the node deployment instruction and the event identification model to the edge device through the network so as to realize task deployment and resource deployment of the edge device. The edge equipment receives a node deployment instruction and a trained event recognition model sent by the cloud equipment.
In this case, the receiving, by the edge device, of the real-time video stream uploaded by the image acquisition device set in the service scene, that is, the specific implementation process of S101, may include: S1011, as follows:
S1011, the edge equipment responds to the node deployment instruction and receives the real-time video stream uploaded by the image acquisition equipment arranged in the service scene.
In the embodiment of the application, the cloud device can deploy the edge device as the node for processing events of the service scene and send the event recognition model to the edge device, that is, carry out node deployment and resource sinking, which makes it convenient to perform event processing directly on the edge device and improves the efficiency of event processing.
it should be noted that, in some embodiments of the present application, after the cloud device issues the node deployment instruction and the event recognition model to the edge device, the working state of the edge device may also be monitored, for example, the memory usage rate of the edge device, the current working state, etc., so that an abnormality of the edge device may be found in time.
In other embodiments of the present application, in order not to affect the system configuration of the edge device, a container is created internally for the event processing task. The operating system image of the edge device is loaded into memory through the container, and system configuration is then performed on the operating system image so that its configuration parameters are adapted to the event processing task; that is, the operating system image in the container can execute the event processing task. In this case, the edge device receives the node deployment instruction and the event recognition model sent by the cloud device through the container; within the operating system image running in the container, the edge device responds to the node deployment instruction to receive the real-time video stream uploaded by the image acquisition device, and performs event recognition on the real-time video stream through the event recognition model.
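The following sketch illustrates running the event processing task in a container via the Docker SDK for Python, assuming such a container runtime is available on the edge device; the image name, environment variable, and mount paths are hypothetical.

```python
import docker  # Docker SDK for Python, assumed installed on the edge device

def deploy_event_task_container(model_dir):
    """Run the event processing task in its own container so the edge device's
    system configuration is untouched; names and mounts are hypothetical."""
    client = docker.from_env()
    return client.containers.run(
        "edge-event-processing:latest",                           # hypothetical image
        detach=True,
        environment={"EVENT_MODEL_PATH": "/models/event.onnx"},   # task configuration
        volumes={model_dir: {"bind": "/models", "mode": "ro"}},   # mount the issued model
        restart_policy={"Name": "always"},
    )
```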
In some embodiments of the application, the recognition result includes: the occurrence probability of the target event in the service scene, at this time, the cloud device needs to set a judgment threshold value for the target event in combination with the actual situation of the service scene. In this way, after the cloud device issues the event recognition model to the edge device, the edge device performs event recognition on the real-time video stream through the event recognition model issued by the cloud device, and before determining the monitoring information corresponding to the service scene, that is, after S112 and before S102, the method may further include:
S113, the cloud device determines a corresponding reference judgment threshold value according to the target event.
After the cloud device deploys the edge device as a processing node connected to it, a corresponding judgment threshold needs to be determined according to the target event in the service scene; that is, it must be determined how large the probability value of the target event occurring in the service scene needs to be before the occurrence of the target event is confirmed. Thus, the reference judgment threshold is used to determine whether a target event occurs in the service scene.
It can be appreciated that the reference judgment threshold value can be set by a worker through the client device connected with the cloud device, so that the cloud device can directly acquire the reference judgment threshold value sent by the client device aiming at the target event. The reference judgment threshold may also be obtained by analyzing, by the cloud device, a matching picture of the target event through an artificial intelligence technology, for example, predicting a probability value of the matching picture of the target event, where the predicted probability value is the reference judgment threshold, and the embodiment of the present application is not limited herein.
S114, the cloud device transmits the reference judgment threshold to the edge device.
And the cloud device sends the reference judgment threshold value to the edge device through a network. And the edge equipment receives the corresponding reference judgment threshold value of the target event issued by the cloud equipment.
It is to be understood that the reference judgment threshold may be set manually according to actual requirements, for example, may be set to 0.6, or may be set to 0.8, etc. The reference judgment threshold may also be obtained by performing processes such as averaging or maximum value taking on a historical probability corresponding to the target event in the historical period (the probability identified by the event identification model in the historical period), and the embodiment of the application is not limited in detail herein.
In this case, the edge device performs event recognition on the real-time video stream through the event recognition model issued by the cloud device, and determines the monitoring information corresponding to the service scene, that is, the specific implementation process of S102, will become: S1021-S1022 as follows:
S1021, the edge device carries out event recognition on the real-time video stream through the event recognition model to obtain the occurrence probability of the target event in the service scene.
The edge device inputs the real-time video stream into the event recognition model and performs forward inference on the real-time video stream through the event recognition model, completing event recognition; the probability value obtained by the forward inference is the occurrence probability of the target event in the service scene. For example, the edge device performs recognition of road congestion on the real-time video stream through a congestion recognition model (event recognition model) to determine the probability of road congestion (target event) in a traffic scene; for another example, the edge device performs recognition of entering persons on the real-time video stream through a face recognition model (event recognition model) to determine the probability that a non-staff member enters the entrance gate of an office area (target event).
S1022, the edge device determines monitoring information corresponding to the service scene according to the magnitude relation between the occurrence probability and the reference judgment threshold.
The edge device compares the occurrence probability with the reference judgment threshold value, when the occurrence probability is larger than or equal to the reference judgment threshold value, the edge device determines that the monitoring information represents that the service scene has the target event, and when the occurrence probability is smaller than the reference judgment threshold value, the edge device determines that the monitoring information represents that the service scene has no target event.
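Putting S1021 and S1022 together, the monitoring logic might look like the sketch below; the assumption that the model returns a single scalar occurrence probability is illustrative.

```python
def monitor(frame_batch, model, reference_threshold):
    """Run the event recognition model and compare the occurrence probability
    with the reference judgment threshold issued by the cloud device.

    Assumes the model returns a single scalar occurrence probability.
    """
    probability = float(model(frame_batch))  # forward inference -> occurrence probability
    return {
        "probability": probability,
        # monitoring information: characterizes whether the target event occurs
        "target_event_occurred": probability >= reference_threshold,
    }
```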
According to the embodiment of the application, the cloud device can perform reference judgment threshold configuration for the target event and send the configured reference judgment threshold to the edge device, so that the edge device can judge whether the target event occurs in the service scene in real time according to the reference judgment threshold, and the judgment efficiency of the target event is improved.
In some embodiments of the present application, after the edge device sends the alarm control information to the alarm device set in the service scenario, before the cloud device receives the information viewing request sent by the client device, that is, after S103 and before S106, the method may further include: S115-S116, as follows:
S115, the edge device uploads the real-time video stream and event information corresponding to the target event to the cloud device.
In the embodiment of the application, the edge equipment can upload the complete real-time video stream and event information through a network. And the cloud device receives the real-time video stream uploaded by the edge device and event information corresponding to the target event.
S116, the cloud device integrates scene information corresponding to the service scene based on the real-time video stream and the event information.
The cloud device integrates the real-time video stream and the event information into scene information, so that when an information viewing request sent by the client device is received, the scene information containing the real-time video stream is issued to the client device, and an interface for viewing real-time conditions about service scenes on line is provided for the client device.
In the embodiment of the application, the edge device can upload the complete real-time video stream to the cloud device, so that the cloud device can provide an interface for the client device to view the condition of the service scene online, thereby improving the information quantity of the scene information.
In some embodiments of the present application, uploading the event picture and the event information corresponding to the target event to the cloud device, that is, the specific implementation process of S104, may include: S1041-S1043, as follows:
S1041, the edge device writes the event picture into a local folder, and writes event information corresponding to the target event into an information database.
The information database is a database configured locally to the edge device, so that the edge device stores the event frame and the event information in a local storage space after completing event identification on the real-time video stream and obtaining the event frame and the event information.
S1042, the edge device queries the information database at regular intervals through the message channel service to obtain the event information, and searches for the event picture in the local folder according to the event information.
The edge device obtains the data to be uploaded from the information database at regular intervals through the running message channel service, thereby obtaining the event information, and searches for the corresponding event picture in the local folder according to the serial number or the timestamp of the event information.
S1043, the edge device issues the event picture and the event information to the message queue through the message channel service, so that the cloud device can acquire the event picture and the event information from the message queue.
Then, the edge device continues to issue the event picture and the event information to the message queue through the message channel service, and the subsequent cloud device reads the message queue to obtain the event picture and the event information.
It is to be understood that the message queue may be disposed in another message server or may be disposed in an edge device, and embodiments of the present application are not limited herein.
In the embodiment of the application, the edge device can publish the event picture and the event information to the message queue through the message channel service, so that the cloud device can acquire the event picture and the event information from the message queue, completing the upload of the event picture and the event information.
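A minimal sketch of such a message channel service is shown below, assuming Redis as the information database and an MQTT broker as the message queue (paho-mqtt 1.x constructor style); the key names, topics, and folder layout are hypothetical.

```python
import json
import os
import time

import redis                     # client for the information database
import paho.mqtt.client as mqtt  # client for the message queue (paho-mqtt 1.x style)

r = redis.Redis(host="localhost", port=6379)
mq = mqtt.Client()
mq.connect("localhost", 1883)

EVENT_DIR = "/var/edge/events"   # hypothetical local folder holding event frames

def channel_service_loop(poll_seconds=5):
    """Query the information database at regular intervals for event information,
    look up the matching event frame in the local folder, then publish both."""
    while True:
        raw = r.lpop("pending_events")  # hypothetical key for data to be uploaded
        if raw is not None:
            info = json.loads(raw)
            frame_path = os.path.join(EVENT_DIR, f"{info['timestamp']}.jpg")
            with open(frame_path, "rb") as f:
                mq.publish("edge/event_frames", f.read())    # event frame
            mq.publish("edge/event_info", json.dumps(info))  # event information
        time.sleep(poll_seconds)
```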
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiment of the application is described here in the scenario of monitoring a production workshop, so as to ensure the safety of the production workshop.
Fig. 9 is a schematic diagram of a system for monitoring a production plant according to an embodiment of the present application. Referring to fig. 9, in the system, a cloud end 9-1 is connected through a network to a production end 9-2 and a production end 9-3 respectively provided in different production workshops. In the production end 9-2, an edge node 9-21 (an edge device) is provided; the edge node 9-21 is connected to a camera 9-23 (image acquisition device) via a wired network 9-22 and to a camera 9-25 (image acquisition device) via a wireless network 9-24. In the production end 9-3, an edge node 9-31 (another edge device) is arranged; cameras 9-32, 9-33 and 9-34 (all image acquisition devices) arranged in the production workshop are connected with the edge node 9-31 through a switch 9-35 and an NVR (Network Video Recorder, network hard-disk recorder) 9-36. Meanwhile, the edge node 9-31 is also connected with an audible and visual alarm 9-38 (alarm device) through a gateway 9-37, and with an alarm screen 9-39 (alarm device) through the gateway 9-37.
The edge node 9-31 (also called an edge gateway) is deployed by the node management platform 9-11; it can access videos collected by the various cameras and run inference on them to determine whether an event (target event) that hinders safe production has occurred. When such an event occurs, it controls the opening of the audible and visual alarm 9-38 and the alarm screen 9-39 so as to alarm by means of sound, flashing and the like, and uploads the processed data (event pictures and event information) to the cloud 9-1. It will be appreciated that the edge node 9-31 may access the cameras through the OPC, MQTT and ModBus protocols, and may access the audible and visual alarm 9-38 through the ModBus protocol.
The cloud 9-1 is provided with a node management platform 9-11 (also called an edge computing platform), an Internet of things product support 9-12, a visual detection service 9-13, a video platform 9-14 and other bottom layer functional modules, and a model market 9-15, a safe production early warning 9-16, an Internet of things service platform 9-17, a low-code program 9-18 and other modules for providing services for a mobile terminal and a Web terminal (client equipment).
The node management platform 9-11 is used for expanding cloud computing capabilities of an AI model, storage, security and the like into edge nodes of a production end so that edge computing can process and respond to data of a camera. At the same time, the node management platform 9-11 is also used to deploy edge nodes.
The Internet of things product support 9-12 comprises sub-modules such as device connection 9-121, product management 9-122, device management 9-123, object model 9-124, device authentication 9-125, device debugging 9-126, rules engine 9-127 and data storage 9-128. These sub-modules uniformly connect Internet of things devices and carry out connection management 9-129, so that support for Internet of things products can be realized.
The visual inspection service 9-13 is configured to receive a file picture (event picture) reported by the edge node 9-31, store the file picture in the database 9-132 by using the security data storage 9-131 sub-module, and further process the file picture by using the security data processing 9-133 sub-module to obtain information (scene information) such as an alarm preview, an alarm history list, and the like.
The video platforms 9-14 are used to provide video stream access capability, i.e. to be able to receive video (real-time video stream) of the production plant transmitted by the edge nodes 9-31 and to perform some processing. The video platform 9-14 receives the uploaded video via the video access 9-141 sub-module, and then is processed by the video analysis 9-142 sub-module, for example, re-encoded into a video suitable for online playing, etc., for online playing via the video playing 9-143 sub-module.
The model market 9-15 includes two modules, an algorithm market 9-151 and a data processing 9-152, and is capable of generating models that can perform video inference, so as to deploy the models into the edge nodes 9-21 and 9-31 through the node management platform 9-11.
The safety production early warning 9-16 provides, via the Web side, viewing capabilities for the history alert record 9-161, the alert preview 9-162 and the safety alert large screen 9-163, and sets the thresholds used at model-recognition time (reference judgment thresholds) by using a rule engine and linkage, deploying them into the edge nodes via the node management platform 9-11. Meanwhile, the safety production early warning 9-16 can also be linked with the Internet of things service platform 9-17: when an inference result uploaded by the edge node 9-31 triggers the rule engine, the safety production early warning 9-16 issues a command to the Internet of things service platform 9-17 to control the Internet of things devices connected to the cloud 9-1, for example performing device monitoring 9-171, data acquisition 9-172, report generation 9-173, device popularization 9-174 and the like. And when the edge node 9-31 reports a failure, the audible and visual alarm 9-38 is triggered to notify the staff.
The low-code program 9-18 is used for opening the functions of the device monitoring 9-181, the security alarm 9-182, the alarm history 9-183, the on-line monitoring 9-184 and the like (all can be regarded as scene information) to the mobile terminal according to the request (information viewing request) sent by the mobile terminal.
Next, the node management platform and the process of deploying edge nodes will be described.
Referring to fig. 10, fig. 10 is a schematic diagram of a node management platform according to an embodiment of the present application. The node management platform 10-1 provides the functions of management 10-13, scheduling 10-14, monitoring 10-15 and operation and maintenance 10-16 through the custom application 10-11 and the universal component 10-12. It communicates with the gateway node through the custom component 10-17 (comprising video access 10-171, message distribution 10-172 and protocol conversion 10-173), brings the gateway node under its management, and delivers model resources to the gateway node through container orchestration, turning it into the edge node 10-2, thereby completing node management 10-3. Meanwhile, the node management platform 10-1 can monitor the node state and the application state of the edge node 10-2, so as to carry out resource monitoring 10-4 for the edge node 10-2. The node management platform 10-1 can also download software applications and services from the cloud to the edge node 10-2 to realize application deployment 10-5 on the edge node; and, for a plurality of different edge nodes 10-2, cooperative calls can be made to perform cooperative scheduling 10-6 for scenes with heavier computing requirements. In this way, the edge node 10-2 is able to implement the edge application 10-21, data transmission 10-22, data filtering 10-23, inference 10-24 and device access 10-25 (accessing the camera 10-6, the sensor 10-7, and the like).
When the node management platform deploys the video access component to the edge node, the edge node can access the video stream. After the video stream is accessed, it is visually processed; the processing result is written to Redis and the captured images are uploaded to COS, completing data inference and reporting. Video streams can be accessed in multiple modes such as USB, NVR and switches.
In the cooperation of the edge node and the cloud, uplink and downlink data transmission is involved. Fig. 11 is a schematic diagram of uplink and downlink data transmission according to an embodiment of the present application. For uplink data, the camera 11-1 collects video and transmits it to the visual inference service 11-3 through the video access service 11-2 of the edge node; after inference is finished, the inference result is written into Redis 11-4 (information database), and the screenshot is saved to the local File 11-5 (local folder). The security channel service 11-6 (message channel service) queries 11-7 Redis at regular intervals to obtain the data, queries the alarm file COS 11-8 according to the timestamp, and writes 11-9 the address of the alarm file COS into the message queue 11-10, completing data reporting. The cloud security channel service 11-11 subscribes 11-12 to production-end data, parses the alarm data from the message queue 11-10, and writes the alarm data into the safety production detection service. For downlink data, the cloud sends the data to be delivered to the designated topic of the message queue 11-10 through the cloud security channel service 11-11, that is, distribution and control send 11-13 messages to the message queue 11-10; once an ACK (acknowledgement character) is received, the data has been successfully written into the message queue. The security channel service 11-6 of the edge node subscribes to the control data; after the listener receives the data, the topic subscription data is obtained for business processing, and an ACK is returned at the same time, indicating that the business processing is completed.
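For the downlink direction, a subscriber with acknowledgement might look like the following sketch; the topics and the `handle_control_data` helper are hypothetical.

```python
import paho.mqtt.client as mqtt  # paho-mqtt 1.x constructor style, as above

def handle_control_data(payload):
    """Hypothetical business processing of the delivered control data."""
    print("control data received:", payload)

def on_message(client, userdata, msg):
    """Handle control data from the subscribed topic, then acknowledge."""
    handle_control_data(msg.payload)      # business processing
    client.publish("edge/ack", msg.mid)   # report completion back to the cloud

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("cloud/control")  # the designated topic the cloud writes to
client.loop_forever()
```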
It will be appreciated that, in the embodiments of the present application, related data such as user information (for example, real-time video streams) requires the user's permission or consent when the embodiments of the present application are applied to specific products or technologies, and the collection, use and processing of related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
Continuing with the description below of an exemplary architecture of the event processing device 455 implemented as a software module provided by an embodiment of the present application, in some embodiments, as shown in fig. 2, the software module stored in the event processing device 455 of the first memory 450 may include:
a first receiving module 4551, configured to receive a real-time video stream uploaded by an image capturing device disposed in a service scene;
the event recognition module 4552 is configured to perform event recognition on the real-time video stream through an event recognition model issued by the cloud device, and determine monitoring information corresponding to the service scenario; the monitoring information characterizes whether a target event occurs in the service scene;
the alarm control module 4553 is configured to generate alarm control information in response to the target event when the monitoring information characterizes that the target event occurs in the service scenario, and send the alarm control information to an alarm device disposed in the service scenario; the alarm control information is used for controlling the alarm device to alarm against the target event;
The picture intercepting module 4554 is configured to intercept an event picture corresponding to the target event from the real-time video stream;
and the first sending module 4555 is configured to upload the event frame and event information corresponding to the target event to the cloud device.
In some embodiments of the present application, the first receiving module 4551 is further configured to receive a node deployment instruction and the trained event recognition model sent by the cloud device; the node deployment instruction is used for instructing an edge device to process an event aiming at the service scene;
the first receiving module 4551 is further configured to receive a real-time video stream uploaded by the image capturing device set in the service scenario in response to the node deployment instruction.
In some embodiments of the application, the recognition result includes: the occurrence probability of the target event in the service scene; the first receiving module 4551 is further configured to receive a corresponding reference judgment threshold of the target event issued by the cloud device;
the event recognition module 4552 is further configured to perform event recognition on the real-time video stream through the event recognition model, so as to obtain the occurrence probability of the target event in the service scenario; and determining the monitoring information corresponding to the service scene according to the magnitude relation between the occurrence probability and the reference judgment threshold.
In some embodiments of the present application, the first sending module 4555 is further configured to upload the real-time video stream and the event information corresponding to the target event to the cloud device.
In some embodiments of the present application, the first sending module 4555 is further configured to write the event frame to a local folder, and write the event information corresponding to the target event to an information database; the information database is queried regularly through a message channel service to obtain the event information, and the event picture is searched in the local folder according to the event information; and publishing the event picture and the event information to a message queue through the message channel service so that the cloud device can acquire the event picture and the event information from the message queue.
Continuing with the description below of an exemplary architecture of the event processing device 255 implemented as a software module provided by an embodiment of the present application, in some embodiments, as shown in fig. 3, the software module stored in the event processing device 255 of the second memory 250 may include:
the second receiving module 2551 is configured to receive an event frame corresponding to a target event reported by an edge device, and event information corresponding to the target event; the event picture is intercepted from the real-time video stream by the edge equipment, and the target event is determined by identifying the real-time video stream uploaded by the image acquisition equipment through an event identification model issued by the cloud equipment by the edge equipment;
The information integration module 2552 is configured to integrate, based on the event picture and the event information, scene information corresponding to a service scene; the scene information includes: at least one of an event list of the service scene, event early warning information of the service scene, and detail information of the target event;
the second receiving module 2551 is further configured to receive an information viewing request sent by the client device;
and the second sending module 2553 is configured to send the scene information to the client device in response to the information viewing request.
In some embodiments of the present application, the event processing device 255 further includes: a data acquisition module 2554, a model training module 2555, and a node configuration module 2556;
the data acquisition module 2554 is configured to obtain an initial recognition model and training data;
the model training module 2555 is configured to perform event recognition on the training data by using the initial recognition model to obtain an initial recognition result, and continuously adjust parameters of the initial recognition model according to a loss value between the initial recognition result and tag information corresponding to the training data until reaching a training end condition, to obtain the event recognition model;
The node configuration module 2556 is configured to generate a node deployment instruction for the edge device; the node deployment instruction is used for instructing the edge device to perform event processing for the service scenario;
the second sending module 2553 is further configured to issue the node deployment instruction and the event recognition model to the edge device.
In some embodiments of the present application, the node configuration module 2556 is further configured to determine, for the target event, a corresponding reference judgment threshold; the reference judgment threshold is used for determining whether the target event occurs in the business scene;
the second sending module 2553 is further configured to send the reference judgment threshold to the edge device.
In some embodiments of the present application, the second receiving module 2551 is further configured to receive a real-time video stream uploaded by the edge device, and the event information corresponding to the target event;
the information integration module 2552 is further configured to integrate the scene information corresponding to the service scene based on the real-time video stream and the event information.
Continuing with the description below of an exemplary architecture of the event processing device 555 implemented as a software module provided by an embodiment of the present application, in some embodiments, as shown in fig. 4, the software module stored in the event processing device 555 of the third memory 550 may comprise:
An operation response module 5551, configured to generate an information viewing request in response to a trigger operation detected on the information viewing identifier of the presented information viewing interface;
a third sending module 5552, configured to send the information viewing request to a cloud device;
a third receiving module 5553, configured to receive scene information returned by the cloud device for the information viewing request;
an information display module 5554, configured to display the scene information in an information viewing area of the information viewing interface; the scene information includes: at least one of an event list of the service scene, event early warning information of the service scene, and detail information of the target event.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The first processor of the edge device reads the computer instructions from the computer readable storage medium, and the first processor executes the computer instructions, so that the computer device executes the event processing method at the edge device side provided by the embodiment of the application; the second processor of the cloud device reads the computer instruction from the computer readable storage medium, and the second processor executes the computer instruction, so that the computer device executes the event processing method of the cloud device side provided by the embodiment of the application; the third processor of the client device reads the computer instructions from the computer readable storage medium, and the third processor executes the computer instructions, so that the computer device executes the event processing method at the client device side provided by the embodiment of the application.
The embodiment of the application provides a computer readable storage medium storing executable instructions, wherein the executable instructions are stored, when the executable instructions are executed by a first processor, the first processor is caused to execute the event processing method on the edge device side provided by the embodiment of the application, when the executable instructions are executed by a second processor, the second processor is caused to execute the event processing method on the cloud device side provided by the embodiment of the application, and when the executable instructions are executed by a third processor, the third processor is caused to execute the event processing method on the client device side provided by the embodiment of the application.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disk, or a CD-ROM; it may also be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to execute on one computing device (edge device, cloud device, or client device) or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
In summary, through the embodiment of the application, the edge device receives the real-time video stream uploaded by the image acquisition device and performs event recognition on the real-time video stream itself, which reduces the network delay caused by video transmission, so the monitoring information can be obtained in a shorter time. When the monitoring information characterizes that a target event occurs in the service scene, the edge device can directly control the alarm device in the service scene to alarm against the target event, and finally report the related information of the target event directly to the cloud device, without consuming time to transmit the video to the cloud device for event recognition and response; thereby the efficiency of event recognition and response in the service scene is improved, and so is the efficiency of event processing. In addition, in the embodiment of the application, after the edge device uploads the related information of the target event to the cloud device, the cloud device can further process this information to generate scene information that is more convenient to view; when a viewing request sent by the client device is received, the scene information is returned to the worker using the client device, thereby providing the worker with an interface for viewing information about the target event and facilitating its use.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. An event processing method, wherein the method is performed by an edge device, and comprises:
receiving a real-time video stream uploaded by image acquisition equipment arranged in a service scene;
carrying out event identification on the real-time video stream through an event identification model issued by cloud equipment, and determining monitoring information corresponding to the service scene; the monitoring information characterizes whether a target event occurs in the service scene;
when the monitoring information characterizes that the target event occurs in the service scene, generating alarm control information in response to the target event, and sending the alarm control information to alarm equipment arranged in the service scene; the alarm control information is used for controlling the alarm equipment to alarm against the target event;
and capturing an event picture corresponding to the target event from the real-time video stream, and uploading the event picture and event information corresponding to the target event to the cloud device.
2. The method of claim 1, wherein prior to receiving the real-time video stream uploaded by the image capture device disposed in the business scenario, the method further comprises:
receiving a node deployment instruction and the trained event identification model sent by the cloud device; the node deployment instruction is used for instructing an edge device to process an event aiming at the service scene;
the receiving the real-time video stream uploaded by the image acquisition equipment arranged in the service scene comprises the following steps:
and responding to the node deployment instruction, and receiving a real-time video stream uploaded by the image acquisition equipment arranged in the service scene.
3. The method of claim 2, wherein the recognition result comprises: the occurrence probability of the target event in the service scene; the event recognition model issued by the cloud device performs event recognition on the real-time video stream, and before determining the monitoring information corresponding to the service scene, the method further includes:
receiving a corresponding reference judgment threshold value of the target event issued by the cloud device;
the event recognition model issued by the cloud device performs event recognition on the real-time video stream, and determines monitoring information corresponding to the service scene, including:
Carrying out event recognition on the real-time video stream through the event recognition model to obtain the occurrence probability of the target event in the service scene;
and determining the monitoring information corresponding to the service scene according to the magnitude relation between the occurrence probability and the reference judgment threshold.
4. A method according to any one of claims 1 to 3, wherein after the sending of the alarm control information to the alarm equipment arranged in the service scene, the method further comprises:
and uploading the real-time video stream and the event information corresponding to the target event to the cloud device.
5. A method according to any one of claims 1 to 3, wherein the uploading of the event picture and the event information corresponding to the target event to the cloud device includes:
writing the event picture into a local folder, and writing the event information corresponding to the target event into an information database;
querying the information database at regular intervals through a message channel service to obtain the event information, and searching for the event picture in the local folder according to the event information;
and publishing the event picture and the event information to a message queue through the message channel service, so that the cloud device can acquire the event picture and the event information from the message queue.
6. An event processing method, wherein the method is executed by a cloud device and comprises:
receiving an event picture corresponding to a target event reported by edge equipment and event information corresponding to the target event; the event picture is intercepted from the real-time video stream by the edge equipment, and the target event is determined by identifying the real-time video stream uploaded by the image acquisition equipment through an event identification model issued by the cloud equipment by the edge equipment;
based on the event picture and the event information, integrating to obtain scene information corresponding to a service scene; the scene information includes: at least one of an event list of the service scene, event early warning information of the service scene, and detail information of the target event;
receiving an information viewing request sent by client equipment;
and responding to the information viewing request, and transmitting the scene information to the client equipment.
7. The method according to claim 6, wherein before receiving the event frame corresponding to the target event reported by the edge device and the event information corresponding to the target event, the method further comprises:
acquiring an initial recognition model and training data;
carrying out event recognition on the training data by utilizing the initial recognition model to obtain an initial recognition result, and continuously adjusting parameters of the initial recognition model according to a loss value between the initial recognition result and label information corresponding to the training data until reaching a training ending condition to obtain the event recognition model;
generating a node deployment instruction aiming at the edge equipment; the node deployment instruction is used for indicating the edge equipment to process events aiming at the service scene;
and issuing the node deployment instruction and the event identification model to the edge equipment.
8. The method of claim 7, wherein after the issuing of the node deployment instruction and the event recognition model to the edge device, the method further comprises:
determining a corresponding reference judgment threshold value according to the target event; the reference judgment threshold value is used for determining whether the business scene has the target event or not;
and issuing the reference judgment threshold to the edge device.
9. The method according to any one of claims 6 to 8, wherein prior to receiving the information viewing request sent by the client device, the method further comprises:
receiving a real-time video stream uploaded by the edge equipment and the event information corresponding to the target event;
and integrating the scene information corresponding to the service scene based on the real-time video stream and the event information.
10. A method of event processing, the method performed by a client device, comprising:
responding to the triggering operation detected on the information viewing identifier of the displayed information viewing interface, generating an information viewing request, and sending the information viewing request to the cloud device;
receiving scene information returned by the cloud device aiming at the information viewing request;
displaying the scene information in an information viewing area of the information viewing interface;
the scene information includes: at least one of an event list of a service scene, event early warning information of the service scene and detail information of a target event.
11. An event processing apparatus, characterized in that the event processing apparatus comprises:
a first receiving module, configured to receive a real-time video stream uploaded by an image acquisition device arranged in a service scene;
an event recognition module, configured to perform event recognition on the real-time video stream through an event recognition model issued by the cloud device, and determine monitoring information corresponding to the service scene, the monitoring information characterizing whether a target event occurs in the service scene;
an alarm control module, configured to, when the monitoring information characterizes that the target event occurs in the service scene, generate alarm control information in response to the target event and send the alarm control information to an alarm device arranged in the service scene, the alarm control information being used to control the alarm device to raise an alarm for the target event;
a picture intercepting module, configured to intercept an event picture corresponding to the target event from the real-time video stream;
and a first sending module, configured to upload the event picture and the event information corresponding to the target event to the cloud device.
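Read together, the five modules of claim 11 amount to a per-frame loop on the edge device. A sketch assuming OpenCV for stream capture follows; model, alarm, and cloud are placeholders for the recognition model, alarm device, and cloud uplink, whose interfaces the claim does not define.

    # Hypothetical edge-device loop: recognize events in the real-time stream,
    # trigger the alarm on a target event, and upload the event picture and info.
    import cv2  # assumed dependency for reading the video stream

    def run_edge_pipeline(stream_url: str, model, alarm, cloud, scene_id: str) -> None:
        cap = cv2.VideoCapture(stream_url)        # real-time stream from the camera
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            event = model.recognize(frame)        # monitoring information for the scene
            if event is not None:                 # a target event occurred
                alarm.trigger(event)              # alarm control information
                cloud.upload(scene_id, frame, event)  # event picture + event information
        cap.release()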
12. An event processing apparatus, characterized in that the event processing apparatus comprises:
a second receiving module, configured to receive an event picture corresponding to a target event and event information corresponding to the target event, both reported by an edge device, the event picture being intercepted by the edge device from a real-time video stream, and the target event being determined by the edge device by recognizing, through an event recognition model issued by the cloud device, the real-time video stream uploaded by an image acquisition device;
an information integration module, configured to integrate the event picture and the event information to obtain scene information corresponding to a service scene, the scene information including at least one of: an event list of the service scene, event early-warning information of the service scene, and detail information of the target event;
the second receiving module being further configured to receive an information viewing request sent by a client device;
and a second sending module, configured to send the scene information to the client device in response to the information viewing request.
13. An event processing apparatus, characterized in that the event processing apparatus comprises:
an operation response module, configured to generate an information viewing request in response to a trigger operation detected on an information viewing identifier of a displayed information viewing interface;
a third sending module, configured to send the information viewing request to the cloud device;
a third receiving module, configured to receive scene information returned by the cloud device for the information viewing request;
and an information display module, configured to display the scene information in an information viewing area of the information viewing interface, the scene information including at least one of: an event list of a service scene, event early-warning information of the service scene, and detail information of a target event.
14. An edge device, the edge device comprising:
a first memory for storing executable instructions;
a first processor for implementing the event processing method of any one of claims 1 to 5 when executing the executable instructions stored in the first memory.
15. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a first processor, implement the event processing method of any one of claims 1 to 5; when executed by a second processor, implement the event processing method of any one of claims 6 to 9; or, when executed by a third processor, implement the event processing method of claim 10.
CN202210220854.5A 2022-03-08 2022-03-08 Event processing method, device, equipment, storage medium and program product Pending CN116778370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210220854.5A 2022-03-08 2022-03-08 Event processing method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116778370A (en) 2023-09-19

Family

ID=87990066

Family Applications (1)

Application Number Status Priority Date Filing Date Title
CN202210220854.5A Pending 2022-03-08 2022-03-08 Event processing method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116778370A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117319490A (en) * 2023-10-31 2023-12-29 广东利通科技投资有限公司 Artificial intelligence application cooperative control system and method for intelligent highway
CN117319490B (en) * 2023-10-31 2024-04-16 广东利通科技投资有限公司 Artificial intelligence application cooperative control system and method for intelligent highway

Similar Documents

Publication Publication Date Title
US11627208B2 (en) Method for management of intelligent internet of things, system and server
Cheng et al. FogFlow: Easy programming of IoT services over cloud and edges for smart cities
CN109375594B (en) City safety wisdom management and control platform
US20210389293A1 (en) Methods and Systems for Water Area Pollution Intelligent Monitoring and Analysis
EP2688296B1 (en) Video monitoring system and method
WO2017094442A1 (en) Data flow control apparatus and data flow control method
Cheng et al. Geelytics: Enabling on-demand edge analytics over scoped data sources
US11720627B2 (en) Systems and methods for efficiently sending video metadata
CN106790515B (en) Abnormal event processing system and application method thereof
KR102441372B1 (en) Smart integrated management system for electric vehicle charging stations
Rao et al. Real-time object detection with tensorflow model using edge computing architecture
CN113378616A (en) Video analysis method, video analysis management method and related equipment
CN115049057B (en) Model deployment method and device, electronic equipment and storage medium
CN104506526B (en) A kind of hunting camera, the method being monitored to it, system and background system
CN116778370A (en) Event processing method, device, equipment, storage medium and program product
Prakash et al. Smart city video surveillance using fog computing
US20230267147A1 (en) Systems and methods for searching for events within video content
CN112183498A (en) Edge calculation system based on animal identification
US20230205817A1 (en) Systems and methods for identifying events within video content using intelligent search query
CN110958396B (en) Holder control method and device, electronic equipment and storage medium
CN110443910B (en) Method, system, device and storage medium for monitoring state of unmanned device
CN113489939A (en) Intelligent monitoring method and system for power transmission line construction site
CN117437565A (en) Video data processing method, device, storage medium, equipment and product
KR20240051701A (en) System for utilizing multipurpose dark data based on user interface
CN116050813B (en) Control method and equipment for photovoltaic operation and maintenance system and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication