CN117372967A - Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things


Info

Publication number
CN117372967A
CN117372967A (application CN202311662049.9A)
Authority
CN
China
Prior art keywords
street lamp
lamp node
data
node
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311662049.9A
Other languages
Chinese (zh)
Other versions
CN117372967B (en)
Inventor
曾二林
陈斌
罗达祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Shenchuang Photoelectric Technology Co ltd
Original Assignee
Guangdong Shenchuang Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Shenchuang Photoelectric Technology Co ltd
Priority to CN202311662049.9A
Publication of CN117372967A
Application granted
Publication of CN117372967B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of the Internet of things, and discloses a remote monitoring method, device, equipment and medium based on intelligent street lamps of the Internet of things, which are used for improving the accuracy of remote monitoring of intelligent street lamps based on the Internet of things. The method comprises the following steps: analyzing the motion foreground image of the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node; carrying out image channel stitching on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node, so as to obtain an initial motion guidance atlas of each street lamp node; performing optical flow information analysis on the video frame data of each street lamp node to obtain optical flow information data of each street lamp node; carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node to obtain a target motion guidance atlas of each street lamp node; and carrying out abnormal motion condition analysis on the target motion guidance atlas of each street lamp node, and generating an abnormal motion analysis report.

Description

Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things
Technical Field
The invention relates to the technical field of the Internet of things, in particular to a remote monitoring method, device, equipment and medium based on intelligent street lamps of the Internet of things.
Background
With the continuing acceleration of urbanization, challenges in urban traffic, security, energy management, and the like are increasing. Conventional street lamp systems only provide lighting functions and lack intelligent, comprehensive data collection and analysis capabilities. Accordingly, to address these urban challenges, researchers and engineers have begun to explore intelligent street lamp systems that incorporate the latest technology.
In the prior art, however, intelligent street lamp cameras typically have a limited field of view and viewing angle, so some areas are not covered or become monitoring blind spots. The system therefore misses some important events or situations, which affects accuracy. Poor weather conditions, such as heavy fog, heavy rain or snow, reduce the visibility of the camera and blur or obscure the image, making it difficult for the system to accurately monitor events or movements under these conditions. Varying lighting conditions, such as sunrise, sunset and strong lighting differences, cause camera exposure problems, making certain areas too bright or too dim for events to be captured accurately. Street lamp systems are also located in urban environments with many moving elements, such as vehicles and pedestrians. Because it is difficult to distinguish a true abnormal event from regular motion, the system raises false alarms or misses events; that is, the accuracy of the existing scheme is low.
Disclosure of Invention
The invention provides a remote monitoring method, device, equipment and medium based on an intelligent street lamp of the Internet of things, which are used for improving the accuracy of remote monitoring of the intelligent street lamp based on the Internet of things.
The first aspect of the invention provides a remote monitoring method based on an intelligent street lamp of the Internet of things, which comprises the following steps: splitting street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collecting light intensity data and video monitoring data of each street lamp node;
video frame extraction is carried out on the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
respectively performing motion foreground graph analysis based on a moving target on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node;
respectively performing image channel stitching on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node;
performing optical flow information analysis on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node;
respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node;
and carrying out abnormal motion condition analysis on the target motion guide atlas of each street lamp node, generating an abnormal motion analysis report and transmitting the abnormal motion analysis report to a preset street lamp control terminal.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the extracting video frames of the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and performing gray scale processing on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node includes:
video frame rate matching is carried out on each street lamp node respectively, so that video frame rate data of each street lamp node are obtained;
based on the video frame rate data of each street lamp node, video frame extraction is carried out on the video monitoring data of each street lamp node, so that the video frame data of each street lamp node is obtained;
performing RGB pixel calculation on the video frame data of each street lamp node respectively to obtain an RGB pixel value set of each video frame data;
and respectively carrying out weighted average processing on the RGB pixel value sets of each video frame data based on a preset gray weight factor set to obtain gray video data of each street lamp node.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing moving foreground graph analysis based on a moving target on the video frame data of each street lamp node to obtain a moving foreground graph set corresponding to each street lamp node includes:
respectively carrying out pixel difference calculation on the video frame data of each street lamp node to obtain difference pixel data of each video frame data;
performing numerical analysis on the difference pixel data based on a preset motion detection threshold value to obtain a numerical analysis result, and performing image region segmentation on each video frame data according to the numerical analysis result to obtain segmented image data of each video frame data;
respectively carrying out binarization processing on the divided image data of each video frame data to obtain binarized image data of each video frame data;
respectively carrying out connected region identification on the binarized image data of each video frame data to obtain a connected region of each video frame data;
based on the connected region of each video frame data, respectively carrying out image denoising processing on the binarized image of each video frame data to obtain a denoising image set of each video frame data;
and respectively carrying out motion foreground image analysis based on a moving object on the denoising image set of each video frame data to obtain a motion foreground image set corresponding to each street lamp node.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect of the present invention, the performing a motion foreground map analysis based on a moving target on the denoising image set of each video frame data to obtain a motion foreground map set corresponding to each street lamp node includes:
performing pixel superposition processing on the video frame data of each street lamp node and the denoising image set of each video frame data to obtain superposition image data of each street lamp node;
And adjusting the image brightness of the superimposed image data of each street lamp node to obtain a motion foreground image corresponding to each street lamp node.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the performing image channel stitching on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node includes:
respectively checking the consistency of the image size of the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain a checking result;
performing image size consistency processing on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node according to the verification result to obtain processed gray video data of each street lamp node and a processed motion foreground atlas corresponding to each street lamp node;
extracting pixel values of a motion foreground image set corresponding to each street lamp node to obtain a first pixel value set;
extracting pixel values of the gray video data of each street lamp node to obtain a second pixel value set;
and based on a preset two-channel template diagram, respectively splicing the gray video data of each street lamp node and the motion foreground image set corresponding to each street lamp node through the first pixel value set and the second pixel value set to obtain an initial motion guide image set corresponding to each street lamp node.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the performing optical flow information analysis on video frame data of each street lamp node based on light intensity data of each street lamp node to obtain optical flow information data of each street lamp node includes:
respectively carrying out image frame pairing on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain a pairing image frame set corresponding to each street lamp node;
respectively carrying out characteristic point tracking on the paired image frame sets corresponding to each street lamp node to obtain characteristic point position sets corresponding to each paired image frame set;
performing optical flow vector calculation on the feature point position set corresponding to each paired image frame set respectively to obtain optical flow vector data corresponding to each paired image frame set;
And carrying out optical flow information analysis on the video frame data of each street lamp node through the optical flow vector data corresponding to each pairing image frame set to obtain the optical flow information data of each street lamp node.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the performing image correction on the initial motion guidance atlas corresponding to each street lamp node by using the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node includes:
performing displacement calculation on the optical flow information data of each street lamp node to obtain a displacement value corresponding to each street lamp node, and performing displacement direction calibration on the optical flow information data of each street lamp node to obtain displacement direction data of each street lamp node;
and respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node based on the displacement value corresponding to each street lamp node and the displacement direction data of each street lamp node to obtain the target motion guidance atlas of each street lamp node.
The second aspect of the invention provides a remote monitoring system based on intelligent street lamps of the Internet of things, which comprises:
the splitting module is used for splitting street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collecting light intensity data and video monitoring data of each street lamp node;
the extraction module is used for extracting video frames of the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and carrying out gray scale processing on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
the first analysis module is used for respectively carrying out motion foreground graph analysis based on a moving target on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node;
the splicing module is used for respectively carrying out image channel splicing on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guiding atlas corresponding to each street lamp node;
the second analysis module is used for carrying out optical flow information analysis on the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node;
the correction module is used for respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node;
the transmission module is used for carrying out abnormal motion condition analysis on the target motion guide atlas of each street lamp node, generating an abnormal motion analysis report and transmitting the abnormal motion analysis report to a preset street lamp control terminal.
The third aspect of the invention provides a remote monitoring device based on an intelligent street lamp of the Internet of things, which comprises: a memory and at least one processor, the memory having instructions stored therein; and the at least one processor calls the instructions in the memory so that the remote monitoring device based on the intelligent street lamp of the Internet of things executes the above remote monitoring method based on the intelligent street lamp of the Internet of things.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the above-described remote monitoring method based on the internet of things intelligent street lamp.
In the technical scheme provided by the invention, the intelligent street lamp networking is split into street lamp nodes to obtain a plurality of street lamp nodes, and the light intensity data and the video monitoring data of each street lamp node are collected; video frame extraction is carried out on the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node; motion foreground graph analysis based on a moving target is respectively carried out on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node; image channel stitching is respectively carried out on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node; optical flow information analysis is performed on the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node; image correction is respectively carried out on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node; and abnormal motion condition analysis is carried out on the target motion guidance atlas of each street lamp node, and an abnormal motion analysis report is generated and transmitted to the street lamp control terminal. In this scheme, the collection and analysis of video monitoring data at the street lamp nodes allow public areas such as roads and streets to be monitored in real time. By analyzing the video frame data and the motion foreground image, a moving target can be detected; gray processing reduces the data volume and improves processing efficiency. Image channel stitching helps integrate data from different sources, and the optical flow information data makes it possible to track the movement of a moving target more accurately, so the accuracy of the target motion guidance map can be improved. Analyzing the target motion guidance atlas can detect abnormal motion conditions, and generating and transmitting an abnormal motion analysis report to the street lamp control terminal enables timely feedback and response, further improving the accuracy of remote monitoring of intelligent street lamps based on the Internet of things.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a remote monitoring method based on intelligent street lamps of the Internet of things in an embodiment of the invention;
fig. 2 is a flowchart of a motion foreground graph analysis based on a moving object for video frame data of each street lamp node in the embodiment of the present invention;
FIG. 3 is a flowchart of a motion foreground graph analysis based on a moving object for each denoised image set of video frame data according to an embodiment of the present invention;
FIG. 4 is a flowchart of image channel stitching for gray video data of each street lamp node and a motion foreground atlas corresponding to each street lamp node in the embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a remote monitoring system based on intelligent street lamps of the Internet of things in an embodiment of the invention;
fig. 6 is a schematic diagram of an embodiment of a remote monitoring device based on an intelligent street lamp of the internet of things in an embodiment of the invention.
Detailed Description
The embodiment of the invention provides a remote monitoring method, device, equipment and medium based on an intelligent street lamp of the Internet of things, which are used for improving the accuracy of remote monitoring of the intelligent street lamp based on the Internet of things.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For easy understanding, a specific flow of an embodiment of the present invention is described below. Referring to fig. 1, an embodiment of the remote monitoring method based on an intelligent street lamp of the Internet of things in an embodiment of the present invention includes:
s101, splitting street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collecting light intensity data and video monitoring data of each street lamp node;
it can be understood that the execution subject of the invention can be a remote monitoring system based on intelligent street lamps of the Internet of things, and can also be a terminal or a server; the execution subject is not limited in this specification. The embodiment of the invention is described by taking a server as the execution subject as an example.
Specifically, networking is performed on preset intelligent street lamps. Each street lamp is connected to a centralized network, typically an internet of things (IoT) network, to enable data acquisition and remote monitoring. Once the street lamps are connected to the network, remote management and control is possible, which is the core of implementing a smart city. Splitting each street lamp node. Each street lamp is regarded as an independent node, and has independent data acquisition and processing capacity. Such splitting may be based on the location, function, or other factors of the street lamp. For example, street lamp nodes may be divided by geographic location to ensure that street lamps of different areas can provide specific data, or by function so that different types of street lamps can provide different kinds of data. Light intensity data and video monitoring data are collected. The light intensity data is data of illumination levels around the street lamp measured by the sensor, and the video monitoring data is obtained by a camera mounted on the street lamp. These two types of data are central to the intelligent street lamp system and they provide important information about the street lamp surroundings. The light intensity data may be used to control the brightness of the street lamps for energy management, while the video monitoring data may be used to monitor traffic, safety and other urban operating conditions.
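For illustration only, the per-node record implied by this step can be sketched as a simple data structure. The Python representation and all field names below are assumptions made for clarity; the invention does not prescribe any concrete format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class StreetLampNode:
    """One street lamp node split out of the intelligent street lamp network."""
    node_id: str                    # unique identifier, e.g. a geographic code
    location: Tuple[float, float]   # (latitude, longitude); assumed fields
    light_intensity: List[float] = field(default_factory=list)    # sensor lux readings
    video_frames: List[np.ndarray] = field(default_factory=list)  # captured BGR frames

    def add_sample(self, lux: float, frame: np.ndarray) -> None:
        """Collect one synchronized light intensity and video monitoring sample."""
        self.light_intensity.append(lux)
        self.video_frames.append(frame)
```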
S102, video frame extraction is carried out on video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
specifically, video frame extraction is performed on the video monitoring data of each street lamp node. This step breaks down the continuous video stream into individual image frames to convert the video data into still image data. Each frame represents an image captured at a certain point in time. For example, if a camera of a street light node captures video at a rate of 30 frames per second, each frame represents an instant per second. These frames will be used for subsequent processing and analysis. And respectively carrying out video frame rate matching on each street lamp node to obtain video frame rate data of each street lamp node. The original video is from different street light nodes, each capturing video at a different frame rate. To ensure consistency of the data, all frames need to be matched to the same frame rate. For example, if a certain street light node captures video at a rate of 30 frames per second, and another node captures video at a rate of 24 frames per second, it is necessary to match their frame rates to the same value. This helps to more easily process and compare data. And carrying out gray scale processing on the video frame data of each street lamp node. This is a critical step because it helps to reduce data complexity, reduce storage requirements, and make the image easier to analyze. The gray scale image still contains information about the darkness of the different areas in the image, which is important for identifying moving objects and other analysis tasks. And carrying out RGB pixel calculation on the video frame data of each street lamp node. The RGB values, i.e., the red, green and blue channel values, for each pixel point in each frame are calculated. The values of these channels reflect the color and brightness of the different parts of the image. This is to obtain more detailed image information for further analysis. And respectively carrying out weighted average processing on the RGB pixel value sets of each frame based on a preset gray weight factor set to obtain gray video data of each street lamp node. These data will be used for various analysis tasks such as motion detection, abnormal event detection, environmental monitoring, etc.
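As an illustrative sketch of this step, the following Python code (using OpenCV and NumPy, which the patent does not mandate) extracts frames from a video file and applies a weighted average over the RGB pixel value set of each frame; the ITU-R BT.601 luminance weights stand in for the preset gray weight factor set, whose actual values the patent leaves open:

```python
import cv2
import numpy as np

# Assumed preset gray weight factor set: ITU-R BT.601 luminance weights,
# listed in BGR order because OpenCV loads frames as BGR.
GRAY_WEIGHTS = np.array([0.114, 0.587, 0.299], dtype=np.float32)

def extract_gray_frames(video_path: str) -> list:
    """Extract video frames and reduce each to grayscale by weighted averaging."""
    cap = cv2.VideoCapture(video_path)
    gray_frames = []
    while True:
        ok, frame = cap.read()              # frame: H x W x 3 BGR pixel values
        if not ok:
            break
        # Weighted average over the per-pixel RGB (here BGR) value set.
        gray = frame.astype(np.float32) @ GRAY_WEIGHTS
        gray_frames.append(gray.astype(np.uint8))
    cap.release()
    return gray_frames
```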
S103, respectively performing motion foreground graph analysis based on a moving target on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node;
specifically, pixel difference calculation is performed on video frame data of each street lamp node. The pixel values between adjacent frames are compared to detect changes in the image. If a pixel differs significantly between two frames, it will be marked as a difference pixel. These difference pixels will be used to determine potential moving objects and changes in the image. And carrying out numerical analysis on the difference pixel data based on a preset motion detection threshold value. This step is used to determine which difference pixels represent real motion and which are false positives due to noise or illumination changes. The motion detection threshold may be adjusted according to the needs of a particular application. Once the difference pixels are analyzed and compared to a threshold, a numerical analysis result may be obtained. And carrying out image region segmentation on each video frame data according to the numerical analysis result. The image is divided into different regions, in which a moving object is contained. These regions are determined based on the location and size of the difference pixels. Each region represents a potential moving object in the image. And respectively carrying out binarization processing on the segmented image of each video frame data. This step converts the image into a black and white binary image, where the moving object is typically white and the background is black. This makes it easier for the moving object to be separated from the image. And carrying out connected region identification on the binarized image. This is done to identify the connectivity of each region to group the pixels that are connected together into one moving object. Each moving object will represent a connected region. And carrying out image denoising processing based on the connected region of each video frame data. This is to remove noise or small unimportant areas, resulting in a more accurate motion perspective. The denoising process may include filtering and morphological operations to improve image quality. And respectively carrying out motion foreground image analysis based on the moving target on the denoising image set of each video frame data. The outline of the moving object is extracted, so that a moving foreground image set corresponding to each street lamp node is obtained. These images will be used for further analysis such as abnormal event detection or traffic monitoring.
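A minimal sketch of the differencing, thresholding, binarization and denoising chain described above, again assuming OpenCV; the threshold value of 25 and the 5x5 morphological kernel are illustrative choices, since the patent only speaks of a preset motion detection threshold:

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 25  # assumed value of the preset motion detection threshold

def motion_foreground_mask(prev_gray: np.ndarray, cur_gray: np.ndarray) -> np.ndarray:
    """Derive a binary motion foreground mask from two consecutive gray frames."""
    # Pixel difference calculation between adjacent frames.
    diff = cv2.absdiff(cur_gray, prev_gray)
    # Numerical analysis against the threshold, followed by binarization:
    # moving pixels become white (255), background black (0).
    _, binary = cv2.threshold(diff, MOTION_THRESHOLD, 255, cv2.THRESH_BINARY)
    # Image denoising: morphological opening removes small noise regions.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```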
And carrying out pixel superposition processing on the video frame data of each street lamp node and the denoising image set thereof. This step superimposes the original video frame with the collection of denoised images to highlight motion foreground and reduce background interference. By the superimposition processing, the moving object becomes more conspicuous, and irrelevant information becomes less conspicuous. And adjusting the image brightness of the superimposed image data of each street lamp node. This is to ensure that the final motion foreground image has proper brightness and contrast, making the moving object easier to identify. Brightness adjustment typically involves modifying the brightness and contrast of pixel values to optimize the visual effect of the image. Finally, through the steps, a motion foreground image corresponding to each street lamp node is obtained. These images represent moving objects and anomalies that the system monitors and can be used for further analysis and decision making. For example, assuming a vehicle enters the field of view of the camera, the pixels from frame to frame may change due to the motion of the vehicle. Through the previous steps, pixel difference calculation, numerical analysis and denoising processing have been performed, and a denoised image set is obtained, which contains information of the moving vehicle. And carrying out pixel superposition processing on the original video frame and the denoising image set. This will emphasize the moving vehicle and reduce the disturbance of other irrelevant information, making the vehicle more obvious. Image brightness adjustment is performed to ensure that the vehicle is clearly visible without being affected by too bright or too dark.
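The superposition of an original frame with its denoised mask might, under the same OpenCV assumption, be realized as a weighted blend; the 0.7/0.3 weights below are illustrative rather than values taken from the patent:

```python
import cv2
import numpy as np

def superimpose(frame: np.ndarray, denoised_mask: np.ndarray) -> np.ndarray:
    """Superimpose a video frame with its denoised motion mask so that
    moving objects stand out while background interference is reduced."""
    mask_bgr = cv2.cvtColor(denoised_mask, cv2.COLOR_GRAY2BGR)
    # Pixel superposition: 0.7 of the original frame + 0.3 of the mask.
    return cv2.addWeighted(frame, 0.7, mask_bgr, 0.3, 0)
```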
S104, respectively performing image channel stitching on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node;
specifically, for each street lamp node, respectively checking the consistency of the image sizes of the gray video data and the corresponding motion foreground atlas. This step is to ensure that the two parts of data have the same image size for subsequent merging and processing. If the data are not uniform in size, they need to be adjusted to match. And carrying out image size consistency processing on the gray video data and the motion foreground atlas of each street lamp node according to the size consistency checking result. They are adjusted to the same size for subsequent image channel stitching. This step ensures data compatibility so that they can be properly superimposed. And extracting pixel values of the motion foreground atlas of each street lamp node to obtain a first pixel value set. These pixel values represent information for each pixel point in the motion foreground map. This is to capture the features and position of moving objects. And simultaneously, extracting pixel values of the gray video data of each street lamp node to obtain a second pixel value set. This set contains information for each pixel in the gray scale image. This will provide more comprehensive image data, including background and moving objects. And based on a preset two-channel template diagram, respectively splicing the image channels of the gray video data and the motion foreground image set of each street lamp node through the first pixel value set and the second pixel value set. This step fuses the two pieces of data together to create an initial motion guide atlas for each street lamp node. For example, assume that at a node, a camera captures the motion of a vehicle, along with other background information. The gray scale video data and the motion foreground atlas are checked and processed for size consistency to ensure that their sizes match. Pixel values are extracted from the motion foreground map, capturing the position of the vehicle. Meanwhile, pixel values, including background and vehicle, are extracted from the gray video data. The two parts of information are superimposed together through a preset two-channel template diagram to create an initial motion guide atlas. This atlas may be used for further analysis such as vehicle tracking, abnormal event detection or traffic monitoring. This process helps integrate data from different sources, providing a more comprehensive view, thereby enhancing city management and security.
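One plausible reading of the two-channel stitching is a simple channel stack of the gray frame and its motion foreground map; the NumPy sketch below assumes both images have already passed the size consistency check:

```python
import numpy as np

def stitch_channels(gray_frame: np.ndarray, foreground: np.ndarray) -> np.ndarray:
    """Stack a gray frame (second pixel value set) and its motion foreground
    map (first pixel value set) into one two-channel guidance image."""
    assert gray_frame.shape == foreground.shape, "image size consistency check failed"
    # Channel 0: gray pixel values; channel 1: motion foreground pixel values.
    return np.dstack([gray_frame, foreground])  # shape: H x W x 2
```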
S105, carrying out optical flow information analysis on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node;
specifically, for each street lamp node, image frame pairing is performed on the video frame data based on the light intensity data. Successive video frames are paired to determine an association between them. By pairing, a time sequence relationship between video frames can be established, which is beneficial to analyzing optical flow information. And carrying out characteristic point tracking on the paired image frame set corresponding to each street lamp node. The purpose of this step is to detect and track feature points, such as corner points or edges, in the image in order to establish a motion trajectory of the feature points between successive frames. The movement of these feature points will help determine optical flow information. And carrying out optical flow vector calculation on the feature point position set corresponding to each paired image frame set. Optical flow vectors refer to vectors that describe the direction and speed of motion of feature points from one frame to another. By calculating the optical flow vectors, the displacement between the feature points can be known, and optical flow information can be deduced. And carrying out optical flow information analysis on the video frame data of each street lamp node through the optical flow vector data corresponding to each paired image frame set. This step will provide information about the direction and speed of movement of the object in the image, as well as changes in lighting conditions. This helps to more fully understand the events and conditions occurring within the area monitored by the street light nodes. For example, suppose that in a surveillance area of a certain street light node, the vehicle passes a camera, and the light intensity data indicates that the lighting conditions are changing, e.g. darkening due to sunset. The video frames are image frame paired to establish an association between successive frames. The vehicle position in the image is tracked by feature point tracking. The optical flow vector calculation provides the movement direction and speed information of the vehicle. Analysis of the optical flow information reveals how the vehicle moves under different lighting conditions, which is very important for traffic monitoring and safety.
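As a sketch of feature point tracking and optical flow vector calculation, pyramidal Lucas-Kanade tracking (one common choice; the patent does not name an algorithm) could be applied to a paired frame set as follows, with the corner detector parameters being illustrative assumptions:

```python
import cv2
import numpy as np

def optical_flow_vectors(prev_gray: np.ndarray, cur_gray: np.ndarray) -> np.ndarray:
    """Track feature points across one paired image frame set and return the
    per-point optical flow vectors (dx, dy)."""
    # Feature point detection (corners) on the first frame of the pair.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2), dtype=np.float32)
    # Pyramidal Lucas-Kanade tracking into the second frame of the pair.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good = status.ravel() == 1
    # Optical flow vector = displacement of each successfully tracked point.
    return (p1[good] - p0[good]).reshape(-1, 2)
```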
S106, respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node;
specifically, for each street lamp node, displacement calculation is performed on the optical flow information data. This is achieved by analyzing the magnitude of the optical flow vectors, which represent the displacement of feature points between successive frames. The displacement magnitude calculation can tell the system how fast the object is moving in the image. Meanwhile, displacement direction calibration is also required for the optical flow information data. This is achieved by analyzing the direction of the optical flow vector, which indicates the direction of movement of the object in the image. The displacement direction calibration can tell the system whether the object is moving left, right, up, down, or in another direction. Image correction is then carried out on the initial motion guidance atlas corresponding to each street lamp node based on the displacement magnitude and displacement direction data. This step uses the displacement information to adjust the motion guidance map to more accurately reflect the speed and direction of motion of the object. By correction, a target motion guidance atlas can be obtained. For example, in the monitoring area of a certain street lamp node, pedestrians move at different speeds and in different directions. From the optical flow information data, the displacement magnitude and direction of each pedestrian can be calculated. The magnitude of the displacement represents the speed of movement of the pedestrian, while the direction of the displacement represents the direction of movement. Based on this displacement information, image correction can be applied to the initial motion guidance atlas. If a pedestrian moves left in the image, the corrected image will show this movement more accurately; likewise, if another pedestrian moves right, their movement will be reflected more accurately in the image.
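A sketch of the correction step, under the assumption that the dominant (median) flow vector drives a global translation of the guidance image; the patent leaves the exact correction policy open, so this compensation scheme is illustrative:

```python
import cv2
import numpy as np

def correct_guidance_image(guide: np.ndarray, flow_vectors: np.ndarray):
    """Correct an initial motion guidance image using the displacement value
    and displacement direction derived from the optical flow vectors."""
    if len(flow_vectors) == 0:
        return guide, 0.0, 0.0
    # Displacement calculation (value) and displacement direction calibration.
    dx = float(np.median(flow_vectors[:, 0]))
    dy = float(np.median(flow_vectors[:, 1]))
    magnitude = float(np.hypot(dx, dy))                # displacement value
    direction = float(np.degrees(np.arctan2(dy, dx)))  # displacement direction
    # Assumed policy: shift the guidance image back by the dominant
    # displacement to compensate for global motion between the frames.
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = guide.shape[:2]
    corrected = cv2.warpAffine(guide, m, (w, h))
    return corrected, magnitude, direction
```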
S107, carrying out abnormal motion condition analysis on the target motion guide atlas of each street lamp node, generating an abnormal motion analysis report and transmitting the abnormal motion analysis report to a preset street lamp control terminal.
It should be noted that, for each street lamp node, the target motion guidance atlas contains information about the motion of objects in the monitored area. Through the preceding optical flow information processing and image correction, these atlases have high accuracy and information content. These atlases are preprocessed to better identify abnormal events; the preprocessing includes noise removal and moving object segmentation. By employing image processing techniques, noise that affects the analysis results, such as small interference or background noise in the image, can be identified and eliminated, and moving object segmentation helps separate objects from the background for more accurate analysis. Abnormal motion condition analysis needs to take different types of abnormal situations into account, including: moving object detection, in which the system detects moving objects, such as pedestrians, vehicles, or other objects, within the monitored area, and a moving object detection algorithm determines which objects are moving, as well as their position and velocity; abnormal behavior detection, in which the system detects unusual behavior such as rapid movement, stagnation, or entry into forbidden areas, since such behavior suggests potential problems or security threats; area intrusion detection, in which, if a person or object enters a forbidden area, the system detects this and triggers an alarm, which is very important for security and monitoring; and object disappearance or appearance, since an object in the monitored area suddenly disappearing or appearing indicates an abnormal situation that the system should detect and record. Once an abnormal situation is detected, the system generates an abnormal motion analysis report. This report includes details of the anomaly, such as time, location, anomaly type, and related video footage, which helps city administrators and security personnel better understand the event and take corresponding action. For example, assume the system detects that a vehicle is traveling fast, exceeding the speed limit. The abnormal motion condition analysis module identifies this abnormal situation and generates a report including a description of the vehicle, its location, speed, timestamp and related video footage. This information is transmitted to a preset street lamp control terminal, and the city manager can view the report at any time. If the behavior of the vehicle constitutes a potential hazard, they may take action, such as triggering an alarm or notifying the traffic police.
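How a report might be assembled once an abnormal condition is flagged can be sketched as below; the speed threshold, field names, and report layout are all assumptions for illustration, as the patent only requires that the report reach the street lamp control terminal:

```python
import time
from typing import List, Optional

SPEED_LIMIT = 20.0  # assumed abnormality threshold, in pixels per frame

def analyze_abnormal_motion(node_id: str,
                            displacements: List[float]) -> Optional[dict]:
    """Flag abnormally fast motion at one street lamp node and build an
    abnormal motion analysis report for the street lamp control terminal."""
    peak = max(displacements, default=0.0)
    if peak <= SPEED_LIMIT:
        return None  # regular motion: no report is generated
    return {
        "node_id": node_id,
        "anomaly_type": "fast movement",
        "peak_displacement": peak,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
```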
In the embodiment of the invention, the intelligent street lamp networking is split into street lamp nodes to obtain a plurality of street lamp nodes, and the light intensity data and the video monitoring data of each street lamp node are collected; video frame extraction is carried out on the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node; motion foreground graph analysis based on a moving target is respectively carried out on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node; image channel stitching is respectively carried out on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node; optical flow information analysis is performed on the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node; image correction is respectively carried out on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node; and abnormal motion condition analysis is carried out on the target motion guidance atlas of each street lamp node, and an abnormal motion analysis report is generated and transmitted to the street lamp control terminal. In this scheme, the collection and analysis of video monitoring data at the street lamp nodes allow public areas such as roads and streets to be monitored in real time. By analyzing the video frame data and the motion foreground image, a moving target can be detected; gray processing reduces the data volume and improves processing efficiency. Image channel stitching helps integrate data from different sources, and the optical flow information data makes it possible to track the movement of a moving target more accurately, so the accuracy of the target motion guidance map can be improved. Analyzing the target motion guidance atlas can detect abnormal motion conditions, and generating and transmitting an abnormal motion analysis report to the street lamp control terminal enables timely feedback and response, further improving the accuracy of remote monitoring of intelligent street lamps based on the Internet of things.
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Video frame rate matching is carried out on each street lamp node respectively, and video frame rate data of each street lamp node are obtained;
(2) Based on the video frame rate data of each street lamp node, video frame extraction is carried out on the video monitoring data of each street lamp node respectively to obtain the video frame data of each street lamp node;
(3) Performing RGB pixel calculation on the video frame data of each street lamp node respectively to obtain an RGB pixel value set of each video frame data;
(4) And respectively carrying out weighted average processing on the RGB pixel value sets of each video frame data based on a preset gray weight factor set to obtain gray video data of each street lamp node.
Specifically, the system performs video frame rate matching on each street lamp node. Ensuring that video frames collected from different cameras are processed at the same rate. Frame rate matching may be achieved by increasing or decreasing the number of frames so that they are consistent. For example, if the frame rate of one street light node is 30 frames per second and the frame rate of another street light node is 25 frames per second, the matching may be done by deleting some of the frames or inserting additional frames so that they all collect frames in the same time interval. Based on the video frame rate data of the street lamp nodes, video frame extraction is performed next. Separate video frames are extracted from the video surveillance data for each node for subsequent processing. The extracted frames will be used for analysis, detection of motion or other specific conditions. And respectively carrying out RGB pixel calculation on the video frame data of each street lamp node. This step involves decomposing the pixels in each video frame into RGB values for the red, green and blue channels. In this way, each pixel has a corresponding RGB value that can be used for subsequent processing. And respectively carrying out weighted average processing on the RGB pixel value set of each video frame data based on the preset gray weight factor set. The gray image is generated by combining the pixel values of the RGB channels, typically using a set of weights to calculate the contribution of each channel to generate the gray value. The weighting factors may be adjusted according to specific needs to better reflect the brightness and contrast of the monitored area.
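Frame rate matching by keeping (or dropping) frames at evenly spaced source positions can be sketched as follows, again assuming OpenCV; the resampling rule is one simple possibility rather than the method fixed by the patent:

```python
import cv2

def resample_frames(video_path: str, target_fps: float) -> list:
    """Resample a node's video stream to a common target frame rate by
    selecting frames at evenly spaced positions in the source stream."""
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = src_fps / target_fps          # source frames per output frame
    frames, next_pick, index = [], 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index >= next_pick:           # keep this frame, schedule the next
            frames.append(frame)
            next_pick += step
        index += 1
    cap.release()
    return frames
```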
In a specific embodiment, as shown in fig. 2, the process of performing step S103 may specifically include the following steps:
S201, respectively carrying out pixel difference calculation on video frame data of each street lamp node to obtain difference pixel data of each video frame data;
S202, carrying out numerical analysis on the difference pixel data based on a preset motion detection threshold value to obtain a numerical analysis result, and respectively carrying out image region segmentation on each video frame data according to the numerical analysis result to obtain segmented image data of each video frame data;
S203, respectively carrying out binarization processing on the divided image data of each video frame data to obtain binarized image data of each video frame data;
S204, respectively carrying out connected region identification on the binarized image data of each video frame data to obtain a connected region of each video frame data;
S205, based on the connected region of each video frame data, performing image denoising processing on the binarized image of each video frame data to obtain a denoising image set of each video frame data;
S206, respectively carrying out motion foreground image analysis based on the moving targets on the denoising image set of each video frame data to obtain a motion foreground image set corresponding to each street lamp node.
The pixel difference calculation is performed for the video frame data of each street lamp node. This step involves comparing adjacent video frames and calculating the pixel difference between them. The pixel differences represent changes that occur between two frames, including movement of an object or other changes. And carrying out numerical analysis on the difference pixel data based on a preset motion detection threshold value. The motion detection threshold is a predefined value for determining which differences are considered motion. The selection of the threshold value may be adjusted according to the specific situation. The numerical analysis will determine which pixel differences belong to the motion to generate a numerical analysis result. And carrying out image region segmentation on each video frame data according to the numerical analysis result. This step segments the image into different regions, including moving objects or other regions of interest. The segmented image data is used to more accurately identify moving objects. Binarization processing is performed for each of the divided image data, respectively. Binarization converts pixel values into binary form, typically 0 and 1, in order to more easily identify the object. This step generates binarized image data. And respectively carrying out connected region identification on each piece of binarized image data. The connected regions are pixel regions connected to each other in the image, and generally represent an object or a moving object. By the connected region recognition, the position and shape of each moving object can be determined. And respectively carrying out image denoising processing on each binarized image based on the connected region of each video frame data. The denoising process helps to eliminate noise or small irrelevant areas in the image to improve the detection accuracy of the moving object. And carrying out motion foreground image analysis based on the moving target on the denoising image set of each video frame data. This step will determine the motion foreground map corresponding to each street lamp node, which contains the information of the moving object. These motion foreground maps may be used for subsequent abnormal motion detection and report generation.
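The connected region identification and region-based denoising can be sketched with OpenCV's connected-component analysis; the minimum area of 50 pixels is an assumed noise cutoff, not a value given by the patent:

```python
import cv2
import numpy as np

MIN_REGION_AREA = 50  # assumed: smaller connected regions are treated as noise

def denoise_by_regions(binary: np.ndarray) -> np.ndarray:
    """Identify connected regions in a binarized frame and keep only those
    large enough to be plausible moving objects."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    for label in range(1, num):          # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= MIN_REGION_AREA:
            cleaned[labels == label] = 255
    return cleaned
```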
In a specific embodiment, as shown in fig. 3, the process of executing step S206 may specifically include the following steps:
S301, performing pixel superposition processing on video frame data of each street lamp node and a denoising image set of each video frame data to obtain superposition image data of each street lamp node;
S302, adjusting the image brightness of the superimposed image data of each street lamp node to obtain a motion foreground image corresponding to each street lamp node.
It should be noted that, for each street lamp node, the system performs pixel superposition processing on the video frame data and the corresponding denoising image set. This is accomplished by adding the original video frame and the denoised image pixel by pixel. The object of the pixel overlay is to enhance the target objects in the image so that they are more clearly visible. This method of processing is particularly important for images taken under low illumination conditions because it reduces noise, improves contrast, and makes the target object easier to identify. The image subjected to the pixel superimposition processing needs to be subjected to image brightness adjustment. Image brightness adjustment is achieved by modifying the brightness values of the pixels so as to enhance the visibility of the target object. This process helps to highlight the motion foreground and improve the quality of the image. By adjusting the brightness, it is possible to ensure that the target object is more noticeable in the image and is not disturbed by low light conditions or other factors. For example, assume that a camera of a road lamp node captures an automobile traveling on a road, but the contour of the automobile is less clear due to low lighting conditions, and noise is present in the image. And the system performs pixel superposition processing on the video frame data of the street lamp node and the corresponding denoising image set. The original video frame and the denoised image are added pixel by pixel. By the pixel superimposition processing, noise in the image is reduced, and the contour of the vehicle becomes clearer. This helps to improve the quality and recognizability of the image. The image subjected to the pixel superimposition processing needs to be subjected to image brightness adjustment. The system will moderately increase the brightness of the image to ensure that the contour and shape of the vehicle is brighter and more striking. This adjustment helps to highlight the motion foreground, making the car easier to recognize in the image, providing better visibility even in low light conditions.
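The brightness adjustment described here maps naturally onto a linear gain-and-offset transform; the alpha and beta values below are illustrative defaults rather than values specified by the patent:

```python
import cv2
import numpy as np

def adjust_brightness(image: np.ndarray, alpha: float = 1.2, beta: int = 15) -> np.ndarray:
    """Raise contrast (alpha) and brightness (beta) of a superimposed image so
    the motion foreground stays visible under low illumination."""
    # Computes saturate(alpha * pixel + beta) for every channel of every pixel.
    return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
```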
In a specific embodiment, as shown in fig. 4, the process of executing step S104 may specifically include the following steps:
S401, respectively carrying out image size consistency verification on gray video data of each street lamp node and a motion foreground atlas corresponding to each street lamp node to obtain a verification result;
S402, performing image size consistency processing on gray video data of each street lamp node and a motion foreground atlas corresponding to each street lamp node through the verification result to obtain processed gray video data of each street lamp node and a processed motion foreground atlas corresponding to each street lamp node;
S403, extracting pixel values of a motion foreground image set corresponding to each street lamp node to obtain a first pixel value set;
S404, extracting pixel values of gray video data of each street lamp node to obtain a second pixel value set;
S405, based on a preset two-channel template diagram, image channel stitching is carried out on gray video data of each street lamp node and a motion foreground image set corresponding to each street lamp node through the first pixel value set and the second pixel value set, so that an initial motion guide image set corresponding to each street lamp node is obtained.
Specifically, for each street lamp node, the system performs image size consistency verification on the gray video data and the corresponding motion foreground atlas. The purpose of this step is to check whether the images are uniform in size and can therefore be effectively channel-stitched; if inconsistent dimensions are found, the system records the verification result and prepares for subsequent processing. According to the verification result, the system performs image size consistency processing on the gray video data and the motion foreground atlas of each street lamp node, resizing the images to a consistent size so that they can be effectively processed and stitched; this ensures consistency of the data regardless of its original size. The system then extracts pixel values of the motion foreground atlas corresponding to each street lamp node to obtain a first pixel value set; these pixel values typically represent key features in the image, such as the positions of moving objects. Meanwhile, the system extracts pixel values of the gray video data of each street lamp node to obtain a second pixel value set, which typically contains the luminance information of the image. Using a preset two-channel template map, the system performs image channel stitching on the gray video data and the motion foreground atlas of each street lamp node through the first pixel value set and the second pixel value set, combining the two sets of data into an initial motion guidance atlas that contains both motion foreground information and gray-scale information. For example, assume that the images captured by the street lamp nodes are not uniform in size because some nodes are mounted at different heights and angles. The system performs image size consistency verification on the data of each node and finds that some sizes do not match; it then performs image size consistency processing on the inconsistent data, adjusting the images to the same size so that the data can be effectively compared and combined in subsequent steps. The system extracts the pixel values of the motion foreground atlas of each node to obtain a first pixel value set containing information about the moving objects, and extracts the pixel values of the gray video data of each node to obtain a second pixel value set containing the brightness information of the image. Using the preset two-channel template map, the system combines the first pixel value set and the second pixel value set to create an initial motion guidance atlas for each node; the guidance atlas contains both motion foreground information and gray-scale information, which is helpful for subsequent motion analysis and anomaly detection.
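As a hedged illustration (not taken from the disclosure), the size consistency check and two-channel stitching could look as follows; TEMPLATE_SHAPE and the function name stitch_channels are assumptions standing in for the preset two-channel template map.

import cv2
import numpy as np

TEMPLATE_SHAPE = (480, 640)  # assumed (height, width) of the template map

def stitch_channels(gray_frame: np.ndarray, foreground: np.ndarray) -> np.ndarray:
    """Enforce image size consistency, then stack grayscale and motion
    foreground data into a two-channel initial motion guide image."""
    h, w = TEMPLATE_SHAPE
    # Size consistency verification and correction against the template.
    if gray_frame.shape[:2] != (h, w):
        gray_frame = cv2.resize(gray_frame, (w, h))  # cv2 expects (width, height)
    if foreground.shape[:2] != (h, w):
        foreground = cv2.resize(foreground, (w, h))
    # Channel stitching: channel 0 holds the second pixel value set (gray),
    # channel 1 holds the first pixel value set (motion foreground).
    return np.stack([gray_frame, foreground], axis=-1)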
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Respectively carrying out image frame pairing on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain a pairing image frame set corresponding to each street lamp node;
(2) Feature point tracking is carried out on the paired image frame sets corresponding to each street lamp node respectively, and feature point position sets corresponding to each paired image frame set are obtained;
(3) Performing optical flow vector calculation on the feature point position set corresponding to each paired image frame set respectively to obtain optical flow vector data corresponding to each paired image frame set;
(4) And carrying out optical flow information analysis on the video frame data of each street lamp node through the optical flow vector data corresponding to each paired image frame set to obtain the optical flow information data of each street lamp node.
Specifically, the system uses the light intensity data of each street lamp node to pair the image frames of that node's video frame data, finding paired frames captured under similar light conditions so that they can be compared reliably; these paired frames are combined into a set for subsequent processing. For the paired image frame set of each street lamp node, the system performs feature point tracking. This step aims at finding key feature points in the image whose motion can be tracked between frames; the locations of the feature points are recorded for subsequent analysis. The system then performs optical flow vector calculation on the feature point position set of each paired image frame set. Optical flow is a technique that describes the direction and speed of movement of pixels in an image and can be used to analyze the motion of objects; this step generates optical flow vector data containing the displacement of each feature point. Finally, the system uses the optical flow vector data of each paired image frame set to perform optical flow information analysis on the video frame data of each street lamp node, allowing it to identify the direction, speed, and changes of moving objects and thereby generate the optical flow information data. For example, suppose that, based on the light intensity data, the system selects two paired image frames captured under similar light conditions. The system performs feature point tracking on the two frames, finding key feature points such as traffic signs and pedestrians and recording their locations. It then uses these locations to calculate optical flow vector data describing the direction and speed of movement of the feature points between the two frames. The optical flow analysis reveals the direction of movement of pedestrians and the speed of vehicles, providing information about urban street activity. Through analysis of the optical flow information, the system identifies abnormal movements, such as a pedestrian suddenly rushing into the road or an abrupt change in a vehicle's speed. These anomalies are recorded and used to generate an abnormal motion analysis report, which is transmitted to a preset street lamp control terminal so that maintenance personnel can quickly take the necessary actions.
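One plausible realization of the feature point tracking and optical flow vector calculation is sketched below, assuming frame pairing by light intensity has already been done upstream; the function name flow_vectors and the detector parameters are illustrative choices, not values from the disclosure.

import cv2
import numpy as np

def flow_vectors(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Track corner features between two paired frames and return the
    optical flow displacement vector of each successfully tracked point."""
    # Detect trackable feature points (e.g. corners of signs or vehicles).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    # Pyramidal Lucas-Kanade tracking of those points into the paired frame.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    tracked = status.ravel() == 1
    # Displacement vectors: where each feature moved between the two frames.
    return (nxt[tracked] - pts[tracked]).reshape(-1, 2)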
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Calculating the displacement of the optical flow information data of each street lamp node to obtain a displacement value corresponding to each street lamp node, and calibrating the displacement direction of the optical flow information data of each street lamp node to obtain the displacement direction data of each street lamp node;
(2) And respectively carrying out image correction on the initial motion guide atlas corresponding to each street lamp node based on the displacement value corresponding to each street lamp node and the displacement direction data of each street lamp node to obtain the target motion guide atlas of each street lamp node.
Specifically, the system performs displacement calculation on the optical flow information data of each street lamp node, measuring the displacement of each pixel to determine how far objects have moved in the image; this produces a displacement value describing the displacement intensity of the optical flow information. Meanwhile, the system performs displacement direction calibration on the optical flow information data, determining the direction of displacement, i.e. the direction in which objects move in the image; this produces displacement direction data describing the direction of movement of the optical flow information. Based on the displacement value and the displacement direction data, the system performs image correction on the initial motion guidance atlas of each street lamp node. This is a critical step because it corrects the optical flow information so that it accurately reflects the movement of objects, yielding the target motion guidance atlas of each street lamp node. For example, assume that a car passes through the image from left to right. The optical flow information shows that the car is moving to the right, but the raw displacement data alone is not accurate enough. In this case, the system uses the displacement calculation and the displacement direction data to understand the direction and speed of the car's movement: the displacement value tells the system how far the car has moved in the image, and the displacement direction data tells it that the car is moving to the right. The system uses this information for image correction, adjusting the optical flow information to accurately represent the movement of the car and ensuring that the motion guidance atlas reflects the actual situation. The corrected motion guidance atlas is then used for subsequent analysis, such as abnormal motion detection or object tracking. Through this process, the system can more accurately understand the movement and changes of objects, improving the monitoring precision of urban street activity; this is very helpful for detecting unusual events and analyzing traffic flow.
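For illustration only, the displacement calculation, direction calibration, and image correction could be sketched as below; the median-based dominant-displacement estimate and the name correct_guide_image are assumptions, one of several ways the correction could be realized.

import cv2
import numpy as np

def correct_guide_image(guide: np.ndarray, vectors: np.ndarray):
    """Derive displacement values and directions from optical flow vectors,
    then translate the guide image to compensate for the dominant motion.
    Returns the corrected image, the dominant displacement magnitude
    (pixels), and the dominant direction (degrees)."""
    if len(vectors) == 0:
        return guide, 0.0, 0.0
    # Displacement value (magnitude) and direction (angle) of each vector.
    magnitude = np.hypot(vectors[:, 0], vectors[:, 1])
    direction = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0]))
    # Median as a robust estimate of the dominant displacement; outlier
    # vectors from mistracked points barely affect it.
    dx, dy = np.median(vectors, axis=0)
    h, w = guide.shape[:2]
    # Image correction: shift the guide image against the dominant motion.
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    corrected = cv2.warpAffine(guide, M, (w, h))
    return corrected, float(np.median(magnitude)), float(np.median(direction))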
The remote monitoring method based on the intelligent street lamp of the internet of things in the embodiment of the invention is described above, and the remote monitoring system based on the intelligent street lamp of the internet of things in the embodiment of the invention is described below, referring to fig. 5, one embodiment of the remote monitoring system based on the intelligent street lamp of the internet of things in the embodiment of the invention comprises:
the splitting module 501 is configured to split street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collect light intensity data and video monitoring data of each street lamp node;
the extracting module 502 is configured to extract video frames of the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and perform gray scale processing on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
the first analysis module 503 is configured to perform motion foreground map analysis based on a moving target on the video frame data of each street lamp node, so as to obtain a motion foreground atlas corresponding to each street lamp node;
the stitching module 504 is configured to stitch the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node in an image channel to obtain an initial motion guidance atlas corresponding to each street lamp node;
The second analysis module 505 is configured to perform optical flow information analysis on the video frame data of each street lamp node based on the light intensity data of each street lamp node, so as to obtain optical flow information data of each street lamp node;
the correction module 506 is configured to perform image correction on the initial motion guidance atlas corresponding to each street lamp node through optical flow information data of each street lamp node, so as to obtain a target motion guidance atlas of each street lamp node;
the transmission module 507 is configured to perform abnormal motion condition analysis on the target motion guidance atlas of each street lamp node, generate an abnormal motion analysis report, and transmit the abnormal motion analysis report to a preset street lamp control terminal.
Through the cooperation of the above components, street lamp nodes of the intelligent street lamp networking are split to obtain a plurality of street lamp nodes, and the light intensity data and video monitoring data of each street lamp node are collected; video frame extraction is carried out on the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node; motion foreground map analysis based on a moving target is carried out on the video frame data of each street lamp node respectively to obtain a motion foreground atlas corresponding to each street lamp node; the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node are respectively stitched by image channel to obtain an initial motion guidance atlas corresponding to each street lamp node; optical flow information analysis is performed on the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node; image correction is respectively carried out on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node; and abnormal motion condition analysis is carried out on the target motion guidance atlas of each street lamp node, and an abnormal motion analysis report is generated and transmitted to the street lamp control terminal. In this scheme, the acquisition and analysis of video monitoring data at the street lamp nodes allow public areas such as roads and streets to be monitored in real time. By analyzing the video frame data and the motion foreground maps, moving targets can be detected; gray-scale processing reduces the data volume and improves processing efficiency. Image channel stitching helps to integrate data from different sources, and the optical flow information data allows the movement of moving targets to be tracked more accurately, improving the accuracy of the target motion guidance maps. Analyzing the target motion guidance atlas can detect abnormal motion conditions, and generating and transmitting an abnormal motion analysis report to the street lamp control terminal enables timely feedback and response, further improving the accuracy of remote monitoring based on intelligent street lamps of the Internet of things.
Fig. 5 above describes the remote monitoring system based on the intelligent street lamp of the Internet of things in the embodiment of the present invention in detail from the perspective of modularized functional entities; the following describes the remote monitoring device based on the intelligent street lamp of the Internet of things in detail from the perspective of hardware processing.
Fig. 6 is a schematic structural diagram of a remote monitoring device based on an Internet of things intelligent street lamp according to an embodiment of the present invention. The remote monitoring device 600 based on the Internet of things intelligent street lamp may vary considerably in configuration or performance, and may include one or more processors (CPUs) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the remote monitoring device 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute the series of instruction operations in the storage medium 630 on the remote monitoring device 600.
The Internet of things-based intelligent street lamp remote monitoring device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the structure of the Internet of things-based intelligent street lamp remote monitoring device shown in fig. 6 is not limiting, and the device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The invention also provides a remote monitoring device based on the intelligent street lamp of the Internet of things, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and when the computer readable instructions are executed by the processor, the processor executes the steps of the remote monitoring method based on the intelligent street lamp of the Internet of things in the embodiments.
The invention also provides a computer readable storage medium, which may be a nonvolatile computer readable storage medium or a volatile computer readable storage medium, wherein the computer readable storage medium stores instructions that, when run on a computer, cause the computer to execute the steps of the remote monitoring method based on the intelligent street lamp of the Internet of things.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The remote monitoring method based on the intelligent street lamp of the Internet of things is characterized by comprising the following steps of:
splitting street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collecting light intensity data and video monitoring data of each street lamp node;
video frame extraction is carried out on the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray scale processing is carried out on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
performing motion foreground map analysis based on a moving target on the video frame data of each street lamp node respectively to obtain a motion foreground atlas corresponding to each street lamp node;
Respectively splicing the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node;
performing optical flow information analysis on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node;
respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node;
and carrying out abnormal motion condition analysis on the target motion guide atlas of each street lamp node, generating an abnormal motion analysis report and transmitting the abnormal motion analysis report to a preset street lamp control terminal.
2. The method for remotely monitoring intelligent street lamps based on the internet of things according to claim 1, wherein the steps of respectively extracting video frames from the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and gray-scale processing the video frame data of each street lamp node to obtain gray-scale video data of each street lamp node comprise the following steps:
Video frame rate matching is carried out on each street lamp node respectively, so that video frame rate data of each street lamp node are obtained;
based on the video frame rate data of each street lamp node, video frame extraction is carried out on the video monitoring data of each street lamp node, so that the video frame data of each street lamp node is obtained;
performing RGB pixel calculation on the video frame data of each street lamp node respectively to obtain an RGB pixel value set of each video frame data;
and respectively carrying out weighted average processing on the RGB pixel value sets of each video frame data based on a preset gray weight factor set to obtain gray video data of each street lamp node.
3. The remote monitoring method based on the intelligent street lamp of the internet of things according to claim 1, wherein the moving foreground graph analysis based on the moving target is performed on the video frame data of each street lamp node to obtain the moving foreground graph set corresponding to each street lamp node, and the method comprises the following steps:
respectively carrying out pixel difference calculation on the video frame data of each street lamp node to obtain difference pixel data of each video frame data;
Performing numerical analysis on the difference pixel data based on a preset motion detection threshold value to obtain a numerical analysis result, and performing image region segmentation on each video frame data according to the numerical analysis result to obtain segmented image data of each video frame data;
respectively carrying out binarization processing on the divided image data of each video frame data to obtain binarized image data of each video frame data;
respectively carrying out connected region identification on the binarized image data of each video frame data to obtain a connected region of each video frame data;
based on the connected region of each video frame data, respectively carrying out image denoising processing on the binarized image of each video frame data to obtain a denoising image set of each video frame data;
and respectively carrying out motion foreground image analysis based on a moving object on the denoising image set of each video frame data to obtain a motion foreground image set corresponding to each street lamp node.
4. The remote monitoring method based on the intelligent street lamp of the internet of things according to claim 3, wherein the performing the motion foreground map analysis based on the moving target on the denoising image set of each video frame data to obtain the motion foreground map set corresponding to each street lamp node respectively includes:
Performing pixel superposition processing on the video frame data of each street lamp node and the denoising image set of each video frame data to obtain superposition image data of each street lamp node;
and adjusting the image brightness of the superimposed image data of each street lamp node to obtain a motion foreground image corresponding to each street lamp node.
5. The method for remotely monitoring the intelligent street lamp based on the internet of things according to claim 1, wherein the steps of respectively performing image channel stitching on gray video data of each street lamp node and a motion foreground atlas corresponding to each street lamp node to obtain an initial motion guidance atlas corresponding to each street lamp node comprise the following steps:
respectively checking the consistency of the image size of the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain a checking result;
performing image size consistency processing on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node according to the verification result to obtain processed gray video data of each street lamp node and a processed motion foreground atlas corresponding to each street lamp node;
Extracting pixel values of a motion foreground image set corresponding to each street lamp node to obtain a first pixel value set;
extracting pixel values of the gray video data of each street lamp node to obtain a second pixel value set;
and based on a preset two-channel template diagram, respectively splicing the gray video data of each street lamp node and the motion foreground image set corresponding to each street lamp node through the first pixel value set and the second pixel value set to obtain an initial motion guide image set corresponding to each street lamp node.
6. The method for remotely monitoring intelligent street lamps based on the internet of things according to claim 1, wherein the analyzing optical flow information of the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain the optical flow information data of each street lamp node comprises the following steps:
respectively carrying out image frame pairing on video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain a pairing image frame set corresponding to each street lamp node;
respectively carrying out characteristic point tracking on the paired image frame sets corresponding to each street lamp node to obtain characteristic point position sets corresponding to each paired image frame set;
Performing optical flow vector calculation on the feature point position set corresponding to each paired image frame set respectively to obtain optical flow vector data corresponding to each paired image frame set;
and carrying out optical flow information analysis on the video frame data of each street lamp node through the optical flow vector data corresponding to each pairing image frame set to obtain the optical flow information data of each street lamp node.
7. The method for remotely monitoring the intelligent street lamp based on the internet of things according to claim 1, wherein the image correction is performed on the initial motion guidance atlas corresponding to each street lamp node by the optical flow information data of each street lamp node to obtain the target motion guidance atlas of each street lamp node, respectively, and the method comprises the following steps:
performing displacement calculation on the optical flow information data of each street lamp node to obtain a displacement value corresponding to each street lamp node, and performing displacement direction calibration on the optical flow information data of each street lamp node to obtain displacement direction data of each street lamp node;
and respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node based on the displacement value corresponding to each street lamp node and the displacement direction data of each street lamp node to obtain the target motion guidance atlas of each street lamp node.
8. The remote monitoring system based on the intelligent street lamp of the Internet of things is characterized in that the remote monitoring system based on the intelligent street lamp of the Internet of things comprises:
the splitting module is used for splitting street lamp nodes of a preset intelligent street lamp networking to obtain a plurality of street lamp nodes, and collecting light intensity data and video monitoring data of each street lamp node;
the extraction module is used for extracting video frames of the video monitoring data of each street lamp node to obtain video frame data of each street lamp node, and carrying out gray scale processing on the video frame data of each street lamp node to obtain gray scale video data of each street lamp node;
the first analysis module is used for respectively carrying out motion foreground map analysis based on a moving target on the video frame data of each street lamp node to obtain a motion foreground atlas corresponding to each street lamp node;
the splicing module is used for respectively carrying out image channel splicing on the gray video data of each street lamp node and the motion foreground atlas corresponding to each street lamp node to obtain an initial motion guiding atlas corresponding to each street lamp node;
the second analysis module is used for carrying out optical flow information analysis on the video frame data of each street lamp node based on the light intensity data of each street lamp node to obtain optical flow information data of each street lamp node;
The correction module is used for respectively carrying out image correction on the initial motion guidance atlas corresponding to each street lamp node through the optical flow information data of each street lamp node to obtain a target motion guidance atlas of each street lamp node;
the transmission module is used for carrying out abnormal motion condition analysis on the target motion guide atlas of each street lamp node, generating an abnormal motion analysis report and transmitting the abnormal motion analysis report to a preset street lamp control terminal.
9. The remote monitoring equipment based on the intelligent street lamp of the Internet of things is characterized in that the remote monitoring equipment based on the intelligent street lamp of the Internet of things comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the internet of things intelligent street lamp-based remote monitoring device to perform the internet of things intelligent street lamp-based remote monitoring method of any one of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, wherein the instructions when executed by a processor implement the internet of things-based intelligent street lamp remote monitoring method of any one of claims 1-7.
CN202311662049.9A 2023-12-06 2023-12-06 Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things Active CN117372967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311662049.9A CN117372967B (en) 2023-12-06 2023-12-06 Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things

Publications (2)

Publication Number Publication Date
CN117372967A true CN117372967A (en) 2024-01-09
CN117372967B CN117372967B (en) 2024-03-26

Family

ID=89402621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311662049.9A Active CN117372967B (en) 2023-12-06 2023-12-06 Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things

Country Status (1)

Country Link
CN (1) CN117372967B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105592304A (en) * 2015-12-31 2016-05-18 成都移动魔方科技有限公司 Remote automatic data acquisition method
CN105654507A (en) * 2015-12-24 2016-06-08 北京航天测控技术有限公司 Vehicle outer contour dimension measuring method based on image dynamic feature tracking
CN110992381A (en) * 2019-12-17 2020-04-10 嘉兴学院 Moving target background segmentation method based on improved Vibe + algorithm
CN111369584A (en) * 2020-03-07 2020-07-03 候丽 Moving object detection method applied to urban intelligent street lamp
CN111444854A (en) * 2020-03-27 2020-07-24 科大讯飞(苏州)科技有限公司 Abnormal event detection method, related device and readable storage medium
CN111523397A (en) * 2020-03-31 2020-08-11 深圳市奥拓电子股份有限公司 Intelligent lamp pole visual identification device, method and system and electronic equipment
CN113450579A (en) * 2021-08-30 2021-09-28 腾讯科技(深圳)有限公司 Method, device, equipment and medium for acquiring speed information
CN113469993A (en) * 2021-07-16 2021-10-01 浙江大华技术股份有限公司 Method and device for detecting abnormal object in motion state and electronic equipment
CN114066761A (en) * 2021-11-22 2022-02-18 青岛根尖智能科技有限公司 Method and system for enhancing frame rate of motion video based on optical flow estimation and foreground detection
CN115393782A (en) * 2021-05-18 2022-11-25 长沙智能驾驶研究院有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115439424A (en) * 2022-08-23 2022-12-06 成都飞机工业(集团)有限责任公司 Intelligent detection method for aerial video image of unmanned aerial vehicle
CN116996665A (en) * 2023-09-28 2023-11-03 深圳天健电子科技有限公司 Intelligent monitoring method, device, equipment and storage medium based on Internet of things
CN117115210A (en) * 2023-10-23 2023-11-24 黑龙江省农业科学院农业遥感与信息研究所 Intelligent agricultural monitoring and adjusting method based on Internet of things

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE Nannan; DU Junping: "Research on efficient moving target detection methods in intelligent video surveillance", Journal of Beijing Technology and Business University (Natural Science Edition), no. 04, pages 34-51 *

Also Published As

Publication number Publication date
CN117372967B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN103069434B (en) For the method and system of multi-mode video case index
US10127448B2 (en) Method and system for dismount detection in low-resolution UAV imagery
CN105744232B (en) A kind of method of the transmission line of electricity video external force damage prevention of Behavior-based control analytical technology
CN103106766B (en) Forest fire identification method and forest fire identification system
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN111967393A (en) Helmet wearing detection method based on improved YOLOv4
CN111800507A (en) Traffic monitoring method and traffic monitoring system
CN110264495B (en) Target tracking method and device
CN103366156A (en) Road structure detection and tracking
WO2013186662A1 (en) Multi-cue object detection and analysis
KR102272295B1 (en) Method for improving recognition ratio of vehicle license plate employing depth information of image
CN103456024B (en) A kind of moving target gets over line determination methods
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
KR102282800B1 (en) Method for trackig multi target employing ridar and camera
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN111753612B (en) Method and device for detecting casting object and storage medium
CN111931726B (en) Traffic light detection method, device, computer storage medium and road side equipment
CN106339657A (en) Straw incineration monitoring method and device based on monitoring video
KR20150034398A (en) A Parking Event Detection System Based on Object Recognition
CN116385948B (en) System and method for early warning railway side slope abnormality
CN103793921B (en) Moving object extraction method and moving object extraction device
CN103152558A (en) Intrusion detection method based on scene recognition
KR102434154B1 (en) Method for tracking multi target in traffic image-monitoring-system
CN117372967B (en) Remote monitoring method, device, equipment and medium based on intelligent street lamp of Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant