CN118097192B - Gateway data processing method and system based on cloud edge cooperation - Google Patents
Gateway data processing method and system based on cloud edge cooperation

- Publication number: CN118097192B
- Application number: CN202410487115.1A
- Authority: CN (China)
- Prior art keywords: gray value, image, original image, median, pixel
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V10/75: Image or video pattern matching; proximity measures in feature spaces; organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
- G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G16Y10/75: ICT specially adapted for the Internet of Things; economic sectors: information technology; communication
Abstract
The invention is applicable to the technical field of image data processing, and provides a gateway data processing method and system based on cloud edge cooperation. The gateway data processing method based on cloud edge cooperation comprises the following steps: acquiring an original image captured by a camera and a pre-stored standard image; comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; if the number of pixels of the plurality of continuous difference pixel points is greater than a second threshold, calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels; and if the difference score is greater than a third threshold, sending the original image to the cloud device. In this scheme, intelligent image processing and a well-designed load distribution mechanism effectively improve the monitoring system's anomaly identification accuracy, processing efficiency, and response speed.
Description
Technical Field
The invention belongs to the technical field of image data processing, and particularly relates to a gateway data processing method and system based on cloud edge cooperation.
Background
The Internet of Things (IoT) is a technology that connects various physical devices through the internet so that they can collect and exchange data. These physical devices can be all kinds of everyday objects, such as household appliances, automobiles, and industrial machines; through sensors, software, and network connections they can communicate and exchange data with one another, thereby realizing intelligent and automated functions.
The development of the internet of things technology enables various devices to realize functions of remote monitoring, automatic control, data acquisition, analysis and the like. Through the internet of things, people can more conveniently manage and control various devices, improve production efficiency, improve life quality and even create a brand new business model.
However, in a large Internet of Things system, such as an intelligent water-service IoT system or an intelligent charging-pile IoT system, the large number of edge devices generates a large amount of data to be processed, which easily causes long processing times or even downtime of the cloud device. How to share the processing load of the cloud device has therefore become a technical problem to be solved urgently.
Disclosure of Invention
In view of the above, the embodiment of the invention provides a gateway data processing method and a gateway data processing system based on cloud edge cooperation, so as to solve the technical problem that a large amount of data to be processed often easily causes long processing time or downtime of cloud equipment.
The first aspect of the embodiment of the invention provides a gateway data processing method based on cloud edge cooperation, which is applied to an internet of things monitoring system, wherein the internet of things monitoring system comprises cloud equipment, gateway equipment and a camera, and the gateway data processing method based on cloud edge cooperation comprises the following steps:
Acquiring an original image acquired by the camera and a pre-stored standard image;
Comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
if the number of pixels of the plurality of continuous difference pixel points is larger than a second threshold value, extracting a first average gray value and a first gray value median in the original image, and extracting a second average gray value and a second gray value median in the standard image;
calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median and the number of pixels;
If the difference score is greater than a third threshold, the original image is sent to the cloud device; the cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
Further, the step of calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels includes:
Substituting the first average gray value, the first gray value median, the second average gray value, the second gray value median and the pixel number into a preset formula to obtain the difference score;
The preset formula is as follows:

$$S = w \cdot \sqrt{\Delta\mu^{2} + \Delta m^{2}};$$

wherein

$$\Delta\mu = \mu_1 - \mu_2, \qquad \Delta m = m_1 - m_2, \qquad w = \alpha + \beta \cdot N;$$

wherein $S$ represents the difference score, $\mu_1$ the first average gray value, $m_1$ the median of the first gray value, $\mu_2$ the second average gray value, $m_2$ the median of the second gray value, $N$ the number of pixels, $w$ the difference intensity weight, and $\alpha$ and $\beta$ the adjustment factors.
Further, after the step of comparing the original image data with the standard image pixel by pixel to obtain a plurality of continuous differential pixel points, the method further includes:
and if the pixel number of the plurality of continuous difference pixel points is not greater than the second threshold value, returning to the step of acquiring the original image acquired by the camera and the pre-stored standard image and the subsequent step.
Further, after the step of sending the original image to the cloud device if the difference score is greater than a third threshold, the method further includes:
receiving an original image sent by the gateway equipment;
inputting the original image into an abnormal recognition model to obtain a recognition result output by the abnormal recognition model;
If the identification result is abnormal, sending an alarm instruction to the gateway equipment and pushing alarm information to a user terminal;
And the gateway equipment sends a control instruction to the alarm device after receiving the alarm instruction.
Further, the step of inputting the original image into an anomaly recognition model to obtain a recognition result output by the anomaly recognition model includes:
Equally dividing an original image into four partition images;
dividing the partitioned image into a plurality of sub-image areas according to a plurality of dividing scales, and obtaining image coordinates corresponding to the plurality of sub-image areas;
Linearly converting the sub-image region into embedded vectors, and respectively inputting a plurality of embedded vectors corresponding to the four partition images into a feature extraction layer to obtain feature data output by the feature extraction layer;
Downsampling and fusing the feature data corresponding to each of the multiple segmentation scales output by the same feature extraction layer to obtain fused feature data;
Combining a plurality of fusion characteristic data according to the corresponding relation between the fusion characteristic data and the partition image to obtain target characteristic data;
inputting the target feature data into a global feature extraction layer to obtain final feature data;
and inputting the final characteristic data into a full-connection layer and a classifier to obtain the identification result.
Further, the feature extraction layer comprises a plurality of Transformer blocks, each Transformer block comprising a plurality of Transformer layers, and each Transformer layer is used for processing a different embedded vector;
wherein each of the Transformer blocks adopts a residual connection structure.
Further, the step of linearly converting the sub-image region into embedded vectors, and respectively inputting a plurality of embedded vectors corresponding to the four partition images into the feature extraction layer to obtain feature data output by the feature extraction layer includes:
inputting a plurality of embedded vectors corresponding to each partition image into a first Transformer block to obtain first characteristic data corresponding to each of the plurality of embedded vectors;
moving the image coordinates corresponding to the plurality of sub-image areas in a preset direction by a preset step length to obtain new image coordinates corresponding to the plurality of embedded vectors respectively;
Extracting second characteristic data corresponding to each of a plurality of new image coordinates according to the corresponding relation between the first characteristic data and the embedded vector; wherein, the blank characteristic value in the second characteristic data is filled by a specific numerical value;
inputting the second characteristic data corresponding to each new image coordinate into a second Transformer block to obtain third characteristic data corresponding to each embedded vector;
And repeatedly executing the step of moving the image coordinates corresponding to the plurality of sub-image areas in the preset direction by the preset step length to obtain new image coordinates corresponding to the plurality of embedded vectors, together with the subsequent steps, until all the Transformer layers have been processed, and then outputting the feature data.
A second aspect of an embodiment of the present invention provides a gateway data processing device based on cloud edge collaboration, including:
the acquisition unit is used for acquiring an original image acquired by the camera and a pre-stored standard image;
The comparison unit is used for comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
The extraction unit is used for extracting a first average gray value and a first gray value median in the original image and extracting a second average gray value and a second gray value median in the standard image if the number of pixels of a plurality of continuous difference pixel points is larger than a second threshold;
A calculating unit configured to calculate a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
the sending unit is used for sending the original image to cloud equipment if the difference score is larger than a third threshold value; the cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
A third aspect of an embodiment of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
The fifth aspect of the embodiment of the invention provides an internet of things monitoring system, which comprises cloud equipment, gateway equipment and a camera;
the camera is used for collecting an original image;
The gateway equipment is used for acquiring an original image acquired by the camera and a pre-stored standard image;
the gateway equipment is used for comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
the gateway device is configured to extract a first average gray value and a first gray value median in the original image, and extract a second average gray value and a second gray value median in the standard image if the number of pixels of the plurality of consecutive differential pixel points is greater than a second threshold;
the gateway device is configured to calculate a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
the gateway device is configured to send the original image to the cloud device if the difference score is greater than a third threshold;
The cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
Compared with the prior art, the embodiments of the present application have the following beneficial effects. According to the application, an original image acquired by the camera and a pre-stored standard image are acquired; the original image is compared with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, where a difference pixel point is a pixel whose value at the same pixel position differs between the original image and the standard image by more than a first threshold; if the number of pixels of the plurality of continuous difference pixel points is greater than a second threshold, a first average gray value and a first gray value median are extracted from the original image, and a second average gray value and a second gray value median are extracted from the standard image; a difference score is calculated from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels; and if the difference score is greater than a third threshold, the original image is sent to the cloud device, which performs anomaly identification on the original image and triggers an alarm flow. In this scheme, because the gateway device cannot afford computationally intensive anomaly identification, it preprocesses the image to estimate the anomaly probability (characterized by the difference score) before deciding whether to send the image data to the cloud device. Only when the computed difference score exceeds the set threshold is the image data sent to the cloud device for further processing; when it does not, no cloud processing is needed. This not only reduces the processing load of the cloud device and avoids unnecessary data transmission, but also improves the response speed and efficiency of the whole system. Through intelligent image processing and a well-designed load distribution mechanism, the performance of the monitoring system in terms of anomaly identification accuracy, processing efficiency, and response speed is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
Fig. 1 shows a schematic flow chart of a gateway data processing method based on cloud edge collaboration provided by the invention;
fig. 2 is a schematic diagram of a gateway data processing device based on cloud edge collaboration according to an embodiment of the present invention;
fig. 3 shows a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The embodiment of the invention provides a gateway data processing method and system based on cloud edge cooperation, which are used for solving the technical problem that a large amount of data to be processed often easily causes long processing time or downtime of cloud equipment.
Firstly, the invention provides a gateway data processing method based on cloud edge cooperation. The gateway data processing method is applied to an Internet of things monitoring system, and the Internet of things monitoring system comprises cloud equipment, gateway equipment and a camera. Referring to fig. 1, fig. 1 shows a schematic flow chart of a gateway data processing method based on cloud edge collaboration provided by the invention. As shown in fig. 1, the gateway data processing method based on cloud edge collaboration may include the following steps:
step 101: acquiring an original image acquired by the camera and a pre-stored standard image;
in the detection system of the Internet of things, cloud equipment is connected with gateway equipment, and the gateway equipment is connected with a camera. After the camera acquires the original image, the original image is sent to the gateway equipment. The gateway device is used for load distribution and determining whether to send the original image to the cloud device for processing.
The pre-stored standard image refers to an image in which no abnormality exists for comparison with the original image. The standard image may be a normal image acquired in advance in different periods, or may be a normal image obtained by executing the monitoring process last time.
Step 102: comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
If there is an abnormality in the original image, the image in a partial region often differs considerably from the standard image. Therefore, to determine whether an abnormality exists, this embodiment compares the original image with the standard image pixel by pixel to accurately identify the difference pixel points; in this way, even small changes can be effectively identified.
Since noise or interference often arises during image acquisition, even normal images show certain differences. To improve accuracy, a second threshold is set to decide whether the original image needs further abnormality judgment. This also ensures that the system triggers the subsequent flow only when the change is sufficiently obvious, which effectively reduces false alarms.
Optionally, step A follows step 102: if the number of pixels of the plurality of consecutive difference pixel points is not greater than the second threshold, the method returns to the step of acquiring the original image acquired by the camera and the pre-stored standard image, and continues with the subsequent steps (i.e., steps 101 to 105 are executed again).
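To make steps 101 and 102 concrete, the following is a minimal gateway-side sketch, not the patented implementation: it assumes grayscale NumPy arrays of equal size and uses SciPy's `ndimage.label` (with 8-connectivity) to group contiguous difference pixels; the helper name `largest_diff_region` is illustrative.

```python
import numpy as np
from scipy import ndimage

def largest_diff_region(original: np.ndarray, standard: np.ndarray,
                        first_threshold: int = 25) -> int:
    """Pixel-by-pixel comparison (step 102): return the pixel count of the
    largest connected region whose gray values differ by > first_threshold."""
    diff = np.abs(original.astype(np.int16) - standard.astype(np.int16))
    mask = diff > first_threshold
    labels, n_regions = ndimage.label(mask, structure=np.ones((3, 3)))
    if n_regions == 0:
        return 0
    region_sizes = np.bincount(labels.ravel())[1:]  # drop background label 0
    return int(region_sizes.max())
```

Whether this count exceeds the second threshold then decides whether the gray-value statistics of step 103 are extracted at all.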
Step 103: if the number of pixels of the plurality of continuous difference pixel points is larger than a second threshold value, extracting a first average gray value and a first gray value median in the original image, and extracting a second average gray value and a second gray value median in the standard image;
when the number of pixels of the plurality of continuous difference pixel points is larger than a second threshold value, calculating a difference score by acquiring a first average gray value and a first gray value median in the original image and a second average gray value and a second gray value median in the standard image. The difference score is used to characterize the magnitude of the probability of anomalies occurring in the original image. Only when the identified difference score exceeds a set threshold, the image data is sent to the cloud device for further processing. The method not only reduces the processing load of the cloud equipment and avoids unnecessary data transmission, but also ensures that the cloud resources are more efficiently used for processing events with higher abnormal probability, thereby improving the response speed and efficiency of the whole system.
Step 104: calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median and the number of pixels;
Specifically, step 104 specifically includes: substituting the first average gray value, the first gray value median, the second average gray value, the second gray value median and the pixel number into a preset formula to obtain the difference score;
The preset formula is as follows:

$$S = w \cdot \sqrt{\Delta\mu^{2} + \Delta m^{2}};$$

wherein

$$\Delta\mu = \mu_1 - \mu_2, \qquad \Delta m = m_1 - m_2, \qquad w = \alpha + \beta \cdot N;$$

wherein $S$ represents the difference score, $\mu_1$ the first average gray value, $m_1$ the median of the first gray value, $\mu_2$ the second average gray value, $m_2$ the median of the second gray value, $N$ the number of pixels, $w$ the difference intensity weight, and $\alpha$ and $\beta$ the adjustment factors.
It is to be noted that $\alpha$ and $\beta$ are adjustment parameters for tuning the sensitivity of the difference score according to the importance of the number of difference pixels in the actual application: $\alpha$ controls the degree of influence of the difference intensity weight, while $\beta$ further adjusts the score variation caused by an increase in the number of difference pixels.
This formula balances the effects of the average gray difference and the median gray difference by squaring them and taking the square root, while the difference intensity weight $w$ and the adjustment factors ensure that the difference score reasonably reflects the change when the number of difference pixels increases significantly. At the same time, by adjusting $\alpha$ and $\beta$, the sensitivity of the score can be fine-tuned to different application requirements.
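Under the reconstructed formula above, the difference score of steps 103 and 104 could be computed as follows; this is a sketch, and the `alpha` and `beta` defaults are arbitrary illustration values.

```python
import numpy as np

def difference_score(original: np.ndarray, standard: np.ndarray,
                     n_diff_pixels: int,
                     alpha: float = 1.0, beta: float = 0.001) -> float:
    """S = w * sqrt((mu1 - mu2)^2 + (m1 - m2)^2) with w = alpha + beta * N,
    per the reconstruction of the preset formula above."""
    mu1, m1 = float(original.mean()), float(np.median(original))  # step 103
    mu2, m2 = float(standard.mean()), float(np.median(standard))
    w = alpha + beta * n_diff_pixels        # difference intensity weight
    return w * float(np.hypot(mu1 - mu2, m1 - m2))
```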
Step 105: if the difference score is greater than a third threshold, the original image is sent to the cloud device; the cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
And if the difference score is greater than a third threshold, sending the original image to the cloud device. If the difference score is not greater than the third threshold, the original image is not required to be sent, and the steps 101 to 105 are repeatedly executed according to the preset monitoring frequency.
Only when the identified difference score exceeds a set third threshold, the image data is sent to the cloud device for further processing. The method not only reduces the processing load of the cloud equipment and avoids unnecessary data transmission, but also ensures that the cloud resources are more efficiently used for processing events with higher abnormal probability, thereby improving the response speed and efficiency of the whole system.
When the cloud device receives the original image, it performs high-precision identification; once an abnormality is identified, it triggers the alarm process, which includes sending an alarm instruction to the gateway device and pushing alarm information to the user terminal. After receiving the alarm instruction, the gateway device sends a control instruction to the alarm device, where the control instruction is used to control a sound-and-light unit to emit the corresponding alarm.
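Putting the pieces together, a hedged sketch of the gateway's monitoring loop (steps 101 to 105) might look as follows; `camera.capture()` and `cloud.send()` are hypothetical interfaces, and the helpers come from the sketches above.

```python
import time

def gateway_loop(camera, cloud, standard, second_threshold: int,
                 third_threshold: float, period_s: float = 5.0) -> None:
    """Preprocess each frame locally; upload to the cloud only when the
    difference score exceeds the third threshold (steps 101-105)."""
    while True:
        original = camera.capture()                     # step 101 (assumed API)
        n = largest_diff_region(original, standard)     # step 102
        if n > second_threshold:                        # steps 103-104
            if difference_score(original, standard, n) > third_threshold:
                cloud.send(original)                    # step 105 (assumed API)
        time.sleep(period_s)   # repeat at the preset monitoring frequency
```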
In this embodiment, the original image acquired by the camera and the pre-stored standard image are acquired; the original image is compared with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, where a difference pixel point is a pixel whose value at the same pixel position differs between the original image and the standard image by more than a first threshold; if the number of pixels of the plurality of continuous difference pixel points is greater than a second threshold, a first average gray value and a first gray value median are extracted from the original image, and a second average gray value and a second gray value median are extracted from the standard image; a difference score is calculated from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels; and if the difference score is greater than a third threshold, the original image is sent to the cloud device, which performs anomaly identification on the original image and triggers the alarm flow. In this scheme, because the gateway device cannot afford computationally intensive anomaly identification, it preprocesses the image to estimate the anomaly probability (characterized by the difference score) before deciding whether to send the image data to the cloud device. Only when the computed difference score exceeds the set threshold is the image data sent to the cloud device for further processing; when it does not, no cloud processing is needed. This not only reduces the processing load of the cloud device and avoids unnecessary data transmission, but also improves the response speed and efficiency of the whole system. Through intelligent image processing and a well-designed load distribution mechanism, the performance of the monitoring system in terms of anomaly identification accuracy, processing efficiency, and response speed is effectively improved.
Optionally, step 105 is followed by steps 106 to 109:
step 106: receiving an original image sent by the gateway equipment;
step 107: inputting the original image into an abnormal recognition model to obtain a recognition result output by the abnormal recognition model;
specifically, step 107 specifically includes steps 1071 to 1077:
step 1071: equally dividing an original image into four partition images;
Step 1072: dividing the partitioned image into a plurality of sub-image areas according to a plurality of dividing scales, and obtaining image coordinates corresponding to the plurality of sub-image areas;
The anomaly identification model in this embodiment employs Transformer layers, and in a Transformer layer the computational complexity of the self-attention mechanism is proportional to the square of the input sequence length. For image processing tasks, if self-attention were applied directly to the entire image, the amount of computation would be very large, since the image can be regarded as a sequence of a great many pixels. By dividing the image into multiple sub-image regions and performing self-attention calculations independently within each sub-image region, the computational complexity can be significantly reduced, making model training and inference more efficient.
Performing self-attention calculations directly on the entire image consumes a significant amount of memory resources, especially for high resolution images. The regional method reduces the data volume processed simultaneously by limiting the self-attention calculation range, thereby reducing the consumption of the memory and enabling the model to operate efficiently on the existing hardware resources.
Images typically contain a large amount of local information and detail that is important for understanding the overall content of the image. By segmenting the image into multiple sub-image regions, the model can focus on capturing details and local features within each sub-image region, and then gradually integrate this local information to understand the overall image. This bottom-up processing mode better matches the hierarchical structure of visual information.
Dividing the image into sub-image regions also provides design flexibility, for example, the computational efficiency and performance of the model can be balanced by adjusting the size of the window. Smaller sub-image areas may provide greater computational efficiency and sensitivity to details, while larger sub-image areas may capture more extensive context information. In addition, through hierarchical design, the model can use sub-image areas with different sizes on different levels, so that the processing capacity of the model on the features with different scales of images is further improved.
Illustratively, it is assumed that the original image is equally divided into a first divided image, a second divided image, a third divided image, and a fourth divided image. According to the multiple dividing scales, the first partition image is equally divided into multiple sub-image areas, the second partition image is equally divided into multiple sub-image areas, the third partition image is equally divided into multiple sub-image areas, and the fourth partition image is equally divided into multiple sub-image areas.
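One way to realize this partitioning and multi-scale division is sketched below, under the assumption that the image dimensions divide evenly; the helper names `quarter` and `split_into_windows` are illustrative, not from the original.

```python
import numpy as np

def quarter(image: np.ndarray) -> list:
    """Equally divide an H x W (x C) image into four partition images."""
    h2, w2 = image.shape[0] // 2, image.shape[1] // 2
    return [image[:h2, :w2], image[:h2, w2:],
            image[h2:, :w2], image[h2:, w2:]]

def split_into_windows(partition: np.ndarray, size: int) -> list:
    """Divide a partition into size x size sub-image areas, returning each
    area together with its (row, col) image coordinates."""
    h, w = partition.shape[:2]
    return [((r, c), partition[r:r + size, c:c + size])
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

# Multi-scale division, e.g. at window sizes 4, 8 and 16:
# windows = {s: [split_into_windows(p, s) for p in quarter(img)]
#            for s in (4, 8, 16)}
```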
Step 1073: linearly converting the sub-image region into an embedded vector;
illustratively, the specific logic for linearly converting the sub-image region into an embedded vector is as follows:
Let the input image size be H×W×C, where H and W are the height and width of the image and C is the number of channels (for RGB images, C=3). First, the image is divided into sub-image areas of equal size, which may be 2×2, 4×4, 8×8, 16×16, and so on. If 4×4 is chosen as the sub-image area size, the dimension of each sub-image area is 4×4×C. Each sub-image area is then flattened into a one-dimensional vector; continuing the 4×4 example, the flattened dimension is 4×4×C = 48 (for an RGB image with C=3). Each flattened sub-image area is converted into an embedded vector of fixed dimension by one linear layer (a fully connected layer). This linear layer corresponds to a learnable matrix that maps input vectors of dimension 4×4×C onto a new dimension D, the predetermined embedding dimension; the mapping is realized as a matrix multiplication plus a bias. For example, for a 224×224×3 image with 4×4 sub-image areas, the embedding dimension may be set to D = 96.
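A sketch of this linear conversion in PyTorch follows; the module name and the 4×4 patch / D = 96 defaults come from the example above, and this is an illustration rather than the patent's exact layer. With a 224×224 image quartered into 112×112 partitions, each partition yields 28×28 = 784 embedded vectors.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Flatten p x p sub-image areas and project them to embedding dim D."""
    def __init__(self, patch: int = 4, channels: int = 3, dim: int = 96):
        super().__init__()
        self.patch = patch
        # learnable matrix plus bias, mapping p*p*C -> D
        self.proj = nn.Linear(patch * patch * channels, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        p = self.patch
        # (B, C, H/p, p, W/p, p) -> (B, H/p, W/p, p, p, C) -> flatten patches
        x = x.reshape(b, c, h // p, p, w // p, p).permute(0, 2, 4, 3, 5, 1)
        x = x.reshape(b, (h // p) * (w // p), p * p * c)
        return self.proj(x)                               # (B, N_patches, D)

# e.g. PatchEmbedding()(torch.rand(1, 3, 112, 112)).shape == (1, 784, 96)
```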
Step 1074: respectively inputting a plurality of embedded vectors corresponding to the four partition images into a feature extraction layer to obtain feature data output by the feature extraction layer.
Specifically, step 1074 specifically includes steps B1 to B5:
Step B1: inputting a plurality of embedded vectors corresponding to each partition image into a first Transformer block to obtain first characteristic data corresponding to each of the plurality of embedded vectors;
Each Transformer block comprises a plurality of Transformer layers, each of which processes a different embedded vector; that is, the multiple embedded vectors are handled by different Transformer layers.
Step B2: moving the image coordinates corresponding to the plurality of sub-image areas in a preset direction by a preset step length to obtain new image coordinates corresponding to the plurality of embedded vectors respectively;
The preset step length refers to the number of pixels moved each time. The preset direction may be left, right, up, or down.
Step B3: extracting second characteristic data corresponding to each of a plurality of new image coordinates according to the corresponding relation between the first characteristic data and the embedded vector; wherein, the blank characteristic value in the second characteristic data is filled by a specific numerical value;
illustratively, the logic for moving the sub-image region to obtain the second feature data is as follows:
Assume the preset step length is 1 pixel and the preset direction is right, and assume the feature data corresponding to the original image is (the following matrix is only a simplified example):

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix}.$$

The feature data corresponding to the four sub-image areas are then $\begin{pmatrix} 1 & 2 \\ 5 & 6 \end{pmatrix}$, $\begin{pmatrix} 3 & 4 \\ 7 & 8 \end{pmatrix}$, $\begin{pmatrix} 9 & 10 \\ 13 & 14 \end{pmatrix}$ and $\begin{pmatrix} 11 & 12 \\ 15 & 16 \end{pmatrix}$. After moving 1 pixel to the right and filling the blank positions with 0, the feature data corresponding to the four sub-image areas become $\begin{pmatrix} 0 & 1 \\ 0 & 5 \end{pmatrix}$, $\begin{pmatrix} 2 & 3 \\ 6 & 7 \end{pmatrix}$, $\begin{pmatrix} 0 & 9 \\ 0 & 13 \end{pmatrix}$ and $\begin{pmatrix} 10 & 11 \\ 14 & 15 \end{pmatrix}$.
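A minimal sketch of this shift-and-fill step, assuming the feature map is held as a NumPy array with the values from the simplified example above; `shift_right` is a hypothetical helper, not named in the original.

```python
import numpy as np

def shift_right(features: np.ndarray, step: int = 1) -> np.ndarray:
    """Shift a feature map right by `step` pixels along the width axis,
    filling vacated positions with 0; the fixed window grid then reads
    shifted content, as in the worked example above."""
    shifted = np.zeros_like(features)
    shifted[:, step:] = features[:, :-step]
    return shifted

grid = np.arange(1, 17).reshape(4, 4)  # the simplified 4x4 example above
print(shift_right(grid))               # left column becomes zeros
```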
It will be appreciated that where only information internal to a single sub-image region is of interest, the field of view of the model is limited to the single sub-image region. Such limitations may prevent the model from understanding a greater range of contextual relationships in the image. By passing information between adjacent sub-image regions, the model is able to "see" and integrate the relationships between more distant pixels, which is important for understanding complex patterns and structures in the image. Information transfer between adjacent windows can help the model better identify and locate objects. As information flows between different sub-image regions, the model may learn and fuse features from multiple sub-image regions, which may not only enhance the model's understanding of features within the current sub-image region, but may also enrich and expand those features by combining information of neighboring sub-image regions. This diversification of features enables the model to handle a variety of complex visual scenarios more flexibly and robustly. The alternate changing of the position of the sub-image areas by the moving sub-image area mechanism corresponds to the establishment of a connection across the sub-image areas between different layers. The design promotes information exchange between low-level and high-level features, helps the model to abstract deep features and simultaneously can keep clues of detail information, thereby improving the overall representation capability of the model.
Step B4: inputting the second characteristic data corresponding to each new image coordinate into a second Transformer block to obtain third characteristic data corresponding to each embedded vector;
step B5: and repeatedly executing the step of moving the image coordinates corresponding to the plurality of sub-image areas to a preset direction and step length to obtain new image coordinates corresponding to the plurality of embedded vectors and the subsequent step until all the transform layers are processed, and outputting the feature data.
For subsequent feature data, steps B2 to B5 are performed cyclically; each time they are executed, the sub-image areas are moved right by 1 pixel, so that the model can learn and fuse features from multiple sub-image areas.
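Combining the pieces, steps B1 to B5 might be orchestrated as below; a hedged sketch in which `blocks` stands for the Transformer blocks as callables over the window-grid features, and `shift_right` is the helper sketched earlier.

```python
def feature_extraction(window_feats, blocks, step: int = 1):
    """Steps B1-B5 sketch: alternate Transformer blocks with rightward,
    zero-filled shifts of the window grid between blocks."""
    feats = blocks[0](window_feats)        # step B1: first Transformer block
    for block in blocks[1:]:
        feats = shift_right(feats, step)   # steps B2-B3: move + zero fill
        feats = block(feats)               # step B4: next Transformer block
    return feats                           # step B5: after all layers
```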
Step 1075: downsampling and fusing the feature data corresponding to each of the multiple segmentation scales output by the same feature extraction layer to obtain fused feature data;
The finer-scale feature data are downsampled to obtain feature data consistent in size with the coarser-scale feature data, and the two are then fused to obtain the fused feature data.
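A minimal sketch of one such downsample-and-fuse step follows; element-wise addition is one plausible fusion operator, since the patent does not fix it.

```python
import torch
import torch.nn.functional as F

def fuse_scales(fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
    """Downsample the finer-scale feature map (B, C, Hf, Wf) to the coarser
    resolution (B, C, Hc, Wc), then fuse by element-wise addition."""
    down = F.adaptive_avg_pool2d(fine, coarse.shape[-2:])
    return down + coarse
```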
Step 1076: combining a plurality of fusion characteristic data according to the corresponding relation between the fusion characteristic data and the partition image to obtain target characteristic data;
and combining the corresponding fusion characteristic data according to the sequence of the partition images to obtain target characteristic data.
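The combination of step 1076 can then be sketched as stitching the four partitions' fused feature maps back together in their original spatial order; the function name is illustrative.

```python
import torch

def combine_partitions(tl: torch.Tensor, tr: torch.Tensor,
                       bl: torch.Tensor, br: torch.Tensor) -> torch.Tensor:
    """Reassemble top-left/top-right/bottom-left/bottom-right fused
    feature maps into one target feature map (step 1076)."""
    top = torch.cat([tl, tr], dim=-1)        # concatenate along width
    bottom = torch.cat([bl, br], dim=-1)
    return torch.cat([top, bottom], dim=-2)  # concatenate along height
```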
Step 1077: inputting the target feature data into a global feature extraction layer to obtain final feature data;
Step 1078: and inputting the final characteristic data into a full-connection layer and a classifier to obtain the identification result.
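Steps 1077 and 1078 could be sketched as below; global average pooling stands in for the global feature extraction layer (the patent's layer is presumably richer), followed by the fully connected layer and classifier.

```python
import torch
import torch.nn as nn

class RecognitionHead(nn.Module):
    """Simplified stand-in for steps 1077-1078: global feature
    extraction, then a fully connected layer and a classifier."""
    def __init__(self, channels: int = 96, num_classes: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)         # global feature stand-in
        self.fc = nn.Linear(channels, num_classes)  # fully connected layer

    def forward(self, target_features: torch.Tensor) -> torch.Tensor:
        x = self.pool(target_features).flatten(1)   # (B, C)
        return self.fc(x).softmax(dim=-1)           # classifier probabilities

# e.g. RecognitionHead()(torch.rand(2, 96, 14, 14)) -> (2, 2) probabilities
```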
In the embodiment, the original image is equally divided into four partition images, and then the partition images are subdivided into a plurality of sub-image areas according to a plurality of segmentation scales, so that the multi-scale segmentation method can more comprehensively capture detail features in the images, and the understanding and analysis capability of the model on each part of the images is enhanced. By performing linear conversion and feature extraction on sub-image areas with different scales and performing downsampling fusion on the obtained feature data, the method can combine the feature advantages under different scales, improve the comprehensive characterization capability of the model on the image, and enable the final target feature data to be more representative and discrimination. The feature processing flow comprises application of a feature extraction layer and a global feature extraction layer, and key information of the image is effectively extracted and integrated, so that powerful feature support is provided for subsequent classification and recognition tasks. The final characteristic data after fine processing is input into the full-connection layer and the classifier, so that the high-precision identification of the image abnormality can be realized. The method utilizes the strong learning ability of the deep network structure, and improves the adaptability and the recognition accuracy of the model in a complex environment. In summary, the embodiment remarkably improves the efficiency and accuracy of anomaly identification through careful image segmentation, multi-scale feature fusion, efficient feature processing and accurate anomaly identification flow, and is suitable for various occasions requiring high-precision image analysis.
Step 108: if the identification result is abnormal, sending an alarm instruction to the gateway equipment and pushing alarm information to a user terminal;
step 109: and the gateway equipment sends a control instruction to the alarm device after receiving the alarm instruction.
In this embodiment, by efficiently receiving and processing the original image sent by the gateway device and performing deep analysis on the image with the anomaly identification model, abnormalities in the image can be accurately identified. Once the recognition result indicates an abnormality, the system automatically performs two key operations: sending an alarm instruction to the gateway device and pushing real-time alarm information to the user terminal. This flow ensures that, in the event of an abnormal situation, the relevant personnel or systems are notified quickly and effectively and can take the necessary countermeasures. In addition, after receiving the alarm instruction, the gateway device further sends a control instruction to the connected alarm device to trigger a physical alarm such as sound or light and draw the attention of on-site personnel, further strengthening the emergency response capability for abnormal situations. This automatic abnormality identification and alarm response mechanism greatly improves the response speed and processing efficiency of the safety monitoring system.
Referring to fig. 2, fig. 2 shows a schematic diagram of the gateway data processing device based on cloud-edge collaboration provided by the present invention. The gateway data processing device based on cloud-edge collaboration includes:
An acquiring unit 21, configured to acquire an original image acquired by a camera and a pre-stored standard image;
A comparison unit 22, configured to perform pixel-by-pixel comparison on the original image and the standard image, so as to obtain a plurality of continuous differential pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
An extracting unit 23, configured to extract a first average gray value and a first gray value median in the original image and extract a second average gray value and a second gray value median in the standard image if the number of pixels of the plurality of consecutive differential pixel points is greater than a second threshold;
a calculating unit 24 for calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
A sending unit 25, configured to send the original image to a cloud device if the difference score is greater than a third threshold; the cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
According to the gateway data processing device based on cloud edge cooperation provided by the invention, an original image acquired by the camera and a pre-stored standard image are acquired; the original image is compared with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, where a difference pixel point is a pixel whose value at the same pixel position differs between the original image and the standard image by more than a first threshold; if the number of pixels of the plurality of continuous difference pixel points is greater than a second threshold, a first average gray value and a first gray value median are extracted from the original image, and a second average gray value and a second gray value median are extracted from the standard image; a difference score is calculated from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels; and if the difference score is greater than a third threshold, the original image is sent to the cloud device, which performs anomaly identification on the original image and triggers the alarm flow. In this scheme, because the gateway device cannot afford computationally intensive anomaly identification, it preprocesses the image to estimate the anomaly probability (characterized by the difference score) before deciding whether to send the image data to the cloud device. Only when the computed difference score exceeds the set threshold is the image data sent to the cloud device for further processing; when it does not, no cloud processing is needed. This not only reduces the processing load of the cloud device and avoids unnecessary data transmission, but also improves the response speed and efficiency of the whole system. Through intelligent image processing and a well-designed load distribution mechanism, the performance of the monitoring system in terms of anomaly identification accuracy, processing efficiency, and response speed is effectively improved.
Fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 3, the terminal device 3 of this embodiment includes: a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30, for example a cloud-edge collaborative gateway data processing program. When executing the computer program 32, the processor 30 implements the steps of the embodiments of the gateway data processing method based on cloud-edge collaboration, such as steps 101 to 105 shown in fig. 1; alternatively, when executing the computer program 32, the processor 30 performs the functions of the units in the device embodiments described above, such as the functions shown in fig. 2.
By way of example, the computer program 32 may be divided into one or more units, which are stored in the memory 31 and executed by the processor 30 to complete the present invention. The one or more units may be a series of computer program instruction segments capable of performing a specific function describing the execution of the computer program 32 in the one terminal device 3. For example, the computer program 32 may be partitioned into units having the following specific functions:
the acquisition unit is used for acquiring an original image acquired by the camera and a pre-stored standard image;
The comparison unit is used for comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points; the difference pixel points are pixel points with pixel value differences of the same pixel positions in the original image and the standard image being larger than a first threshold value;
The extraction unit is used for extracting a first average gray value and a first gray value median in the original image and extracting a second average gray value and a second gray value median in the standard image if the number of pixels of a plurality of continuous difference pixel points is larger than a second threshold;
A calculating unit configured to calculate a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
the sending unit is used for sending the original image to cloud equipment if the difference score is larger than a third threshold value; the cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow.
The terminal device 3 may include, but is not limited to, a processor 30 and a memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the terminal device 3 and does not limit it; the device may include more or fewer components than shown, or combine certain components, or use different components. For example, the terminal device may also include input and output devices, network access devices, buses, and so on.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the terminal device 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used to store the computer program and other programs and data required by the gateway data processing device; it may also be used to temporarily store data that has been output or is to be output.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present invention also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present invention provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer Memory, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if [a described condition or event] is monitored" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon monitoring [the described condition or event]", or "in response to monitoring [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims are used to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (9)
1. A gateway data processing method based on cloud edge cooperation, characterized in that the method is applied to an Internet of things monitoring system comprising a cloud device, a gateway device, and a camera, and comprises the following steps:
acquiring an original image acquired by the camera and a pre-stored standard image;
comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, wherein the difference pixel points are pixel points whose pixel values at the same pixel position in the original image and the standard image differ by more than a first threshold;
if the number of pixels of the plurality of continuous difference pixel points is larger than a second threshold, extracting a first average gray value and a first gray value median in the original image, and extracting a second average gray value and a second gray value median in the standard image;
calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
if the difference score is greater than a third threshold, sending the original image to the cloud device, wherein the cloud device is used for performing anomaly identification on the original image and triggering an alarm flow;
the step of calculating a difference score from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels includes:
Substituting the first average gray value, the first gray value median, the second average gray value, the second gray value median and the pixel number into a preset formula to obtain the difference score;
The preset formula is as follows (the formula and its defining symbols appear as images in the source and are not reproduced here):

[formula image]

wherein the difference score is expressed as a function of the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels, together with a difference intensity weight and two adjustment coefficients, one of which takes the form 1 + log(1 + ·).
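For illustration only, and not as a statement of the claimed formula: the formula images above are unrecoverable from the text, so `difference_score` below is a hypothetical stand-in that merely uses the same inputs (gray means and medians of both images, the differing-pixel count, a difference intensity weight `w`, and adjustment coefficients `a` and `b`, with one term of the recoverable form 1 + log(1 + n)). A minimal Python sketch of the edge-side screening of claim 1, with thresholds `t1`, `t2`, `t3` as deployment-specific assumptions:

```python
import numpy as np

def diff_mask(original: np.ndarray, standard: np.ndarray, t1: int) -> np.ndarray:
    """Mask of pixels whose gray values at the same position differ by more than t1."""
    return np.abs(original.astype(np.int32) - standard.astype(np.int32)) > t1

def gray_stats(img: np.ndarray) -> tuple[float, float]:
    """Average gray value and gray value median of a grayscale image."""
    return float(img.mean()), float(np.median(img))

def difference_score(mu1, med1, mu2, med2, n_pixels, w=1.0, a=1.0, b=1.0):
    # Illustrative stand-in, NOT the patented formula: weighted mean and
    # median gaps scaled by a pixel-count term of the form 1 + log(1 + n),
    # the only fragment recoverable from the original formula images.
    size_term = 1.0 + np.log(1.0 + n_pixels)
    return w * (a * abs(mu1 - mu2) + b * abs(med1 - med2)) * size_term

def screen(original, standard, t1=25, t2=500, t3=40.0) -> bool:
    """Edge-side screening per claim 1; True means upload the image to the cloud.

    Connected-component analysis of the 'continuous' difference pixel points
    is omitted for brevity; the raw count of differing pixels is used instead.
    """
    mask = diff_mask(original, standard, t1)
    n = int(mask.sum())
    if n <= t2:                      # too few differing pixels: keep sampling
        return False
    mu1, med1 = gray_stats(original)
    mu2, med2 = gray_stats(standard)
    return difference_score(mu1, med1, mu2, med2, n) > t3
```

This screening is where the cloud edge cooperation pays off: the gateway uploads an image only when the difference score clears the third threshold, so routine frames consume neither uplink bandwidth nor cloud inference.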
2. The gateway data processing method based on cloud edge cooperation according to claim 1, wherein after the step of comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, the method further comprises:
if the number of pixels of the plurality of continuous difference pixel points is not greater than the second threshold, returning to the step of acquiring the original image acquired by the camera and the pre-stored standard image and the subsequent steps.
3. The gateway data processing method based on cloud edge cooperation according to claim 1, wherein after the step of sending the original image to the cloud device if the difference score is greater than the third threshold, the method further comprises:
receiving, by the cloud device, the original image sent by the gateway device;
inputting the original image into an anomaly recognition model to obtain a recognition result output by the anomaly recognition model;
if the recognition result is abnormal, sending an alarm instruction to the gateway device and pushing alarm information to a user terminal;
wherein the gateway device sends a control instruction to the alarm device after receiving the alarm instruction.
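For illustration only: a minimal sketch of the cloud-side half of claim 3. `recognize`, `send_to_gateway`, and `push_to_user` are hypothetical stubs standing in for the anomaly recognition model and the two notification channels; none of these names come from the patent.

```python
# Hypothetical cloud-side handler mirroring claim 3: run the anomaly
# recognition model on the uploaded image and, on an abnormal result,
# send an alarm instruction to the gateway (which then drives the alarm
# device) and push alarm information to the user terminal.
def handle_upload(image, recognize, send_to_gateway, push_to_user) -> str:
    result = recognize(image)                      # anomaly recognition model
    if result == "abnormal":
        send_to_gateway({"cmd": "alarm"})          # gateway -> alarm device
        push_to_user({"event": "abnormal image detected"})
    return result
```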
4. The gateway data processing method based on cloud edge collaboration according to claim 3, wherein the step of inputting the original image into the anomaly recognition model to obtain the recognition result output by the anomaly recognition model comprises:
equally dividing the original image into four partition images;
dividing each partition image into a plurality of sub-image areas according to a plurality of segmentation scales, and obtaining image coordinates corresponding to the plurality of sub-image areas;
linearly converting each sub-image area into an embedded vector;
respectively inputting the plurality of embedded vectors corresponding to the four partition images into a feature extraction layer to obtain feature data output by the feature extraction layer;
downsampling and fusing the feature data corresponding to each of the plurality of segmentation scales output by the same feature extraction layer to obtain fused feature data;
combining the plurality of fused feature data according to the correspondence between the fused feature data and the partition images to obtain target feature data;
inputting the target feature data into a global feature extraction layer to obtain final feature data;
and inputting the final feature data into a fully connected layer and a classifier to obtain the recognition result.
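For illustration only: a minimal NumPy sketch of the front half of claim 4, covering the quartering of the original image, the multi-scale division into sub-image areas with coordinates, and the linear conversion to embedded vectors. The projection matrix `proj` (one per scale, since patch sizes differ across scales) is a hypothetical stand-in for learned weights; the feature extraction, fusion, and classification stages are omitted.

```python
import numpy as np

def quarter(image: np.ndarray) -> list[np.ndarray]:
    """Equally divide the original image into four partition images (2 x 2 grid)."""
    h, w = image.shape[:2]
    return [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
            image[h // 2:, :w // 2], image[h // 2:, w // 2:]]

def patchify(part: np.ndarray, scale: int):
    """Divide a partition image into scale x scale sub-image areas plus coordinates."""
    h, w = part.shape[:2]
    ph, pw = h // scale, w // scale
    patches, coords = [], []
    for i in range(scale):
        for j in range(scale):
            patches.append(part[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw])
            coords.append((i, j))
    return patches, coords

def embed(patch: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Linearly convert a flattened sub-image area into an embedded vector.

    proj has shape (patch_height * patch_width, embed_dim) for the patch's scale.
    """
    return patch.reshape(-1).astype(np.float32) @ proj
```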
5. The cloud edge collaboration-based gateway data processing method of claim 4, wherein the feature extraction layer comprises a plurality of Transformer blocks, each Transformer block comprising a plurality of Transformer layers, and each Transformer layer is used to process a different embedded vector;
wherein each of the Transformer blocks adopts a residual connection structure.
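For illustration only: a minimal sketch of the residual connection structure named in claim 5, with `attn` and `mlp` as hypothetical stand-ins for trained self-attention and feed-forward sublayers (layer normalization omitted for brevity).

```python
import numpy as np

def transformer_layer(x: np.ndarray, attn, mlp) -> np.ndarray:
    """One Transformer layer with the residual connection structure of claim 5."""
    x = x + attn(x)  # residual connection around the attention sublayer
    x = x + mlp(x)   # residual connection around the feed-forward sublayer
    return x

def transformer_block(x: np.ndarray, layers) -> np.ndarray:
    """A Transformer block as a stack of (attn, mlp) Transformer layers."""
    for attn, mlp in layers:
        x = transformer_layer(x, attn, mlp)
    return x
```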
6. The gateway data processing method based on cloud edge collaboration according to claim 5, wherein the step of respectively inputting the plurality of embedded vectors corresponding to the four partition images into the feature extraction layer to obtain the feature data output by the feature extraction layer comprises:
inputting the plurality of embedded vectors corresponding to each partition image into a first Transformer block to obtain first feature data corresponding to each of the plurality of embedded vectors;
shifting the image coordinates corresponding to the plurality of sub-image areas in a preset direction by a preset step length to obtain new image coordinates corresponding to each of the plurality of embedded vectors;
extracting second feature data corresponding to each of the plurality of new image coordinates according to the correspondence between the first feature data and the embedded vectors, wherein blank feature values in the second feature data are filled with a specific value;
inputting the second feature data corresponding to each new image coordinate into a second Transformer block to obtain third feature data corresponding to each embedded vector;
and repeating the step of shifting the image coordinates corresponding to the plurality of sub-image areas in the preset direction by the preset step length to obtain new image coordinates and the subsequent steps until all the Transformer layers have been processed, and outputting the feature data.
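For illustration only: a minimal sketch of the coordinate-shifting step of claim 6, which reads like the shifted-window idea in Swin-style Transformers. The preset direction, step length, grid size, and fill value below are assumptions; the claim fixes none of them.

```python
import numpy as np

def shift_coords(coords, direction=(0, 1), step=1):
    """Move sub-image coordinates in a preset direction by a preset step length."""
    di, dj = direction
    return [(i + di * step, j + dj * step) for i, j in coords]

def regather(features, new_coords, grid=4, fill=0.0):
    """Re-index per-patch feature vectors at their shifted coordinates.

    Slots left blank by coordinates shifted off the grid are filled with a
    specific value (`fill`), as claim 6 prescribes.
    """
    dim = features[0].shape[0]
    out = [np.full(dim, fill, dtype=np.float32) for _ in range(grid * grid)]
    for feat, (i, j) in zip(features, new_coords):
        if 0 <= i < grid and 0 <= j < grid:
            out[i * grid + j] = feat
    return np.stack(out)
```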
7. A gateway data processing apparatus based on cloud edge collaboration, characterized in that the apparatus comprises:
an acquisition unit, used for acquiring an original image acquired by the camera and a pre-stored standard image;
a comparison unit, used for comparing the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, wherein the difference pixel points are pixel points whose pixel values at the same pixel position in the original image and the standard image differ by more than a first threshold;
an extraction unit, used for extracting a first average gray value and a first gray value median in the original image and extracting a second average gray value and a second gray value median in the standard image if the number of pixels of the plurality of continuous difference pixel points is larger than a second threshold;
a calculating unit, used for calculating a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
a sending unit, used for sending the original image to the cloud device if the difference score is greater than a third threshold, wherein the cloud device is used for performing anomaly identification on the original image and triggering an alarm flow;
the step of calculating a difference score from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels includes:
Substituting the first average gray value, the first gray value median, the second average gray value, the second gray value median and the pixel number into a preset formula to obtain the difference score;
The preset formula is as follows (the formula and its defining symbols appear as images in the source and are not reproduced here):

[formula image]

wherein the difference score is expressed as a function of the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels, together with a difference intensity weight and two adjustment coefficients, one of which takes the form 1 + log(1 + ·).
8. A terminal device, characterized in that the device comprises: a memory, a processor, and a cloud edge cooperation gateway data program stored on the memory and executable on the processor, the cloud edge cooperation gateway data program being configured to implement the steps of the cloud edge cooperation-based gateway data processing method according to any one of claims 1 to 6.
9. An Internet of things monitoring system, characterized by comprising a cloud device, a gateway device, and a camera;
the camera is configured to collect an original image;
the gateway device is configured to acquire the original image collected by the camera and a pre-stored standard image;
the gateway device is configured to compare the original image with the standard image pixel by pixel to obtain a plurality of continuous difference pixel points, wherein the difference pixel points are pixel points whose pixel values at the same pixel position in the original image and the standard image differ by more than a first threshold;
the gateway device is configured to extract a first average gray value and a first gray value median in the original image, and extract a second average gray value and a second gray value median in the standard image if the number of pixels of the plurality of consecutive differential pixel points is greater than a second threshold;
the gateway device is configured to calculate a difference score according to the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels;
the gateway device is configured to send the original image to the cloud device if the difference score is greater than a third threshold;
The cloud device is used for carrying out anomaly identification on the original image and triggering an alarm flow;
the step of calculating a difference score from the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels includes:
Substituting the first average gray value, the first gray value median, the second average gray value, the second gray value median and the pixel number into a preset formula to obtain the difference score;
The preset formula is as follows (the formula and its defining symbols appear as images in the source and are not reproduced here):

[formula image]

wherein the difference score is expressed as a function of the first average gray value, the first gray value median, the second average gray value, the second gray value median, and the number of pixels, together with a difference intensity weight and two adjustment coefficients, one of which takes the form 1 + log(1 + ·).
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410487115.1A (CN118097192B) | 2024-04-23 | 2024-04-23 | Gateway data processing method and system based on cloud edge cooperation
Publications (2)

Publication Number | Publication Date
---|---
CN118097192A | 2024-05-28
CN118097192B | 2024-07-19
Family

ID=91144083

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202410487115.1A (CN118097192B, active) | Gateway data processing method and system based on cloud edge cooperation | 2024-04-23 | 2024-04-23

Country Status (1)

Country | Link
---|---
CN | CN118097192B (en)
Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2019153739A1 | 2018-02-09 | 2019-08-15 | 深圳壹账通智能科技有限公司 | Identity authentication method, device, and apparatus based on face recognition, and storage medium
CN110675371A | 2019-09-05 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Scene switching detection method and device, electronic equipment and storage medium

Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112365413B | 2020-10-30 | 2024-07-26 | 湖北锐世数字医学影像科技有限公司 | Image processing method, device, equipment, system and computer readable storage medium
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant