CN117132567A - DPI (deep packet inspection) optical splitter detection method, equipment, storage medium and device

DPI (deep packet inspection) optical splitter detection method, equipment, storage medium and device

Info

Publication number
CN117132567A
Authority
CN
China
Prior art keywords
source image
image
dpi
target source
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311091358.5A
Other languages
Chinese (zh)
Inventor
蒲志远
周伟
蒋家驹
吕严
吕艳洁
唐云洁
刘子昂
龚淑蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Original Assignee
China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Zijin Jiangsu Innovation Research Institute Co ltd, China Mobile Communications Group Co Ltd, China Mobile Group Jiangsu Co Ltd filed Critical China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
Priority to CN202311091358.5A priority Critical patent/CN117132567A/en
Publication of CN117132567A publication Critical patent/CN117132567A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention belongs to the technical field of image processing, and discloses a DPI (deep packet inspection) optical splitter detection method, equipment, storage medium and device. An initial source image corresponding to a target DPI optical splitter is aligned with a preset standard image based on a first AprilTag code contained in the initial source image, so as to obtain a target source image; similarity evaluation is performed on a first key region in the target source image and a second key region in the preset standard image to obtain a similarity evaluation result; and whether the target source image is abnormal is detected according to the similarity evaluation result, so as to obtain a detection result.

Description

DPI (deep packet inspection) optical splitter detection method, equipment, storage medium and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a storage medium, and a device for detecting a DPI optical splitter.
Background
The optical splitter of DPI equipment has service data flow identification and service data flow control capabilities and is closely related to the data security of the IDC machine room. Because the DPI optical splitter wiring governs the connection between the network equipment and the IDC core network equipment, the wiring of the DPI optical splitter equipment in the IDC machine room must not go wrong at any time; regular daily inspection is therefore required to ensure that the wiring is correct and to avoid risks such as potential data leakage.
Existing detection methods for the optical splitter of DPI equipment can be divided mainly into two types: manual inspection, and optical splitter detection algorithms without cabinet-door occlusion. 1) Manual inspection is still the method used in most existing machine rooms: dedicated personnel walk through the machine room during fixed time periods and check each optical splitter cabinet to ensure that the optical splitter wiring is unchanged. 2) In the schemes where detection is carried out by an optical splitter detection algorithm without cabinet-door occlusion, the algorithm targets optical splitters that are not blocked by a cabinet door; an unoccluded original image is generally compared with the image to be detected, the comparison is usually performed by a similarity algorithm or a neural network model, and an alarm is issued if a change is found.
However, manual inspection has the following disadvantages: (1) A large-scale machine room often contains devices on the order of tens of thousands, so the labor cost of manual inspection is high, and the low temperature and noise of the machine room make long stays uncomfortable. (2) Manual inspection is prone to missed checks caused by fatigue, limited field of view and the like; the achievable inspection efficiency and frequency are low, and inspection is difficult to carry out at night.
The existing optical splitter detection algorithms without cabinet-door occlusion have the following defects: (1) To prevent unauthorized connections, cabinet doors are installed on existing machine room cabinets, so applying such an algorithm still requires someone to open the cabinet door and capture images for comparison; the algorithm is therefore difficult to apply directly to machine room cabinets whose doors are normally closed. (2) The algorithm judges whether the optical splitters have changed by detecting their number and position; it is strongly constrained by prior features and cannot flexibly cope with other types of changes of DPI optical splitters.
Therefore, the existing DPI optical splitter detection schemes have high inspection cost, are limited by the scene and thus cannot flexibly handle changes of multiple types of DPI optical splitters, and have low inspection efficiency, which affects users.
Disclosure of Invention
The main purpose of the present invention is to provide a DPI (deep packet inspection) optical splitter detection method, equipment, storage medium and device, aiming to solve the technical problems that the existing DPI optical splitter detection schemes have high inspection cost and, being limited by the scene, cannot flexibly cope with changes of multiple types of DPI optical splitters.
In order to achieve the above object, the present invention provides a DPI optical splitter detection method, which includes the steps of:
aligning the target source image with a preset standard image based on a first AprilTag code contained in an initial source image corresponding to a target DPI optical splitter to obtain a target source image;
performing similarity evaluation on a first key region in the target source image and a second key region in the preset standard image to obtain a similarity evaluation result;
and detecting whether the target source image is abnormal according to the similarity evaluation result, and obtaining a detection result.
Optionally, the step of aligning the target source image with a preset standard image based on a first AprilTag code included in an initial source image corresponding to the target DPI optical splitter to obtain the target source image includes:
comparing a first AprilTag code contained in an initial source image corresponding to a target DPI optical splitter with a second AprilTag code contained in a preset standard image to obtain a comparison result;
and aligning the target source image with the preset standard image according to the comparison result to obtain a target source image.
Optionally, the step of comparing the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter with the second AprilTag code contained in the preset standard image to obtain a comparison result includes:
determining the distribution type of the AprilTag codes based on the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter;
selecting a second AprilTag code from a preset standard image according to the distribution type;
comparing the first AprilTag code with the second AprilTag code according to preset corner points to obtain a corner comparison result;
the step of aligning the target source image with the preset standard image according to the comparison result to obtain a target source image comprises the following steps:
calculating a homography matrix according to the characteristic point group contained in the corner comparison result;
and aligning the target source image with the preset standard image based on the homography matrix to obtain a target source image.
Optionally, before the step of performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result, the method further includes:
dividing a first key region in the target source image according to a preset dividing proportion to obtain divided sub-dividing blocks;
performing horizontal and vertical segmentation on the target source image according to boundary corner point information corresponding to the sub-segmentation blocks to obtain segmented sub-image blocks;
Performing binarization processing on the target source image based on a threshold sequence corresponding to the sub-image block to obtain an initial mask;
and denoising the initial mask to obtain the cabinet-door foreground occlusion mask in the first key region.
Optionally, the step of performing similarity evaluation on the first key area in the target source image and the second key area in the preset standard image to obtain a similarity evaluation result includes:
performing similarity evaluation on a first key region in the target source image and a second key region in the preset standard image based on the cabinet-door foreground occlusion mask and a preset similarity calculation formula to obtain a similarity evaluation result;
the preset similarity calculation formula comprises:
wherein S_roi refers to the first key region in the target source image, T_{i,roi} refers to the second key region in the preset standard image, M refers to the cabinet-door foreground occlusion mask, and P is the key-region similarity comparison result.
Optionally, the step of detecting whether the target source image has an abnormality according to the similarity evaluation result, and obtaining a detection result includes:
performing binarization processing on the equipment image blocks contained in the similarity evaluation result based on the Otsu method to obtain binarized image blocks;
Performing open operation on the binarized image block to remove a noise area, and obtaining a denoised binarized image block;
and detecting whether the target source image is abnormal or not according to the denoised binarized image block, and obtaining a detection result.
Optionally, the step of detecting whether the target source image has an abnormality according to the denoised binarized image block, and obtaining a detection result includes:
extracting a similarity map subarea from the denoised binarized image block;
and detecting whether the target source image is abnormal or not based on the similarity map subarea, and obtaining a detection result.
In addition, to achieve the above object, the present invention also proposes a DPI optical splitter detection device comprising a memory, a processor and a DPI optical splitter detection program stored on the memory and executable on the processor, the DPI optical splitter detection program being configured to implement the steps of the DPI optical splitter detection method as described above.
In addition, to achieve the above object, the present invention also proposes a storage medium having stored thereon a DPI optical splitter detection program which, when executed by a processor, implements the steps of the DPI optical splitter detection method as described above.
In addition, to achieve the above object, the present invention also proposes a DPI optical splitter detection device including:
the image alignment module is used for aligning the target source image with a preset standard image based on a first AprilTag code contained in an initial source image corresponding to the target DPI optical splitter to obtain the target source image;
the similarity evaluation module is used for performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result;
and the anomaly detection module is used for detecting whether the target source image is abnormal according to the similarity evaluation result to obtain a detection result.
The method comprises the steps of: aligning a target source image with a preset standard image based on a first AprilTag code contained in an initial source image corresponding to a target DPI optical splitter to obtain the target source image; performing similarity evaluation on a first key region in the target source image and a second key region in the preset standard image to obtain a similarity evaluation result; and detecting whether the target source image is abnormal according to the similarity evaluation result to obtain a detection result.
Drawings
FIG. 1 is a schematic diagram of a DPI optical splitter detection apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of the DPI optical splitter detection method of the present invention;
FIG. 3 is a schematic diagram showing the distribution of AprilTag codes in the first embodiment of the DPI optical splitter detection method of the present invention;
FIG. 4 is a schematic view of the positioning of the key region of interest in the first embodiment of the DPI optical splitter detection method of the present invention;
FIG. 5 is a flow chart of the detection of the passive optical splitter of DPI equipment in an occluded scene in the first embodiment of the DPI optical splitter detection method of the present invention;
FIG. 6 is a block expansion schematic diagram in the second embodiment of the DPI optical splitter detection method of the present invention;
FIG. 7 is a flow chart of a third embodiment of the DPI optical splitter detection method of the present invention;
FIG. 8 is a flowchart of the early-warning judgment in the third embodiment of the DPI optical splitter detection method of the present invention;
FIG. 9 is a structural block diagram of a first embodiment of the DPI optical splitter detection device of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a DPI optical splitter detection device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the DPI optical splitter detection device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display (Display); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface, and in the present invention the wired interface of the user interface 1003 may be a USB interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a non-volatile memory (NVM), such as a disk memory. The memory 1005 may optionally also be a storage device separate from the aforementioned processor 1001.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the DPI optical splitter detection device, and it may include more or fewer components than shown, or certain components may be combined, or the components may be arranged differently.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a DPI optical splitter detection program may be included in the memory 1005 identified as a computer storage medium.
In the DPI optical splitter detection device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting user equipment; and the DPI optical splitter detection device invokes, through the processor 1001, the DPI optical splitter detection program stored in the memory 1005 and executes the DPI optical splitter detection method provided by the embodiment of the invention.
Based on the above hardware structure, an embodiment of the detection method of the DPI optical splitter of the present invention is provided.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a detection method of a DPI optical splitter according to the present invention.
In this embodiment, the DPI optical splitter detection method includes the following steps:
Step S10: And aligning the target source image with a preset standard image based on a first AprilTag code contained in an initial source image corresponding to the target DPI optical splitter to obtain the target source image.
It should be noted that the execution body of this embodiment may be a device that includes an optical splitter abnormality detection system with a DPI optical splitter detection function, such as a computer, tablet, mobile phone or notebook, or any other device capable of achieving the same or similar functions. In this embodiment and the following embodiments, the DPI optical splitter detection method of the present invention is described by taking a computer as an example.
It should be understood that the target DPI optical splitter may refer to a DPI optical splitter in the machine room that requires security detection. The optical splitter may be a device installed in the machine room, and each device in the machine room has its own number. For the cabinet numbered i, a standard plug-wire template image T_i (the preset standard image) is captured in advance at its shooting point. During the inspection at time t, the inspection proceeds along the inspection route to the preset shooting point of cabinet i, an image S of the cabinet is captured (the initial source image), and S is compared with the preset standard image T_i, or with the source image captured at the previous inspection, to judge whether the state of the optical splitter has changed at the current moment, so as to ensure the security of the optical splitter of the DPI equipment in the machine room. In practical applications, whether to compare against the template image T_i or against the image captured at the previous inspection can be chosen according to application requirements; this scheme is described by taking the comparison of S with T_i as an example.
It can be understood that, because the inspection stop point and shooting direction at each capture deviate from the standard position at which the template image was captured, it is difficult to guarantee completely consistent shooting positions, and differences in shooting angle exist; the initial source image therefore needs to be aligned to the preset standard image T_i. In the prior art, feature extraction methods such as SIFT suffer from severe mismatching on cabinet images whose overall gray-level distribution is uniform, so this scheme proposes to use AprilTag codes to obtain high-precision feature points to assist the alignment. An AprilTag code is similar to a quick-response matrix code and can be used for camera calibration, target size estimation, monocular distance measurement and the like.
The first AprilTag code is the AprilTag code physically attached to the cabinet corresponding to the DPI optical splitter, and the second AprilTag code is the AprilTag code at the standard position in the pre-stored template image; performing image alignment based on the first AprilTag code and the second AprilTag code yields more accurate feature point alignment.
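By way of example only, the AprilTag identification described above could be sketched roughly as follows with OpenCV's aruco module; this assumes an OpenCV build (4.7 or later) that ships the AprilTag dictionaries, and the 36h11 tag family and function names are illustrative assumptions rather than part of this scheme.

```python
import cv2

# Illustrative sketch: locate AprilTag corner points with OpenCV's aruco module
# (assumes OpenCV >= 4.7 with AprilTag dictionaries; the 36h11 family is an assumption).
def detect_apriltag_corners(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return {}
    # one (4, 2) array of corner points per detected tag, keyed by tag id
    return {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}
```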
In a specific implementation, the initial source image corresponding to the target DPI optical splitter is aligned with the preset standard image based on the first AprilTag code contained in the initial source image, so as to obtain the aligned target source image.
Further, the step S10 further includes: comparing a first AprilTag code contained in an initial source image corresponding to a target DPI optical splitter with a second AprilTag code contained in a preset standard image to obtain a comparison result; and aligning the target source image with the preset standard image according to the comparison result to obtain a target source image.
It should be noted that, by comparing the first AprilTag code included in the initial source image with the second AprilTag code included in the preset standard image, whether there is a position difference between the initial source image and the preset standard image is determined according to the comparison result, so that the initial source image is aligned to the template image and the aligned target source image is obtained.
Further, according to different machine room requirements, the distribution of the AprilTag codes may be planned differently, so the distribution type of the AprilTag codes in the initial source image needs to be determined. The step of comparing the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter with the second AprilTag code contained in the preset standard image to obtain a comparison result includes: determining the distribution type of the AprilTag codes based on the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter; selecting a second AprilTag code from a preset standard image according to the distribution type; and comparing the first AprilTag code with the second AprilTag code according to preset corner points to obtain a corner comparison result.
It should be noted that the distribution of the AprilTag codes is further illustrated by the AprilTag code distribution diagram shown in fig. 3. To ensure the quality of the alignment, adaptive processing is performed for different distribution situations: the distribution type of the AprilTag codes in the initial source image is determined, the corresponding preset standard image is determined, and the second AprilTag code is selected from that preset standard image; the second AprilTag code is then compared with the first AprilTag code at preset corner points, and whether the initial source image needs to be adjusted is determined according to the corner comparison result. For the three distribution situations shown in fig. 3, the specific alignment steps may include:
Step A: k auxiliary april tag codes are attached to a cabinet door, and firstly, the april tag codes in a source image and a template image are identified and positioned: identifying an initial source imageThe four corner points of the april tag code in the k-th identified april tag code are marked as +.>Identifying template image T i The four corner points of the april tag code in the k-th identified april tag code are marked as +.>
And (B) step (B): obtaining a source image from the step AAnd template image T i AprilTag code identified in k-th group of the set, four pairs of characteristic point pairs +.>Together 4*k sets of feature point pairs can be generated. Solving the characteristic equation of the homography matrix requires at least 4 pairs of characteristic points, and the adaptation solution is carried out for more robustly adapting to different positioning code distribution conditions.
For the case of fig. 3 (a), only a single location code is used to associate cabinets. Such cases can be limited with matching point pairs, using [ A, B, C, D ] in FIG. 3 (a) as the set of feature points for solving the equation.
For the case of fig. 3 (b), there are multiple location codes on a single side of the cabinet door, associating different equipment information within the cabinet. Under the condition, the characteristic point group [ A ] with wider coverage range can be obtained by utilizing the intervals of different rows of april tag codes i ,D i ,B j ,C j ]Or [ B ] j ,C j ,A k ,D k ]. For example, in FIG. 3 (B) [ A1, D1, B2, C2 ] shown by the gray dashed line box ]、[A2,D2,B3,C3]、[B1,C1,A3,D3]Can be used as a set of feature points for solving the equation.
For the case of fig. 3 (c), there are multiple pairs of location codes on both sides of the cabinet door, correlating different kinds of information for different devices within the cabinet. In such a case, the feature point group with wider coverage can be obtained by using the mutual interval of the april tag codes, and meanwhile, the detection error of a single april tag code can be weakened evenly. First by four corner points of april tag codeCalculate the center of gravity F k Gravity center combination of april tag code pairs of different devices [ F i ,F i+1 ,F j ,F j+1 ]As a set of feature points for solving the equation. For example: in FIG. 3 (c) [ F1, F2, F3, F4 ] shown by a gray dashed box]、[F3,F4,F5,F6]、[F1,F2,F5,F6]The method can be used as a characteristic point group for solving an equation, and the corner point comparison result is determined according to the characteristic point group obtained through calculation.
Further, the step of aligning the target source image with the preset standard image according to the comparison result to obtain a target source image includes: calculating a homography matrix according to the characteristic point group contained in the corner comparison result; and aligning the target source image with the preset standard image based on the homography matrix to obtain a target source image.
Using the feature point groups obtained in the above steps as input, the homography matrix H is calculated from the homography relation x' = H·x (in homogeneous coordinates, up to scale), and the source image is aligned to the template image T_i using H; the aligned source image is recorded as the target source image.
Here x and x' denote the homogeneous coordinates of a feature point before and after the homography transformation.
In a specific implementation, because the gray level of the machine room cabinet door is close to that of the equipment inside the cabinet, and the foreground consists of densely and finely connected small holes, the feature point pairs obtained by conventional methods such as SIFT often have low matching accuracy and are few in number; this step is therefore mainly aimed at obtaining high-quality feature point pairs. As a result, more accurate image alignment can be achieved, so that the similarity evaluation in the later stage is carried out on properly aligned target source images, the security detection of the optical splitter is realized, and potential safety hazards are avoided.
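By way of illustration, the homography alignment of steps A and B may be sketched as follows; the RANSAC option and the variable names are illustrative assumptions, not taken from this scheme.

```python
import cv2
import numpy as np

# Illustrative sketch: matched corner point pairs from the two images give the
# homography H, which warps the source image onto the template image.
def align_to_template(source_img, src_pts, tpl_pts, template_shape):
    src_pts = np.asarray(src_pts, dtype=np.float32)   # >= 4 corner points from the source tags
    tpl_pts = np.asarray(tpl_pts, dtype=np.float32)   # the matching points in the template image
    H, _ = cv2.findHomography(src_pts, tpl_pts, method=cv2.RANSAC)
    h, w = template_shape[:2]
    aligned = cv2.warpPerspective(source_img, H, (w, h))
    return aligned, H
```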
Step S20: and carrying out similarity evaluation on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result.
It should be noted that the first key region of the target source image refers to the cabinet door region in the acquired source image; this region contains the image area where the cabinet door and the optical splitter overlap, because the optical splitter is mounted behind the cabinet door and is therefore partially occluded, which leads to the region overlap. The second key region in the preset standard image is the corresponding overlap area of the cabinet door and the optical splitter when the optical splitter interfaces are in their standard connections and the cabinet door is in its standard state. The first and second key regions need to be located before the similarity evaluation is performed: for each template image T_i, the key region of interest T_roi (the second key region) must be extracted from the template image; this region is generally limited to the target equipment area behind the cabinet door and can be adjusted as required in specific implementations. With the extents of T_i and T_{i,roi} known, the corresponding key region S_roi (the first key region) needs to be located in the aligned target source image. To further explain the positioning flow of the key region of interest, reference may be made to the positioning schematic diagram shown in fig. 4; the specific positioning flow is as follows:
FIG. 4 shows examples of two ways of positioning the key region of interest (key region): the outer rectangle in fig. 4 represents the cabinet door, the hexagonal grid area is the key region of interest, and the positioning method can be chosen according to the distribution characteristics of the AprilTag codes. The cabinet door has size W × H, the key region of interest T_{i,roi} has size w × h, and the AprilTag code has size r × r.
Step one: in the case of fig. 4 (1), corresponding to fig. 3 (a), the margins around the AprilTag code are used to locate the key region of interest. Let the horizontal distance from the AprilTag code at the upper-left corner to the left edge of the T_{i,roi} region be d, and its distance from the top of the cabinet door be z.
The side length of the AprilTag code identified in step S20 can then be measured in the image; this fixes the scale between pixels and physical dimensions, from which the left and right horizontal coordinates and the upper and lower boundary vertical coordinates of the key region are further determined using the known distances d and z and the region size w × h.
Step two: the situation in fig. 4 (2) corresponds to the scenario in which the positioning codes of the type (b) and (c) of fig. 3 are associated with devices within the cabinet. The upper edge of the AprilTag code corresponding to each device is arranged to be collinear with the upper edge of the U-position of the device, and the distance T between the right side edge i,roi The left horizontal distance of the region is d.
Similarly, identified in step S20Side length of april tag code in image can be determined
Further determining the left and right horizontal coordinates +.>Andthe vertical coordinate of the upper boundary can be determined by the first positioning code corner point +.>The lower boundary vertical coordinate is determined by the lowest Aprailtag code and cabinet specification +.>Wherein u and u 0 The U-position height of the cabinet and the distance between the bottom U-position baffle and the lower edge of the hexagonal mesh area are respectively corresponding.
Finally, from the aligned target source imageIn terms of the upper left and lower right corner points (x 0 ,y 0 ) And (x) 1 ,y 1 ) Clipping to obtain a focus attention area in the initial source image S>(first critical area).
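By way of illustration only, the idea of fig. 4 (1) may be sketched as follows; since the exact coordinate formulas are not reproduced above, the geometric offsets used here are illustrative assumptions rather than the formulas of this scheme.

```python
import numpy as np

# Illustrative sketch of the FIG. 4(1) idea: the detected tag's pixel side length
# fixes the pixel/physical scale; the known offsets d, z and region size w x h
# then give the crop box. The exact offsets below are assumptions.
def locate_key_region(tag_corners, d, z, w, h, r):
    # tag_corners: (4, 2) pixel corners of the upper-left AprilTag with physical side r
    side_px = np.mean([np.linalg.norm(tag_corners[i] - tag_corners[(i + 1) % 4])
                       for i in range(4)])
    scale = side_px / r                              # pixels per physical unit
    x0 = int(tag_corners[:, 0].max() + d * scale)    # left edge of the key region
    y0 = int(tag_corners[:, 1].min() + z * scale)    # top edge (offset assumed, illustrative)
    return x0, y0, int(x0 + w * scale), int(y0 + h * scale)

# cropping: s_roi = aligned_img[y0:y1, x0:x1]
```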
It can be understood that the similarity between the first key region and the second key region is determined by comparing the first key region with the second key region, so as to judge whether the key region in the target source image is abnormal or not according to the region similarity. The similarity evaluation result includes a similarity between the first key region and the second key region and a similarity image.
Step S30: and detecting whether the target source image is abnormal according to the similarity evaluation result, and obtaining a detection result.
It should be noted that, whether the target source image is abnormal or not is detected through the similarity and the similarity image contained in the similarity evaluation result, and a detection result is obtained, wherein the detection result comprises two situations of abnormal existence and no abnormal existence, and early warning is required for the situation of abnormal existence, so that the safety problem is effectively avoided.
In a specific implementation, to further illustrate the detection flow of the passive optical splitter of the DPI equipment in the occluded scene in this solution, reference may be made to the detection flow chart shown in fig. 5: 101) the source image S is acquired during inspection; 102) the source image is aligned to the standard image; 103) the key regions of interest T_roi and S_roi are located in the standard image T and the source image S; 104) the cabinet-door foreground occlusion mask M is detected; 105) similarity evaluation is performed on T_roi and S_roi and the detection map P is generated; 106) whether an early warning is issued is determined based on the detection map P.
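By way of illustration, the following sketch shows how stages 101) to 106) may chain together in code. `build_point_pairs` is an assumed helper for pairing tag corners between the two images, and the other calls refer to the illustrative sketches given alongside the corresponding steps in this description; none of these names are part of this scheme.

```python
import cv2
import numpy as np

# Illustration of how the FIG. 5 stages 101-106 chain together (helper names assumed).
def inspect_cabinet(source_bgr, template_bgr, roi_box):
    tags = detect_apriltag_corners(source_bgr)                        # 101/102: tag-assisted alignment
    src_pts, tpl_pts = build_point_pairs(tags, template_bgr)          # assumed helper
    aligned, _ = align_to_template(source_bgr, src_pts, tpl_pts, template_bgr.shape)
    x0, y0, x1, y1 = roi_box                                          # 103: key regions T_roi / S_roi
    aligned_gray = cv2.cvtColor(aligned, cv2.COLOR_BGR2GRAY)
    template_gray = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
    s_roi = np.ascontiguousarray(aligned_gray[y0:y1, x0:x1])
    t_roi = np.ascontiguousarray(template_gray[y0:y1, x0:x1])
    mask = foreground_mask(s_roi)                                     # 104: door-foreground occlusion mask M
    p_map, _ = masked_similarity(s_roi, t_roi, mask)                  # 105: similarity map P
    return should_alert(p_map)                                        # 106: early-warning decision
```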
In this embodiment, the target source image is obtained by aligning it with a preset standard image based on a first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter; similarity evaluation is performed on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result; and whether the target source image is abnormal is detected according to the similarity evaluation result to obtain a detection result. Compared with the existing DPI optical splitter detection schemes, which have high inspection cost and are limited by the scene so that changes of multiple types of DPI optical splitters cannot be handled flexibly, this embodiment reduces the inspection cost and can flexibly cope with changes of multiple types of DPI optical splitters.
Based on the first embodiment shown in fig. 2, a second embodiment of the detection method of the DPI optical splitter according to the invention is presented.
In this embodiment, in order to avoid the influence of the cabinet door on the detection of the optical splitter, the influence of the foreground occlusion needs to be eliminated before the similarity evaluation, and before the step S20, the method further includes: dividing a first key region in the target source image according to a preset dividing proportion to obtain divided sub-dividing blocks; performing horizontal and vertical segmentation on the target source image according to boundary corner point information corresponding to the sub-dividing blocks to obtain segmented sub-image blocks; performing binarization processing on the target source image based on a threshold sequence corresponding to the sub-image blocks to obtain an initial mask; and denoising the initial mask to obtain the cabinet-door foreground occlusion mask in the first key region.
It should be noted that, in order to avoid the influence of cabinet-door occlusion when the change of the optical splitter is evaluated and judged by similarity, the foreground area of the cabinet door needs to be located in advance. This scheme provides a block-threshold method to locate the door panel mask; for further explanation, reference may be made to the block expansion schematic diagram shown in fig. 6. The specific implementation is as follows:
Step A: the first key region in the target source image is divided according to a preset dividing proportion to obtain divided sub-blocks. That is, the block division counts a and b in the width and height directions are determined, and from the corner points pt_01 and pt_02 of the first key region the upper, lower, left and right boundaries of the key region of interest are obtained, denoted R_up, R_bottom, R_right and R_left respectively.
The height direction is divided into b parts with [R_up, R_bottom] as the boundary, giving [R_up, …, R_j, …, R_bottom], and the width direction is divided into a parts with [R_right, R_left] as the boundary, giving [R_right, …, R_i, …, R_left]; this yields the boundary corner points of each sub-block within the first key region.
Step B: horizontal and vertical segmentation is carried out on the target source image according to the boundary corner point information corresponding to the sub-blocks to obtain segmented sub-image blocks. In the initial segmentation result of step A, the boundaries of the sub-blocks whose corner points lie on the region boundary are extended to the image boundary, so that the sub-blocks cover the complete area of the source image and the information within the image can be used more fully. As shown in fig. 6, the division of the sub-blocks inside the key region in diagram (a) is expanded into the full-image division of diagram (b), with block boundaries [0, …, R_i, …, W] and [0, …, R_j, …, H], giving the segmentation result over the whole source image.
Step C: the sub-image block list obtained in step B is recorded as {s_j, j = 1, 2, …, a×b}. For each sub-image block s_j the segmentation threshold t_j is obtained using the Otsu method, giving a threshold sequence Th = {t_j, j = 1, 2, …, a×b}.
Step D: the threshold sequence Th is classified using the K-means algorithm, the classes are sorted by the number of samples they contain, and the cluster center of the class with the largest number of samples is selected as the final full-image threshold Th_final.
Step E: the image is binarized using the threshold Th_final to obtain an initial mask M_init; an opening operation is applied to M_init to extract the noise region N; the difference set of M_init and N gives the cabinet-door foreground area mask; and, using the corner points pt_01 and pt_02, the occlusion mask of the key region of interest is obtained.
In this embodiment, the step S20 further includes: And performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image based on the cabinet-door foreground occlusion mask and a preset similarity calculation formula to obtain a similarity evaluation result.
It should be noted that the occlusion mask of the key region of interest of the source image has already been obtained in the foregoing steps, so a similarity evaluation based on structural similarity can be applied to S_roi and T_{i,roi}. Because the cabinet-door foreground does not belong to the target area, the influence of the cabinet door is further removed using the binary foreground mask.
The preset similarity calculation formula comprises:
wherein S_roi refers to the first key region in the target source image, T_{i,roi} refers to the second key region in the preset standard image, M refers to the cabinet-door foreground occlusion mask, and P is the key-region similarity comparison result.
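Because the exact formula is not reproduced above, the following is only an illustrative stand-in using a structural-similarity map from scikit-image together with the foreground mask; it is not asserted to be the formula of this scheme.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Illustrative stand-in for the masked similarity evaluation: a per-pixel SSIM map
# is computed, and pixels covered by the door-foreground occlusion mask are excluded.
def masked_similarity(s_roi, t_roi, door_mask):
    _, ssim_map = structural_similarity(s_roi, t_roi, full=True)
    keep = door_mask == 0                     # pixels not occluded by the cabinet-door foreground
    p_map = np.where(keep, ssim_map, 1.0)     # occluded pixels are treated as "unchanged"
    mean_sim = float(ssim_map[keep].mean()) if keep.any() else 1.0
    return p_map, mean_sim
```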
In this embodiment, the target source image is obtained by aligning it with a preset standard image based on a first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter; similarity evaluation is performed on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result; and whether the target source image is abnormal is detected according to the similarity evaluation result to obtain a detection result. Compared with the existing DPI optical splitter detection schemes, which have high inspection cost and are limited by the scene so that changes of multiple types of DPI optical splitters cannot be handled flexibly, this embodiment reduces the inspection cost and can flexibly cope with changes of multiple types of DPI optical splitters.
Referring to fig. 7, fig. 7 is a schematic flow chart of a third embodiment of the DPI optical splitter detection method according to the present invention; based on the foregoing embodiments, a third embodiment of the DPI optical splitter detection method of the present invention is provided.
In this embodiment, the step S30 further includes:
Step S301: and carrying out binarization processing on the equipment image blocks contained in the similarity evaluation result based on an Ojin method to obtain binarized image blocks.
It should be noted that, for the unmanned inspection scene, the similarity map P needs to be used for early warning judgment, so as to judge whether the early warning needs to be sent out. If an early warning is sent out, a machine room worker can confirm the position where the important attention area changes in an auxiliary mode through a detection chart P displayed in a thermodynamic diagram mode. The Otsu method (maximum inter-class variance method) refers to dividing data in an image into two classes by using a threshold, wherein the gray scale of pixels of the image in one class is smaller than the threshold, and the gray scale of pixels of the image in the other class is larger than or equal to the threshold. If the variance of the gray levels of the pixels in the two classes is larger, it is indicated that the obtained threshold is the best threshold (variance is a measure of the uniformity of the gray level distribution, and the larger the inter-class variance between the background and the foreground is, it is indicated that the larger the difference between the two parts constituting the image is, the smaller the difference between the two parts becomes when the foreground is divided into the background or the background is divided into the foreground by mistake.
It will be appreciated that by using this threshold, the image can be divided into two parts, foreground and background. Compared with the existing other algorithms, the method has the advantages of simple and quick calculation and no influence of image brightness and contrast. In order to avoid sensitivity to image noise, the difference detection is performed after the image is denoised in the scheme.
In the specific implementation, binarization processing is carried out on the equipment image blocks contained in the similarity evaluation result through the Otsu method, so that the binarized image blocks are obtained.
Step S302: and performing open operation on the binarized image block to remove a noise area, and obtaining the denoised binarized image block.
It should be noted that, to further illustrate the anomaly detection process in this embodiment, reference may be made to the early-warning judgment flow chart shown in fig. 8, where the early-warning judgment flow is as follows: Step A: a pre-trained target detection network N detects the optical splitter devices in the source image to obtain a device list [Sp_1, Sp_2, …, Sp_k], and the rectangular sub-region corresponding to each device is extracted.
Step B: P is binarized using the Otsu method, and an opening operation is performed on the binarization result of P to remove noise regions, so as to obtain P';
Step C: for the image block Sp_k corresponding to each device, the corresponding similarity-map sub-region is extracted from P'.
Step S303: and detecting whether the target source image is abnormal or not according to the denoised binarized image block, and obtaining a detection result.
It should be noted that, whether the target source image is abnormal or not is detected according to the denoised binarized image block P', so as to obtain a detection result.
Further, the step S303 further includes: extracting a similarity map subarea from the denoised binarized image block; and detecting whether the target source image is abnormal or not based on the similarity map subarea, and obtaining a detection result.
It should be noted that step C specifically includes: Step 1: the coordinates of the foreground points, namely the points in the high-difference areas, are extracted to obtain a coordinate sequence X, and X is clustered to generate n clusters.
Step 2: the clusters obtained by clustering are screened: clusters whose number of points is smaller than a threshold fraction Th_cls of all foreground points are treated as noise clusters and removed, leaving m clusters after screening. Th_cls can be adjusted within the interval [0.01, 0.05] according to application requirements.
Step 3: if the number m of clusters after screening is greater than 0, the k-th optical splitter device is considered to have changed and an early warning is issued; if m = 0, no early warning is sent out, and detection switches to the next device.
In this embodiment, the target source image is obtained by aligning it with a preset standard image based on a first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter; similarity evaluation is performed on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result; binarization processing is performed on the equipment image blocks contained in the similarity evaluation result based on the Otsu method to obtain binarized image blocks; an opening operation is performed on the binarized image blocks to remove noise areas, obtaining denoised binarized image blocks; and whether the target source image is abnormal is detected according to the denoised binarized image blocks to obtain a detection result.
In addition, to achieve the above object, the present invention also proposes a storage medium having stored thereon a DPI optical splitter detection program which, when executed by a processor, implements the steps of the DPI optical splitter detection method as described above.
Referring to fig. 9, fig. 9 is a structural block diagram of a first embodiment of the DPI optical splitter detection device of the present invention.
As shown in fig. 9, a DPI optical splitter detection device according to an embodiment of the invention includes:
an image alignment module 10, configured to align, based on a first AprilTag code contained in an initial source image corresponding to a target DPI optical splitter, the target source image with a preset standard image to obtain a target source image;
the similarity evaluation module 20 is configured to perform similarity evaluation on the first key region in the target source image and the second key region in the preset standard image, so as to obtain a similarity evaluation result;
and the anomaly detection module 30 is configured to detect whether the target source image is abnormal according to the similarity evaluation result, so as to obtain a detection result.
In this embodiment, the target source image is obtained by aligning it with a preset standard image based on a first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter; similarity evaluation is performed on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result; and whether the target source image is abnormal is detected according to the similarity evaluation result to obtain a detection result. Compared with the existing DPI optical splitter detection schemes, which have high inspection cost and are limited by the scene so that changes of multiple types of DPI optical splitters cannot be handled flexibly, this embodiment reduces the inspection cost and can flexibly cope with changes of multiple types of DPI optical splitters.
Further, the image alignment module 10 is further configured to compare the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter with the second AprilTag code contained in the preset standard image to obtain a comparison result; and align the target source image with the preset standard image according to the comparison result to obtain a target source image.
Further, the image alignment module 10 is further configured to determine a distribution type of the AprilTag codes based on a first AprilTag code included in an initial source image corresponding to the target DPI optical splitter; select a second AprilTag code from a preset standard image according to the distribution type; and compare the first AprilTag code with the second AprilTag code according to preset corner points to obtain a corner comparison result;
the image alignment module 10 is further configured to calculate a homography matrix according to the feature point group included in the corner comparison result; and aligning the target source image with the preset standard image based on the homography matrix to obtain a target source image.
Further, the anomaly detection module 30 is further configured to segment the first key region in the target source image according to a preset segmentation ratio to obtain sub-segmentation blocks; segment the target source image horizontally and vertically according to boundary corner information corresponding to the sub-segmentation blocks to obtain sub-image blocks; binarize the target source image based on a threshold sequence corresponding to the sub-image blocks to obtain an initial mask; and denoise the initial mask to obtain a cabinet door foreground occlusion mask for the first key region.
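A minimal sketch of this mask construction is given below: the key region is cut into a grid of sub-blocks, each sub-block is binarized with its own threshold from a threshold sequence, and the assembled initial mask is denoised. The grid size, the placeholder threshold values, and the choice of a morphological opening for denoising are assumptions made for illustration.

```python
import cv2
import numpy as np

def occlusion_mask(roi_gray: np.ndarray, grid=(4, 4), thresholds=None) -> np.ndarray:
    """Build a cabinet door foreground occlusion mask for a grayscale key region."""
    rows, cols = grid
    h, w = roi_gray.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)      # horizontal segmentation boundaries
    xs = np.linspace(0, w, cols + 1, dtype=int)      # vertical segmentation boundaries
    if thresholds is None:
        thresholds = np.full(rows * cols, 128)       # placeholder threshold sequence

    mask = np.zeros_like(roi_gray, dtype=np.uint8)   # initial mask
    for r in range(rows):
        for c in range(cols):
            block = roi_gray[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            t = float(thresholds[r * cols + c])
            _, binarized = cv2.threshold(block, t, 255, cv2.THRESH_BINARY)
            mask[ys[r]:ys[r + 1], xs[c]:xs[c + 1]] = binarized

    # denoise the initial mask; a small morphological opening is one plausible choice
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```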
Further, the anomaly detection module 30 is further configured to perform similarity evaluation on the first key region in the target source image and the second key region in the preset standard image based on the cabinet door foreground occlusion mask and a preset similarity calculation formula, so as to obtain a similarity evaluation result.
The preset similarity calculation formula relates the first key region in the target source image, the second key region T_i,roi in the preset standard image, and the cabinet door foreground occlusion mask, and yields P, the key-region similarity comparison result.
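Because the formula itself is not reproduced in this text, the sketch below shows only one plausible masked comparison consistent with the variables listed above; the actual formula used by the embodiment may differ. The mask convention (non-zero pixels marking occluded areas to be excluded) and the grayscale, equal-shape inputs are likewise assumptions.

```python
import numpy as np

def masked_similarity(S: np.ndarray, T: np.ndarray, M: np.ndarray) -> float:
    """One plausible key-region similarity P: S = first key region, T = second key region,
    M = cabinet door foreground occlusion mask (all grayscale arrays of identical shape)."""
    valid = (M == 0).astype(np.float64)            # exclude occluded pixels (convention assumed)
    diff = np.abs(S.astype(np.float64) - T.astype(np.float64)) * valid
    denom = valid.sum() * 255.0 + 1e-9             # normalize by the unoccluded area
    return 1.0 - float(diff.sum()) / denom         # P in [0, 1]; higher means more similar
```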
Further, the anomaly detection module 30 is further configured to binarize the device image blocks contained in the similarity evaluation result using Otsu's method to obtain binarized image blocks; apply a morphological opening operation to the binarized image blocks to remove noise regions, yielding denoised binarized image blocks; and detect whether the target source image is abnormal according to the denoised binarized image blocks to obtain a detection result.
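For this step, a minimal sketch using OpenCV's built-in Otsu thresholding followed by a morphological opening is given below; the 3x3 structuring element is an assumed choice, since the embodiment does not specify a kernel size.

```python
import cv2
import numpy as np

def binarize_and_denoise(device_block_gray: np.ndarray) -> np.ndarray:
    """Binarize a device image block with Otsu's method and remove small noise regions."""
    # Otsu's method selects the threshold automatically (the 0 passed here is ignored)
    _, binarized = cv2.threshold(device_block_gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # morphological opening removes isolated noise regions
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(binarized, cv2.MORPH_OPEN, kernel)
```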
Further, the anomaly detection module 30 is further configured to extract a similarity map sub-region from the denoised binarized image blocks, and to detect whether the target source image is abnormal based on the similarity map sub-region to obtain a detection result.
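A sketch of this final decision is given below. Deciding on the basis of the foreground pixel ratio inside the sub-region, as well as the sub-region coordinates and the ratio threshold, are illustrative assumptions rather than details taken from the embodiment.

```python
import numpy as np

def is_abnormal(denoised_block: np.ndarray,
                box: tuple = (0, 0, 100, 100),
                ratio_thresh: float = 0.2) -> bool:
    """Decide abnormality from a similarity-map sub-region of the denoised binarized block."""
    x, y, w, h = box                              # sub-region coordinates (assumed values)
    sub = denoised_block[y:y + h, x:x + w]        # extract the similarity map sub-region
    foreground_ratio = float((sub > 0).mean())    # share of pixels flagged by the binarization
    return foreground_ratio > ratio_thresh        # True -> target source image judged abnormal
```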
It should be understood that the foregoing is illustrative only and not limiting; in specific applications, those skilled in the art may make settings as needed, and the invention is not limited in this respect.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, for technical details not described in detail in this embodiment, reference may be made to the DPI optical splitter detection method provided in any embodiment of the present invention, which is not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for description and do not represent the relative merits of the embodiments. In unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The terms first, second, third, and the like do not denote any order; they are used merely as names.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is preferable. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. read-only memory (ROM)/random access memory (RAM), magnetic disk, or optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of this specification, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A method for detecting a DPI optical splitter, the method comprising the steps of:
aligning an initial source image corresponding to a target DPI optical splitter with a preset standard image based on a first AprilTag code contained in the initial source image, to obtain a target source image;
performing similarity evaluation on a first key region in the target source image and a second key region in the preset standard image to obtain a similarity evaluation result;
and detecting whether the target source image is abnormal according to the similarity evaluation result, and obtaining a detection result.
2. The DPI optical splitter detection method according to claim 1, wherein the step of aligning the initial source image corresponding to the target DPI optical splitter with a preset standard image based on the first AprilTag code contained in the initial source image to obtain the target source image comprises:
comparing the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter with a second AprilTag code contained in the preset standard image to obtain a comparison result;
and aligning the initial source image with the preset standard image according to the comparison result to obtain the target source image.
3. The DPI optical splitter detection method according to claim 2, wherein the step of comparing the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter with the second AprilTag code contained in the preset standard image to obtain a comparison result comprises:
determining a distribution type of the AprilTag codes based on the first AprilTag code contained in the initial source image corresponding to the target DPI optical splitter;
selecting the second AprilTag code from the preset standard image according to the distribution type;
comparing the first AprilTag code with the second AprilTag code at preset corner points to obtain a corner comparison result;
wherein the step of aligning the initial source image with the preset standard image according to the comparison result to obtain the target source image comprises:
calculating a homography matrix according to the feature point group contained in the corner comparison result;
and aligning the initial source image with the preset standard image based on the homography matrix to obtain the target source image.
4. The DPI optical splitter detection method according to claim 3, wherein before the step of performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result, the method further comprises:
segmenting the first key region in the target source image according to a preset segmentation ratio to obtain sub-segmentation blocks;
segmenting the target source image horizontally and vertically according to boundary corner information corresponding to the sub-segmentation blocks to obtain sub-image blocks;
performing binarization processing on the target source image based on a threshold sequence corresponding to the sub-image blocks to obtain an initial mask;
and denoising the initial mask to obtain a cabinet door foreground occlusion mask for the first key region.
5. The DPI optical splitter detection method according to claim 4, wherein the step of performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image to obtain a similarity evaluation result comprises:
performing similarity evaluation on the first key region in the target source image and the second key region in the preset standard image based on the cabinet door foreground occlusion mask and a preset similarity calculation formula to obtain the similarity evaluation result;
wherein the preset similarity calculation formula relates the first key region in the target source image, the second key region T_i,roi in the preset standard image, and the cabinet door foreground occlusion mask, and yields P, the key-region similarity comparison result.
6. The DPI optical splitter detection method according to any one of claims 1 to 5, wherein the step of detecting whether the target source image is abnormal according to the similarity evaluation result and obtaining a detection result comprises:
performing binarization processing on the device image blocks contained in the similarity evaluation result using Otsu's method to obtain binarized image blocks;
performing a morphological opening operation on the binarized image blocks to remove noise regions and obtain denoised binarized image blocks;
and detecting whether the target source image is abnormal according to the denoised binarized image blocks to obtain a detection result.
7. The DPI optical splitter detection method according to claim 6, wherein the step of detecting whether the target source image is abnormal according to the denoised binarized image blocks and obtaining a detection result comprises:
extracting a similarity map sub-region from the denoised binarized image blocks;
and detecting whether the target source image is abnormal based on the similarity map sub-region to obtain a detection result.
8. DPI optical splitter detection equipment, comprising: a memory, a processor, and a DPI optical splitter detection program stored on the memory and executable on the processor, wherein the DPI optical splitter detection program, when executed by the processor, implements the DPI optical splitter detection method according to any one of claims 1 to 7.
9. A storage medium, wherein a DPI optical splitter detection program is stored on the storage medium, and the DPI optical splitter detection program, when executed by a processor, implements the DPI optical splitter detection method according to any one of claims 1 to 7.
10. A DPI optical splitter detection device, the DPI optical splitter detection device comprising:
an image alignment module, used for aligning an initial source image corresponding to a target DPI optical splitter with a preset standard image based on a first AprilTag code contained in the initial source image, to obtain a target source image;
a similarity evaluation module, used for performing similarity evaluation on a first key region in the target source image and a second key region in the preset standard image to obtain a similarity evaluation result;
and an anomaly detection module, used for detecting whether the target source image is abnormal according to the similarity evaluation result to obtain a detection result.
CN202311091358.5A 2023-08-28 2023-08-28 DPI (deep inspection) optical splitter detection method, equipment, storage medium and device Pending CN117132567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311091358.5A CN117132567A (en) 2023-08-28 2023-08-28 DPI (deep inspection) optical splitter detection method, equipment, storage medium and device

Publications (1)

Publication Number Publication Date
CN117132567A true CN117132567A (en) 2023-11-28

Family

ID=88852276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311091358.5A Pending CN117132567A (en) 2023-08-28 2023-08-28 DPI (deep inspection) optical splitter detection method, equipment, storage medium and device

Country Status (1)

Country Link
CN (1) CN117132567A (en)

Similar Documents

Publication Publication Date Title
US11774735B2 (en) System and method for performing automated analysis of air samples
US11403839B2 (en) Commodity detection terminal, commodity detection method, system, computer device, and computer readable medium
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN104506857B (en) A kind of camera position deviation detection method and apparatus
JP4970195B2 (en) Person tracking system, person tracking apparatus, and person tracking program
CN110930353A (en) Method and device for detecting state of hole site protection door, computer equipment and storage medium
US11699283B2 (en) System and method for finding and classifying lines in an image with a vision system
CN111325769B (en) Target object detection method and device
US10762372B2 (en) Image processing apparatus and control method therefor
CN103607558A (en) Video monitoring system, target matching method and apparatus thereof
JP2024016287A (en) System and method for detecting lines in a vision system
CN115937746A (en) Smoke and fire event monitoring method and device and storage medium
CN110119675B (en) Product identification method and device
CN110505438B (en) Queuing data acquisition method and camera
CN116168345B (en) Fire detection method and related equipment
CN111402185B (en) Image detection method and device
CN112364884A (en) Method for detecting moving object
JP5983033B2 (en) Position relationship determination program, position relationship determination method, and position relationship determination device
CN117132567A (en) DPI (deep inspection) optical splitter detection method, equipment, storage medium and device
US20230230225A1 (en) Multi-tier pcba integrity validation process
CN116993654A (en) Camera module defect detection method, device, equipment, storage medium and product
CN112101107B (en) Intelligent recognition method for intelligent network connection model vehicle on-loop simulation traffic signal lamp
CN111708907A (en) Target person query method, device, equipment and storage medium
CN106775701B (en) Client automatic evidence obtaining method and system
TWI762365B (en) Image identification method and image surveillance apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination