CN113657336A - Target intelligent matching and identifying method based on multi-dimensional image - Google Patents

Target intelligent matching and identifying method based on multi-dimensional image

Info

Publication number
CN113657336A
CN113657336A (application CN202110979544.7A)
Authority
CN
China
Prior art keywords
target
different
scale
same
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110979544.7A
Other languages
Chinese (zh)
Inventor
琚小明 (Ju Xiaoming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Jierui Power Technology Co ltd
Original Assignee
Zhejiang Jierui Power Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Jierui Power Technology Co ltd filed Critical Zhejiang Jierui Power Technology Co ltd
Priority to CN202110979544.7A priority Critical patent/CN113657336A/en
Publication of CN113657336A publication Critical patent/CN113657336A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Abstract

The invention discloses a target intelligent matching and identifying method based on multi-dimensional images, which comprises the following steps: S1, screening and sorting the images collected by fixed cameras and by robot- and unmanned-aerial-vehicle-mounted cameras in a power grid distribution and control inspection area to form corresponding multi-scale target training data; S2, proposing a criterion that maximizes the matching degree of the same target across multiple scales; S3, proposing a multi-dimensional image joint learning model; S4, matching targets of different resolutions across different cameras; S5, further identifying the successfully matched images. Information from different scales effectively enriches the expression of a target at low resolution, while the matching degree maximization criterion is used to judge whether targets at different resolutions are the same, so that an optimal metric model for the different scales is learned.

Description

Target intelligent matching and identifying method based on multi-dimensional image
Technical Field
The invention relates to computer vision technology, and in particular to a target intelligent matching and identifying method based on multi-dimensional images. It combines multi-dimensional images to improve target matching accuracy and identification accuracy, thereby improving power grid inspection efficiency.
Background
With the continuous growth of national power grid line mileage, traditional manual inspection, which is dangerous, time-consuming and labor-intensive, can no longer meet development needs. With the proposal of new-era smart grid plans, robots and unmanned aerial vehicles equipped with a variety of advanced sensors and algorithms have been widely applied to power grid inspection. This intelligent inspection mode has the advantages of a low danger coefficient, low cost and little influence from environmental and weather factors; it has become an important means of realizing power grid intelligence and is also an important direction for the future development of the smart grid.
However, despite its broad prospects, intelligent inspection is limited by the state of the technology and still has some defects that cannot be ignored. For example, most robots and unmanned aerial vehicles inspect along fixed routes at fixed times and may not detect safety problems in real time. Recognition accuracy is another major obstacle to unmanned inspection: compared with manual inspection, its accuracy remains at a lower level.
At present, research on robot and unmanned aerial vehicle inspection focuses mainly on how to improve inspection efficiency and identification accuracy. To improve inspection efficiency, much research has been devoted to applying edge computing to power grid inspection: after computation and screening are carried out with the computing power of edge devices, effective information is uploaded to a server for further processing, which effectively improves inspection efficiency. On the accuracy side, with the large-scale application of deep learning, identification accuracy has improved greatly, even surpassing humans in some respects. However, deep learning requires large-scale data to train a model of high accuracy; in addition, the environment of the inspection range is complex, the images shot by cameras carried by robots and unmanned aerial vehicles are not clear enough, and target features are not distinct, which seriously affects recognition accuracy.
Extensive research has shown that through multi-dimensional image matching, that is, the cooperation of fixed cameras with robots and unmanned aerial vehicles, images can be matched intelligently even when the robot or unmanned aerial vehicle acquires only low-resolution information. Multi-scale information can then be fused to achieve more accurate identification and improve inspection efficiency. Compared with acquisition by a single camera, this method achieves higher identification accuracy, and the fixed cameras in key areas can carry out real-time safety inspection, effectively guaranteeing power grid safety.
Disclosure of Invention
The invention aims to remedy the defects of the prior art and provides a target intelligent matching and identifying method based on multi-dimensional images.
To achieve this purpose, the invention adopts the following technical scheme: a target intelligent matching and recognition method based on multi-dimensional images is designed, characterized by comprising the following steps:
s1, fixing a camera device in the power grid distribution inspection area, collecting multi-dimensional images, screening and sorting the multi-dimensional images to form corresponding multi-scale target training data;
s2, providing a same target matching degree maximization criterion under multiple scales;
s3, providing a multidimensional image joint learning model;
s4, realizing target matching of different resolutions under different cameras;
and S5, further identifying the successfully matched image.
Further, in step S1, the camera device is a fixed camera, a robot or an unmanned aerial vehicle. For the same scene at different scales, at least two groups of fixed cameras with different viewing distances, together with the unmanned aerial vehicle or robot, collect the same pictures, forming image data at several different scales. The collected image data set is divided into three sub-data sets of different scales, namely large, medium and small, formally expressed as
[formula image omitted]
where N indicates that N pictures are collected for each sub-data set, and
[symbol image omitted]
respectively denote the features extracted from the same target image at the large, medium and small scales, with i denoting the category of the target.
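As a non-authoritative illustration, the three-scale training data described above might be organized as index-aligned feature arrays, one per scale. All names, sizes and values here are hypothetical; the patent only specifies that each sub-data set holds N samples with per-scale features and a category label i.

```python
import numpy as np

# Hypothetical organization of the multi-scale training data: for each of the
# N collected samples, the same target is represented by a feature vector at
# the large (h), medium (m) and small (s) scale, plus its category label i.
rng = np.random.default_rng(0)
N, FEAT_DIM = 5, 8  # illustrative sizes only, not values from the patent

dataset = {
    "h": rng.normal(size=(N, FEAT_DIM)),  # large-scale features
    "m": rng.normal(size=(N, FEAT_DIM)),  # medium-scale features
    "s": rng.normal(size=(N, FEAT_DIM)),  # small-scale features
}
labels = np.array([0, 0, 1, 1, 2])  # category i of each sample

# The three sub-data sets hold the same N samples, aligned by index.
assert all(v.shape == (N, FEAT_DIM) for v in dataset.values())
```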
Further, in step S2, under the criterion for maximizing the matching degree of the same target across multiple scales, the matching degree is maximized by mapping the feature vectors of images of the same target at different scales into the same dimensional space and then minimizing the difference of the multi-scale feature vectors within that space.
Further, the matching degree maximization is realized by mapping the multi-scale feature vectors into the same dimensional space and then minimizing the mean difference of the feature vectors of all scales; the specific matching degree maximization criterion can be formulated as:
[formula (1) image omitted]
In formula (1), K denotes the number of target classes; T_h, T_m, T_s denote the transformation matrices that project the three different-scale features into the same dimensional space; and
[symbol image omitted]
denote the means of the feature vectors of the images of the i-th target at the three different scales.
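Since the image of formula (1) is not reproduced here, the following is only one plausible form of the criterion consistent with the surrounding text: for each of the K classes, the class-mean feature vectors of the three scales are projected by T_h, T_m, T_s into the shared space and the pairwise squared differences are summed. The exact weighting in the patent may differ.

```python
import numpy as np

def matching_degree_loss(T_h, T_m, T_s, means_h, means_m, means_s):
    """One plausible reading of the matching-degree maximization criterion:
    sum, over the K target classes, of the pairwise squared distances between
    the class-mean feature vectors projected into the common space."""
    loss = 0.0
    for mu_h, mu_m, mu_s in zip(means_h, means_m, means_s):
        ph, pm, ps = T_h @ mu_h, T_m @ mu_m, T_s @ mu_s
        loss += (np.sum((ph - pm) ** 2)
                 + np.sum((ph - ps) ** 2)
                 + np.sum((pm - ps) ** 2))
    return loss

# Sanity check: if all three projections agree exactly, the loss is zero.
d, k = 4, 3                       # feature dim, shared-space dim (illustrative)
T = np.eye(k, d)                  # one shared transform -> identical projections
mus = [np.ones(d), 2 * np.ones(d)]
assert matching_degree_loss(T, T, T, mus, mus, mus) == 0.0
```

Minimizing this loss over T_h, T_m, T_s pulls the per-class means of the three scales together in the shared space, which is what "maximizing the matching degree" amounts to.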
Further, in step S3, the multi-dimensional image joint learning model is realized by learning the optimal metric at each scale; it satisfies the matching degree maximization criterion while ensuring that, at the same scale, the intra-class distance of the same target is minimal and the inter-class distance of different targets is maximal.
Further, in step S3, between-class and within-class divergence matrices are introduced to measure the inter-class and intra-class distances; the formalized between-class and within-class matrices S_b, S_w are given by formula (2):
[formula (2) images omitted]
where I_i, I_j denote the feature vectors, at different scales, of the targets to be matched; for the same target at different scales they satisfy
[condition image omitted]
and for different targets they satisfy
[condition image omitted]
where A_{i,j} denotes an affinity matrix, N denotes the total number of samples, and N_K denotes the number of samples of the same target. The between-class matrices at the large, medium and small scales are denoted
[symbol image omitted]
and the within-class matrices are denoted
[symbol image omitted]
In order that the multi-dimensional image joint learning model satisfy the matching degree maximization criterion while ensuring that, at the same scale, the intra-class distance of the same target is minimal and the inter-class distance of different targets is maximal, the model can be expressed as formula (3):
[formula (3) image omitted]
where T_h, T_m, T_s denote the transformation matrices that project the three different-scale features into the same dimensional space. Solving formula (3) yields the optimal metric of the multi-dimensional image joint learning model at the large, medium and small scales; it satisfies the matching degree maximization criterion while ensuring the minimal intra-class distance of the same target and the maximal inter-class distance of different targets at the same scale, thereby giving the optimal transformation matrices for the three scales.
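The images of formulas (2) and (3) are not reproduced, so the sketch below uses a common metric-learning construction consistent with the text: pairwise scatter matrices built from feature differences, with same-target pairs contributing to S_w and different-target pairs to S_b, and a trace-ratio objective over a transform T. The uniform pair weighting stands in for the affinity matrix A_{i,j}, whose exact values are not recoverable here.

```python
import numpy as np

def scatter_matrices(feats, labels):
    """Hedged sketch of the within-class (S_w) and between-class (S_b)
    divergence matrices: same-label pairs accumulate into S_w, different-label
    pairs into S_b. The patent weights pairs by an affinity matrix A_ij;
    uniform weights are used here for illustration."""
    d = feats.shape[1]
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for i in range(len(feats)):
        for j in range(len(feats)):
            diff = np.outer(feats[i] - feats[j], feats[i] - feats[j])
            if labels[i] == labels[j]:
                S_w += diff
            else:
                S_b += diff
    return S_w, S_b

def trace_objective(T, S_w, S_b):
    # Smaller is better: shrink within-class spread relative to
    # between-class spread in the space projected by T.
    return np.trace(T @ S_w @ T.T) / np.trace(T @ S_b @ T.T)

# Two tight clusters far apart: within-class spread is tiny versus between.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
S_w, S_b = scatter_matrices(X, y)
assert np.trace(S_w) < np.trace(S_b)
```

In the patent, one such objective would be solved jointly for T_h, T_m, T_s together with the matching-degree term, yielding the three optimal transformation matrices.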
Further, the target matching at different resolutions under different cameras realized in S4 is completed through the cooperation of at least two fixed cameras of different scales with the inspection robot or unmanned aerial vehicle; the specific steps are as follows:
S1, target matching begins when the inspection robot or unmanned aerial vehicle is within a distance range in which only a low-resolution target image can be obtained. Let the camera at this moment be Cam_a and the low-resolution target be O_a; the two fixed cameras around the target, Cam_b and Cam_c, can acquire high- and medium-resolution images, capturing respectively R high-resolution targets
[symbol image omitted]
and R medium-resolution targets
[symbol image omitted]
S2, using the optimal metric learned by the multi-dimensional image joint learning model, O_a is matched against the targets at high and medium resolution by computing, respectively,
[expression images omitted]
which denote fusion distances;
S3, taking the fusion distance between O_a and a certain target O_b captured by Cam_b as an example, the calculation is given by formula (4):
[formula (4) image omitted]
where
[symbol image omitted]
denote the feature vectors of O_a and O_b extracted at the large scale,
[symbol image omitted]
denotes the feature vector extracted at the small scale, T_h and T_s denote the transformation matrices of the corresponding scales, and α denotes an influence factor used to adjust the distribution of influence of the different scales;
S4, by computing the minimum fusion distance between O_a and the targets at high and medium resolution, targets at different resolutions can be matched.
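The image of formula (4) is not reproduced, so the following is a hypothetical reading of the matching step: O_a's small-scale feature and each candidate's large- and medium-scale features are projected into the shared space by T_s, T_h, T_m, the two resulting distances are blended by the influence factor α, and the candidate with the minimum fusion distance is matched. The index alignment of the R high- and medium-resolution candidates is an illustrative simplification, not something the patent states.

```python
import numpy as np

def fusion_distance(f_a_s, f_b_h, f_b_m, T_s, T_h, T_m, alpha=0.5):
    """Hypothetical fusion distance: distances from O_a's projected
    small-scale feature to a candidate's projected large- and medium-scale
    features, blended by the influence factor alpha."""
    p_a = T_s @ f_a_s
    d_high = np.sum((p_a - T_h @ f_b_h) ** 2)
    d_med = np.sum((p_a - T_m @ f_b_m) ** 2)
    return alpha * d_high + (1 - alpha) * d_med

def match_target(f_a_s, high_feats, med_feats, T_s, T_h, T_m, alpha=0.5):
    # Step S4: the candidate with the minimum fusion distance is matched.
    # (Assumes, for illustration only, that the R high- and medium-resolution
    # candidates are index-aligned.)
    dists = [fusion_distance(f_a_s, fh, fm, T_s, T_h, T_m, alpha)
             for fh, fm in zip(high_feats, med_feats)]
    return int(np.argmin(dists))

# Toy check with identity transforms: candidate 1 equals the query feature.
I2 = np.eye(2)
query = np.array([1.0, 0.0])
cands = [np.array([0.0, 5.0]), query.copy(), np.array([3.0, 3.0])]
assert match_target(query, cands, cands, I2, I2, I2) == 1
```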
Further, the specific step of further identifying the matched target in step S5 is as follows:
splicing and fusing the matched low-, medium- and high-resolution target image features to obtain richer apparent and essential information for inspection, and performing the corresponding defect inspection using methods such as template matching and deep learning.
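The splicing-and-fusing step above can be sketched minimally as feature concatenation. The patent does not fix the fusion operator, so plain concatenation is an assumption; the fused vector would then feed whatever identification method (template matching, a deep network) is used downstream.

```python
import numpy as np

def fuse_features(f_low, f_mid, f_high):
    """Step S5 sketch: splice the matched low-, medium- and high-resolution
    target features into one richer descriptor. Concatenation is an assumed
    choice of fusion operator, used here for illustration only."""
    return np.concatenate([f_low, f_mid, f_high])

fused = fuse_features(np.zeros(4), np.ones(4), np.full(4, 2.0))
assert fused.shape == (12,)  # three 4-dim features -> one 12-dim descriptor
```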
The beneficial effects of the invention are as follows: the invention provides a target intelligent matching and identifying method based on multi-dimensional images that effectively utilizes the cooperation among fixed cameras, robots and unmanned aerial vehicles. The method is reliable, offers high real-time performance and can greatly improve accuracy: when a low-resolution image is acquired, joint learning over the high, medium and low resolutions completes intelligent matching, thereby providing good feature expression for further identification and effectively improving identification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a flowchart illustrating the detailed operation of a method for intelligent matching and recognition of a target based on a multi-dimensional image according to the present invention;
FIG. 2 is a schematic diagram of different sources of a data set in a multi-dimensional image-based target intelligent matching and identification method of the present invention;
fig. 3 is a schematic diagram of the network architecture for mapping different scales into the same dimensional space in the target intelligent matching and identification method based on multi-dimensional images.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts based on the embodiments of the present invention belong to the protection scope of the present invention.
Referring to a specific operation flow chart shown in the attached fig. 1 of the specification, the method for intelligently matching and identifying a target based on a multi-dimensional image, provided by the invention, mainly comprises the following steps:
S1, screening and sorting the images collected by fixed cameras and by robot- and unmanned-aerial-vehicle-mounted cameras in the power grid distribution and control inspection area to form corresponding multi-scale target training data;
Referring to fig. 2 of the specification, in S2, two fixed cameras with different viewing distances are deployed in certain key inspection areas. When the unmanned aerial vehicle obtains only low-resolution images during inspection, the fixed cameras shoot images simultaneously, forming several groups of pictures taken in different inspection areas at different times; these pictures are screened to form three data sets of different resolutions for model training;
Specifically, the criterion for maximizing the matching degree of the same target across multiple scales is expressed as a formula. In this embodiment, the matching degree maximization is carried out by mapping the feature vectors of images of the same target at different scales into the same dimensional space and then minimizing the difference of the multi-scale feature vectors within that space, that is, the matching degree is maximized by minimizing the mean difference of the feature vectors of all scales. The matching degree maximization criterion is as follows:
[formula image omitted]
where K denotes the number of target classes; T_h, T_m, T_s denote the transformation matrices that project the three different-scale features into the same dimensional space; and
[symbol image omitted]
denote the means of the feature vectors of the images of the i-th target at the three different scales. Referring to fig. 3 of the specification, in order to map the feature vectors of different scales into the same dimensional space, this embodiment uses a convolutional neural network with an SPP network layer to complete feature extraction; the final loss function uses the matching degree maximization criterion, and the mapped dimension is set to 100.
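The SPP layer mentioned above is what lets images of different scales share one fixed-length representation. As a minimal sketch (numpy only, illustrative sizes; the embodiment's CNN and 100-dim mapping are not reproduced), spatial pyramid pooling max-pools a feature map over a pyramid of grids and concatenates the results, so the output length is independent of the input's spatial size.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Minimal SPP sketch: max-pool a (C, H, W) feature map over a pyramid
    of n-by-n grids and concatenate, yielding a vector of fixed length
    C * sum(n*n for n in levels) regardless of H and W."""
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        rows = np.array_split(np.arange(H), n)
        cols = np.array_split(np.arange(W), n)
        for r in rows:
            for c in cols:
                cell = fmap[:, r[0]:r[-1] + 1, c[0]:c[-1] + 1]
                pooled.append(cell.max(axis=(1, 2)))  # per-channel max pool
    return np.concatenate(pooled)

# Feature maps of different spatial sizes map to the same output length,
# which is what lets different-scale images share one latent space.
v1 = spatial_pyramid_pool(np.random.rand(3, 13, 17))
v2 = spatial_pyramid_pool(np.random.rand(3, 32, 32))
assert v1.shape == v2.shape == (3 * (1 + 4 + 16),)
```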
Referring to fig. 1 of the specification, S3: a multi-dimensional image joint learning model is constructed, in which between-class and within-class divergence matrices are introduced to measure the inter-class and intra-class distances; the formalized between-class and within-class matrices S_b, S_w are as shown in the formula:
[formula images omitted]
where I_i, I_j denote the feature vectors, at different scales, of the targets to be matched; for the same target at different scales they satisfy
[condition image omitted]
and for different targets they satisfy
[condition image omitted]
where A_{i,j} denotes an affinity matrix, N denotes the total number of samples, and N_K denotes the number of samples of the same target. The between-class matrices at the large, medium and small scales are denoted
[symbol image omitted]
and the within-class matrices are denoted
[symbol image omitted]
In this embodiment, the multi-dimensional image joint learning model needs to satisfy the matching degree maximization criterion while ensuring that, at the same scale, the intra-class distance of the same target is minimal and the inter-class distance of different targets is maximal. Solving the following formula yields the optimal metric of the model at the large, medium and small scales under these conditions, thereby giving the optimal transformation matrices for the three different scales.
[formula image omitted]
Referring to fig. 1 of the specification, in step S4, after the multi-dimensional image joint learning model has been built, target matching at different resolutions under different cameras is realized. In this embodiment this is generally completed through the cooperation of fixed cameras of two different scales with the inspection robot or unmanned aerial vehicle; the specific steps are as follows:
Target matching begins when the inspection robot or unmanned aerial vehicle is within a distance range in which only a low-resolution target image can be obtained. Let the camera at this moment be Cam_a and the low-resolution target be O_a; the two fixed cameras around the target, Cam_b and Cam_c, can acquire high- and medium-resolution images, capturing respectively R high-resolution targets
[symbol image omitted]
and R medium-resolution targets
[symbol image omitted]
Using the optimal metric learned by the multi-dimensional image joint learning model, O_a is matched against the targets at high and medium resolution by computing, respectively,
[expression images omitted]
which denote fusion distances.
In this embodiment, the fusion distance is obtained after transformation by the transformation matrices T_h, T_m, T_s. Taking the fusion distance between O_a and a certain target O_b captured by Cam_b as an example, the calculation formula is as follows:
[formula image omitted]
where
[symbol image omitted]
denote the feature vectors of O_a and O_b extracted at the large scale, and
[symbol image omitted]
denotes the feature vector extracted at the small scale; T_h and T_s denote the transformation matrices of the corresponding scales, and α denotes an influence factor used to adjust the distribution of influence of the different scales. By computing the minimum fusion distance between O_a and the targets at high and medium resolution, targets at different resolutions can be matched.
S5, the matched targets are further identified. In this implementation, in order to test the improvement in identification accuracy brought by the invention, data of common unmanned-aerial-vehicle inspection targets on power transmission lines were tested, including high-voltage insulators, vibration dampers, overhead ground wires and fittings, labeled L1-L4 in this example. The state-of-the-art target detection model SSD was used in this test.
The general target-detection indices AP (average precision) and Recall (recall rate) were adopted. mAP is defined as the mean of the APs of all classes. AP measures detection accuracy and is defined, for each class, as the area under the precision-recall (P-R) curve plotted from recall and precision. Recall is defined as the ratio of the number of targets detected by the algorithm to the number of true targets in the test set.
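The two indices defined above can be computed from score-ranked detections as follows. This is a generic sketch: AP is accumulated as the area under the P-R curve using all-point summation, a detail the patent does not specify, and the inputs are hypothetical.

```python
def average_precision(scores, is_true_positive, num_gt):
    """Recall = detected true targets / ground-truth targets; AP = area
    under the precision-recall curve built from detections ranked by score
    (all-point summation; the evaluation variant is an assumption here)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if is_true_positive[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under the curve
        prev_recall = recall
    return ap, tp / num_gt  # (AP, final recall)

# Two correct detections ranked first, one false positive, 2 ground truths:
ap, recall = average_precision([0.9, 0.8, 0.3], [True, True, False], 2)
assert recall == 1.0 and ap == 1.0
```

mAP, used in the comparison below, would then be the mean of the per-class AP values.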
Table 1 shows the comparison of AP and Recall (AP denoted A and Recall denoted R) with and without the method of the invention on a Tesla P40.
[Table 1 image omitted]
As shown by the results in Table 1, the target intelligent matching and identification method based on multi-dimensional images can effectively improve identification accuracy, and multi-dimensional cooperation better improves inspection efficiency. The invention also remedies some defects of existing inspection.
The invention discloses a target intelligent matching and identifying method based on multi-dimensional images, which comprises the following steps: S1, screening and sorting the images collected by fixed cameras, robot-mounted cameras, unmanned-aerial-vehicle-mounted cameras and the like in a power grid distribution and control inspection area to form corresponding multi-scale target training data; S2, proposing a criterion that maximizes the matching degree of the same target across multiple scales; S3, proposing a multi-dimensional image joint learning model; S4, matching targets of different resolutions across different cameras; S5, further identifying the successfully matched images. Information from different scales effectively enriches the expression of a target at low resolution, while the matching degree maximization criterion is used to judge whether targets at different resolutions are the same, so that an optimal metric model for the different scales is learned.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A target intelligent matching and identification method based on a multi-dimensional image is characterized by comprising the following steps:
s1, fixing a camera device in the power grid distribution inspection area, collecting multi-dimensional images, screening and sorting the multi-dimensional images to form corresponding multi-scale target training data;
s2, providing a same target matching degree maximization criterion under multiple scales;
s3, providing a multidimensional image joint learning model;
s4, realizing target matching of different resolutions under different cameras;
and S5, further identifying the successfully matched image.
2. The target intelligent matching and identification method based on multi-dimensional images according to claim 1, characterized in that, in step S1, the camera device is a fixed camera, a robot or an unmanned aerial vehicle; for the same scene at different scales, at least two groups of fixed cameras with different viewing distances, together with the unmanned aerial vehicle or robot, collect the same pictures, forming image data at several different scales; the collected image data set is divided into three sub-data sets of different scales, namely large, medium and small, formally expressed as
[formula image omitted]
where N indicates that N pictures are collected for each sub-data set, and
[symbol image omitted]
respectively denote the features extracted from the same target image at the large, medium and small scales, with i denoting the category of the target.
3. The method for intelligently matching and identifying targets based on multi-dimensional images as claimed in claim 1, characterized in that step S2 provides a criterion for maximizing the matching degree of the same target across multiple scales, wherein the matching degree is maximized by mapping the feature vectors of different-scale images of the same target into the same dimensional space and then minimizing the difference of the multi-scale feature vectors within that space.
4. The method for intelligently matching and identifying targets based on multi-dimensional images as claimed in claim 3, characterized in that the matching degree maximization is realized by mapping the multi-scale feature vectors into the same dimensional space and then minimizing the mean difference of the feature vectors of each scale; the specific matching degree maximization criterion can be formulated as:
[formula (1) image omitted]
In formula (1), K denotes the number of target classes; T_h, T_m, T_s denote the transformation matrices that project the three different-scale features into the same dimensional space; and
[symbol image omitted]
denote the means of the feature vectors of the images of the i-th target at the three different scales.
5. The method as claimed in claim 1, characterized in that, in step S3, the multi-dimensional image joint learning model is realized by learning the optimal metric at each scale; it satisfies the matching degree maximization criterion while ensuring that, at the same scale, the intra-class distance of the same target is minimal and the inter-class distance of different targets is maximal.
6. The method as claimed in claim 1, characterized in that step S3 introduces between-class and within-class divergence matrices for measuring the inter-class and intra-class distances; the formalized between-class and within-class matrices S_b, S_w are given by formula (2):
[formula (2) images omitted]
where I_i, I_j denote the feature vectors, at different scales, of the targets to be matched; for the same target at different scales they satisfy
[condition image omitted]
and for different targets they satisfy
[condition image omitted]
where A_{i,j} denotes an affinity matrix, N denotes the total number of samples, and N_K denotes the number of samples of the same target. The between-class matrices at the large, medium and small scales are denoted
[symbol image omitted]
and the within-class matrices are denoted
[symbol image omitted]
In order that the multi-dimensional image joint learning model satisfy the matching degree maximization criterion while ensuring that, at the same scale, the intra-class distance of the same target is minimal and the inter-class distance of different targets is maximal, the model can be expressed as formula (3):
[formula (3) image omitted]
where T_h, T_m, T_s denote the transformation matrices that project the three different-scale features into the same dimensional space, and tr denotes the trace. Solving formula (3) yields the optimal metric of the multi-dimensional image joint learning model at the large, medium and small scales; it satisfies the matching degree maximization criterion while ensuring the minimal intra-class distance of the same target and the maximal inter-class distance of different targets at the same scale, thereby giving the optimal transformation matrices for the three different scales.
7. The method according to claim 1, characterized in that the target matching at different resolutions under different cameras realized in S4 is completed through the cooperation of at least two fixed cameras of different scales with the inspection robot or unmanned aerial vehicle; the specific steps are as follows:
S1, target matching begins when the inspection robot or unmanned aerial vehicle is within a distance range in which only a low-resolution target image can be obtained; let the camera at this moment be Cam_a and the low-resolution target be O_a; the two fixed cameras around the target, Cam_b and Cam_c, can acquire high- and medium-resolution images, capturing respectively R high-resolution targets
[symbol image omitted]
and R medium-resolution targets
[symbol image omitted]
S2, using the optimal metric learned by the multi-dimensional image joint learning model, O_a is matched against the targets at high and medium resolution by computing, respectively,
[expression images omitted]
which denote fusion distances;
S3, taking the fusion distance between O_a and a certain target O_b captured by Cam_b as an example, the calculation is given by formula (4):

d(O_a, O_b) = α‖T_h^T x_b^h − T_h^T x_a^h‖² + (1 − α)‖T_s^T x_b^s − T_s^T x_a^s‖²   (4)

wherein x_a^h and x_b^h denote the feature vectors of O_a and O_b extracted at the large scale, x_a^s and x_b^s denote the feature vectors extracted at the small scale, T_h and T_s denote the transformation matrices of the corresponding scales, and α denotes an influence factor used to adjust the contribution of the different scales;
S4, computing the minimum fusion distance between O_a and the targets at the high and medium resolutions matches the target across the different resolutions.
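Steps S1–S4 above can be sketched as follows. The exact form of formula (4) is rendered as images in the source, so this sketch assumes an α-weighted sum of per-scale squared Euclidean distances measured after projection into the shared space; the transformation matrices and features are random stand-ins.

```python
import numpy as np

def fusion_distance(xa_h, xa_s, xb_h, xb_s, T_h, T_s, alpha=0.5):
    # Weighted sum of large- and small-scale squared distances measured
    # after projection into the shared space; alpha is the influence factor.
    d_h = np.sum((T_h.T @ xb_h - T_h.T @ xa_h) ** 2)
    d_s = np.sum((T_s.T @ xb_s - T_s.T @ xa_s) ** 2)
    return alpha * d_h + (1.0 - alpha) * d_s

def best_match(xa_h, xa_s, candidates, T_h, T_s, alpha=0.5):
    # candidates: list of (x_h, x_s) feature pairs for the R captured targets;
    # the match is the candidate with the minimum fusion distance (step S4).
    dists = [fusion_distance(xa_h, xa_s, xh, xs, T_h, T_s, alpha)
             for xh, xs in candidates]
    return int(np.argmin(dists))

# Toy check: candidate 1 shares O_a's features, so it should win.
rng = np.random.default_rng(1)
T_h, T_s = rng.normal(size=(64, 8)), rng.normal(size=(16, 8))
xa_h, xa_s = rng.normal(size=64), rng.normal(size=16)
cands = [(rng.normal(size=64), rng.normal(size=16)),
         (xa_h.copy(), xa_s.copy()),
         (rng.normal(size=64), rng.normal(size=16))]
idx = best_match(xa_h, xa_s, cands, T_h, T_s)
```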
8. The method for intelligently matching and identifying targets based on multi-dimensional images according to claim 1, wherein step S5 further identifies the matched targets, with the following specific step:

splicing and fusing the matched low-, medium- and high-resolution target image features to obtain richer appearance and intrinsic information for inspection, and performing the corresponding defect detection using methods such as template matching and deep learning.
CN202110979544.7A 2021-08-25 2021-08-25 Target intelligent matching and identifying method based on multi-dimensional image Pending CN113657336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979544.7A CN113657336A (en) 2021-08-25 2021-08-25 Target intelligent matching and identifying method based on multi-dimensional image


Publications (1)

Publication Number Publication Date
CN113657336A true CN113657336A (en) 2021-11-16

Family

ID=78481901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979544.7A Pending CN113657336A (en) 2021-08-25 2021-08-25 Target intelligent matching and identifying method based on multi-dimensional image

Country Status (1)

Country Link
CN (1) CN113657336A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404871A (en) * 2015-11-25 2016-03-16 中山大学 Multi-scale association learning based low-resolution pedestrian matching method used between cameras without overlapped view shed
CN110674861A (en) * 2019-09-19 2020-01-10 国网山东省电力公司电力科学研究院 Intelligent analysis method and device for power transmission and transformation inspection images


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王智飞 (Wang Zhifei): "Research on Low-Resolution Face Recognition Algorithms", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 1, 15 January 2014 (2014-01-15), pages 138-35 *

Similar Documents

Publication Publication Date Title
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN106250870A (en) A kind of pedestrian's recognition methods again combining local and overall situation similarity measurement study
CN108009515A (en) A kind of power transmission line positioning identifying method of the unmanned plane image based on FCN
CN110309701B (en) Pedestrian re-identification method based on same cross-view-angle area
CN105930768A (en) Spatial-temporal constraint-based target re-identification method
CN108680833B (en) Composite insulator defect detection system based on unmanned aerial vehicle
CN109376580B (en) Electric power tower component identification method based on deep learning
CN114973002A (en) Improved YOLOv 5-based ear detection method
CN106952289A (en) The WiFi object localization methods analyzed with reference to deep video
CN112966571B (en) Standing long jump flight height measurement method based on machine vision
CN115690542A (en) Improved yolov 5-based aerial insulator directional identification method
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
CN103679740B (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN112132157A (en) Raspberry pie-based gait face fusion recognition method
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN111241905A (en) Power transmission line nest detection method based on improved SSD algorithm
CN104063689B (en) Face image identification method based on binocular stereoscopic vision
CN105825215A (en) Instrument positioning method based on local neighbor embedded kernel function and carrier of method
CN117274627A (en) Multi-temporal snow remote sensing image matching method and system based on image conversion
CN113657336A (en) Target intelligent matching and identifying method based on multi-dimensional image
CN115527118A (en) Remote sensing image target detection method fused with attention mechanism
CN116385477A (en) Tower image registration method based on image segmentation
CN115147591A (en) Transformer equipment infrared image voltage heating type defect diagnosis method and system
CN114943904A (en) Operation monitoring method based on unmanned aerial vehicle inspection
CN114037895A (en) Unmanned aerial vehicle pole tower inspection image identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination