CN113344002B - Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium - Google Patents

Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium

Info

Publication number
CN113344002B
Authority
CN
China
Prior art keywords
target
coordinates
target coordinates
targets
specific area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110860925.3A
Other languages
Chinese (zh)
Other versions
CN113344002A (en)
Inventor
李志轩
刘博�
唐东明
严超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tuzhi Tianxia Technology Co ltd
Original Assignee
Beijing Tuzhi Tianxia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tuzhi Tianxia Technology Co ltd filed Critical Beijing Tuzhi Tianxia Technology Co ltd
Priority to CN202110860925.3A priority Critical patent/CN113344002B/en
Publication of CN113344002A publication Critical patent/CN113344002A/en
Application granted granted Critical
Publication of CN113344002B publication Critical patent/CN113344002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target coordinate de-duplication method, a target coordinate de-duplication system, an electronic device and a readable storage medium, and relates to the technical field of data processing. The method comprises the following steps: acquiring the target coordinates, types and number of all targets in a specific area range; dividing the specific area range into a plurality of area sub-arrays; partitioning all target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set; grouping all target coordinates in the same set in pairs to construct an adjacency matrix; judging whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold; if the metric is smaller than the threshold, judging whether the two targets corresponding to the two target coordinates are of the same type; if the two targets are of the same type, judging that the two target coordinates correspond to the same target; and removing the duplicate target coordinates. With this technical scheme, repeatedly identified target coordinates can be removed effectively and the accuracy of the identification result is improved.

Description

Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of data processing, in particular to a target coordinate duplicate removal method, a target coordinate duplicate removal system, electronic equipment and a readable storage medium.
Background
Image recognition, a practical application of deep learning algorithms, refers to a technique in which a computer processes, analyses and understands images so as to recognise targets and objects of various patterns. In real life and production, image recognition technology is needed in many fields to identify targets. For example, when inspecting a photovoltaic power station, image recognition is needed to identify defects and then determine the defect coordinates so as to facilitate maintenance; in face recognition, image recognition is needed to identify facial feature points and their coordinates, which are then compared with the portraits stored in a database to determine whether they belong to the same user.
The inventor has found that, in order to prevent omissions during image recognition, the ranges of two adjacently acquired images are generally required to overlap to a certain extent. However, this increases the number of acquired images and the number of targets recognised in them, which affects the accuracy of the recognition result.
Disclosure of Invention
The invention provides a target coordinate duplicate removal method, a target coordinate duplicate removal system, electronic equipment and a readable storage medium, which can effectively remove repeatedly identified target coordinates and improve the accuracy of identification results.
In a first aspect, the present invention provides a method for removing duplicate of a target coordinate, which adopts the following technical scheme:
the target coordinate de-duplication method comprises the following steps:
acquiring the target coordinates, types and number of all targets in a specific area range, wherein all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap;
dividing the specific area range into a plurality of area sub-arrays, wherein each area sub-array range covers the area ranges of at least two identifications;
partitioning all the target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set;
grouping all target coordinates in the same set in pairs to construct an adjacency matrix;
judging whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold;
if the metric is smaller than the threshold, judging whether the two targets corresponding to the two target coordinates are of the same type;
if the two targets are of the same type, judging that the two target coordinates correspond to the same target;
and removing the duplicate target coordinates.
Optionally, acquiring the types and number of all targets in the specific area range comprises: acquiring a plurality of infrared images of the specific area range, wherein the area ranges corresponding to two adjacently acquired infrared images overlap; and identifying each infrared image to obtain the types and number of the targets in the specific area range.
Optionally, the target coordinates are world geographic coordinates, and acquiring the target coordinates of all targets in the specific area range comprises: acquiring the coordinates of each target in the infrared image coordinate system; acquiring a plurality of wide-angle images of the specific area range at the same time as the infrared images are acquired; mapping each infrared image onto the corresponding wide-angle image; and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates.
Optionally, the specific area range is a photovoltaic power station, the targets are defects, and an unmanned aerial vehicle carrying a dual-optical camera hovers at fixed points and acquires the infrared images and the wide-angle images simultaneously; converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates comprises:
performing aerial triangulation (space-three) on the wide-angle images to obtain the external attitude angles, relative to the world coordinate system, of the unmanned aerial vehicle when acquiring each wide-angle image;
and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system by combining the internal parameters of the dual-optical camera, the external attitude angles and the hovering position information, to obtain the target coordinates.
Optionally, grouping all the target coordinates in the same set in pairs to construct the adjacency matrix comprises:
denoting the number of target coordinates in the same set by N;
constructing an N×N adjacency matrix A from the pairwise groups of the N target coordinates, wherein the N values in the i-th row of the adjacency matrix A are, in order, the metrics between the i-th target coordinate and the 1st to N-th target coordinates, i = 1, 2, …, N.
Optionally, the specific area range is a photovoltaic power station and the targets are defects; a plurality of photovoltaic modules of identical size are arranged in the photovoltaic power station, and the threshold Th and the width W of the photovoltaic modules satisfy: W ≤ Th ≤ 2W.
Optionally, removing the duplicate target coordinates comprises: determining the centre points of the area ranges corresponding to the identification processes of the two target coordinates; calculating, for each of the two target coordinates, the distance from its position to the centre point of its corresponding area range; and retaining the target coordinate whose position is closer to the centre point of its area range.
In a second aspect, the present invention provides a target coordinate deduplication system, which adopts the following technical solution:
the target coordinate deduplication system comprises:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is configured to acquire the target coordinates, types and number of all targets in a specific area range, all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap;
the sub-array division module is configured to divide the specific area range into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications;
the set division module is configured to partition all the target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set;
the matrix construction module is configured to group all target coordinates in the same set in pairs to construct an adjacency matrix;
the first judgment module is configured to judge whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold;
the second judgment module is configured to judge, when the metric is smaller than the threshold, whether the two targets corresponding to the two target coordinates are of the same type;
the judging module is configured to determine, when the two targets are of the same type, that the two target coordinates correspond to the same target;
and the de-duplication module is configured to remove the duplicate target coordinates.
In a third aspect, the present invention provides an electronic device, comprising: a memory storing execution instructions; and a processor executing execution instructions stored by the memory to cause the processor to perform the target coordinate deduplication method of any one of the above.
In a fourth aspect, the present invention provides a readable storage medium having stored therein execution instructions, which when executed by a processor, are configured to implement the target coordinate deduplication method as described in any one of the above.
The invention provides a target coordinate de-duplication method, a system, an electronic device and a readable storage medium. The target coordinate de-duplication method comprises: acquiring the target coordinates, types and number of all targets in a specific area range, wherein all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap; dividing the specific area range into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications; partitioning all target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set; grouping all target coordinates in the same set in pairs to construct an adjacency matrix; judging whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold; if the metric is smaller than the threshold, judging whether the two targets corresponding to the two target coordinates are of the same type; if the two targets are of the same type, judging that the two target coordinates correspond to the same target; and removing the duplicate target coordinates. The target coordinate de-duplication method provided by the invention can therefore effectively remove repeatedly identified target coordinates and improve the accuracy of the identification result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of a method for removing duplicate target coordinates according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an overlapping relationship between two adjacent infrared images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the location of a mark on an infrared image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the position of a corresponding mark on a wide-angle image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an infrared image corresponding to a wide-angle image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a coordinate transformation process provided by an embodiment of the present invention;
fig. 7 is a detailed flowchart of the sub-step S142 provided in the embodiment of the present invention;
fig. 8 is a block diagram of a target coordinate deduplication system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the technical features in the embodiments of the present invention may be combined with each other without conflict.
An embodiment of the present invention provides a target coordinate de-duplication method. Specifically, as shown in FIG. 1, which is a flowchart of the target coordinate de-duplication method provided by the embodiment of the present invention, the method comprises:
and step S1, acquiring the target coordinates, types and number of all targets in the specific area range.
All the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap. Because adjacent identification ranges overlap, no target is omitted over the whole identification process, so the finally output target coordinates are more complete and accurate; however, the targets obtained in two adjacent identifications may include repeatedly identified targets, and these duplicate target coordinates can be removed by the following steps provided by the embodiment of the invention.
The identification process may directly identify the targets within the area range and acquire their coordinates, types and number; or it may first acquire images of the area range, such as infrared images or optical images, and then obtain the coordinates, types and number of the targets by identifying the images. During identification, both the number of targets in the specific area range and their types are identified, and the types can be output together with the result; this prevents targets whose coordinates are close but whose types differ from being removed by mistake, which further improves the accuracy of the output result. In the embodiment of the invention, images of the area range are acquired first, and the coordinates, types and number of the targets are then obtained by identifying the images.
Optionally, in the embodiment of the present invention, acquiring the types and number of all targets in the specific area range comprises: acquiring a plurality of infrared images of the specific area range, as shown in FIG. 2, which illustrates the overlapping relationship between two adjacently acquired infrared images (their area ranges are represented by a solid-line frame and a dashed-line frame respectively); and identifying each infrared image to obtain the types and number of the targets in the specific area range. A specific way of identifying each infrared image is to construct a neural network deep learning model, input the infrared images into the model for target detection, and thereby obtain the types and number of the targets.
In the embodiment of the present invention, the coordinate system used for the target coordinates is not particularly limited; any coordinate system may be used as long as its dimensions are consistent. It may be the coordinate system of the image, or a world coordinate system such as the WGS84 or UTM coordinate system, but it should not be coordinates expressed as raw longitude and latitude. For different target coordinate types, the specific manner of acquiring the target coordinates of all targets in the specific area range can be selected according to actual needs.
In an example, the target coordinates in the embodiment of the present invention are coordinates in an infrared image coordinate system, and the target coordinates can be obtained by directly recognizing each acquired infrared image.
In another example, the target coordinates in the embodiment of the present invention are world geographic coordinates, and acquiring the target coordinates of all targets in the specific area range includes:
and a substep S11 of acquiring the coordinates of the target in the infrared image coordinate system.
For example, a neural network deep learning model is constructed and the infrared image is input into it for target detection, thereby obtaining the coordinates of the target in the infrared image coordinate system; if the target is large, the coordinates of the centre of the bounding box of the target in the infrared image are taken as its coordinates in the infrared image coordinate system.
Substep S12: acquiring a plurality of wide-angle images of the specific area range at the same time as the infrared images are acquired.
Taking the specific area range being a photovoltaic power station and the target being a defect as an example, an unmanned aerial vehicle carrying a dual-optical camera can hover at fixed points and acquire the infrared images and wide-angle images.
Substep S13: mapping the infrared image onto the corresponding wide-angle image.
Optionally, as shown in FIGS. 3, 4 and 5, where FIG. 3 shows marker positions on an infrared image, FIG. 4 shows the corresponding marker positions on the wide-angle image, and FIG. 5 shows the one-to-one correspondence between the infrared image and the wide-angle image, mapping the infrared image onto the wide-angle image comprises: selecting 4 marker positions in the infrared image; locating the 4 corresponding positions in the wide-angle image and determining the parameters of the perspective transformation; obtaining the perspective transformation matrix from these parameters (the same matrix applies to all infrared images and all wide-angle images in the identification process); and mapping the infrared image onto the wide-angle image according to the perspective transformation matrix, thereby realising the mapping from the infrared image to the wide-angle image.
In the process, the coordinates of the target in the infrared image coordinate system are also mapped onto the wide-angle image.
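As an illustration only, the four-point perspective mapping described above can be sketched with OpenCV; the pixel values and the use of cv2.getPerspectiveTransform / cv2.perspectiveTransform are assumptions of this sketch, not requirements of the invention:

```python
import numpy as np
import cv2

# Four hand-picked marker positions in the infrared image and their
# counterparts in the wide-angle image (pixel values are placeholders).
ir_pts   = np.float32([[ 42,  37], [598,  41], [601, 448], [ 39, 452]])
wide_pts = np.float32([[310, 215], [905, 221], [908, 655], [305, 660]])

# 3x3 perspective (homography) matrix; reused for every image pair in the run.
M = cv2.getPerspectiveTransform(ir_pts, wide_pts)

# Map target coordinates detected in the infrared image onto the wide-angle image.
ir_targets = np.float32([[[120, 200]], [[340, 310]]])   # shape (N, 1, 2)
wide_targets = cv2.perspectiveTransform(ir_targets, M)
```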
Substep S14: converting the coordinates, in the infrared image coordinate system, of the target mapped onto the wide-angle image into the world coordinate system to obtain the target coordinates.
Still taking the specific area range being a photovoltaic power station and the target being a defect as an example, with the unmanned aerial vehicle carrying a dual-optical camera, hovering at fixed points and acquiring the infrared image and the wide-angle image simultaneously, and as shown in FIG. 6, which is a schematic diagram of the coordinate transformation process provided by the embodiment of the present invention, converting the coordinates, in the infrared image coordinate system, of the target mapped onto the wide-angle image into the world coordinate system to obtain the target coordinates comprises:
and a substep S141 of performing space-three operation on the wide-angle image to obtain an external attitude angle relative to the world coordinate system when the unmanned aerial vehicle acquires the wide-angle image.
Substep S141 may specifically include: performing aerial triangulation (space-three) on the wide-angle image to obtain the geographic coordinates and elevation (X, Y, Z) of the shooting centre point of the wide-angle image in a world coordinate system (such as the WGS84 coordinate system), together with the external attitude angles, i.e. the rotation angles about the three spatial coordinate axes X, Y and Z.
Substep S142: converting the coordinates, in the infrared image coordinate system, of the target mapped onto the wide-angle image into the world coordinate system by combining the internal parameters and external attitude angle of the dual-optical camera with the hovering position information, to obtain the target coordinates.
As shown in FIG. 7, which is a detailed flowchart of substep S142 provided by the embodiment of the present invention, substep S142 may specifically include:
and a substep a, acquiring longitude, latitude and absolute elevation of the hovering position of the unmanned aerial vehicle when the wide-angle image is acquired, and acquiring corresponding absolute elevation information of the infrared image center point at a corresponding geographic position.
Substep b: correcting the distortion of the wide-angle image by combining the internal parameters of the dual-optical camera.
Optionally, correcting the distortion of the wide-angle image by combining the internal parameters of the dual-optical camera comprises: letting the coordinates, in the infrared image coordinate system, of the target mapped onto the wide-angle image be (x, y) and the geometric centre coordinates of the dual-optical camera image be (x0, y0), and applying a radial and tangential distortion correction, where w0 denotes the width of the infrared image, d the unit pixel length of the dual-optical camera, f the focal length of the dual-optical camera, k1, k2, k3 the Taylor expansion coefficients of the radial distortion of the dual-optical camera, and p1, p2 the coefficients of its tangential distortion.
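The correction formulas appear only as rendered images in the source; the sketch below therefore assumes the standard Brown radial-tangential distortion model, which uses the same coefficients k1, k2, k3 and p1, p2 named above, and is meant purely as an illustration of such a correction:

```python
def correct_distortion(x, y, x0, y0, d, f, k1, k2, k3, p1, p2):
    """Return distortion-corrected image-plane coordinates (assumed Brown model)."""
    # Normalise pixel offsets to the image plane (unit pixel length d, focal length f).
    xn = (x - x0) * d / f
    yn = (y - y0) * d / f
    r2 = xn ** 2 + yn ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Radial plus tangential (decentering) terms.
    xc = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yc = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    return xc, yc
```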
Substep c: calculating the rotation matrix according to the Y, Z, X axis rotation order.
Optionally, calculating the rotation matrix according to the Y, Z, X axis rotation order comprises: composing the rotation matrix R from the elementary rotations about the Y, Z and X axes, taken in that order, where each elementary rotation is built from the corresponding external attitude angle obtained in substep S141, so that the nine entries of R are products of the sines and cosines of the three attitude angles.
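A minimal sketch of composing a rotation matrix in the Y, Z, X order; the assignment of the attitude angles to axes (omega about X, phi about Y, kappa about Z) and the use of NumPy are assumptions of this sketch:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Compose R from elementary rotations taken in the Y, Z, X order.
    omega, phi, kappa: rotation angles (radians) about the X, Y and Z axes."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Ry @ Rz @ Rx   # Y, then Z, then X
```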
and d, calculating to obtain a target coordinate according to the rotation matrix and the geographic coordinate of the world coordinate system of the wide-angle image shooting center point.
Optionally, the calculating the target coordinate according to the rotation matrix and the geographic coordinate of the world coordinate system of the wide-angle image shooting center point includes:
Figure 563179DEST_PATH_IMAGE016
Figure 931843DEST_PATH_IMAGE017
(Xs,Ys) Representing target coordinates, (X)c,Yc) And (x, y) selecting the final value obtained after the correction.
Step S2: dividing the specific area range into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications.
Optionally, step S2 specifically comprises: dividing the specific area range into a plurality of area sub-arrays according to factors such as its actual shape. Taking the specific area range being a photovoltaic power station as an example, the inspection range is divided into a plurality of area sub-arrays according to the actual terrain of the photovoltaic power station and how difficult it is for the unmanned aerial vehicle to photograph. Specifically, a kml file is exported, which contains the world geographic coordinate ranges of the photovoltaic power station and of each area sub-array; a tfw file marking the range of each area sub-array is then generated from the kml file, and the extent and unit pixel size defined for each area sub-array are determined. This facilitates conversion between world geographic coordinates and image coordinates of the station image when the inspection report is generated.
Step S3: partitioning all target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set.
Taking the target being a defect of a photovoltaic power station as an example, all target coordinates are partitioned into sets by area sub-array using the kml file.
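A minimal sketch of this set division, assuming rectangular sub-array extents taken from the kml file; the extents, key names and the use of axis-aligned bounding boxes are hypothetical, and real sub-arrays may be arbitrary polygons:

```python
from collections import defaultdict

# Hypothetical sub-array extents (min_x, min_y, max_x, max_y) in world coordinates,
# e.g. as exported from the kml file.
subarrays = {
    "A1": (500000.0, 4300000.0, 500120.0, 4300080.0),
    "A2": (500120.0, 4300000.0, 500240.0, 4300080.0),
}

def assign_sets(targets):
    """Group target coordinates by the sub-array whose extent contains them."""
    sets = defaultdict(list)
    for t in targets:
        for name, (x0, y0, x1, y1) in subarrays.items():
            if x0 <= t["x"] < x1 and y0 <= t["y"] < y1:
                sets[name].append(t)
                break
    return sets
```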
Step S4: grouping all target coordinates in the same set in pairs to construct an adjacency matrix.
Optionally, grouping all target coordinates in the same set in pairs to construct the adjacency matrix comprises:
the number of target coordinates in the same set is N;
an N×N adjacency matrix A is constructed from the pairwise groups of the N target coordinates; the N values in the i-th row of A are, in order, the metrics between the i-th target coordinate and the 1st to N-th target coordinates, i = 1, 2, …, N. The metric a between two target coordinates (x1, y1) and (x2, y2) is calculated as:

a = √[(x1 − x2)² + (y1 − y2)²]
Taking N = 4 as an example,

A =
[ a11  a12  a13  a14 ]
[ a21  a22  a23  a24 ]
[ a31  a32  a33  a34 ]
[ a41  a42  a43  a44 ]

where the 4 values in row 1 of the adjacency matrix A are the metrics a11, a12, a13, a14 between the 1st target coordinate and the 1st to 4th target coordinates; the 4 values in row 2 are the metrics a21, a22, a23, a24 between the 2nd target coordinate and the 1st to 4th target coordinates; the 4 values in row 3 are the metrics a31, a32, a33, a34 between the 3rd target coordinate and the 1st to 4th target coordinates; and the 4 values in row 4 are the metrics a41, a42, a43, a44 between the 4th target coordinate and the 1st to 4th target coordinates.
The metric between each target coordinate and itself is 0, i.e. a11, a22, a33 and a44 are all 0 and can be written directly as "0" in the adjacency matrix A. Moreover, the metric a12 between the 1st and 2nd target coordinates equals the metric a21 between the 2nd and 1st target coordinates; likewise a13 equals a31, a14 equals a41, a23 equals a32, a24 equals a42, and a34 equals a43, so each equal pair in the adjacency matrix A only needs to be computed once. The adjacency matrix A can therefore be simplified to

A =
[ 0  a12  a13  a14 ]
[ -   0   a23  a24 ]
[ -   -    0   a34 ]
[ -   -    -    0  ]

where "-" means no calculation is needed.
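A minimal sketch of building the adjacency matrix for one set, using the distance formula given above as the metric and computing each value only once since a_ij = a_ji and the diagonal is zero (function and variable names are hypothetical):

```python
import numpy as np

def adjacency_matrix(coords):
    """coords: list of (x, y) target coordinates belonging to one set."""
    n = len(coords)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):          # skip the diagonal and the lower triangle
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            A[i, j] = A[j, i] = np.hypot(dx, dy)
    return A
```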
Adjacency matrices do have related applications in the prior art, for example associating pedestrians across two consecutive frames of a video, where each value in the adjacency matrix represents the degree of association between pedestrians in different frames; if a value is smaller than the threshold, an association is established and the pedestrians in the two frames are the same person. For example, if the first frame contains pedestrian 1, pedestrian 2, pedestrian 3 and pedestrian 4, and the second frame contains pedestrian a, pedestrian b, pedestrian c and pedestrian d, the constructed adjacency matrix B is

B =
[ a1a  a1b  a1c  a1d ]
[ a2a  a2b  a2c  a2d ]
[ a3a  a3b  a3c  a3d ]
[ a4a  a4b  a4c  a4d ]

If the value of a1a is smaller than the threshold, pedestrian 1 in the first frame and pedestrian a in the second frame are the same person. In that application scenario of the adjacency matrix, similarity is what is judged, and every value in the adjacency matrix has to be computed.
Step S5: judging whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold.
The threshold for the metric between two target coordinates in the adjacency matrix can be set according to the actual application scenario. Taking the specific area range being a photovoltaic power station and the target being a defect as an example, a plurality of photovoltaic modules of identical size are arranged in the photovoltaic power station, and the threshold Th and the width W of the photovoltaic modules may satisfy W ≤ Th ≤ 2W. A photovoltaic module is generally rectangular, and its width W refers to the length of the shorter side of the rectangle. In this application scenario, setting the threshold Th within this range gives a good de-duplication effect while avoiding the loss of target coordinates caused by improper de-duplication.
The judging process may be as follows: traverse each value in the adjacency matrix A and compare it with the threshold Th. For example, if a12 in the adjacency matrix A is 0.1 and the threshold is 0.2, then a12 is smaller than the threshold, indicating that the metric between the 1st and 2nd target coordinates is smaller than the threshold.
If the metric between two target coordinates in the adjacency matrix A is smaller than the threshold, the same target may have been identified 2 times; if the metrics among three target coordinates in the adjacency matrix A are all smaller than the threshold, the same target may have been identified 3 times. And so on.
It should be noted that, in practice, it is necessary to judge whether the metric between any two target coordinates in the adjacency matrix is smaller than the threshold, which only requires repeating the above procedure.
Step S6: if the metric is smaller than the threshold, judging whether the two targets corresponding to the two target coordinates are of the same type.
The target types have already been acquired in step S1 and can be compared directly here. Taking the target being a defect of a photovoltaic power station as an example, the target type can be a diode fault, a hot spot, an open circuit and the like.
Step S7: if the two targets are of the same type, judging that the two target coordinates correspond to the same target.
If the metric between the coordinates of two targets is smaller than the threshold and the two targets are of the same type, the two target coordinates are judged to correspond to the same target, i.e. the same target has been identified repeatedly during the identification process. If the metric between the coordinates of two targets is smaller than the threshold but their types differ, the two targets are different targets and there is no repeated identification.
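Steps S5 to S7 can be sketched as extracting, from the adjacency matrix, the index pairs whose metric is below the threshold and whose target types coincide (function and variable names are hypothetical):

```python
def duplicate_pairs(A, types, threshold):
    """Return index pairs (i, j) judged to be the same target:
    metric below the threshold and identical target type."""
    pairs = []
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] < threshold and types[i] == types[j]:
                pairs.append((i, j))
    return pairs
```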
Step S8: removing the duplicate target coordinates.
Optionally, removing the duplicate target coordinates comprises: determining the centre points of the area ranges corresponding to the identification processes of the two target coordinates; calculating, for each of the two target coordinates, the distance from its position to the centre point of its corresponding area range; and retaining the target coordinate whose position is closer to the centre point of its area range. The distance from a target coordinate (xA, yA) to the centre point (xB, yB) of its corresponding area range is calculated as:

d = √[(xA − xB)² + (yA − yB)²]
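A minimal sketch of this retention rule, assuming each target record carries the centre point of the area range (image) it was recognised in; the dictionary keys are hypothetical:

```python
import math

def keep_closer(t1, t2):
    """t1, t2: duplicate targets, each carrying the centre point of the area range
    it was recognised in. Returns the one to keep, i.e. the one whose position
    is closer to its own centre point."""
    d1 = math.hypot(t1["x"] - t1["center"][0], t1["y"] - t1["center"][1])
    d2 = math.hypot(t2["x"] - t2["center"][0], t2["y"] - t2["center"][1])
    return t1 if d1 <= d2 else t2
```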
if one target is repeatedly identified for 2 times, if the target coordinate 1 and the target coordinate 2 are 2 target coordinates obtained by repeatedly identifying the same target, and the metric between the target coordinate 1 and the target coordinate 2 in the adjacency matrix is smaller than the threshold, it is sufficient to remove one of the target coordinates according to the method in step S8 when it is further determined that the target 1 and the target 2 are of the same type.
If a target has been identified 3 times, say target coordinates 1, 2 and 3 are the 3 target coordinates obtained by repeatedly identifying the same target, the metrics between coordinates 1 and 2, between 2 and 3, and between 1 and 3 in the adjacency matrix are all smaller than the threshold, and targets 1, 2 and 3 are further judged to be of the same type, then it suffices to remove two of the three target coordinates according to the method in step S8.
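For a target identified three or more times, the duplicate pairs form a connected group; one possible way to keep a single representative per group is sketched below (the union-find grouping and the reuse of keep_closer from the previous sketch are implementation choices assumed here, not prescribed by the source):

```python
def remove_duplicates(group, pairs):
    """group: targets of one set; pairs: index pairs judged to be the same target.
    Keeps, per connected component of duplicates, the target closest to its own
    area-range centre point; all other members are removed."""
    parent = list(range(len(group)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in pairs:                      # union the duplicate pairs
        parent[find(i)] = find(j)

    best = {}                               # component root -> index of kept target
    for k, t in enumerate(group):
        r = find(k)
        if r not in best or keep_closer(group[best[r]], t) is t:
            best[r] = k
    return [group[k] for k in sorted(best.values())]
```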
In the target coordinate de-duplication method provided by the embodiment of the invention, the target coordinates, types and number of all targets in a specific area range are acquired, all the targets being obtained by identifying the specific area range multiple times with the area ranges of two adjacent identifications overlapping; the specific area range is divided into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications; all target coordinates are partitioned into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set; all target coordinates in the same set are grouped in pairs to construct an adjacency matrix; whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold is judged; if the metric is smaller than the threshold, whether the two targets corresponding to the two target coordinates are of the same type is judged; if the two targets are of the same type, the two target coordinates are judged to correspond to the same target; and the duplicate target coordinates are removed. The target coordinate de-duplication method provided by the invention can therefore effectively remove repeatedly identified target coordinates and improve the accuracy of the identification result.
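Putting the steps together, a minimal end-to-end sketch of this de-duplication flow (all names, the Euclidean metric and the per-image centre points are assumptions of this sketch, not a definitive implementation):

```python
import math
from collections import defaultdict

def deduplicate(targets, subarray_of, threshold):
    """targets: list of dicts with keys 'x', 'y', 'type', 'center' (centre of the
    area range the target was recognised in). subarray_of(x, y) returns the
    sub-array a coordinate belongs to. Returns targets with duplicates removed."""
    # Steps S2-S3: partition all target coordinates into sets by area sub-array.
    sets = defaultdict(list)
    for t in targets:
        sets[subarray_of(t["x"], t["y"])].append(t)

    keep = []
    for group in sets.values():
        removed = set()
        # Steps S4-S7: pairwise metric and type comparison inside each set.
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                a, b = group[i], group[j]
                d = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
                if d < threshold and a["type"] == b["type"]:
                    # Step S8: drop the coordinate farther from its own centre point.
                    da = math.hypot(a["x"] - a["center"][0], a["y"] - a["center"][1])
                    db = math.hypot(b["x"] - b["center"][0], b["y"] - b["center"][1])
                    removed.add(j if db > da else i)
        keep.extend(t for k, t in enumerate(group) if k not in removed)
    return keep
```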
In addition, an embodiment of the present invention provides a target coordinate de-duplication system. Specifically, as shown in FIG. 8, which is a block diagram of the target coordinate de-duplication system provided by the embodiment of the present invention, the system comprises:
the acquisition module 10, configured to acquire the target coordinates, types and number of all targets in a specific area range, wherein all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap;
the sub-array division module 20, configured to divide the specific area range into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications;
the set division module 30, configured to partition all target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set;
the matrix construction module 40, configured to group all target coordinates in the same set in pairs to construct an adjacency matrix;
the first judgment module 50, configured to judge whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold;
the second judgment module 60, configured to judge, when the metric is smaller than the threshold, whether the two targets corresponding to the two target coordinates are of the same type;
the judging module 70, configured to determine, when the two targets are of the same type, that the two target coordinates correspond to the same target;
and a deduplication module 80 for removing duplicate target coordinates.
It should be noted that the specific contents of the foregoing target coordinate deduplication method are all applicable to each corresponding module, and are not described herein again.
In addition, an embodiment of the present invention further provides an electronic device, where the electronic device includes: a memory storing execution instructions; and a processor executing the execution instructions stored by the memory to cause the processor to perform the target coordinate deduplication method of any one of the above.
The embodiment of the invention also provides a readable storage medium, wherein the readable storage medium stores an execution instruction, and the execution instruction is used for realizing the target coordinate deduplication method in any one of the above steps when being executed by a processor.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method for de-duplicating target coordinates, comprising:
acquiring the target coordinates, types and number of all targets in a specific area range, wherein all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap;
dividing the specific area range into a plurality of area sub-arrays, wherein each area sub-array range covers the area ranges of at least two identifications;
partitioning all the target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set;
grouping all target coordinates in the same set in pairs to construct an adjacency matrix;
judging whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold;
if the metric is smaller than the threshold, judging whether the two targets corresponding to the two target coordinates are of the same type;
if the two targets are of the same type, judging that the two target coordinates correspond to the same target;
removing the duplicate target coordinates;
wherein acquiring the types and number of all targets in the specific area range comprises: acquiring a plurality of infrared images of the specific area range, wherein the area ranges corresponding to two adjacently acquired infrared images overlap; and identifying each infrared image to obtain the types and number of the targets in the specific area range;
the target coordinates are world geographic coordinates, and acquiring the target coordinates of all targets in the specific area range comprises: acquiring the coordinates of each target in the infrared image coordinate system; acquiring a plurality of wide-angle images of the specific area range at the same time as the infrared images are acquired; mapping each infrared image onto the corresponding wide-angle image; and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates;
the specific area range is a photovoltaic power station, the targets are defects, and an unmanned aerial vehicle carrying a dual-optical camera hovers at fixed points and acquires the infrared images and the wide-angle images simultaneously; converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates comprises:
performing aerial triangulation (space-three) on the wide-angle images to obtain the external attitude angles, relative to the world coordinate system, of the unmanned aerial vehicle when acquiring each wide-angle image;
and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system by combining the internal parameters of the dual-optical camera, the external attitude angles and the hovering position information, to obtain the target coordinates.
2. The method for de-duplicating target coordinates according to claim 1, wherein grouping all target coordinates in the same set in pairs to construct the adjacency matrix comprises:
denoting the number of target coordinates in the same set by N;
constructing an N×N adjacency matrix A from the pairwise groups of the N target coordinates, wherein the N values in the i-th row of the adjacency matrix A are, in order, the metrics between the i-th target coordinate and the 1st to N-th target coordinates, i = 1, 2, …, N.
3. The method for de-duplicating target coordinates according to claim 1, wherein the specific area range is a photovoltaic power station and the targets are defects; a plurality of photovoltaic modules of identical size are arranged in the photovoltaic power station, and the threshold Th and the width W of the photovoltaic modules satisfy: W ≤ Th ≤ 2W.
4. The method for de-duplicating target coordinates according to claim 1, wherein removing the duplicate target coordinates comprises: determining the centre points of the area ranges corresponding to the identification processes of the two target coordinates; calculating, for each of the two target coordinates, the distance from its position to the centre point of its corresponding area range; and retaining the target coordinate whose position is closer to the centre point of its area range.
5. A target coordinate deduplication system, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is configured to acquire the target coordinates, types and number of all targets in a specific area range, all the targets are obtained by identifying the specific area range multiple times, and the area ranges of two adjacent identifications overlap;
the sub-array division module is configured to divide the specific area range into a plurality of area sub-arrays, each area sub-array range covering the area ranges of at least two identifications;
the set division module is configured to partition all the target coordinates into sets according to the area sub-arrays, all target coordinates in the same area sub-array forming one set;
the matrix construction module is configured to group all target coordinates in the same set in pairs to construct an adjacency matrix;
the first judgment module is configured to judge whether the metric between two target coordinates in the adjacency matrix is smaller than a threshold;
the second judgment module is configured to judge, when the metric is smaller than the threshold, whether the two targets corresponding to the two target coordinates are of the same type;
the judging module is configured to determine, when the two targets are of the same type, that the two target coordinates correspond to the same target;
the de-duplication module is configured to remove the duplicate target coordinates;
wherein acquiring the types and number of all targets in the specific area range comprises: acquiring a plurality of infrared images of the specific area range, wherein the area ranges corresponding to two adjacently acquired infrared images overlap; and identifying each infrared image to obtain the types and number of the targets in the specific area range;
the target coordinates are world geographic coordinates, and acquiring the target coordinates of all targets in the specific area range comprises: acquiring the coordinates of each target in the infrared image coordinate system; acquiring a plurality of wide-angle images of the specific area range at the same time as the infrared images are acquired; mapping each infrared image onto the corresponding wide-angle image; and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates;
the specific area range is a photovoltaic power station, the targets are defects, and an unmanned aerial vehicle carrying a dual-optical camera hovers at fixed points and acquires the infrared images and the wide-angle images simultaneously; converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system to obtain the target coordinates comprises:
performing aerial triangulation (space-three) on the wide-angle images to obtain the external attitude angles, relative to the world coordinate system, of the unmanned aerial vehicle when acquiring each wide-angle image;
and converting the coordinates, in the infrared image coordinate system, of the targets mapped onto the wide-angle images into the world coordinate system by combining the internal parameters of the dual-optical camera, the external attitude angles and the hovering position information, to obtain the target coordinates.
6. An electronic device, characterized in that the electronic device comprises: a memory storing execution instructions; and a processor executing the execution instructions stored in the memory to cause the processor to execute the target coordinate deduplication method of any one of claims 1-4.
7. A readable storage medium having stored thereon execution instructions, which when executed by a processor, are configured to implement the target coordinate deduplication method of any one of claims 1-4.
CN202110860925.3A 2021-07-29 2021-07-29 Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium Active CN113344002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110860925.3A CN113344002B (en) 2021-07-29 2021-07-29 Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110860925.3A CN113344002B (en) 2021-07-29 2021-07-29 Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113344002A CN113344002A (en) 2021-09-03
CN113344002B true CN113344002B (en) 2022-02-01

Family

ID=77480455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110860925.3A Active CN113344002B (en) 2021-07-29 2021-07-29 Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113344002B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472069B (en) * 2023-12-28 2024-03-26 烟台宇控软件有限公司 Robot control method and system for power transmission line detection

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106847067A (en) * 2017-01-19 2017-06-13 武汉联图时空信息科技有限公司 The automatic geometric correction method of indoor parking map
CN107015560A (en) * 2017-03-02 2017-08-04 浙江大学 A kind of method for inspecting of the photovoltaic array based on unmanned plane
CN108253971A (en) * 2017-12-29 2018-07-06 深圳创动科技有限公司 A kind of method for inspecting and system
CN110503679A (en) * 2019-08-29 2019-11-26 四川轻化工大学 A kind of preparation of infrared reference figure and evaluation method
CN111062991A (en) * 2019-12-27 2020-04-24 西安因诺航空科技有限公司 Method and device for positioning inspection target of photovoltaic scene and readable storage medium
EP3726729A1 (en) * 2017-12-14 2020-10-21 Acciona Energía, S.A. Automated photovoltaic plant inspection system and method
CN112577472A (en) * 2020-12-03 2021-03-30 上海振华重工电气有限公司 Unmanned aerial vehicle photovoltaic inspection positioning system and method
CN112857246A (en) * 2021-02-05 2021-05-28 中国矿业大学(北京) Strip mine slope deformation online monitoring method utilizing ground three-eye video matching

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344746B (en) * 2018-09-17 2022-02-01 曜科智能科技(上海)有限公司 Pedestrian counting method, system, computer device and storage medium
CN111862204A (en) * 2019-12-18 2020-10-30 北京嘀嘀无限科技发展有限公司 Method for extracting visual feature points of image and related device
CN112634370A (en) * 2020-12-31 2021-04-09 广州极飞科技有限公司 Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN112633535B (en) * 2021-01-14 2024-05-24 国网安徽省电力有限公司 Intelligent inspection method and system for photovoltaic power station based on unmanned aerial vehicle image
CN112923928B (en) * 2021-01-29 2022-05-06 江苏提米智能科技有限公司 Photovoltaic panel navigation method and device based on image recognition, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106847067A (en) * 2017-01-19 2017-06-13 武汉联图时空信息科技有限公司 The automatic geometric correction method of indoor parking map
CN107015560A (en) * 2017-03-02 2017-08-04 浙江大学 A kind of method for inspecting of the photovoltaic array based on unmanned plane
EP3726729A1 (en) * 2017-12-14 2020-10-21 Acciona Energía, S.A. Automated photovoltaic plant inspection system and method
CN108253971A (en) * 2017-12-29 2018-07-06 深圳创动科技有限公司 A kind of method for inspecting and system
CN110503679A (en) * 2019-08-29 2019-11-26 四川轻化工大学 A kind of preparation of infrared reference figure and evaluation method
CN111062991A (en) * 2019-12-27 2020-04-24 西安因诺航空科技有限公司 Method and device for positioning inspection target of photovoltaic scene and readable storage medium
CN112577472A (en) * 2020-12-03 2021-03-30 上海振华重工电气有限公司 Unmanned aerial vehicle photovoltaic inspection positioning system and method
CN112857246A (en) * 2021-02-05 2021-05-28 中国矿业大学(北京) Strip mine slope deformation online monitoring method utilizing ground three-eye video matching

Also Published As

Publication number Publication date
CN113344002A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
Ukhwah et al. Asphalt pavement pothole detection using deep learning method based on YOLO neural network
Akagic et al. Pothole detection: An efficient vision based method using rgb color space image segmentation
CN113286129B (en) Inspection method and system for photovoltaic power station
TWI521448B (en) Vehicle identification system and method
CN110853033A (en) Video detection method and device based on inter-frame similarity
CN112329645B (en) Image detection method, device, electronic equipment and storage medium
CN112446363A (en) Image splicing and de-duplication method and device based on video frame extraction
Koch et al. Improving pothole recognition through vision tracking for automated pavement assessment
CN112258507B (en) Target object detection method and device of internet data center and electronic equipment
CN114419028A (en) Transmission line insulator defect duplication removing method and device integrating space multiple visual angles
CN115424155B (en) Illegal construction detection method, illegal construction detection device and computer storage medium
CN113344002B (en) Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium
CN104065863A (en) Image processing method and processing device
CN108694716B (en) Workpiece detection method, model training method and equipment
CN113011435A (en) Target object image processing method and device and electronic equipment
CN116342538A (en) Method and device for detecting running and leaking, intelligent equipment and storage medium
CN115546223A (en) Method and system for detecting loss of fastening bolt of equipment under train
CN115861448A (en) System calibration method and system based on angular point detection and characteristic point extraction
CN116258663A (en) Bolt defect identification method, device, computer equipment and storage medium
CN114463395A (en) Monitoring equipment offset detection method, equipment and medium
CN114445615A (en) Rotary insulator target detection method based on scale invariant feature pyramid structure
CN116935013B (en) Circuit board point cloud large-scale splicing method and system based on three-dimensional reconstruction
CN112116068A (en) Annular image splicing method, equipment and medium
Zhang et al. DSA-Net: An Attention-Guided Network for Real-Time Defect Detection of Transmission Line Dampers Applied to UAV Inspections
CN112802112B (en) Visual positioning method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant