CN113159234B - Method and device for marking category of inspection picture, electronic equipment and storage medium - Google Patents
- Publication number
- CN113159234B CN113159234B CN202110565105.1A CN202110565105A CN113159234B CN 113159234 B CN113159234 B CN 113159234B CN 202110565105 A CN202110565105 A CN 202110565105A CN 113159234 B CN113159234 B CN 113159234B
- Authority
- CN
- China
- Prior art keywords
- picture
- color value
- template
- matrix
- marked
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/24—Classification techniques (G06F18/00—Pattern recognition › G06F18/20—Analysing)
- G06F18/22—Matching criteria, e.g. proximity measures (G06F18/00—Pattern recognition › G06F18/20—Analysing)
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI] (G06V10/00—Arrangements for image or video recognition or understanding › G06V10/20—Image preprocessing)
- G06V10/56—Extraction of image or video features relating to colour (G06V10/00—Arrangements for image or video recognition or understanding › G06V10/40—Extraction of image or video features)
Abstract
The embodiment of the application provides a category labeling method and device for inspection pictures, an electronic device, and a storage medium. A picture to be labeled is acquired and matched against the template pictures in a template picture library to determine a target template picture that matches it; each template picture is obtained after a logged-in user performs category labeling on a picture verification code of a monitoring platform, and the picture to be labeled is then labeled with the category of the target template picture. In this way the template pictures are continuously updated, which reduces the difference between pictures to be labeled and template pictures, improves their matching success rate, and thereby improves the success rate of automatic labeling of inspection pictures.
Description
Technical Field
The embodiment of the application relates to the technical field of image processing, and in particular to a method and device for category labeling of inspection pictures, an electronic device, and a storage medium.
Background
Almost all current engineering monitoring projects, foundation pit engineering monitoring projects for example, require inspection. For each project, inspection pictures must be uploaded covering the supporting structure (forming quality of the supporting structure; whether the crown beam, supports, and purlins have cracks; whether the supports and uprights show large deformation; whether the waterproof curtain cracks or leaks; whether the soil behind the wall sinks, cracks, or slides; whether the foundation pit shows soil gushing, sand flowing, or piping) and the monitoring facilities (whether the reference points and measuring points are intact; whether there are obstacles that affect observation; whether monitoring elements are intact and protected; and so on). This greatly increases the workload of monitoring personnel; in particular, inspection personnel spend a great amount of time labeling, naming, and classifying the large number of pictures taken on each inspection.
Although some existing software realizes automatic labeling and classification of pictures, it matches later inspection pictures against template pictures collected earlier. Because field objects age over time, among other factors, the matching success rate of inspection pictures keeps falling, so the success rate of automatic labeling of inspection pictures is not high.
Disclosure of Invention
The embodiment of the application provides a category labeling method and device for inspection pictures, an electronic device, and a storage medium, aiming to solve the problem in the prior art that the success rate of automatic labeling is not high.
In a first aspect, an embodiment of the present application provides a method for labeling categories of inspection pictures, including:
acquiring a picture to be marked, wherein the picture to be marked is shot in the inspection process;
matching the picture to be marked with template pictures in a template picture library, and determining a target template picture matched with the picture to be marked, wherein the template picture is obtained by a login user after class marking is carried out on a picture verification code of a monitoring platform;
and carrying out category marking on the picture to be marked according to the category of the target template picture.
Optionally, each template picture in the template picture library is stored in correspondence with a standard color value matrix and a standard color value transformation matrix, and the matching of the picture to be labeled with the template pictures in the template picture library and the determining of the target template picture matched with the picture to be labeled include:
determining an interested area of the picture to be marked according to the characteristic area of each template picture in the template picture library;
extracting the pixel color value of the region of interest to obtain a color value matrix of the picture to be marked;
carrying out weighting processing and binarization processing on the color values in the color value matrix to obtain a color value transformation matrix of the picture to be labeled;
and matching the picture to be marked with each template picture respectively based on the color value matrix, the color value transformation matrix, the standard color value matrix and the standard color value transformation matrix, and determining a target template picture matched with the picture to be marked.
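The four steps above can be sketched as a two-stage screen over the template library. This is a minimal illustration, not the patented implementation: the `Template` structure, the 0.9 agreement ratio, and the deviation bound of 10 are assumptions standing in for the unspecified matching conditions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Template:
    category: str
    standard_color: np.ndarray      # standard color value matrix
    standard_transform: np.ndarray  # standard color value transformation matrix

def find_target_template(color_mat, transform_mat, library,
                         min_agree=0.9, max_dev=10):
    """Two-stage screen: coarse pass on the binarized transformation
    matrices, fine pass on the raw color value matrices."""
    # Primary matching: fraction of agreeing binarized entries.
    candidates = [t for t in library
                  if (transform_mat == t.standard_transform).mean() >= min_agree]
    # Secondary matching: element-wise color value deviation.
    for t in candidates:
        diff = np.abs(color_mat.astype(int) - t.standard_color.astype(int))
        if (diff <= max_dev).all():
            return t
    return None
```

The coarse pass on binarized matrices is cheap, so only a few candidates reach the more expensive per-pixel comparison.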
Optionally, the matching the to-be-labeled picture with each template picture respectively based on the color value matrix, the color value transformation matrix, the standard color value matrix, and the standard color value transformation matrix, and determining a target template picture matched with the to-be-labeled picture includes:
performing primary matching operation on the picture to be marked and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and determining candidate template pictures meeting primary matching conditions;
and performing secondary matching operation on the picture to be marked and each candidate template according to the color value matrix and the standard color value matrix of each candidate template picture, and determining a target template picture meeting secondary matching conditions.
Optionally, a feature region of a template picture in the template picture library is formed by at least two feature points, each feature point includes at least two pixels, a primary matching operation is performed on the picture to be labeled and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and a candidate template picture meeting a primary matching condition is determined, including:
determining a characteristic point pixel matrix of each template picture according to the characteristic point of each template picture and the standard color value transformation matrix;
calculating the similarity between the picture to be marked and each template picture according to the feature point pixel matrix and the color transformation matrix of each template picture to obtain a similarity matrix of each template picture;
and calculating the multiple relation between the elements of the similarity matrix of each template picture and the number of the characteristic points, and determining the template picture of which the multiple relation meets the preset multiple condition as a candidate template picture.
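The primary matching condition above can be sketched as follows; the factor 0.8 is an assumed stand-in for the preset multiple condition, which the description does not fix.

```python
import numpy as np

def primary_match(transform_mat, standard_transform, feature_mask, factor=0.8):
    """Accept a template when the number of feature-point positions whose
    binarized values agree reaches a preset multiple of the number of
    feature points (feature_mask marks the feature-point pixels)."""
    agree = int(((transform_mat == standard_transform) & feature_mask).sum())
    n_points = int(feature_mask.sum())  # number of feature-point pixels
    return n_points > 0 and agree >= factor * n_points
```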
Optionally, the performing, according to the color value matrix and the standard color value matrix of each candidate template picture, a secondary matching operation on the picture to be labeled and each candidate template picture, and determining a target template picture meeting a secondary matching condition includes:
comparing the color value matrix with the color value of the corresponding position of the standard color value matrix of each candidate template picture to obtain a color value difference matrix of each candidate template picture;
and determining whether the color value difference in the color value difference matrix meets a preset deviation condition, and determining the candidate template picture meeting the preset deviation condition as the target template picture.
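A sketch of the secondary matching condition: form the color value difference matrix against each candidate and accept when every deviation stays within a preset bound. The bound of 10 is an assumed value for the preset deviation condition.

```python
import numpy as np

def secondary_match(color_mat, standard_color, max_dev=10):
    """Element-wise color value difference matrix; accept the candidate
    when no entry deviates by more than max_dev."""
    diff = np.abs(color_mat.astype(int) - standard_color.astype(int))
    return bool((diff <= max_dev).all())
```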
Optionally, the weighting and binarization processing are performed on the color values in the color value matrix to obtain a color value transformation matrix of the picture to be labeled, including:
according to a preset weight matrix and a preset weighting algorithm, carrying out weighting processing on the color values in the color value matrix to obtain a color value combination matrix;
and carrying out binarization processing on the color values in the color value merging matrix according to a preset color value threshold value to obtain the color value transformation matrix.
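The weighting and binarization steps can be sketched as below. The per-channel weights and the threshold 128 are illustrative assumptions; the description leaves the preset weight matrix, weighting algorithm, and color value threshold unspecified.

```python
import numpy as np

def color_value_transform(r, g, b, weights=(0.2, 0.6, 0.2), threshold=128):
    """Weight the three channel matrices into a color value combination
    matrix, then binarize it against a preset color value threshold to
    obtain the color value transformation matrix."""
    merged = weights[0] * r + weights[1] * g + weights[2] * b  # combination matrix
    return (merged >= threshold).astype(np.uint8)              # transformation matrix
```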
Optionally, before the matching of the to-be-annotated picture with the template picture in the template picture library and the determination of the target template picture matched with the to-be-annotated picture, the method further includes:
extracting pixel color values of characteristic areas of all template pictures in a template picture library to obtain a standard color value matrix of all template pictures;
and performing weighting processing and binarization processing on the standard color value matrix to obtain a standard color value transformation matrix of each template picture.
Optionally, the method further comprises:
if the target template picture matched with the picture to be marked does not exist in the template picture library, taking the picture to be marked as a picture verification code of the monitoring platform so that a user marks the picture to be marked when logging in the monitoring platform;
and updating the corresponding template picture in the template picture library by taking the picture to be marked, which is marked by the user, as a new template picture.
In a second aspect, an embodiment of the present application provides a category labeling device for polling pictures, including:
the image acquisition module is used for acquiring an image to be marked, and the image to be marked is shot in the inspection process;
the image processing module is used for matching the image to be marked with the template images in the template image library and determining a target template image matched with the image to be marked, wherein the template image is obtained by a login user after class marking is carried out on an image verification code of the monitoring platform;
and the picture marking module is used for marking the class of the picture to be marked according to the class of the target template picture.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for labeling the category of the inspection picture according to the first aspect when executing the program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for labeling categories of inspection pictures according to the first aspect.
The method, device, electronic device, and storage medium for category labeling of inspection pictures provided by the embodiments of the application achieve automatic category labeling of inspection pictures by acquiring a picture to be labeled, matching it against the template pictures in a template picture library, and determining the matching target template picture, where each template picture is obtained after a logged-in user performs category labeling on a picture verification code of a monitoring platform, and the picture to be labeled is labeled with the category of the target template picture. The picture verification codes are continuously refreshed with inspection pictures that could not be labeled automatically, and verification codes verified by users logging in to the monitoring platform become template pictures, so the template pictures are continuously updated. This reduces the difference between pictures to be labeled and template pictures, improves their matching success rate, and thereby improves the success rate of automatic labeling of inspection pictures.
Drawings
Fig. 1 is a schematic flow chart of a method for labeling categories of inspection pictures according to an embodiment of the present application;
fig. 2 is a schematic diagram of a matching process of inspection pictures according to a second embodiment of the present application;
fig. 3 is a schematic diagram of a template picture provided in the second embodiment of the present application;
fig. 4 is a schematic diagram of a matching principle of the inspection picture according to the second embodiment of the present application;
fig. 5 is a schematic color value diagram of a feature region according to a second embodiment of the present application;
fig. 6 is a schematic diagram illustrating a generation principle of a color transformation matrix according to a second embodiment of the present application;
fig. 7 is a schematic diagram illustrating a generation principle of a color value combining matrix according to a second embodiment of the present disclosure;
fig. 8 is a color value diagram of a color value combining matrix according to a second embodiment of the present application;
fig. 9 is a color value diagram of a color value transformation matrix according to the second embodiment of the present application;
fig. 10 is a schematic structural diagram of a category labeling device for inspection pictures according to a third embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
The inspection is used in most engineering monitoring projects as an important work monitoring means, a large number of pictures are shot through inspection, and the types of the pictures are labeled, so that the pictures are distributed to corresponding professional technicians for further analysis, and the inspection method is of great importance for timely grasping the states of different engineering facilities and ensuring the safety and smoothness of the engineering projects. Although some software in the prior art can realize automatic labeling and classification of pictures, the success rate of picture matching is lower and lower along with the aging of engineering facilities and other conditions, so that the accuracy and efficiency of automatic labeling of inspection pictures are not high.
The main ideas of the technical scheme are as follows. To address the low matching success rate in the automatic labeling of inspection pictures in the prior art, in this embodiment, on the one hand, inspection pictures that are not successfully matched are used as verification codes of the monitoring platform, workers manually label them when logging in to the monitoring platform, and the manually labeled inspection pictures are used to update the template pictures, so the template pictures are updated and iterated in time. On the other hand, edge removal and color deepening can be applied to the pictures to be labeled and the template pictures through matrix transformation to weaken picture differences caused by object aging and the like, and picture matching is performed on the transformed matrices, improving matching accuracy and efficiency.
Example one
Fig. 1 is a flowchart illustrating a method for labeling categories of inspection pictures according to an embodiment of the present disclosure, where the method according to the present disclosure may be executed by a device for labeling categories of inspection pictures according to an embodiment of the present disclosure, and the device may be implemented by software and/or hardware, and may be integrated in an electronic device (hereinafter, referred to as a picture processing device) such as a computer and an intelligent terminal. As shown in fig. 1, the method for labeling categories of inspection pictures in this embodiment includes:
and S101, obtaining a picture to be marked.
In this embodiment, the large number of pictures taken by inspection workers or robots while inspecting engineering facilities, that is, inspection pictures, are uploaded to the picture processing device through a data line or a network and stored by the picture processing device.
In this step, when class marking needs to be performed on the inspection picture, if the picture processing device receives a class marking instruction, the picture processing device extracts the required inspection picture from a database storing the inspection picture to obtain the picture to be marked.
In addition, in order to facilitate the user to grasp the picture processing progress, the picture processing device may further display the acquired picture to be labeled on the display screen.
S102, matching the picture to be marked with the template pictures in the template picture library, and determining a target template picture matched with the picture to be marked.
In this embodiment, a template picture library is set in advance, template pictures of various categories are stored in the template picture library, and the purpose of classifying and labeling the pictures to be labeled is achieved by identifying and positioning a target template picture matched with the pictures to be labeled from the template picture library.
To improve the success rate of picture matching, in this embodiment the inspection pictures that are not successfully matched during automatic labeling are used as login picture verification codes of a professional platform such as the monitoring platform, a verification rule is set according to the picture category, and the picture verification codes that pass verification while users log in to the monitoring platform are taken as template pictures, thereby generating the template picture library.
Illustratively, assume that N pictures are taken on a certain inspection, among which n1 prism measuring point pictures, n2 reflector measuring point pictures, and n3 water level hole measuring point pictures are not automatically identified, labeled, and classified by the software. These pictures are stored in the verification code picture library, from which verification codes for the monitoring platform login page are randomly drawn; verification succeeds when the user fills in the prescribed classification name of the picture verification code, for example one of the existing inspection picture classification names such as prism measuring point, reflector measuring point, or water level hole measuring point.
Since the inspection pictures have no predetermined names, to ensure the security of the monitoring platform, the picture verification code of the login interface illustratively must be answered correctly three times before the user can log in to the system: after an inspection picture appears on the login interface, the user must enter its standard name, and if the entered name is not the standard name, the system asks the user to enter it again. If it is the standard name, the system draws another unlabeled inspection picture from the picture library, the user enters a standard name again, and this repeats for three pictures in total.
It can be understood that, in this embodiment, to ensure the accuracy and usability of user labeling, an inspection picture labeled by one logged-in user is also given to other logged-in users to name; a picture must be given the same name by at least three different users, and the repetition rate of that name must exceed 70%, before it can serve as a template picture. Likewise, a user can log in to the system normally only after labeling at least three different pictures, of which at least two agree with the names given by other users. In this way the monitoring platform is protected while the fragmented time of practitioners is fully used to update and iterate the template pictures required for automatic labeling.
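The consensus rule above can be sketched as follows; the three-user minimum and 70% repetition rate come from the description, while the assumption that each label comes from a distinct user is left to the caller.

```python
from collections import Counter

def consensus_label(labels, min_users=3, min_ratio=0.7):
    """Return the agreed category name once at least min_users (distinct)
    users have named the picture and the most frequent name exceeds a
    min_ratio repetition rate; otherwise return None."""
    if len(labels) < min_users:
        return None
    name, count = Counter(labels).most_common(1)[0]
    return name if count / len(labels) > min_ratio else None
```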
Further, the template picture library can be obtained by storing the obtained template pictures by category; illustratively, the name of each template picture in the library is its category name. It can be understood that the template pictures in the library are continuously updated: when the category of a newly generated template picture duplicates that of an existing one, the new template picture replaces the existing one, ensuring that each category has only one template picture in the library.
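One way to realize the one-template-per-category rule above is to key the library by category name, so that a newly verified template replaces the existing one; this dict-based layout is an illustrative assumption.

```python
def update_library(library, category, template_picture):
    """Insert or replace the single template stored for a category.
    library is a dict mapping category name -> template picture."""
    library[category] = template_picture  # replaces any same-category template
    return library
```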
It should be noted that, in this step, when performing image matching, a feature point matching algorithm in the prior art may be adopted, for example, by extracting RGB values and contours of feature points of an image, and performing rotation, scaling, comparison, and the like on the image according to the contours and the RGB values of the feature points, so as to implement matching, or other existing matching algorithms may be adopted, which is not limited herein.
S103, according to the type of the target template picture, carrying out type marking on the picture to be marked.
In this step, since the type of the template picture in the template picture library is known, after the target template picture is identified, the type of the target template picture is directly used as the type of the picture to be labeled.
It can be understood that, in this embodiment, when performing category labeling on a picture to be labeled, a corresponding category label, such as a watermark, may be added directly to the picture, or the picture may be named after the corresponding category name, or placed in a corresponding folder, and so on.
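Two of the labeling options just mentioned, naming and folder placement, can be sketched like this; the paths and the category-prefix naming scheme are assumptions for illustration.

```python
import shutil
from pathlib import Path

def apply_label(picture_path, category, out_root):
    """Copy the labeled picture into a per-category folder and prefix its
    file name with the category name."""
    src = Path(picture_path)
    dest_dir = Path(out_root) / category          # per-category folder
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f"{category}_{src.name}"    # category-prefixed name
    shutil.copy2(src, dest)
    return dest
```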
It should be noted that, in this embodiment, if no target template picture matching the picture to be labeled is found in the template picture library, the picture has not been successfully matched, that is, this category labeling of the picture fails. In that case the picture is added to the verification code picture library and, together with the other inspection pictures there, used as picture verification codes of the monitoring platform; users label these pictures when logging in to the monitoring platform, and once a picture satisfies the verification passing condition it is taken as a new template picture and the corresponding template picture in the library is updated.
In this embodiment, a picture to be labeled is acquired and matched against the template pictures in the template picture library to determine the matching target template picture; each template picture is obtained after a logged-in user performs category labeling on a picture verification code of the monitoring platform, and the picture to be labeled is labeled with the category of the target template picture, achieving automatic category labeling of inspection pictures. The picture verification codes are continuously refreshed with inspection pictures that were not labeled automatically, and verification codes verified by users logging in to the monitoring platform become template pictures, so the template pictures are continuously updated. This reduces the difference between pictures to be labeled and template pictures, improves their matching success rate, and further improves the success rate of automatic labeling of inspection pictures.
Example two
To improve the success rate of picture matching, this embodiment improves the matching algorithm; a specific embodiment of the picture matching process is described below. Exemplarily, fig. 2 is a schematic diagram of the matching process of inspection pictures provided in the second embodiment of the present application. On the basis of the foregoing embodiment, as shown in fig. 2, in this embodiment the matching of the picture to be labeled with the template pictures in the template picture library and the determining of the target template picture matched with the picture to be labeled include:
s201, determining an interested area of the picture to be marked according to the characteristic area of each template picture in the template picture library.
For convenience of description, in this embodiment, it is assumed that only one to-be-labeled picture is matched at a time, and it can be understood that by using the technical solution of the present application, multiple to-be-labeled pictures can be matched at the same time.
Assuming that there are n template pictures in the template picture library, due to different engineering facilities corresponding to different template pictures, in order to find a target template picture matching the picture S to be annotated, in this embodiment, a mode of respectively matching the picture S to be annotated with the n template pictures is adopted, so as to find the target template picture matching the picture S to be annotated.
In this embodiment, the feature region of the template picture is generally a region where an engineering facility is located, and exemplarily, fig. 3 is a schematic diagram of the template picture provided in the second embodiment of the present application, as shown in fig. 3, fig. 3 shows a template picture of a certain monitoring point, in the template picture, a series of outlines of prisms are marked, each point on the outline of a prism represents a feature point, and a region including all the feature points on the picture is selected to form a feature region, such as a region selected by a white box in fig. 3.
Since different template pictures usually correspond to different engineering facilities, whose shapes, sizes, and the like usually differ, the feature regions of different template pictures also differ. To ensure that the picture to be labeled can be matched with every template picture, in this embodiment the regions of interest of the picture to be labeled are determined from the feature regions of the template pictures. Exemplarily, fig. 4 is a schematic diagram of the matching principle of the inspection picture provided in the second embodiment of the present application. As shown in fig. 4, for the picture S to be labeled, the feature regions of template pictures P1, P2, ..., Pn are respectively taken as regions of interest of S, yielding n intermediate pictures S1, S2, ..., Sn: intermediate picture S1 takes the feature region of template picture P1 as its region of interest, S2 takes that of P2, and so on, with Sn taking that of Pn. Then S1, S2, ..., Sn are matched against P1, P2, ..., Pn respectively to determine the final target template picture.
It can be understood that, in this embodiment, feature points and labeled feature regions may be set for each template picture in advance according to features of different engineering facilities, and the feature points and the feature regions of each template picture are stored and stored in correspondence with each template picture. In this embodiment, an association folder is established for each template picture in the template picture library, and the association folder is used to store all data related to the template picture, and this data may include feature point data, feature area data, and standard color value matrix and standard color value transformation matrix data, which will be described below, of the template picture.
In order to facilitate subsequent picture matching, in this embodiment, before or after the template picture is stored in the template picture library, data processing is performed on the template picture according to the region of interest of the template picture to obtain a standard color value matrix and a standard color value transformation matrix of each template picture, and the same template picture is stored in correspondence with the standard color value matrix and the standard color value transformation matrix thereof, for example, the standard color value matrix and the standard color value transformation matrix of the template picture Pn are stored in the associated folder Pn of the template picture Pn.
In this embodiment, a specific implementation manner of obtaining the standard color value matrix of the template picture is similar to a specific implementation manner of obtaining the color value matrix of the picture to be annotated, and reference may be made to the following description in S202, and a specific implementation manner of obtaining the standard color value transformation matrix of the template picture is similar to a specific implementation manner of obtaining the color value transformation matrix of the picture to be annotated, and reference may be made to the following description in S203.
S202, extracting pixel color values of the region of interest to obtain a color value matrix of the picture to be marked.
After the region of interest of the picture to be labeled is determined, in this step, the color value matrix of the picture to be labeled is obtained by extracting the color values, namely the RGB values, of the pixels in the region of interest of the picture to be labeled. Each element in the color value matrix represents the color value of one pixel; exemplarily, the color value matrix can be expressed as an m × n matrix (a_mn), wherein a_mn expresses the color value of the pixel point in the m-th row and the n-th column of the region of interest.
It should be noted that, in order to improve the matching accuracy, in this embodiment the RGB values of the pixels are split, so that each picture to be labeled corresponds to three color value matrices: an R channel color value matrix, a G channel color value matrix, and a B channel color value matrix. Exemplarily, fig. 5 is a color value diagram of a feature region provided in the second embodiment of the present application. Taking a part of the feature region as an example, fig. 5 shows the G channel color values of the region, where the value 1 represents the lightest color, namely white, and 256 represents the darkest color, namely green. The areas whose color values have a gray background in fig. 5 correspond to the positions of feature points.
Similarly, the standard color value matrix of each template picture can be obtained by extracting the pixel color value of the characteristic region of each template picture in the template picture library, and one template picture corresponds to three standard color value matrices, namely an R-channel standard color value matrix, a G-channel standard color value matrix and a B-channel standard color value matrix.
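The channel-splitting step above can be sketched as follows. This is a minimal illustration assuming numpy and a rectangular region of interest given as (top, left, height, width); the function name is hypothetical, not from the patent.

```python
import numpy as np

def channel_matrices(image, roi):
    """Split the region of interest of an RGB picture into the three
    per-channel color value matrices described above.

    image: H x W x 3 array (RGB); roi: (top, left, height, width).
    """
    top, left, h, w = roi
    region = image[top:top + h, left:left + w].astype(np.int32)
    # one color value matrix per channel: R, G, B
    return region[:, :, 0], region[:, :, 1], region[:, :, 2]

# toy 4x4 RGB "picture" with a 2x2 region of interest
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
r, g, b = channel_matrices(img, (1, 1, 2, 2))
```

The same routine, applied to a template picture's feature region, would yield its three standard color value matrices.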
S203, carrying out weighting processing and binarization processing on the color value matrix to obtain a color value transformation matrix of the picture to be labeled.
In this step, based on the color value matrix obtained in S202, weighting processing and binarization processing are performed on the color values in each channel color value matrix, so as to realize matrix transformation, and obtain a color value transformation matrix of the picture to be labeled.
Exemplarily, fig. 6 is a schematic diagram illustrating a principle of generating a color transformation matrix according to a second embodiment of the present disclosure, and as shown in fig. 6, in this embodiment, generating the color transformation matrix is divided into two steps: (1) firstly, weighting the color values in the color value matrix according to a preset weight matrix and a preset weighting algorithm to obtain a color value combination matrix of each channel; (2) and then carrying out binarization processing on the color values in the color value merging matrix according to a preset color value threshold value to obtain a color value transformation matrix of each channel. These two steps will be described separately below.
(1) Weighting process
The weighting process cannot be separated from the weight matrix and the weighting algorithm. The number of rows and columns of the weight matrix can be set in advance according to the merging requirement: the weight matrix can be a 1 × 2 matrix, such as (w1, w2), or a 2 × 2 matrix. The values of the weight matrix may be fixed or may be random, and are not limited thereto.
The weighting algorithm is related to the weight matrix, and different weighting algorithms may be designed in advance according to different weight matrices. Optionally, fig. 7 is a schematic diagram of the generation principle of the color value merging matrix provided in the second embodiment of the present application. As shown in fig. 7, if the weight matrix is (w1, w2), during the weighted calculation the color values of each pair of adjacent left and right pixel points in the color value matrix are converted into one element, and each pair is subjected to weighted summation with the weight matrix to obtain the color value merging matrix.
Exemplarily, fig. 8 is a color value schematic diagram of the color value merging matrix provided in the second embodiment of the present application. Taking the weight matrix (1, 0.3) as an example, the color values in fig. 8 are obtained by performing weighted calculation on the color values in the color value matrix shown in fig. 5 according to the weighting algorithm in fig. 7.
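The pairwise weighting described here can be sketched as follows, assuming numpy and the 1 × 2 weight matrix (1, 0.3) used in the example; the function name and the handling of an odd trailing column are my own illustrative choices.

```python
import numpy as np

def merge_columns(channel, w=(1.0, 0.3)):
    """Merge adjacent left/right pixel pairs with a 1x2 weight matrix:
    merged[i, j] = w1 * a[i, 2j] + w2 * a[i, 2j+1]."""
    h, width = channel.shape
    width -= width % 2                      # drop a trailing odd column, if any
    pairs = channel[:, :width].astype(float).reshape(h, width // 2, 2)
    return pairs[..., 0] * w[0] + pairs[..., 1] * w[1]

A = np.array([[10, 20, 30, 40],
              [50, 60, 70, 80]])
M = merge_columns(A)    # first element: 10 * 1.0 + 20 * 0.3 = 16.0
```

Each row of the merging matrix is half as wide as the original, which is where the reduction in pixel count comes from.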
As can be seen, weighting the color value matrix with the preset weight matrix and the preset weighting algorithm deepens the colors of the feature points without losing the due features of the feature points of the picture, reduces the number of pixels occupied by the feature points, and thus accelerates the calculation when the pictures are matched.
(2) Binarization processing
In order to further increase the image matching speed, in this embodiment, after the pixels of the image to be labeled are weighted and combined, the pixels need to be further binarized, so as to better highlight the feature points.
In a possible implementation manner, in this embodiment, the color values in the color value combining matrix are grayed first, and then binarized.
The graying processing is a process of setting the pixel points of an image to 0 or 255, so that the whole image shows an obvious black-and-white effect. A color value threshold is needed in the graying processing: the value of a pixel point whose color value is greater than or equal to the color value threshold is set to 255, and the value of a pixel point whose color value is smaller than the color value threshold is set to 0.
It can be understood that the color value threshold here refers to the color value threshold of the template picture. The color value thresholds of different template pictures may be the same or different, the color value thresholds of different color channels of the same template picture may also be the same or different, and the color value threshold of each template picture may be set in advance and stored. In this embodiment, when the color value transformation matrix of the picture to be annotated is obtained, the color value threshold of the template picture currently being matched is used, so as to realize the matrix transformation; that is, when the intermediate picture Sn is matched with the template picture Pn, the color value threshold of the template picture Pn is used as the color value threshold of the picture S to be annotated.
On the basis of the graying processing, the binarization processing is further completed by setting the color value of each pixel point to -1 or 1. Exemplarily, the color value of a pixel point with the value 255 is set to 1, and the color value of a pixel point with the value 0 is set to -1, so as to obtain the color value transformation matrix. Fig. 9 is a color value schematic diagram of the color value transformation matrix provided in the second embodiment of the present application; it shows the color values obtained after the color value matrix of one color channel of the pixel points in the characteristic region shown in fig. 3 is transformed. It is not difficult to see from fig. 9 that the color values of the pixel points corresponding to almost all the feature points are assigned to 1.
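The two-stage transform just described (graying to 0/255, then binarization to -1/1) can be sketched as a few numpy operations; the threshold value below is an arbitrary illustration, since the patent leaves the per-template thresholds to prior configuration.

```python
import numpy as np

def binarize(merged, threshold):
    """Graying then binarization: a color value >= threshold is set to 255
    and then to 1; a color value < threshold is set to 0 and then to -1."""
    gray = np.where(merged >= threshold, 255, 0)   # graying step
    return np.where(gray == 255, 1, -1)            # binarization step

M = np.array([[16.0, 200.0],
              [90.0, 255.0]])
T = binarize(M, threshold=100)   # only values of at least 100 become 1
```

The resulting matrix contains only -1 and 1, which is what makes the subsequent primary matching arithmetic cheap.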
Similarly, a standard color value transformation matrix of each template picture can be obtained by performing weighting processing and binarization processing on the standard color value matrix of each template picture, and one template picture corresponds to three standard color value transformation matrices, namely an R channel standard color value transformation matrix, a G channel standard color value transformation matrix and a B channel standard color value transformation matrix.
S204, matching the picture to be marked with each template picture respectively based on the color value matrix, the color value transformation matrix, the standard color value matrix and the standard color value transformation matrix, and determining a target template picture matched with the picture to be marked.
After the color value matrix and the color value transformation matrix of each intermediate picture of the picture to be annotated are obtained, each intermediate picture Sn is matched with the corresponding template picture Pn based on the color value matrix and the color value transformation matrix of the intermediate picture Sn and the standard color value matrix and the standard color value transformation matrix of the template picture Pn, so as to determine whether the template picture Pn is the target template picture of the picture S to be annotated.
In order to accelerate the matching speed while ensuring the matching accuracy, in this embodiment the color value transformation matrix of each intermediate picture of the picture to be labeled is obtained through S203. Therefore, in this step, a primary matching operation is first performed on the picture to be labeled and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and the candidate template pictures meeting the primary matching condition are determined; then a secondary matching operation is performed on the picture to be labeled and each candidate template picture according to the color value matrix and the standard color value matrix of each candidate template picture, so as to determine the target template picture meeting the secondary matching condition. The primary matching operation is performed on the basis of the color value transformation matrix and the standard color value transformation matrix, which are both composed of -1 and 1, so the matching speed is accelerated; the secondary matching operation is based on the original color value matrix and the standard color value matrix, which correspond to the actual color values in the region of interest and the feature region, so the matching accuracy is guaranteed.
It should be noted that, taking the matching between the picture to be annotated and one of the template pictures in the template picture library as an example, for example matching the picture S to be annotated with the template picture Pn, during the initial matching the color value transformation matrices of the three color channels of the intermediate picture Sn need to be respectively matched with the standard color value transformation matrices of the three color channels of the template picture Pn. That is, three matching operations are needed: the R channel color value transformation matrix of the intermediate picture Sn with the R channel standard color value transformation matrix of the template picture Pn, the G channel color value transformation matrix of the intermediate picture Sn with the G channel standard color value transformation matrix of the template picture Pn, and the B channel color value transformation matrix of the intermediate picture Sn with the B channel standard color value transformation matrix of the template picture Pn. Only if all the operation results satisfy the initial matching condition can the template picture Pn be determined as a candidate template picture.
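The all-three-channels requirement can be expressed as a one-line combinator; this is a trivial sketch, with a hypothetical function name and a stand-in equality comparison in place of the patent's per-channel matching operation.

```python
def channels_all_match(match_fn, s_channels, p_channels):
    """A template passes a matching stage only if the comparison succeeds
    on the R, G and B channels alike."""
    return all(match_fn(s, p) for s, p in zip(s_channels, p_channels))

# stand-in per-channel comparison: here, simple equality
same = channels_all_match(lambda s, p: s == p, [1, 2, 3], [1, 2, 3])
diff = channels_all_match(lambda s, p: s == p, [1, 2, 3], [1, 2, 4])
```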
Similarly, in the second matching operation, three times of matching between the R channel color value matrix of the intermediate picture Sn and the R channel standard color value matrix of the template picture Pn, between the G channel color value matrix of the intermediate picture Sn and the G channel standard color value matrix of the template picture Pn, and between the B channel color value matrix of the intermediate picture Sn and the B channel standard color value matrix of the template picture Pn need to be performed, and only when all the operation results satisfy the second matching condition, the template picture Pn can be determined as the target template picture.
Taking one of the color channels as an example, the specific implementation processes of the primary matching and the secondary matching will be described below.
(1) Initial matching
In this embodiment, the initial matching is performed by using the feature points as units, and first, the feature point pixel matrix of each template picture needs to be determined according to the feature points of the template picture and the standard color value transformation matrix.
Since the feature points are points summarizing the contour of the engineering facility and are usually larger than pixels, one feature point often includes a plurality of pixels. For example, as shown in fig. 9, the position outlined by the black square frame in fig. 9 is the position of one feature point, and the color values corresponding to all the pixels included in the feature point constitute the feature point pixel matrix of that feature point.
And secondly, calculating the similarity between the picture to be marked and each template picture according to the characteristic point pixel matrix and the color transformation matrix of the template picture to obtain the similarity matrix of each template picture.
Suppose that the feature point pixel matrix of the G channel of the template picture Pn is a 3 × 3 matrix, and that the matching point matrix matched on the G channel of the intermediate picture Sn corresponding to the picture S to be marked is likewise a 3 × 3 matrix. In the order from left to right and from top to bottom, the color values at the corresponding positions of the color value transformation matrix are sequentially selected to form matching point matrices, and the feature point pixel matrix is multiplied, position by position, by the color values of each matching point matrix, so as to find the matching point matrix in the color value transformation matrix that best matches the feature point pixel matrix. When the feature point pixel matrix is matched with a matching point matrix that is completely the same as itself, every element of the target matrix obtained by multiplying the feature point pixel matrix by the color values at the corresponding positions of the matching point matrix equals 1.
In this embodiment, the matrix obtained by multiplying a feature point pixel matrix of the template picture by the color values at the corresponding positions of a matching point matrix of the picture to be labeled is defined as the first matrix, and the value obtained by adding the 9 elements of the first matrix and dividing the sum by 9 is defined as the similarity. The value range of the similarity is [-1, 1]: if the feature point pixel matrix completely matches the matching point matrix, the similarity is 1, and if the two are completely unmatched, the similarity is -1.
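The similarity computation reduces to an element-wise product over two -1/1 blocks; a minimal sketch, assuming numpy and 3 × 3 blocks as in the text:

```python
import numpy as np

def similarity(feature_block, match_block):
    """Element-wise product of a 3x3 feature point pixel matrix and a 3x3
    matching point matrix (both made of -1/1), summed and divided by 9."""
    return float((feature_block * match_block).sum()) / 9.0

F = np.array([[ 1, -1, 1],
              [ 1,  1, 1],
              [-1,  1, 1]])
```

Because every element is -1 or 1, an identical pair of blocks yields a product of all ones (similarity 1), and a fully opposite pair yields all minus ones (similarity -1).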
The point corresponding to a matching point matrix with a similarity of 1 is defined as a target matching point, and the matrix formed by the similarities, with the feature point, of the target matching point and a certain number of points around it is defined as the second matrix. The maximum values in the second matrices corresponding to all the feature points of the template picture are respectively extracted to form a new matrix, namely the similarity matrix.
It can be understood that the similarity matrix of each template picture can be obtained by respectively performing matching calculation on the picture to be labeled and each template picture in the template picture library.
For example, if a template picture has 20 feature points in total, 20 second matrices can be obtained after matching calculation, and the similarity matrix of the template picture can be obtained by extracting the maximum value in each second matrix. It should be noted that, since the maximum value in the second matrix may be more than one, the number of elements in the similarity matrix may also be more than 20.
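The per-feature-point search over the transformation matrix can be sketched as a sliding 3 × 3 window; this simplifies the patent's construction by taking the maximum similarity directly per feature point rather than materializing each second matrix, and the function names are illustrative.

```python
import numpy as np

def best_similarity(feature_block, transform):
    """Slide a 3x3 feature point pixel matrix over the -1/1 color value
    transformation matrix and keep the best (maximum) similarity found."""
    h, w = transform.shape
    best = -1.0
    for i in range(h - 2):
        for j in range(w - 2):
            window = transform[i:i + 3, j:j + 3]
            best = max(best, float((feature_block * window).sum()) / 9.0)
    return best

def similarity_matrix(feature_blocks, transform):
    """One best-match score per feature point of the template picture."""
    return [best_similarity(fb, transform) for fb in feature_blocks]

# a feature block hidden inside an otherwise all -1 transform matrix
F = np.array([[1, 1, -1], [1, -1, 1], [1, 1, 1]])
T = -np.ones((6, 6), dtype=int)
T[2:5, 2:5] = F
```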
And finally, for each template picture, the multiple relation between the elements of its similarity matrix and the number of its feature points is calculated, and it is determined whether the multiple relation meets a preset multiple condition, for example, whether it is greater than or equal to a multiple relation threshold value; this determines whether the template picture can become a candidate template picture.
The preset multiple condition can be set according to the actual situation, and is not limited here.
And screening all template pictures meeting the preset multiple condition from the template picture library through primary matching operation to obtain candidate template pictures.
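The patent does not pin down the exact form of the multiple condition, so the following is only one plausible reading, with an assumed threshold: the fraction of feature points whose best score is 1 must reach a preset multiple of the feature point count.

```python
def is_candidate(similarity_scores, n_feature_points, multiple_threshold=0.8):
    """One plausible reading of the multiple condition: the count of fully
    matched feature points (similarity 1) divided by the number of feature
    points must reach a preset threshold (0.8 here is an assumption)."""
    matched = sum(1 for s in similarity_scores if s == 1.0)
    return matched / n_feature_points >= multiple_threshold

passing = is_candidate([1.0] * 18 + [0.5, -1.0], 20)   # 18/20 = 0.9
failing = is_candidate([1.0] * 10 + [0.0] * 10, 20)    # 10/20 = 0.5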
(2) Second order matching
The secondary matching refers to matching between the picture to be annotated and each candidate template picture. Specifically, the color value matrix of the intermediate picture Sn of the picture S to be annotated is compared with the color values at the corresponding positions of the standard color value matrix of the candidate template picture Pn, for example by subtraction, to obtain the color value difference of each corresponding pixel point, forming a color value difference matrix. It is then determined whether the color value differences in the color value difference matrix meet a preset deviation condition, for example, whether the number of elements in the color value difference matrix whose color value difference is greater than or equal to a color value difference threshold reaches a certain percentage. The candidate template picture whose standard color value matrices of all color channels meet the preset deviation condition is determined as the target template picture.
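The difference-matrix check can be sketched as follows, assuming numpy; the difference threshold and the allowed percentage are assumptions, since the patent leaves both to configuration, and the pass/fail direction (fewer large deviations is better) is my reading of the deviation condition.

```python
import numpy as np

def passes_secondary(color_matrix, standard_matrix,
                     diff_threshold=10, max_bad_ratio=0.05):
    """Subtract the two matrices position-wise to form the color value
    difference matrix, then require the share of differences at or above
    diff_threshold to stay below a preset percentage (both values assumed)."""
    diff = np.abs(color_matrix.astype(int) - standard_matrix.astype(int))
    bad_ratio = float((diff >= diff_threshold).sum()) / diff.size
    return bad_ratio < max_bad_ratio

A = np.full((4, 4), 100)
ok = passes_secondary(A, A)          # identical matrices: no deviation
bad = passes_secondary(A, A + 50)    # every element deviates by 50
```

Per the text, this check would be run on all three color channels, and only a candidate passing every channel becomes the target template picture.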
It should be noted that, in this embodiment, if the candidate template picture is not screened out through the primary matching operation or the target template picture is not screened out through the secondary matching operation, it is indicated that the matching of the picture to be labeled is unsuccessful, and the picture to be labeled is added to the verification code picture library, so that the user labels the picture when logging in the monitoring platform, and the picture becomes a new template picture.
In this embodiment, the region of interest of the picture to be marked is determined according to the feature region of each template picture in the template picture library; the pixel color values of the region of interest are extracted to obtain the color value matrix of the picture to be marked; the color values in the color value matrix are subjected to weighting processing and binarization processing to obtain the color value transformation matrix of the picture to be marked; and based on the color value matrix, the color value transformation matrix, the standard color value matrix, and the standard color value transformation matrix, the picture to be marked is respectively matched with each template picture, and the target template picture matched with the picture to be marked is determined. By splitting the color values of the picture to be marked and the template pictures and performing matrix transformation, and through the primary matching and the secondary matching, the matching speed in the picture marking process is increased and the matching success rate of the picture is also ensured, which further improves the accuracy and efficiency of automatic marking of inspection pictures.
EXAMPLE III
Fig. 10 is a schematic structural diagram of a category labeling device for an inspection picture according to a third embodiment of the present application, and as shown in fig. 10, the category labeling device 10 for an inspection picture according to the present embodiment includes:
the system comprises a picture acquisition module 11, a picture processing module 12 and a picture marking module 13.
The image acquisition module 11 is used for acquiring an image to be marked, wherein the image to be marked is shot in the inspection process;
the picture processing module 12 is configured to match the picture to be labeled with a template picture in a template picture library, and determine a target template picture matched with the picture to be labeled, where the template picture is obtained by performing category labeling on a picture verification code of a monitoring platform by a login user;
and the picture marking module 13 is configured to perform category marking on the picture to be marked according to the category of the target template picture.
Optionally, the template picture in the template picture library is stored in correspondence with the standard color value matrix and the standard color value transformation matrix, and the picture processing module 12 is specifically configured to:
determining an interested area of the picture to be marked according to the characteristic area of each template picture in the template picture library;
extracting the pixel color value of the region of interest to obtain a color value matrix of the picture to be marked;
carrying out weighting processing and binarization processing on the color values in the color value matrix to obtain a color value transformation matrix of the picture to be labeled;
and matching the picture to be marked with each template picture respectively based on the color value matrix, the color value transformation matrix, the standard color value matrix and the standard color value transformation matrix, and determining a target template picture matched with the picture to be marked.
Optionally, the image processing module 12 is specifically configured to:
performing primary matching operation on the picture to be marked and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and determining candidate template pictures meeting primary matching conditions;
and performing secondary matching operation on the picture to be marked and each candidate template according to the color value matrix and the standard color value matrix of each candidate template picture, and determining a target template picture meeting secondary matching conditions.
Optionally, the feature region of the template picture in the template picture library is formed by at least two feature points, each feature point includes at least two pixels, and the picture processing module 12 is specifically configured to:
determining a characteristic point pixel matrix of each template picture according to the characteristic point of each template picture and the standard color value transformation matrix;
calculating the similarity between the picture to be marked and each template picture according to the feature point pixel matrix and the color transformation matrix of each template picture to obtain a similarity matrix of each template picture;
and calculating the multiple relation between the elements of the similarity matrix of each template picture and the number of the characteristic points, and determining the template picture of which the multiple relation meets the preset multiple condition as a candidate template picture.
Optionally, the image processing module 12 is specifically configured to:
comparing the color value matrix with the color value of the corresponding position of the standard color value matrix of each candidate template picture to obtain a color value difference matrix of each candidate template picture;
and determining whether the color value difference in the color value difference matrix meets a preset deviation condition, and determining the candidate template picture meeting the preset deviation condition as the target template picture.
Optionally, the image processing module 12 is specifically configured to:
according to a preset weight matrix and a preset weighting algorithm, carrying out weighting processing on the color values in the color value matrix to obtain a color value combination matrix;
and carrying out binarization processing on the color values in the color value merging matrix according to a preset color value threshold value to obtain the color value transformation matrix.
Optionally, the picture processing module 12 is further configured to:
extracting pixel color values of characteristic areas of all template pictures in a template picture library to obtain a standard color value matrix of all template pictures;
and performing weighting processing and binarization processing on the standard color value matrix to obtain a standard color value transformation matrix of each template picture.
Optionally, the picture processing module 12 is further configured to:
if the target template picture matched with the picture to be marked does not exist in the template picture library, taking the picture to be marked as a picture verification code of the monitoring platform so that a user marks the picture to be marked when logging in the monitoring platform;
and updating the corresponding template picture in the template picture library by taking the picture to be marked, which is marked by the user, as a new template picture.
The class marking device for the inspection pictures, provided by the embodiment of the invention, can execute the class marking method for the inspection pictures provided by the embodiment of the method, and has the corresponding functional modules and beneficial effects of the execution method. The implementation principle and technical effect of this embodiment are similar to those of the above method embodiments, and are not described in detail here.
Example four
Fig. 11 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application, as shown in fig. 11, the electronic device 20 includes a memory 21, a processor 22, and a computer program stored in the memory and executable on the processor; the number of the processors 22 of the electronic device 20 may be one or more, and one processor 22 is taken as an example in fig. 11; the processor 22 and the memory 21 in the electronic device 20 may be connected by a bus or other means, and fig. 11 illustrates the connection by the bus as an example.
The memory 21 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the picture acquiring module 11, the picture processing module 12, and the picture labeling module 13 in the embodiments of the present application. The processor 22 executes various functional applications and data processing of the device/terminal by running the software programs, instructions and modules stored in the memory 21, that is, the method for labeling the category of the inspection picture is realized.
The memory 21 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and the application program required for at least one function, and the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 21 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 21 may further include memory located remotely from the processor 22, which may be connected to the device/terminal through a network. Examples of such a network include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
EXAMPLE five
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is used to execute a method for labeling categories of inspection pictures when executed by a computer processor, and the method includes:
acquiring a picture to be marked, wherein the picture to be marked is shot in the inspection process;
matching the picture to be marked with template pictures in a template picture library, and determining a target template picture matched with the picture to be marked, wherein the template picture is obtained by a login user after class marking is carried out on a picture verification code of a monitoring platform;
and carrying out category marking on the picture to be marked according to the category of the target template picture.
Of course, in the computer-readable storage medium provided in this embodiment of the present application, the computer program is not limited to the method operations described above, and may also perform related operations in the method for labeling categories of inspection pictures provided in any embodiment of the present application.
From the above description of the embodiments, it is obvious to those skilled in the art that the present application can be implemented by software plus necessary general hardware, and certainly can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
It should be noted that, in the embodiment of the device for labeling categories of inspection pictures, each unit and each module included in the device are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be realized; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (9)
1. A category marking method for inspection pictures, comprising:
acquiring a picture to be marked, wherein the picture to be marked is taken during an inspection process;
matching the picture to be marked against template pictures in a template picture library and determining a target template picture that matches the picture to be marked, wherein each template picture is obtained after a logged-in user performs category marking on a picture verification code of a monitoring platform; and
marking the category of the picture to be marked according to the category of the target template picture;
wherein each template picture in the template picture library is stored in correspondence with its standard color value matrix and its standard color value transformation matrix, and matching the picture to be marked against the template pictures in the template picture library and determining the target template picture comprises:
determining a region of interest of the picture to be marked according to the feature area of each template picture in the template picture library;
extracting the pixel color values of the region of interest to obtain a color value matrix of the picture to be marked;
performing weighting and binarization on the color values in the color value matrix to obtain a color value transformation matrix of the picture to be marked;
matching the picture to be marked against each template picture based on the color value matrix, the color value transformation matrix, the standard color value matrix, and the standard color value transformation matrix, and determining the target template picture;
wherein matching the picture to be marked against each template picture based on the color value matrix, the color value transformation matrix, the standard color value matrix, and the standard color value transformation matrix, and determining the target template picture, comprises:
performing a primary matching operation between the picture to be marked and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and determining candidate template pictures that satisfy a primary matching condition; and
performing a secondary matching operation between the picture to be marked and each candidate template picture according to the color value matrix and the standard color value matrix of each candidate template picture, and determining a target template picture that satisfies a secondary matching condition.
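The two-stage match of claim 1 (a coarse pass on the binarized transformation matrices, then a fine pass on the raw color value matrices) can be sketched as follows. This is an illustrative reading only: the function names, the 0.9 agreement ratio used for the primary stage, and the `max_deviation` bound used for the secondary stage are assumptions, not values disclosed in the patent.

```python
import numpy as np

def match_template(picture_matrices, template_library, max_deviation=10):
    """Two-stage template match, per one reading of claim 1.

    picture_matrices: (color_matrix, transform_matrix) of the picture to mark.
    template_library: dict mapping category -> (standard_color_matrix,
                      standard_transform_matrix), all arrays of equal shape.
    Returns the first category whose template survives both stages, or None.
    """
    color, transform = picture_matrices
    # Stage 1 (primary match): coarse filter on the cheap binary
    # transformation matrices; keep templates that mostly agree.
    candidates = [
        cat for cat, (std_color, std_transform) in template_library.items()
        if (transform == std_transform).mean() >= 0.9   # assumed ratio
    ]
    # Stage 2 (secondary match): fine check on the full color value matrices.
    for cat in candidates:
        std_color, _ = template_library[cat]
        diff = np.abs(color.astype(np.int16) - std_color.astype(np.int16))
        if (diff <= max_deviation).all():
            return cat
    return None
```

A full implementation would restrict both comparisons to each template's feature area rather than the whole matrix, as the dependent claims describe.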
2. The method according to claim 1, wherein the feature area of each template picture in the template picture library consists of at least two feature points, each feature point comprising at least two pixels, and performing the primary matching operation between the picture to be marked and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture to determine candidate template pictures satisfying the primary matching condition comprises:
determining a feature point pixel matrix of each template picture according to the feature points of the template picture and its standard color value transformation matrix;
calculating the similarity between the picture to be marked and each template picture according to the feature point pixel matrix and the color value transformation matrix, obtaining a similarity matrix for each template picture; and
calculating the number of elements of the similarity matrix of each template picture and the multiple relation between that element count and the number of feature points, and determining as candidate template pictures those template pictures whose multiple relation satisfies a preset multiple condition.
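One possible reading of this primary match is that the "similarity matrix" collects the feature-point pixel positions at which the binary transformation matrices agree, and the "multiple relation" is the ratio of agreeing elements to feature points. The comparison rule and the threshold of 2.0 below are assumptions for illustration only.

```python
import numpy as np

def primary_match(transform_matrix, feature_points, standard_transform,
                  multiple_threshold=2.0):
    """Assumed primary-match rule of claim 2: at every pixel belonging to
    a feature point, compare the candidate's binary transformation matrix
    with the template's standard transformation matrix, count agreeing
    positions, and accept the template when the count per feature point
    reaches a preset multiple.

    feature_points: list of feature points, each a list of (row, col)
    pixel coordinates (each feature point has at least two pixels).
    """
    matched = 0
    for point in feature_points:
        for (r, c) in point:
            if transform_matrix[r, c] == standard_transform[r, c]:
                matched += 1    # one more element in the similarity matrix
    # Multiple relation: matched elements per feature point.
    return matched / len(feature_points) >= multiple_threshold
```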
3. The method of claim 1, wherein performing the secondary matching operation between the picture to be marked and each candidate template picture according to the color value matrix and the standard color value matrix of each candidate template picture to determine the target template picture satisfying the secondary matching condition comprises:
comparing the color values of the color value matrix with those at corresponding positions of the standard color value matrix of each candidate template picture to obtain a color value difference matrix for each candidate template picture; and
determining whether the color value differences in the color value difference matrix satisfy a preset deviation condition, and determining the candidate template picture that satisfies the preset deviation condition as the target template picture.
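The secondary match reduces to an element-wise difference check. A minimal sketch, assuming the preset deviation condition is a per-element absolute bound (`max_deviation` is an assumed parameter, not disclosed in the patent):

```python
import numpy as np

def secondary_match(color_matrix, standard_color_matrix, max_deviation=10):
    """Compare a candidate's color value matrix against a template's
    standard color value matrix position by position, and accept the
    template only if every color value difference stays within the bound.
    """
    # Widen to signed ints so the subtraction of uint8 values cannot wrap.
    diff = np.abs(color_matrix.astype(np.int16)
                  - standard_color_matrix.astype(np.int16))
    return bool((diff <= max_deviation).all())
```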
4. The method according to claim 1, wherein performing weighting and binarization on the color values in the color value matrix to obtain the color value transformation matrix of the picture to be marked comprises:
weighting the color values in the color value matrix according to a preset weight matrix and a preset weighting algorithm to obtain a color value merging matrix; and
binarizing the color values in the color value merging matrix according to a preset color value threshold to obtain the color value transformation matrix.
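Claim 4 leaves the preset weight matrix and threshold unspecified; the sketch below assumes standard luminance-style RGB weights and a midpoint threshold purely for illustration.

```python
import numpy as np

def color_transform(color_matrix, weights=(0.299, 0.587, 0.114), threshold=128):
    """Weighting then binarization, per claim 4.

    color_matrix: H x W x 3 array of RGB color values (0-255).
    Returns an H x W binary (0/1) color value transformation matrix.
    """
    # Weighting step: merge R, G, B with the preset weight matrix.
    merged = color_matrix @ np.asarray(weights)      # H x W merging matrix
    # Binarization step: 1 where the merged value exceeds the threshold.
    return (merged > threshold).astype(np.uint8)
```

Binarizing first makes the primary match a cheap bit comparison, which is why the method can defer the full color comparison to the secondary stage.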
5. The method according to any one of claims 1 to 4, further comprising, before matching the picture to be marked against the template pictures in the template picture library and determining the target template picture:
extracting the pixel color values of the feature area of each template picture in the template picture library to obtain a standard color value matrix for each template picture; and
performing weighting and binarization on each standard color value matrix to obtain the standard color value transformation matrix of each template picture.
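This offline preparation step can be sketched as follows. The whole picture array stands in for the feature area here, and the weights and threshold are the same illustrative assumptions as above; none of these values come from the patent.

```python
import numpy as np

def build_template_library(templates, weights=(0.299, 0.587, 0.114),
                           threshold=128):
    """Precompute, for each template picture, the standard color value
    matrix and its binarized standard color value transformation matrix.

    templates: dict mapping category -> H x W x 3 RGB array.
    Returns a dict mapping category -> (standard_color_matrix,
    standard_transform_matrix), ready for the two-stage match.
    """
    library = {}
    for cat, pixels in templates.items():
        std_color = np.asarray(pixels, dtype=np.uint8)         # standard color value matrix
        merged = std_color @ np.asarray(weights)               # weighting
        std_transform = (merged > threshold).astype(np.uint8)  # binarization
        library[cat] = (std_color, std_transform)
    return library
```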
6. The method according to any one of claims 1 to 4, further comprising:
if no target template picture matching the picture to be marked exists in the template picture library, using the picture to be marked as a picture verification code of the monitoring platform so that a user marks it when logging in to the monitoring platform; and
updating the template picture library with the user-marked picture as a new template picture.
7. A category marking apparatus for inspection pictures, comprising:
a picture acquisition module configured to acquire a picture to be marked, the picture to be marked being taken during an inspection process;
a picture processing module configured to match the picture to be marked against template pictures in a template picture library and determine a target template picture that matches the picture to be marked, wherein each template picture is obtained after a logged-in user performs category marking on a picture verification code of a monitoring platform; and
a picture marking module configured to mark the category of the picture to be marked according to the category of the target template picture;
wherein each template picture in the template picture library is stored in correspondence with its standard color value matrix and standard color value transformation matrix, and the picture processing module is further configured to:
determine a region of interest of the picture to be marked according to the feature area of each template picture in the template picture library;
extract the pixel color values of the region of interest to obtain a color value matrix of the picture to be marked;
perform weighting and binarization on the color values in the color value matrix to obtain a color value transformation matrix of the picture to be marked;
match the picture to be marked against each template picture based on the color value matrix, the color value transformation matrix, the standard color value matrix, and the standard color value transformation matrix, and determine the target template picture;
wherein the picture processing module is further configured to: perform a primary matching operation between the picture to be marked and each template picture according to the color value transformation matrix and the standard color value transformation matrix of each template picture, and determine candidate template pictures that satisfy a primary matching condition; and
perform a secondary matching operation between the picture to be marked and each candidate template picture according to the color value matrix and the standard color value matrix of each candidate template picture, and determine a target template picture that satisfies a secondary matching condition.
8. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the category marking method for inspection pictures according to any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the category marking method for inspection pictures according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565105.1A CN113159234B (en) | 2021-05-24 | 2021-05-24 | Method and device for marking category of inspection picture, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565105.1A CN113159234B (en) | 2021-05-24 | 2021-05-24 | Method and device for marking category of inspection picture, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113159234A CN113159234A (en) | 2021-07-23 |
CN113159234B true CN113159234B (en) | 2021-12-28 |
Family
ID=76877086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110565105.1A Active CN113159234B (en) | 2021-05-24 | 2021-05-24 | Method and device for marking category of inspection picture, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113159234B (en) |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040160616A1 (en) * | 2003-02-18 | 2004-08-19 | Match Lab, Inc. | Black and white image color mark removal |
JP2007288470A (en) * | 2006-04-17 | 2007-11-01 | Fuji Xerox Co Ltd | Unit and method for color adjustment, unit and method for generating color conversion parameter, unit and method for color conversion, color adjustment program, color conversion parameter generation program, color conversion program, and recording medium |
US9582517B2 (en) * | 2013-03-14 | 2017-02-28 | Shutterstock, Inc. | Content based systems and methods for conducting spectrum color based image search |
CN106204563B (en) * | 2016-07-04 | 2019-11-15 | 傲讯全通科技(深圳)有限公司 | A kind of image conversion method |
US10748304B2 (en) * | 2018-03-08 | 2020-08-18 | Datacolor Inc. | Color search using a smartphone and a reference color chart |
US11010888B2 (en) * | 2018-10-29 | 2021-05-18 | International Business Machines Corporation | Precision defect detection based on image difference with respect to templates |
CN110197176A (en) * | 2018-10-31 | 2019-09-03 | 国网宁夏电力有限公司检修公司 | Inspection intelligent data analysis system and analysis method based on image recognition technology |
CN110458796A (en) * | 2019-06-10 | 2019-11-15 | 腾讯科技(深圳)有限公司 | A kind of image labeling method, device and storage medium |
CN111028213B (en) * | 2019-12-04 | 2023-05-26 | 北大方正集团有限公司 | Image defect detection method, device, electronic equipment and storage medium |
CN111242240B (en) * | 2020-02-13 | 2023-04-07 | 深圳市联合视觉创新科技有限公司 | Material detection method and device and terminal equipment |
CN111598883B (en) * | 2020-05-20 | 2023-05-26 | 重庆工程职业技术学院 | Calibration label equipment for acquiring cloud data medical images and working method |
CN111582405B (en) * | 2020-05-28 | 2023-10-27 | 上海依图网络科技有限公司 | Data labeling method and device |
CN111986785B (en) * | 2020-08-26 | 2023-09-12 | 北京至真互联网技术有限公司 | Medical image labeling method, device, equipment and storage medium |
- 2021-05-24 CN CN202110565105.1A patent/CN113159234B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113159234A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111784685A (en) | Power transmission line defect image identification method based on cloud edge cooperative detection | |
CN108776772B (en) | Cross-time building change detection modeling method, detection device, method and storage medium | |
CN109858367B (en) | Visual automatic detection method and system for worker through supporting unsafe behaviors | |
CN110610483B (en) | Crack image acquisition and detection method, computer equipment and readable storage medium | |
CN110544293B (en) | Building scene recognition method through visual cooperation of multiple unmanned aerial vehicles | |
CN110097087B (en) | Automatic reinforcing steel bar binding position identification method | |
CN110910360B (en) | Positioning method of power grid image and training method of image positioning model | |
CN111178206A (en) | Building embedded part detection method and system based on improved YOLO | |
CN112070135A (en) | Power equipment image detection method and device, power equipment and storage medium | |
CN111260645B (en) | Tampered image detection method and system based on block classification deep learning | |
CN111127465A (en) | Automatic generation method and system for bridge detection report | |
CN112967255A (en) | Shield segment defect type identification and positioning system and method based on deep learning | |
CN113255590A (en) | Defect detection model training method, defect detection method, device and system | |
CN114548912A (en) | Whole-process tracking method and system for building engineering project management | |
CN115272826A (en) | Image identification method, device and system based on convolutional neural network | |
CN115018777A (en) | Power grid equipment state evaluation method and device, computer equipment and storage medium | |
CN114880730A (en) | Method and device for determining target equipment and photovoltaic system | |
CN113159234B (en) | Method and device for marking category of inspection picture, electronic equipment and storage medium | |
Gupta et al. | Post disaster mapping with semantic change detection in satellite imagery | |
CN113920450A (en) | Method and device for identifying insulator RTV coating based on intrinsic image decomposition | |
CN115049998A (en) | Point cloud data extraction method and device, electronic equipment and storage medium | |
CN104778468A (en) | Image processing device, image processing method and monitoring equipment | |
CN114283442A (en) | Intelligent identification method and device for secondary wiring diagram and storage medium | |
CN113674142A (en) | Method, device, computer equipment and medium for ablating target object in image | |
CN115310505A (en) | Automatic identification method and system for secondary circuit wiring terminal of mutual inductor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||