CN115457092A - Image detection method, vertex registration method and storage medium - Google Patents


Info

Publication number
CN115457092A
CN115457092A CN202210904918.3A
Authority
CN
China
Prior art keywords
vertex
vertexes
relation
information
sample image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210904918.3A
Other languages
Chinese (zh)
Inventor
李杰明
杨洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huahan Weiye Technology Co ltd
Original Assignee
Shenzhen Huahan Weiye Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huahan Weiye Technology Co ltd filed Critical Shenzhen Huahan Weiye Technology Co ltd
Priority to CN202210904918.3A priority Critical patent/CN115457092A/en
Publication of CN115457092A publication Critical patent/CN115457092A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/337: Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G06T 2207/20081: Indexing scheme for image analysis or enhancement; training; learning
    • G06T 2207/20084: Indexing scheme for image analysis or enhancement; artificial neural networks [ANN]

Abstract

The application relates to an image detection method, a vertex registration method and a storage medium. The image detection method includes: acquiring a vertex detection model of a target object, the vertex detection model being used to detect local features in an image of the target object to be detected and obtain several vertices of the target object; and outputting the vertices of the target object, where each vertex carries one or more of position coordinates, category, circumscribed rectangle and bounding rectangle. In this technical scheme, when the vertex detection model is obtained, the sample images participating in network training undergo per-vertex pre-registration, which enhances the image detection performance of the model.

Description

Image detection method, vertex registration method and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image detection method, a vertex registration method, and a storage medium.
This application is a divisional application of the parent case CN202110740635.5, filed 2021-07-01, entitled "Vertex registration method and device based on graph matching, and storage medium".
Background
In recent years, artificial intelligence and big data have become the focus of attention in many fields at home and abroad. In the field of computer vision, image algorithms based on deep learning are widely applied. A convolutional neural network trained with images and their one-to-one annotation information can perform classification, target detection, semantic segmentation and similar tasks. Target detection convolutional neural networks have many industrial applications, such as the identification and counting of products on a production line.
The target detection convolutional neural network (hereinafter, target detection network) still faces several difficulties, such as recognizing occluded objects and objects of extreme scale or shape; in addition, accurately recognizing the direction of an object is also a major requirement in industrial machine vision. Existing deep-learning-based target detection algorithms include YOLO, SSD, RCNN, etc. They are trained by constructing a CNN (convolutional neural network) and using labeled data; after training, an image is input to the network, a feature map is output, and the detection result, such as the object's class, the center coordinates and the length and width of its bounding rectangle, is computed from the feature map.
In existing applications, recognizing occluded objects is one of the difficulties of target detection. One technique for improving the recognition of occluded objects is to augment the training-set images, for example by randomly covering part of an object to be detected with noise, or randomly setting the pixels of part of the object to a fixed value (such as 0). The disadvantage of this method is that too much noise may be introduced into the training data, making convergence of the target detection model more difficult; moreover, the augmentation does not necessarily resemble real occlusion, so erroneous detection results may still be obtained. Second, a method for improving multi-scale detection is the feature pyramid, which fuses feature maps of different scales to obtain several feature maps with different receptive fields, and then performs classification and box regression on each of them. The disadvantage of this method is that the amount of computation increases, and the aspect ratio of the object to be identified needs to lie in a moderate interval. Third, a method for identifying the direction of an object is to add one or more angle regressions on top of the original target detection network and fit them through training data, so that the direction of the object can be identified. The disadvantage of this method is that angle annotations must additionally be provided during labeling, which increases the annotation workload, and the added angle regression increases the training difficulty.
Disclosure of Invention
The present application mainly addresses the technical problem of how to accurately register the vertices on a target object. To solve this problem, the application provides a vertex registration method and apparatus based on graph matching, and a storage medium.
According to a first aspect, an embodiment provides a vertex registration method based on graph matching, which includes: acquiring a standard template of a standard object corresponding to a target object and an information search range of the standard template, where the standard template includes the position information, angle information and category information of all vertices on the standard object, a vertex represents one local feature of the object surface, and the information search range sets the detection ranges of the angle, the position and the distance scaling scale; acquiring at least one sample image of the target object from an image dataset, where the sample image includes the position information, angle information and category information of partially labeled vertices on the target object; matching the partially labeled vertices in the sample image within the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result; calculating the transformation relation of the target object in the sample image relative to the standard object in the standard template according to the vertex matching result; inferring the position information, angle information and category information of the remaining unlabeled vertices in the sample image according to the transformation relation; and registering each vertex of the target object in the sample image using the partially labeled vertices and the inferred, previously unlabeled vertices in the sample image.
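As an illustration of the inference step, the following is a minimal sketch (not taken from the patent; all function names are hypothetical) of how, once a few labeled vertices have been matched to the standard template, a similarity transform (rotation, scale and translation, a restricted affine case) can be estimated by least squares and used to infer the positions of the unlabeled vertices:

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Estimate a similarity transform (scale, rotation, translation)
    mapping template points `src` onto image points `dst` by least
    squares, using the complex-number closed form for 2-D points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s_c, d_c = src - mu_s, dst - mu_d
    zs = s_c[:, 0] + 1j * s_c[:, 1]
    zd = d_c[:, 0] + 1j * d_c[:, 1]
    a = (np.conj(zs) @ zd) / (np.conj(zs) @ zs)   # scale * e^{i*angle}
    scale, angle = abs(a), np.angle(a)
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    t = mu_d - scale * (R @ mu_s)
    return scale, angle, R, t

def infer_unlabeled(template_pts, scale, R, t):
    """Project template vertices into the sample image to obtain
    positions for the vertices that were not annotated."""
    pts = np.asarray(template_pts, dtype=float)
    return (scale * (R @ pts.T)).T + t
```

With four matched corner vertices related by a known rotation of 90 degrees, scale 2 and translation (3, 4), the estimate recovers those parameters exactly, and an unlabeled template vertex at (0.5, 0.5) is projected to (2, 5).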
Matching the partially labeled vertices in the sample image within the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result includes: forming a set of vertices to be detected from the partially labeled vertices in the sample image, calculating the positional change relation between any two vertices in the set of vertices to be detected and any two vertices in the standard template, and constructing the connection relations between the vertices in the set of vertices to be detected; comparing the connection relations between each vertex and the other vertices in the set of vertices to be detected, adding each vertex whose connection relations meet a preset screening condition into a vertex set X_h, and determining a change relation δ_h; comparing the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ_h, and determining by a voting process the vertices in the vertex set X_h that match the standard template; and obtaining the vertex matching result from the correspondence between the matched vertices and the corresponding vertices in the standard template.
Calculating the positional change relation between any two vertices in the set of vertices to be detected and any two vertices in the standard template, and constructing the connection relations between the vertices in the set of vertices to be detected, includes: for any two vertices k and l in the set of vertices to be detected and any two vertices i and j in the standard template, calculating the positional relation between vertex k and vertex l, denoted β_kl, and the positional relation between vertex i and vertex j, denoted β_ij; if the category information of vertex i is the same as that of vertex k, and the category information of vertex j is the same as that of vertex l, calculating the change relation of the positional relation β_ij relative to the positional relation β_kl, denoted δ_ij-kl, where the change relation represents the angle offset and the distance scaling of the relative transformation; and determining whether the change relation δ_ij-kl lies within the information search range of the standard template, and if so, constructing the connection relation between vertex k and vertex l in the set of vertices to be detected, denoted γ_kl. The information search range sets the detection ranges of the angle α, the coordinates x and y, and the distance scaling scale.
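The pairwise relations above can be sketched as follows. This is an illustrative interpretation rather than the patent's exact formulation: β is taken as the distance and direction angle between two vertices, and δ as the angle offset and distance scaling of one pair relative to another, checked against the configured search range.

```python
import math

def pair_relation(p, q):
    """Positional relation beta between two vertices: the distance and
    the direction angle of the segment from p to q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def change_relation(beta_template, beta_sample):
    """Change relation delta of a sample pair relative to a template
    pair: the angle offset and the distance scaling between them."""
    (d_t, a_t), (d_s, a_s) = beta_template, beta_sample
    # wrap the angle offset into (-pi, pi]
    angle_offset = (a_s - a_t + math.pi) % (2 * math.pi) - math.pi
    scale = d_s / d_t if d_t else float("inf")
    return angle_offset, scale

def in_search_range(delta, max_angle, scale_range):
    """Check whether a change relation lies inside the configured
    detection ranges for angle and distance scaling."""
    angle_offset, scale = delta
    return abs(angle_offset) <= max_angle and scale_range[0] <= scale <= scale_range[1]
```

A connection γ_kl would then be created only for pairs whose δ passes `in_search_range`.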
Comparing the connection relations between each vertex and the other vertices in the set of vertices to be detected, adding each vertex whose connection relations meet the preset screening condition into the vertex set X_h, and determining the change relation δ_h includes: after the connection relations between the vertices in the set of vertices to be detected have been constructed, pushing all vertices in the set onto a stack S_t; popping each vertex from the stack S_t in turn, denoting it vertex h, creating a vertex set X_h, and adding vertex h to the vertex set X_h; traversing the other vertices in the set of vertices to be detected that have a connection relation with vertex h, determining one connection relation by voting and obtaining the corresponding change relation δ_h; forming a candidate point set P from the other vertices that have a connection relation with vertex h; for each vertex o in the candidate point set P, obtaining the connection relations and corresponding change relations between vertex o and each vertex in the set of vertices to be detected, and if the change relation corresponding to the connection between vertex o and any vertex in the vertex set X_h equals the change relation δ_h, provisionally adding vertex o to the vertex set X_h; then obtaining the number of newly added internal connections l_in between vertices inside the vertex set X_h and the number of newly added external connections l_out between each internal vertex and vertices outside the vertex set X_h, and determining whether l_in is less than l_out: if so, removing vertex o from the vertex set X_h; if not, adding the other vertex associated with each newly added external connection l_out of vertex o to the candidate point set P; and traversing all vertices in the candidate point set P, updating the vertex set X_h, and outputting the finally formed vertex set X_h and the determined change relation δ_h.
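A simplified sketch of this growth of the vertex set X_h follows. It is an interpretation under stated assumptions, not the patent's implementation: change relations are compared with a numeric tolerance, δ_h is chosen by majority vote over the edges incident to the seed vertex h, and a candidate is kept only while its newly added internal connections are not outnumbered by its external ones.

```python
from collections import Counter

def grow_vertex_set(h, edges, tol=1e-6):
    """Grow the vertex set X_h from seed vertex h. `edges` maps an
    ordered vertex pair (u, v) to its change relation delta."""
    def delta_of(u, v):
        return edges.get((u, v)) or edges.get((v, u))

    def close(a, b):
        return (a is not None and b is not None
                and abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol)

    vertices = set()
    for u, v in edges:
        vertices.update((u, v))
    neighbors = vertices - {h}

    incident = [delta_of(h, n) for n in neighbors if delta_of(h, n) is not None]
    if not incident:
        return {h}, None
    # delta_h: the change relation chosen by voting over edges incident to h
    delta_h = Counter(incident).most_common(1)[0][0]

    X_h = {h}
    candidates = [n for n in neighbors if close(delta_of(h, n), delta_h)]
    while candidates:
        o = candidates.pop()
        if o in X_h:
            continue
        if not any(close(delta_of(o, x), delta_h) for x in X_h):
            continue
        # count vertex o's internal vs. external connections
        l_in = sum(1 for x in X_h if delta_of(o, x) is not None)
        l_out = sum(1 for n in neighbors - X_h - {o} if delta_of(o, n) is not None)
        if l_in >= l_out:
            X_h.add(o)
            candidates.extend(n for n in neighbors - X_h
                              if delta_of(o, n) is not None)
    return X_h, delta_h
```

On a toy graph where three vertices share one change relation and a fourth carries a different one, the set grows to the consistent trio and the outlier is excluded.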
Comparing the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ_h, and determining by voting the vertices in the vertex set X_h that match the standard template, includes: obtaining the connection relations and corresponding change relations between vertex h and the remaining vertices in the vertex set X_h; if the change relation corresponding to the connection between vertex h and any of the remaining vertices equals the change relation δ_h, marking that connection relation with a first value; and counting the votes of the vertices whose connection relations are marked with the first value to obtain the voting result of each vertex in the vertex set, and determining from the voting results the vertices that match the standard template.
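The voting step can be sketched as follows; this is an assumption-laden illustration (the `delta_of` helper is hypothetical, and the "first value" is represented as an increment of 1 per matching connection):

```python
def vote_matches(vertices, delta_of, delta_h, tol=1e-6, min_votes=1):
    """Mark with a first value every connection whose change relation
    equals delta_h, accumulate the marks per vertex, and keep the
    vertices whose vote count reaches `min_votes` as the ones matched
    to the standard template."""
    votes = {v: 0 for v in vertices}
    vs = list(vertices)
    for i, u in enumerate(vs):
        for v in vs[i + 1:]:
            d = delta_of(u, v)
            if (d is not None and abs(d[0] - delta_h[0]) <= tol
                    and abs(d[1] - delta_h[1]) <= tol):
                # the connection equals delta_h: both endpoints get a vote
                votes[u] += 1
                votes[v] += 1
    matched = [v for v in vs if votes[v] >= min_votes]
    return votes, matched
```

With the same toy graph as before, the three mutually consistent vertices each collect two votes and are kept, while the outlier collects none.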
The acquiring of the standard template of the standard object corresponding to the target object and the information search range of the standard template includes: acquiring a standard image of a standard object corresponding to the target object, acquiring labeling information of all vertexes on the standard object in the standard image, and generating the standard template according to the labeling information of all vertexes on the standard object; the marking information comprises position information, angle information and category information of each vertex on the standard object; acquiring a reference direction and a rotation center point configured for the standard template, and maximum variation of an angle, a position and a distance scaling configured for the standard template; and setting detection ranges of the angle, the position and the distance scaling scale according to the configured reference direction, the rotation center point and the maximum variation of the angle, the position and the distance scaling scale, thereby forming an information search range of the standard template.
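A minimal sketch of how the information search range might be assembled from the configured reference direction, rotation center and maximum variations. The dictionary layout and the symmetric treatment of the scale bound are assumptions for illustration, not the patent's specification:

```python
import math

def make_search_range(reference_direction, rotation_center,
                      max_angle, max_shift, max_scale_change):
    """Derive the detection ranges of angle, position and distance
    scaling from the configured reference direction, rotation center
    and maximum variations (all parameter names are illustrative)."""
    return {
        "angle": (reference_direction - max_angle, reference_direction + max_angle),
        "x": (rotation_center[0] - max_shift, rotation_center[0] + max_shift),
        "y": (rotation_center[1] - max_shift, rotation_center[1] + max_shift),
        "scale": (1.0 / (1.0 + max_scale_change), 1.0 + max_scale_change),
    }
```

A template configured with reference direction 0, rotation center (100, 100), a 30-degree angle tolerance, a 20-pixel shift tolerance and a 50% scale tolerance would then be detectable anywhere inside these four intervals.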
After registering the vertices of the target object in the sample image, the method further includes: constructing a deep-learning-based convolutional neural network and training it with the vertex-registered sample images until the loss function of the convolutional neural network converges; and using the trained convolutional neural network as a vertex detection model, the vertex detection model being used to detect local features in an image of the target object to be detected and obtain several vertices of the target object.
According to a second aspect, an embodiment provides an image processing device comprising: a storage unit for storing a standard template of a standard object corresponding to a target object, the information search range of the standard template, and an image dataset, where the standard template includes the position information, angle information and category information of all vertices on the standard object, a vertex represents one local feature of the object surface, the information search range sets the detection ranges of the angle, the position and the distance scaling scale, and the image dataset includes at least one sample image of the target object containing the position information, angle information and category information of partially labeled vertices on the target object; an acquisition unit for acquiring, from the storage unit, the standard template of the standard object corresponding to the target object and the information search range of the standard template, and acquiring at least one sample image of the target object from the image dataset; and a processing unit for matching the partially labeled vertices in the sample image within the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result, calculating the transformation relation of the target object in the sample image relative to the standard object in the standard template according to the vertex matching result, inferring the position information, angle information and category information of the remaining unlabeled vertices in the sample image according to the transformation relation, and registering each vertex of the target object in the sample image using the partially labeled vertices and the inferred, previously unlabeled vertices.
When executing the preset graph matching algorithm, the processing unit performs the following: forming a set of vertices to be detected from the partially labeled vertices in the sample image, calculating the positional change relation between any two vertices in the set of vertices to be detected and any two vertices in the standard template, and constructing the connection relations between the vertices in the set of vertices to be detected; comparing the connection relations between each vertex and the other vertices in the set of vertices to be detected, adding each vertex whose connection relations meet a preset screening condition into a vertex set X_h, and determining a change relation δ_h; comparing the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ_h, and determining by voting the vertices in the vertex set X_h that match the standard template; and obtaining the vertex matching result from the correspondence between the matched vertices and the corresponding vertices in the standard template.
According to a third aspect, an embodiment provides a computer readable storage medium having a program stored thereon, the program being executable by a processor to implement the vertex registration method as described in the first aspect above.
The beneficial effects of the present application are:
according to the embodiment, the vertex registration method and device based on graph matching and the storage medium are provided, wherein the vertex registration method comprises the following steps: acquiring a standard template of a standard object corresponding to a target object and an information search range of the standard template; matching part of marked vertexes in the sample image in an information search range of the standard template through a preset image matching algorithm to obtain a vertex matching result; calculating the transformation relation of the target object in the sample image relative to the standard object in the standard template according to the vertex matching result; deducing the position information, the angle information and the category information of other unmarked vertexes in the sample image according to the transformation relation; and registering each vertex of the target object in the sample image by using the part of the labeled vertexes in the sample image and the rest of the unlabeled vertexes in the sample image. 
First, because the partially labeled vertices in the sample image are matched within the information search range of the standard template by a graph matching algorithm, the vertex matching problem is converted into a graph matching problem, improving the accuracy and stability of subsequent registration; moreover, suitable vertices are screened by calculating the positional relations between vertices, reducing computational complexity and improving the efficiency of the algorithm. Second, the technical scheme infers the position, angle and category information of the remaining unlabeled vertices in the sample image from the transformation relation of the target object in the sample image relative to the standard object in the standard template, so only a small number of labeled vertices is needed to infer the rest, greatly reducing the workload of manually annotating sample images. Third, because the information of both the partially labeled vertices and the remaining, inferred vertices in the sample image is available, the vertices of the target object in the sample image can be registered from this vertex information, improving both the speed and the accuracy of vertex registration.
To obtain the vertex matching result for the partially labeled vertices in a sample image, the technical scheme provides a graph matching algorithm that matches the partially labeled vertices within the information search range of the standard template, converting the vertex matching problem into a graph matching problem and thereby establishing the correspondence between the partially labeled vertices and the vertices of the standard template. Once this correspondence is obtained, the transformation matrix between the corresponding parts can be computed from it, the remaining vertices on the target object can be inferred and identified through the transformation relation, and the registration of all vertices on the target object is finally completed.
Drawings
FIG. 1 is a flow chart of a graph matching based vertex registration method of the present application;
FIG. 2 is a flow chart of matching to obtain vertex matching results;
fig. 3 is a schematic view of setting a vertex on a dial to be detected;
FIG. 4 is a schematic diagram of building a standard template for a watch face;
FIG. 5 is a diagram illustrating vertices and connections of a standard template;
FIG. 6 is a schematic diagram of the vertices and connections of a vertex set X_h;
FIG. 7 is a schematic diagram of counting votes;
FIG. 8 is a flowchart of obtaining vertex complete annotation information and vertex registration;
FIG. 9 is a flow chart of building a vertex detection model;
FIG. 10 is a schematic diagram of a vertex detection model;
FIG. 11 is a schematic diagram of an image processing apparatus according to the present application;
fig. 12 is a schematic structural diagram of an image processing apparatus in another embodiment.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings by way of specific embodiments. Wherein like elements in different embodiments have been given like element numbers associated therewith. In the following description, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in this specification in order not to obscure the core of the present application with unnecessary detail, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the described features, operations, or characteristics may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be interchanged or reordered in a manner apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such sequence must be followed.
The ordinal numbers used herein for the components, such as "first," "second," etc., are used merely to distinguish between the objects described, and do not have any sequential or technical meaning. The term "connected" and "coupled" when used in this application, unless otherwise indicated, includes both direct and indirect connections (couplings).
In many cases of industrial production, the shape of the target object to be detected is relatively fixed, and these shape changes can be regarded as affine transformation approximately at this time, which often occurs in the detection of standardized products such as metal parts and plastic housings. A complete target object can be divided into a plurality of feature vertices, hereinafter referred to as vertices (these vertices need to be set as local regions that can better reflect the features of the object to be detected). The set formed by a plurality of vertexes is called a template, and the whole detection of the template can be completed by detecting each vertex; that is, the detection of the whole target object can be completed by detecting the local target object.
The technical solution of the present application will be specifically described with reference to the following examples.
Embodiment One
Referring to fig. 1, the present application discloses a vertex registration method based on graph matching, which includes steps 110-150, described below.
Step 110: acquire a standard template of the standard object corresponding to the target object and the information search range of the standard template. The target object can be a product on an industrial production line, a mechanical part in a parts box, a tool on a workbench, and so on; in some scenes the target object may be partially occluded, have surface damage, or have an abnormal shape, which complicates the detection of its surface features. The standard object corresponding to the target object is a target object without occlusion, surface defects, or deformation, and with a normal posture; it carries the complete surface features of the object, and if those surface features are expressed as feature vertices, the standard template of the standard object is the set of that vertex information.
Here, the standard template may include the position information, angle information and category information of all vertices on the standard object, where a vertex is a local feature characterizing the object surface. In addition, the information search range of the standard template sets the detection ranges of the angle, the position and the distance scaling scale. Note that, for any target object or its corresponding standard object, the local features of its surface include but are not limited to: convex-concave shapes representing the core features of the object, printed patterns of the core features, and so on; once a certain kind of local feature is selected, all vertices belonging to that local feature must be labeled completely. Each vertex includes at least the following two pieces of information: the horizontal and vertical coordinates of the vertex (i.e., the vertex coordinates) and the category of the local feature it represents (i.e., the vertex category). Beyond these, other information can optionally be added to accelerate subsequent matching; typical optional information includes the minimum circumscribed rectangle of the local feature (i.e., the vertex circumscribed rectangle, whose information may include the rectangle center point and the rectangle length and width, and which may be a rectangle with a rotation angle) and the bounding rectangle of the local feature (i.e., the vertex bounding rectangle, whose information may include the rectangle center point, the rectangle length and width, and the direction, and which may be an axis-aligned rectangle without a rotation angle).
It will be appreciated that, since each vertex on an object is the expression of a local feature that has a definite category, an exact position and relative rotation angle, and a region size in the overall object image, a vertex can be described by concrete values for its angle, position, category, rectangles, and so on.
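The vertex annotation described above can be sketched as a small record type. Field names are illustrative, not from the patent; the two rectangle fields correspond to the optional circumscribed and bounding rectangles:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Vertex:
    """One labeled feature vertex. Coordinates and category are the
    mandatory annotation; the angle and the two rectangles are
    optional extras that can speed up subsequent matching."""
    x: float
    y: float
    category: str
    angle: float = 0.0
    # minimum circumscribed rectangle: (cx, cy, w, h, rotation_angle)
    min_rect: Optional[Tuple[float, float, float, float, float]] = None
    # bounding rectangle without rotation: (cx, cy, w, h)
    bbox: Optional[Tuple[float, float, float, float]] = None
```

A standard template is then simply the collection of such records for all vertices of the standard object, plus the configured search-range parameters.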
In one embodiment, the standard template and the information search range of the standard object may be obtained by:
(1) The method comprises the steps of obtaining a standard image of a standard object corresponding to a target object from a camera device, obtaining labeling information of all vertexes on the standard object in the standard image, and generating a standard template according to the labeling information of all vertexes on the standard object. The labeling information of all vertexes on the standard object can be generated in a manual labeling mode, and the labeling information comprises position information, angle information and category information of each vertex on the standard object; of course, the label information may also include values such as a vertex bounding rectangle and a vertex bounding rectangle.
(2) Acquire the reference direction and rotation center point configured for the standard template, together with the maximum variations configured for the angle, the position, and the distance scaling scale; then set the detection ranges of the angle, the position, and the distance scaling scale according to the configured reference direction, rotation center point, and maximum variations, thereby forming the information search range of the standard template.
For example, as shown in fig. 3 and 4, the dial to be detected is used as the standard object corresponding to the target object, and some significant local features on the dial 1 are labeled, such as a vertex A1 at the center of the large dial, vertices A2 and A4 at the centers of the small dials, and a vertex A3 at an edge number of a small dial; a simple standard template can then be generated from the labeling information of these vertices, see A1-A2-A3-A4 in fig. 4. In fig. 4, the reference direction of the standard template is L1 and the rotation center point is L0. Since the same vertex appears differently in images of different sizes and rotation directions, some adjustments must be made to achieve matching; the adjustment covers not only the angular offset and positional offset of the vertex itself, but also the distance scaling amount between the vertex and another vertex. The information search range here means that a standard template transformed within that range can be detected, while one transformed beyond it cannot.
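As an illustration only (all class and field names below are hypothetical, not the patent's), the vertex, template, and search-range data just described might be organized as:

```python
# A minimal sketch of the standard-template data: each vertex stores coordinates,
# angle and class; the template adds a reference direction, a rotation centre and
# the search ranges for angle, position and distance scale (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class Vertex:
    x: float            # horizontal coordinate
    y: float            # vertical coordinate
    angle: float        # rotation angle of the local feature, degrees
    category: int       # class of the local feature it represents

@dataclass
class SearchRange:
    max_angle: float    # maximum angular deviation, degrees
    max_shift: float    # maximum positional deviation, pixels
    scale_min: float    # minimum distance-scaling factor
    scale_max: float    # maximum distance-scaling factor

@dataclass
class StandardTemplate:
    vertices: list              # all labelled vertices of the standard object
    reference_direction: float  # reference direction L1, degrees
    rotation_center: tuple      # rotation centre point L0, (x, y)
    search_range: SearchRange = field(
        default_factory=lambda: SearchRange(15.0, 50.0, 0.8, 1.25))

# e.g. a dial template like A1-A2-A3-A4 of Fig. 4 (coordinates illustrative)
template = StandardTemplate(
    vertices=[Vertex(320, 240, 0, 0), Vertex(260, 300, 0, 1),
              Vertex(300, 330, 0, 2), Vertex(380, 300, 0, 1)],
    reference_direction=0.0,
    rotation_center=(320, 240))
```

The search-range limits here (15°, 50 px, scale 0.8–1.25) are placeholder values; in practice they would be configured by the user as described in step (2).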
Step 120, acquiring at least one sample image about the target object in the image dataset; the sample image includes position information, angle information, and category information of a partially labeled vertex on the target object. The image data set may be a training data set provided by a user, and includes image and annotation information about the target object.
It should be noted that although the image data set includes some sample images related to the target object, these sample images need to be labeled with a large number of vertices to be applied to some training tasks of the detection model. In the process of labeling the sample image, a small number of vertexes can be labeled firstly, and the rest of unlabeled vertexes are deduced by using the following steps 130-140, so that the labeling information of most vertexes can be obtained, and the workload of manual labeling can be greatly reduced.
And step 130, matching some marked vertexes in the sample image in the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result.
In this embodiment, referring to FIG. 2, the step 130 may include steps 131-134, which are described below.
And 131, forming a set of vertexes to be detected by using part of labeled vertexes in the sample image, calculating a position change relation between any two vertexes in the set of vertexes to be detected and any two vertexes in the standard template, and constructing a connection relation between the vertexes in the set of vertexes to be detected.
Since the standard template is constructed in advance, each vertex {v_i} (i = 1, 2, …, N₁) in the standard template and the connection relations between those vertices are available; then, given a set of vertices to be detected {p_i} (i = 1, 2, …, N₂), the problem to be solved is the correspondence between the two. To solve this correspondence quickly, the vertex matching problem is converted into a graph matching problem.
In a specific embodiment, step 131 specifically includes the following processes:
(1) For any two vertices k, l in the set of vertices to be detected P = {p_i} and any two vertices i, j in the standard template V = {v_i}, calculate the positional relation of vertex k and vertex l, denoted β_kl, and the positional relation of vertex i and vertex j, denoted β_ij.
(2) Acquire the detection ranges of the angle α, the coordinate x, the coordinate y, and the distance scaling scale (which may be set by a user), thereby setting the information search range of the standard template, denoted Ω.
(3) Compare the vertex categories: if the category information of vertex i is the same as that of vertex k, and the category information of vertex j is the same as that of vertex l, calculate the change relation of positional relation β_ij relative to positional relation β_kl, denoted δ_ij-kl. The change relation δ_ij-kl characterizes the relative transformation by an angular offset and a distance scaling amount, i.e., the angular offset required to convert positional relation β_ij into positional relation β_kl, and the distance scaling amount required to bring the distance between vertices i and j to the distance between vertices k and l.
(4) Determine whether the change relation δ_ij-kl is in the information search range Ω of the standard template; if it is, i.e., δ_ij-kl ∈ Ω, construct the connection relation between vertex k and vertex l in the set of vertices to be detected, denoted γ_kl. It can be understood that since the information search range Ω holds the detection ranges of the angle α, the coordinate x, the coordinate y, and the distance scaling scale, the connection relation between vertex k and vertex l is established when every component of the change relation δ_ij-kl meets its corresponding detection range; this indicates that vertex k and vertex l may be obtained from vertex i and vertex j by some transformation, in which case vertex k and vertex l can form an edge of the graph and a connection relation exists between them.
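Steps (1)–(4) can be sketched in Python as follows — a minimal illustration (all helper names hypothetical) that represents β as a (distance, direction-angle) pair, δ as an (angular-offset, scale) pair, and checks membership in Ω:

```python
import math

def position_relation(a, b):
    """beta: distance and direction angle (degrees) of the segment a -> b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

def change_relation(beta_ij, beta_kl):
    """delta_ij-kl: angular offset and distance scaling taking beta_ij onto beta_kl."""
    (d_ij, a_ij), (d_kl, a_kl) = beta_ij, beta_kl
    angle_offset = (a_kl - a_ij + 180.0) % 360.0 - 180.0   # wrapped to (-180, 180]
    scale = d_kl / d_ij if d_ij > 0 else float("inf")
    return angle_offset, scale

def in_search_range(delta, max_angle, scale_min, scale_max):
    """True when delta falls inside the search range Omega, i.e. edge gamma_kl is created."""
    angle_offset, scale = delta
    return abs(angle_offset) <= max_angle and scale_min <= scale <= scale_max

# template vertices i, j and same-class sample candidates k, l:
# the sample pair is the template pair rotated 10 degrees and scaled by 0.9
beta_ij = position_relation((0, 0), (100, 0))
beta_kl = position_relation((10, 10),
                            (10 + 90 * math.cos(math.radians(10)),
                             10 + 90 * math.sin(math.radians(10))))
delta = change_relation(beta_ij, beta_kl)       # approx (10.0, 0.9)
print(in_search_range(delta, max_angle=15.0, scale_min=0.8, scale_max=1.25))  # True
```

Only the angular and scale components are checked here; a fuller version would also test the positional (x, y) offsets against their ranges.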
Step 132: compare the connection relations between each vertex and the other vertices in the set of vertices to be detected, add each vertex whose connection relations meet a preset screening condition (together with those other vertices) to a vertex set X_h, and determine a change relation δ̂.
In one embodiment, step 132 specifically includes the following processes:
(1) If it is determined that connection relations exist between vertices in the set of vertices to be detected, store all vertices of the set into a stack S_t; that is, all vertices to be detected are pushed onto a stack, which may be denoted S_t.
(2) Pop each vertex from stack S_t in turn, denote it vertex h, create a vertex set X_h, and add vertex h to the vertex set X_h.
For a vertex set X_h, a connection between vertices inside X_h is called an internal connection relation l_in, and a connection between a vertex inside X_h and any vertex outside the set is recorded as an external connection relation l_out. Of course, if multiple interior vertices connect to the same vertex outside the set, the external connection is counted only once.
(3) Traverse the other vertices in the set of vertices to be detected that have connection relations with vertex h, and determine one change relation by voting, denoted δ̂; then form a candidate point set P from the other vertices that have connection relations with vertex h, i.e., add every vertex connected to vertex h to the candidate point set P.
It can be understood that since every edge connected to h stores a change relation (comprising an angular offset and a distance scaling amount), taking the mode of these change relations is the voting process that determines δ̂. Furthermore, for the initial vertex set X_h, which contains only the one vertex h, the set of internal edges is empty at this point.
(4) For each vertex o in the candidate point set P, acquire the connection relations and corresponding change relations between vertex o and the vertices of the set of vertices to be detected. If the change relation corresponding to the connection between vertex o and any vertex in X_h equals δ̂, vertex o is provisionally added to the vertex set X_h. Next, obtain the internal connection relations l_in newly added between vertices inside X_h and the external connection relations l_out newly added between internal vertices and vertices outside X_h, and determine whether the number of newly added l_in is less than the number of newly added l_out: if it is, remove vertex o from X_h; if not, also add the vertices associated with vertex o through the newly added external connections l_out to the candidate point set P.
It will be appreciated that after adding vertex o to the vertex set X_h, a count comparison can be made between the newly added internal connection relations l_in and the newly added external connection relations l_out: if the newly added l_in outnumber the newly added l_out, the candidate vertex o meets the requirement and the newly added vertex stays in X_h; if the newly added l_in are fewer than the newly added l_out, the candidate vertex o does not meet the requirement and should be deleted from the vertex set X_h.
(5) Traverse all vertices in the candidate point set P and update the vertex set X_h; that is, repeat step (4) until the candidate point set P becomes empty, then output the finally formed vertex set X_h and the determined change relation δ̂.
It will be appreciated that the above pops one vertex h from stack S_t and forms a vertex set X_h together with its determined change relation δ̂; the next vertex is then popped from stack S_t and a similar calculation is performed, until stack S_t becomes empty.
Step 133: compare the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ̂, and determine by counting votes a number of vertices in X_h that match the standard template.
In a specific embodiment, step 133 specifically includes the following processes:
(1) For a vertex h, obtain the connection relations and corresponding change relations between vertex h and the remaining vertices in the vertex set X_h.
(2) If the change relation corresponding to the connection between vertex h and any of the remaining vertices equals the change relation δ̂, mark that connection with a first value, e.g., set it to 1.
(3) Count and vote over the connections marked with the first value to obtain a voting result for each vertex in the vertex set, and determine the several vertices matched to the standard template according to the voting results.
Referring to fig. 5 and 6, the standard template contains vertices U-V-W, whose pairwise connections form its connection relations; the vertex set X_h contains vertices E-F-H-G, whose pairwise connections form its connection relations. The correspondence and change relations between the vertex set X_h and the standard template can then be obtained as in Table 1 below.
TABLE 1. Correspondence and change relations between vertex set X_h and the standard template

          WU           WV
  HE   δ_WU~HE      δ_WV~HE
  HF   δ_WU~HF      δ_WV~HF
  HG   δ_WU~HG      δ_WV~HG
For the change relations in Table 1, mark 1 where the change relation equals δ̂ and 0 where it does not, which gives the marked results in Table 2.
TABLE 2. Numerical marking of the change relations

         WU    WV
  HE      1     0
  HF      0     1
  HG      0     0
Then, a count vote may be performed for each vertex according to Table 2, with votes between vertices of different classes set to -1, giving the count-vote results of Table 3.
TABLE 3. Count-vote results

          U     V     W
  E       1     0    -1
  F       0     1    -1
  G       0     0     0
  H      -1    -1     2
It is more intuitive to show the counting and voting results of table 3 with fig. 7, with the number of votes between vertex E and vertex U being 1, the number of votes between vertex F and vertex V also being 1, and the number of votes between vertex H and vertex W being 2. Then, vertex E may be matched to vertex U, vertex F may be matched to vertex V, and vertex H may be matched to vertex W.
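The count-vote matching just illustrated can be sketched as follows — a minimal illustration (helper name hypothetical) that reads a vote matrix like Table 3 and assigns each sample vertex to the template vertex with the most positive votes:

```python
def match_by_votes(votes):
    """Sketch of step 133's count-vote matching: `votes` maps each sample
    vertex to {template_vertex: vote_count}; a sample vertex is matched to
    the template vertex receiving the most votes, provided that count is
    positive (vertices with no support, like G, stay unmatched)."""
    matches = {}
    for sample_v, row in votes.items():
        best = max(row, key=row.get)
        if row[best] > 0:
            matches[sample_v] = best
    return matches

# the count-vote results of Table 3 / Fig. 7
votes = {"E": {"U": 1, "V": 0, "W": -1},
         "F": {"U": 0, "V": 1, "W": -1},
         "G": {"U": 0, "V": 0, "W": 0},
         "H": {"U": -1, "V": -1, "W": 2}}
print(match_by_votes(votes))   # {'E': 'U', 'F': 'V', 'H': 'W'}
```

Consistent with fig. 7, E matches U, F matches V, and H matches W, while G receives no positive votes and is left out of the matching result.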
And 134, obtaining a vertex matching result by using the corresponding relation between the matched plurality of vertexes and the corresponding vertexes in the standard template.
It will be appreciated that while it is certain that some correspondence exists between the vertex set X_h and the standard template, it is unclear which vertex of X_h corresponds to which vertex of the standard template; solving this with the count-vote algorithm yields the correspondence between the vertices of X_h and the vertices of the standard template, called the vertex matching result, which may be denoted F₁.
It can be understood that because the labeled vertices in the sample image are matched within the information search range of the standard template via the graph matching algorithm, the vertex matching problem is converted into a graph matching problem, which improves the accuracy and stability of subsequent registration; moreover, suitable vertices are screened by calculating the positional relations between vertices, which reduces the computational complexity and improves the operating efficiency of the algorithm.
And 140, calculating a transformation relation of the target object in the sample image relative to the standard object in the standard template according to the vertex matching result, and deducing the position information, the angle information and the category information of other unmarked vertices in the sample image according to the transformation relation.
Note that since the correspondence between the vertex set X_h and the vertices of the standard template has been obtained, the angular offset and distance scaling amount required to transform the standard object in the standard template to the target object in the sample image can be determined, thereby forming the transformation relation of the target object in the sample image relative to the standard object in the standard template. Since some of the labeled vertices in the sample image are matched with some of the vertices in the standard template, the unlabeled vertices in the sample image can also be matched with the remaining vertices of the standard template, so that the position information, angle information, and category information of the remaining unlabeled vertices in the sample image can be inferred.
It can be understood that, here, the position information, the angle information and the category information of the remaining un-labeled vertexes in the sample image are inferred according to the transformation relation of the target object in the sample image relative to the standard object in the standard template, so that the remaining un-labeled vertexes can be inferred only by a small number of labeled vertexes, thereby greatly reducing the workload of manual labeling of the sample image.
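One way the transformation relation of step 140 might be computed is a least-squares 2-D similarity fit over the matched vertex pairs, then projecting the still-unlabeled template vertices into the sample image. This is a sketch under the assumption that the transformation is a similarity (rotation, uniform scale, translation); it is not necessarily the patent's exact method:

```python
def fit_similarity(src, dst):
    """Least-squares 2-D similarity z' = c*z + t mapping template points `src`
    onto matched sample points `dst`, in complex form (c encodes the distance
    scaling amount and angular offset, t the translation)."""
    zs = [complex(x, y) for x, y in src]
    zd = [complex(x, y) for x, y in dst]
    ms, md = sum(zs) / len(zs), sum(zd) / len(zd)
    a = [z - ms for z in zs]              # centred template points
    b = [z - md for z in zd]              # centred sample points
    c = sum(x.conjugate() * y for x, y in zip(a, b)) / sum(abs(x) ** 2 for x in a)
    t = md - c * ms
    return c, t

def project(c, t, pts):
    """Map (still unlabelled) template vertices into the sample image."""
    return [((c * complex(x, y) + t).real, (c * complex(x, y) + t).imag)
            for x, y in pts]

# matched pairs: three template vertices vs. their detected sample positions
# (here the sample is the template rotated 90 degrees, scaled by 2, shifted (10, 5))
src = [(0, 0), (100, 0), (0, 100)]
dst = [(10, 5), (10, 205), (-190, 5)]
c, t = fit_similarity(src, dst)           # c == 2j (scale 2, +90 deg), t == 10+5j
print(project(c, t, [(50, 50)]))          # approximately [(-90.0, 105.0)]
```

With the fitted (c, t), every remaining template vertex can be projected into the sample image, giving the inferred position (and, by adding the angular offset of c, the inferred angle) of each unlabeled vertex.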
And 150, registering each vertex of the target object in the sample image by using the part of the labeled vertexes in the sample image and the other unlabeled vertexes in the sample image. It can be understood that since information of some labeled vertices in the sample image and information of other unlabeled vertices in the sample image are obtained, the information of each vertex on the target object in the sample image is obtained, and thus the registration of each vertex is realized.
It can be understood that, since information of some labeled vertexes in the sample image and information of other unlabeled vertexes in the sample image can be obtained, the vertexes of the target object in the sample image can be registered according to the vertex information, so that the vertex registration speed is increased, and the vertex registration accuracy is also increased.
In the present embodiment, the disclosed vertex registration method based on graph matching can be seen in fig. 8. On one hand, the standard template and information search range of the standard object need to be acquired: a standard image of the standard object corresponding to the target object is obtained from the camera equipment, the labeling information of all vertices of the standard object in the standard image is acquired, the standard template is generated from that labeling information, and the information search range of the standard template is set automatically or manually. On the other hand, at least one sample image of the target object is obtained from the image data set, and some vertices on the target object in the sample image are labeled to obtain the position information, angle information, and category information of those labeled vertices. Next, vertex matching between the sample image and the standard template is performed using the graph matching algorithm.
In the execution of the graph matching algorithm, a set of vertices to be detected and its connection relations are first formed from the labeled vertices of the sample image: the change relation of positions between any two vertices of the set to be detected and any two vertices of the standard template is calculated, and the connection relations between the vertices of the set to be detected are constructed. Then, the connection relations between each vertex and the other vertices of the set are compared, each vertex whose connection relations meet the preset screening condition is added (together with those other vertices) to the vertex set X_h, and a change relation δ̂ is determined.
Then, the change relation corresponding to each vertex of the vertex set X_h is compared with the determined change relation δ̂, and the vertices of X_h matching the standard template are determined by counting votes; finally, the vertex matching result is obtained from the correspondence between the matched vertices and their counterparts in the standard template.
After the vertex matching result is obtained, the transformation relation of the target object in the sample image relative to the standard object in the standard template can be calculated according to the vertex matching result, and then the position information, the angle information and the category information of other unmarked vertexes in the sample image can be inferred according to the transformation relation.
Because the information of part of the labeled vertexes in the sample image and the information of the rest unlabeled vertexes in the sample image are obtained, the information of all vertexes on the target object in the sample image is equivalently obtained, and thus the registration task of all vertexes on the target object in the sample image is realized.
It should be noted that, the above vertex registration method mainly obtains the information of each vertex in the sample image through graph matching, so that a sample image with each vertex completely labeled can be obtained; because the sample images are necessary training data for training a certain network model and have complete vertex marking information, the network model can be trained by using one or more sample images, and the accuracy of the network model for detecting the object can be improved through training.
In another embodiment, after the registration of the vertices of the target object in the sample image, a model construction step is further included. Referring to FIG. 9, the model building steps may specifically include steps 210-220, described separately below.
And step 210, constructing a convolutional neural network based on deep learning, and training the convolutional neural network by using the sample image after each vertex is registered until a loss function corresponding to the convolutional neural network is converged.
In one embodiment, the convolutional neural network based on deep learning can adopt a network type such as YOLO, RetinaNet, or SSD; since these network types are common, it is easy to configure the corresponding loss function of the convolutional neural network. The participation of the sample images in training the convolutional neural network is the process of updating the network weight coefficients, and the corresponding loss function gradually converges as the number of updates increases; generally, training of the convolutional neural network can be considered complete when the loss function converges.
Step 220, use the trained convolutional neural network as a vertex detection model, which detects the local features of an image to be detected of the target object, thereby obtaining a plurality of vertices on the target object. That is, after the vertex detection model is obtained, the image to be detected of the target object can be input, and the vertices on the target object are output by detecting the local features of the target object in that image. It will be appreciated that the information output for each vertex may include: vertex position coordinates, vertex category, vertex circumscribed rectangle, and the like; which information can actually be output depends on the structure of the convolutional neural network itself and on the vertex information of the sample images used in training.
With respect to the structure of the convolutional neural network, referring to fig. 10, the network is specifically configured with a backbone network for image feature extraction, a classification detection network for classification, and a box regression network for regression. The backbone network may be configured with operations such as convolution, activation functions, and pooling, and extracts image features from an input image (e.g., an image of the dial 1) to obtain a corresponding feature map. The classification detection network may also use convolution, activation-function, and pooling operations and, owing to its different network parameters, concentrates on feature classification, further classifying the obtained feature map to obtain a feature map of the vertices' classification information, from which the category information of the vertices can be obtained. The box regression network may likewise use convolution, activation-function, and pooling operations and, with its different network parameters, performs further regression on the obtained feature map to obtain a feature map of the vertices regressed against boxes, from which the vertex positions and the circumscribed/bounding rectangles can be obtained.
In the above technical scheme, in order to obtain the vertex matching results of the partially labeled vertices in the sample image, a graph matching algorithm is provided to match those labeled vertices within the information search range of the standard template, converting the vertex matching problem into a graph matching problem and thereby establishing the correspondence between the partially labeled vertices and the vertices of the standard template. With this partial vertex correspondence obtained, the transformation matrix between corresponding points can be conveniently calculated, so the remaining vertices on the target object can be inferred and identified through the transformation relation, finally completing the registration of all vertices on the target object.
Example II,
Referring to fig. 11, on the basis of the vertex registration method based on graph matching disclosed in the first embodiment, the present embodiment discloses an image processing apparatus, which mainly includes a storage unit 31, an obtaining unit 32, and a processing unit 33, which are respectively described below.
The storage unit 31 may employ any type of memory, and is mainly used to store the standard template of the standard object corresponding to the target object and the information search range of the standard template, and to store the image data set. In this embodiment, the standard template includes position information, angle information, and category information of all vertices on the standard object, where a vertex is used to represent a local feature of the object surface; the information search range is used for setting the detection range of the angle, the position and the distance scaling scale. In this embodiment, the image dataset comprises at least one sample image relating to the target object, each sample image comprising position information, angle information and class information of a partially labeled vertex on the target object.
The acquisition unit 32 is configured to acquire the standard template of the standard object corresponding to the target object and the information search range of the standard template from the storage unit 31, and acquire at least one sample image regarding the target object in the image data set.
The processing unit 33 may adopt data processing equipment such as a CPU, an FPGA, an MCU, and the like, and the processing unit 33 may be configured to match some marked vertexes in the sample image within an information search range of the standard template by using a preset graph matching algorithm to obtain a vertex matching result, and calculate a transformation relationship between a target object in the sample image and a standard object in the standard template according to the vertex matching result; and the processing unit 33 deduces the position information, the angle information and the category information of the other unmarked vertexes in the sample image according to the transformation relation, and performs registration on each vertex of the target object in the sample image by using the part of the marked vertexes in the sample image and the other unmarked vertexes in the sample image.
In a specific embodiment, the processing unit 33 comprises the following processes when executing the preset map matching algorithm:
(1) Forming a set of vertexes to be detected by using part of labeled vertexes in the sample image, calculating the position change relationship between any two vertexes in the set of vertexes to be detected and any two vertexes in the standard template, and constructing the connection relationship between the vertexes in the set of vertexes to be detected.
For example, for any two vertices k, l in the set of vertices to be detected and any two vertices i, j in the standard template, the processing unit 33 calculates the positional relation of vertex k and vertex l, denoted β_kl, and the positional relation of vertex i and vertex j, denoted β_ij; if the category information of vertices i and k is the same and the category information of vertices j and l is the same, it calculates the change relation of positional relation β_ij relative to positional relation β_kl, denoted δ_ij-kl; it then determines whether the change relation δ_ij-kl is in the information search range of the standard template, and if so, constructs the connection relation between vertex k and vertex l in the set of vertices to be detected, denoted γ_kl.
(2) Compare the connection relations between each vertex and the other vertices in the set of vertices to be detected, add each vertex whose connection relations meet the preset screening condition (together with those other vertices) to a vertex set X_h, and determine a change relation δ̂.
For example, if the processing unit 33 determines that connection relations exist between vertices of the set to be detected, all vertices of the set are stored into a stack S_t; each vertex is popped from stack S_t in turn, denoted vertex h, a vertex set X_h is created, and vertex h is added to X_h; the other vertices having connection relations with vertex h in the set to be detected are traversed, one change relation δ̂ is determined by voting, and a candidate point set P is formed from the other vertices connected to vertex h. For each vertex o in the candidate point set P, the connection relations and corresponding change relations between vertex o and the vertices of the set to be detected are acquired; if the change relation corresponding to the connection between vertex o and any vertex of X_h equals δ̂, vertex o is provisionally added to X_h; then the internal connection relations l_in newly added between vertices inside X_h and the external connection relations l_out newly added between internal vertices and vertices outside X_h are obtained, and whether the number of newly added l_in is less than the number of newly added l_out is determined: if so, vertex o is removed from X_h; if not, the vertices associated with vertex o through the newly added external connections l_out are also added to the candidate point set P. All vertices of the candidate point set P are traversed and the vertex set X_h is updated, and the finally formed vertex set X_h and the determined change relation δ̂ are output.
(3) Compare the change relation corresponding to each vertex of the vertex set X_h with the determined change relation δ̂, and determine by counting votes a number of vertices of X_h matching the standard template.
For example, the processing unit 33 acquires the connection relations and the corresponding change relations between the vertex h and the remaining vertexes in the vertex set X_h; if the change relation corresponding to the connection relation between the vertex h and any of the remaining vertexes is equal to the change relation δ̂, that connection relation is marked with a first value. The vertexes whose connection relations are marked with the first value are counted and voted to obtain a voting result for each vertex in the vertex set, and a number of vertexes matching the standard template are determined according to the voting results.
(4) A vertex matching result is obtained by using the correspondence between the matched vertexes and the corresponding vertexes in the standard template.
It should be noted that, for the specific functions of the processing unit 33, reference may be made to steps 130 to 150 in the first embodiment, which are not repeated here.
Embodiment III
Referring to fig. 12, the present embodiment discloses an image processing apparatus. The image processing apparatus 4 mainly includes a memory 41 and a processor 42.
The memory 41 serves as a computer-readable storage medium and is mainly used for storing a program, where the program may be the program code corresponding to the vertex registration method in the first embodiment.
The processor 42 is connected to the memory 41 and is used for executing the program stored in the memory 41 so as to implement the vertex registration method. For the functions implemented by the processor 42, reference may be made to the processing unit 33 in the second embodiment, which are not repeated here.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, such as a read-only memory, a random access memory, a magnetic disk, an optical disk, or a hard disk, and the above functions are realized when the program is executed by a computer. For example, the program may be stored in a memory of the device, and all or part of the functions described above are implemented when the program in the memory is executed by the processor. In addition, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and downloaded or copied to a memory of a local device, or used to update the version of the system of the local device; all or part of the functions of the above embodiments are likewise implemented when the program in that memory is executed by a processor.
The present invention has been described above with reference to specific examples, which are merely intended to aid understanding of the invention and are not intended to limit it. A person skilled in the art to which the invention pertains may make several simple deductions, modifications, or substitutions according to the idea of the invention.

Claims (11)

1. An image detection method, comprising:
acquiring a vertex detection model of a target object; the vertex detection model is configured to be a training result of at least one registered sample image to a convolutional neural network, the registered sample image contains the target object, each vertex of the target object is registered in advance, and the vertex is used for representing a local feature of the object surface;
detecting local features in an image to be detected of the target object by using the vertex detection model, so as to obtain a plurality of vertexes of the target object;
outputting a plurality of vertexes on the target object; each vertex has one or more of a location coordinate, a category, a bounding rectangle, and an envelope rectangle.
2. The image inspection method of claim 1, wherein the process of configuring the vertex inspection model comprises:
constructing the convolutional neural network based on deep learning;
training the convolutional neural network by utilizing at least one registered sample image until a loss function corresponding to the convolutional neural network is converged;
using the trained convolutional neural network as a vertex detection model;
each local feature on the target object in the registered sample image is characterized by a vertex, and the registered sample image includes position information, angle information and category information of all vertices on the target object.
3. The image detection method of claim 2, wherein the acquiring of the registered sample image comprises:
acquiring a standard template of a standard object corresponding to the target object and an information search range of the standard template; the standard template comprises position information, angle information and category information of all vertexes on the standard object, and each vertex is used for representing a local feature of the surface of the object; the information search range is used for setting detection ranges of angles, positions and distance scaling scales;
obtaining at least one pre-registered sample image in an image dataset with respect to the target object; the sample image before registration comprises position information, angle information and category information of a part of marked vertexes on the target object;
matching part of the marked vertexes in the sample image before registration within the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result;
calculating a transformation relation of a target object in the sample image before registration relative to a standard object in the standard template according to the vertex matching result;
deducing the position information, the angle information and the category information of other unmarked vertexes in the sample image before registration according to the transformation relation;
and registering each vertex of the target object in the sample image before registration by using part of the labeled vertexes in the sample image before registration and other unlabeled vertexes in the sample image before registration to obtain the sample image after registration.
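Once a vertex matching result is available, the transformation of claim 3 can be estimated and then used to infer the unmarked vertexes. The sketch below assumes a 2-D similarity transform (rotation, uniform scale, translation) fitted in complex form from two point correspondences; the function names and the two-point estimator are illustrative assumptions, not the patent's prescribed solver.

```python
import cmath

def fit_similarity(src, dst):
    """Fit z -> a*z + b, the complex form of a 2-D similarity transform,
    from two point correspondences src[i] -> dst[i] ((x, y) tuples)."""
    s0, s1 = (complex(*p) for p in src)
    d0, d1 = (complex(*p) for p in dst)
    a = (d1 - d0) / (s1 - s0)   # rotation angle = cmath.phase(a), scale = abs(a)
    b = d0 - a * s0             # translation
    return a, b

def apply_similarity(a, b, pt):
    """Project a template point into the sample image, e.g. to infer the
    position of an unmarked vertex."""
    z = a * complex(*pt) + b
    return (z.real, z.imag)
```

Under these assumptions, the angle information of an inferred vertex would be offset by `cmath.phase(a)` and its distances scaled by `abs(a)`, matching the angle, position, and distance-scale components of the information search range.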
4. The image detection method of claim 3, wherein the matching of some labeled vertexes in the sample image before registration within the information search range of the standard template through a preset graph matching algorithm to obtain a vertex matching result comprises:
forming a set of vertexes to be detected by using part of labeled vertexes in the sample image before registration, calculating a position change relation between any two vertexes in the set of vertexes to be detected and any two vertexes in the standard template, and constructing a connection relation between the vertexes in the set of vertexes to be detected;
comparing the connection relations between each vertex and the other vertexes in the vertex set to be detected, adding the vertexes whose connection relations meet a preset screening condition into a vertex set X_h, and determining a change relation δ̂;
comparing the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ̂, and determining, through voting, a plurality of vertexes in the vertex set X_h that match the standard template;
and obtaining the vertex matching result by utilizing the corresponding relation between the matched plurality of vertexes and the corresponding vertexes in the standard template.
5. The image detection method according to claim 3, wherein the acquiring of the standard template of the standard object corresponding to the target object and the information search range of the standard template includes:
acquiring a standard image of a standard object corresponding to the target object, acquiring labeling information of all vertexes on the standard object in the standard image, and generating the standard template according to the labeling information of all vertexes on the standard object; the labeling information comprises position information, angle information and category information of each vertex on the standard object;
acquiring a reference direction and a rotation center point configured for the standard template, and the maximum variations of the angle, the position and the distance scaling scale configured for the standard template; and setting the detection ranges of the angle, the position and the distance scaling scale according to the configured reference direction, rotation center point and maximum variations, thereby forming the information search range of the standard template.
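The information search range of claim 5 can be pictured as a small container of the configured detection ranges. `SearchRange` and its field names are hypothetical; the patent only specifies that the configured reference direction, rotation center point, and maximum variations of angle, position, and distance scale bound the detection ranges.

```python
from dataclasses import dataclass

@dataclass
class SearchRange:
    """Detection ranges derived from the configured maximum variations."""
    max_angle: float   # maximum deviation of the angle (degrees)
    max_shift: float   # maximum |dx| and |dy| deviation of the position
    max_scale: float   # maximum deviation of the distance scale from 1.0

    def contains(self, d_angle, dx, dy, scale):
        """True when a candidate change lies inside every detection range."""
        return (abs(d_angle) <= self.max_angle
                and abs(dx) <= self.max_shift
                and abs(dy) <= self.max_shift
                and abs(scale - 1.0) <= self.max_scale)
```

Such a check is where a candidate change relation would be accepted or rejected during matching.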
6. A vertex registration method, comprising:
acquiring a standard template of a standard object corresponding to a target object, wherein the standard template comprises position information, angle information and category information of all vertexes on the standard object, and the vertexes are used for representing a local feature of the surface of the object; acquiring at least one sample image in an image dataset with respect to the target object; the sample image comprises position information, angle information and category information of a part of marked vertexes on the target object;
forming a vertex set to be detected by using part of the marked vertexes in the sample image, adding the vertexes meeting a preset screening condition in the vertex set to be detected into a vertex set X_h, and determining, from the vertex set X_h, a plurality of vertexes matching the standard template to form a vertex matching result;
calculating a transformation relation of a target object in the sample image relative to a standard object in the standard template according to the vertex matching result;
deducing the position information, the angle information and the category information of other unmarked vertexes in the sample image according to the transformation relation;
and registering each vertex of the target object in the sample image by using the part of labeled vertexes in the sample image and the rest of unlabeled vertexes in the sample image.
7. The vertex registration method according to claim 6, wherein the adding of the vertexes meeting the preset screening condition in the vertex set to be detected into the vertex set X_h, and the determining, from the vertex set X_h, of a plurality of vertexes matching the standard template to form a vertex matching result comprise:
calculating the position change relationship between any two vertexes in the vertex set to be detected and any two vertexes in the standard template, and constructing the connection relationship between the vertexes in the vertex set to be detected;
comparing the connection relations between each vertex and the other vertexes in the vertex set to be detected, adding the vertexes whose connection relations meet the preset screening condition into the vertex set X_h, and determining a change relation δ̂;
comparing the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ̂, and determining, through a voting process, a plurality of vertexes in the vertex set X_h that match the standard template;
and obtaining the vertex matching result by utilizing the corresponding relation between the matched plurality of vertexes and the corresponding vertexes in the standard template.
8. The vertex registration method according to claim 7, wherein the calculating of the variation relationship of the positions between any two vertices in the set of vertices to be detected and any two vertices in the standard template, and the constructing of the connection relationship between the vertices in the set of vertices to be detected include:
for any two vertexes k and l in the vertex set to be detected and any two vertexes i and j in the standard template, calculating the positional relation of the vertex k and the vertex l, expressed as β_kl, and calculating the positional relation of the vertex i and the vertex j, expressed as β_ij;
if the category information of the vertex i and the vertex k is the same and the category information of the vertex j and the vertex l is the same, calculating the change relation of the positional relation β_ij relative to the positional relation β_kl, expressed as δ_ij-kl; the change relation is used for characterizing the angle offset and the distance scaling of the relative transformation;
judging whether the change relation δ_ij-kl is within the information search range of the standard template, and if so, constructing the connection relation between the vertex k and the vertex l in the vertex set to be detected, expressed as γ_kl; the information search range is provided with detection ranges of the angle α, the coordinate x, the coordinate y and the distance scaling scale.
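A compact illustration of claim 8's quantities, under stated assumptions: the positional relation β of a vertex pair is encoded here as (distance, direction angle), and the change relation δ as (angle offset, distance scaling); the patent fixes what δ characterizes but not this exact encoding, and the function names are invented for the sketch.

```python
import math

def beta(p, q):
    """Positional relation of a vertex pair (p, q): distance and direction."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def delta(beta_ij, beta_kl):
    """Change relation of beta_ij relative to beta_kl: the angle offset
    and the distance scaling of the relative transformation."""
    (dist_ij, ang_ij), (dist_kl, ang_kl) = beta_ij, beta_kl
    return ang_ij - ang_kl, dist_ij / dist_kl
```

Under these assumptions, a connection relation γ_kl would be constructed only when this δ falls inside the information search range.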
9. The vertex registration method according to claim 7, wherein the comparing of the connection relations between each vertex and the other vertexes in the vertex set to be detected, the adding of the vertexes whose connection relations meet the preset screening condition into the vertex set X_h, and the determining of the change relation δ̂ comprise:
judging whether connection relations exist between vertexes in the vertex set to be detected, and storing all vertexes in the vertex set to be detected into a stack S_t; popping each vertex in turn from the stack S_t and recording it as the vertex h, establishing a vertex set X_h in turn, and adding the vertex h into the vertex set X_h;
traversing the other vertexes having connection relations with the vertex h in the vertex set to be detected, determining one connection relation through voting, and obtaining the corresponding change relation δ̂;
forming a candidate point set P by using the other vertexes having connection relations with the vertex h;
for each vertex o in the candidate point set P, acquiring the connection relations and the corresponding change relations between the vertex o and each vertex in the vertex set to be detected, and if the change relation corresponding to the connection relation between the vertex o and any vertex in the vertex set X_h is judged to be equal to the change relation δ̂, temporarily adding the vertex o into the vertex set X_h; then acquiring the newly added internal connection relations l_in between interior vertexes of the vertex set X_h and the newly added external connection relations l_out between each interior vertex and the vertexes outside the vertex set X_h, and judging whether the number of the newly added internal connection relations l_in is smaller than the number of the newly added external connection relations l_out: if so, removing the vertex o from the vertex set X_h; if not, adding the vertexes associated with the vertex o through the newly added external connection relations l_out into the candidate point set P;
traversing all vertexes in the candidate point set P, updating the vertex set X_h, and outputting the finally formed vertex set X_h and the determined change relation δ̂.
10. The vertex registration method according to claim 7, wherein the comparing of the change relation corresponding to each vertex in the vertex set X_h with the determined change relation δ̂, and the determining, through a voting process, of the plurality of vertexes in the vertex set X_h that match the standard template comprise:
acquiring the connection relations and the corresponding change relations between the vertex h and the remaining vertexes in the vertex set X_h;
if the change relation corresponding to the connection relation between the vertex h and any of the remaining vertexes is equal to the change relation δ̂, marking the connection relation as a first value;
counting and voting the vertexes whose connection relations are marked as the first value to obtain a voting result of each vertex in the vertex set, and determining, according to the voting results, a plurality of vertexes matching the standard template.
11. A computer-readable storage medium, characterized in that the medium has stored thereon a program executable by a processor to implement the image detection method of any one of claims 1-5 or to implement the vertex registration method of any one of claims 6-10.
CN202210904918.3A 2021-07-01 2021-07-01 Image detection method, vertex registration method and storage medium Pending CN115457092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210904918.3A CN115457092A (en) 2021-07-01 2021-07-01 Image detection method, vertex registration method and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110740635.5A CN113344996B (en) 2021-07-01 2021-07-01 Vertex registration method and device based on graph matching and storage medium
CN202210904918.3A CN115457092A (en) 2021-07-01 2021-07-01 Image detection method, vertex registration method and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110740635.5A Division CN113344996B (en) 2021-07-01 2021-07-01 Vertex registration method and device based on graph matching and storage medium

Publications (1)

Publication Number Publication Date
CN115457092A true CN115457092A (en) 2022-12-09

Family

ID=77481921

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210904918.3A Pending CN115457092A (en) 2021-07-01 2021-07-01 Image detection method, vertex registration method and storage medium
CN202110740635.5A Active CN113344996B (en) 2021-07-01 2021-07-01 Vertex registration method and device based on graph matching and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110740635.5A Active CN113344996B (en) 2021-07-01 2021-07-01 Vertex registration method and device based on graph matching and storage medium

Country Status (1)

Country Link
CN (2) CN115457092A (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203342A (en) * 2016-07-01 2016-12-07 广东技术师范学院 Target identification method based on multi-angle local feature coupling
JP2018139086A (en) * 2017-02-24 2018-09-06 三菱電機株式会社 Correlation tracking device, correlation tracking method and correlation tracking program
CN109977833B (en) * 2019-03-19 2021-08-13 网易(杭州)网络有限公司 Object tracking method, object tracking device, storage medium, and electronic apparatus
CN110532897B (en) * 2019-08-07 2022-01-04 北京科技大学 Method and device for recognizing image of part
CN112232420A (en) * 2020-10-19 2021-01-15 深圳市华汉伟业科技有限公司 Image labeling method, target detection method and device and storage medium

Also Published As

Publication number Publication date
CN113344996A (en) 2021-09-03
CN113344996B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN106599830B (en) Face key point positioning method and device
CN109002562B (en) Instrument recognition model training method and device and instrument recognition method and device
CN107463946B (en) Commodity type detection method combining template matching and deep learning
CN109118473B (en) Angular point detection method based on neural network, storage medium and image processing system
CN113378976B (en) Target detection method based on characteristic vertex combination and readable storage medium
CN105551022B (en) A kind of image error matching inspection method based on shape Interactive matrix
US20090028442A1 (en) Method And Apparatus For Determining Similarity Between Surfaces
CN110688947A (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
CN111192194B (en) Panoramic image stitching method for curtain wall building facade
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
WO2017107865A1 (en) Image retrieval system, server, database, and related method
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
US20200005078A1 (en) Content aware forensic detection of image manipulations
CN114863464B (en) Second-order identification method for PID drawing picture information
JP2018106618A (en) Image data classifying apparatus, object detection apparatus, and program therefor
CN113420848A (en) Neural network model training method and device and gesture recognition method and device
CN113255702B (en) Target detection method and target detection device based on graph matching
CN109840529B (en) Image matching method based on local sensitivity confidence evaluation
WO2017107866A1 (en) Image retrieval server and system, related retrieval and troubleshooting method
JP7178803B2 (en) Information processing device, information processing device control method and program
CN113344996B (en) Vertex registration method and device based on graph matching and storage medium
JP2001143073A (en) Method for deciding position and attitude of object
CN111079208B (en) Particle swarm algorithm-based CAD model surface corresponding relation identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination