CN116704224B - Marker identification method and identification device based on deep learning - Google Patents
- Publication number
- CN116704224B (application number CN202310990912.7A)
- Authority
- CN
- China
- Prior art keywords: comparison, marker, image, features, point location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/757—Matching configurations of points or features
- G06N3/08—Neural network learning methods
- G06T7/11—Region-based segmentation
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/90—Determination of colour characteristics
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/82—Image or video recognition using neural networks
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application discloses a marker identification method and device based on deep learning, in the technical field of image identification and positioning. The identification method is configured with a deep learning database, which stores a plurality of comparison images output by a management terminal. The identification method comprises the following steps: extracting comparison features of the comparison images and marking the comparison images based on those features; obtaining an image to be identified, extracting regional image features from it, and marking each regional image feature whose comparison result against the comparison features falls within a first similar range as a marker region. Arranging markers improves the accuracy with which target point locations are acquired; performing region identification before point location identification reduces the amount of image data to be processed and improves the accuracy of point location screening, thereby addressing the low efficiency and inaccurate positioning of existing identification methods.
Description
Technical Field
The application relates to the technical field of image recognition and positioning, in particular to a marker recognition method and device based on deep learning.
Background
Image recognition, the use of a computer to process, analyze and understand images so as to recognize targets and objects in various modes, is a practical application of deep learning algorithms. The difficulty of extracting features from an image varies with the specific application field; in the oral cavity field, for example, the treatment area inside the oral cavity must be recognized and positioned accurately.
In practical applications, however, the background of an image shot by the camera is cluttered, and locating the marks of a designated area is difficult. Normally, features of the designated area are extracted from a picture shot in advance, and point locations are compared in subsequent comparisons. But the internal structure of the oral cavity changes continuously during actual shooting, so the structure at a preset point location changes, making the final recognition inaccurate or even impossible. At the same time, if the preset point location features are set too simply, too many features are extracted during actual recognition and the data processing load becomes excessive.
Disclosure of Invention
The application aims to solve, at least to some extent, one of the technical problems in the prior art. Arranging a marker improves the accuracy with which target point locations are acquired, and performing region identification before point location identification reduces the amount of image data to be processed and improves the accuracy of point location screening, thereby addressing the low efficiency and inaccurate positioning of existing identification methods.
To achieve the above object, a first aspect of the present application provides a marker identification method based on deep learning. The identification method is configured with a deep learning database, which stores a plurality of comparison images output by a management terminal. The identification method comprises the following steps: extracting comparison features of the comparison images and marking the comparison images based on those features;
obtaining an image to be identified, extracting regional image features from it, and marking each regional image feature whose comparison result against the comparison features falls within a first similar range as a marker region;
performing point location feature comparison on the marker regions, marking each point location feature whose comparison result against the comparison features falls within a second similar range as a marker point, and eliminating marker regions that contain no marker point;
determining the coordinates of the marker points;
and marking each point location feature whose comparison result against the comparison features falls within a third similar range as a point location to be updated, and outputting the points to be updated to the management terminal.
Extracting the comparison features of a comparison image and marking the comparison image based on them comprises: performing grayscale processing on the comparison image and dividing it into a blank area and a characteristic area; the blank area is set to white, the characteristic area is set to black, and the blank area completely surrounds the characteristic area;
setting a first similar range: the first similar range lies between a first quantity threshold and a second quantity threshold, inclusive of both, where the first quantity threshold is smaller than the second quantity threshold;
setting the comparison features of the first similar range, which comprises: marking adjacent black-and-white areas and setting them as an area to be screened; obtaining the number of independent white areas and the number of independent black areas in the area to be screened, and adding the two numbers to obtain a first similar number.
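The first-similar-number computation described above amounts to counting connected regions in a binarized window. A minimal sketch, assuming 4-connectivity and a 0/1 grid (the patent specifies neither the connectivity rule nor the data representation):

```python
from collections import deque

def first_similar_number(grid):
    """Count the independent white (0) and black (1) regions in a
    binarized area-to-be-screened and return their sum.
    4-connectivity is assumed; the patent does not specify one."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r0, c0):
        # Breadth-first flood fill over same-coloured neighbours.
        colour = grid[r0][c0]
        q = deque([(r0, c0)])
        seen[r0][c0] = True
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and not seen[nr][nc] and grid[nr][nc] == colour:
                    seen[nr][nc] = True
                    q.append((nr, nc))

    count = 0
    for r in range(rows):
        for c in range(cols):
            if not seen[r][c]:
                count += 1          # one new independent region found
                flood(r, c)
    return count
```

For a single marker (one white background region surrounding two black sectors) this yields 1 + 2 = 3, matching the count used later in the description.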
Extracting the comparison features of a comparison image and marking the comparison image based on them further comprises: the characteristic area consists of two sector regions arranged with central symmetry and sharing the same center point, and a first sector angle is set for the sector regions;
setting the comparison features of the second similar range, which comprises: extracting each independent black area in the area to be screened, obtaining the outline of the black area, and setting it as a black comparison outline;
segmenting the black comparison outline section by section into line segments and curves; when the segmented outline contains two groups of line segments and one group of curves, obtaining the included angle between the two groups of line segments and setting it as a comparison included angle;
extracting the comparison included angles of the black comparison outlines in the area to be screened one by one, and setting the number of comparison included angles as the point location comparison number;
when the point location comparison number is greater than 1, obtaining the distance between the vertices of two adjacent comparison included angles; when that distance is zero, marking a point location feature;
averaging the comparison included angles on the two sides of a point location feature to obtain a comparison reference angle, obtaining the difference between the comparison reference angle and the first sector angle, and setting it as the point location comparison difference;
setting a second similar range: the second similar range lies between a first angle difference and a second angle difference, inclusive of both, where the first angle difference is smaller than the second angle difference.
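The angle steps above can be sketched as two small helpers. The function names are hypothetical, the 45-degree default mirrors the first sector angle given later in the description, and taking the absolute value of the difference is an assumption, since the patent does not say whether the point location comparison difference is signed:

```python
def point_comparison_difference(angle_left, angle_right, first_sector_angle=45.0):
    """Average the comparison included angles on both sides of a point
    location feature (the comparison reference angle), then return its
    absolute difference from the first sector angle."""
    comparison_reference_angle = (angle_left + angle_right) / 2.0
    return abs(comparison_reference_angle - first_sector_angle)

def in_second_similar_range(diff, first_angle_diff=30.0, second_angle_diff=60.0):
    """The second similar range includes both angle differences."""
    return first_angle_diff <= diff <= second_angle_diff
```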
Extracting regional image features from the image to be identified and marking the regional image features whose comparison result falls within the first similar range as marker regions comprises: performing grayscale processing on the image to be identified, setting a region screening frame, and processing regional image features through the region screening frame;
the first similar number within the region image is obtained, and the regional image feature is marked as a marker region when the first similar number falls within the first similar range.
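The screening step can be sketched as tiling the grayscale image with non-overlapping frames and keeping those whose first similar number falls in the first similar range. The tiling scheme and the injected `count_fn` are assumptions; the patent does not state how the frame traverses the image:

```python
def screen_regions(binary_image, frame_h, frame_w, count_fn, low=2, high=13):
    """Tile a binarized image with non-overlapping region screening
    frames and return the (top, left) corner of every frame whose
    first similar number falls in the inclusive range [low, high].
    `count_fn` plays the role of the independent-region counter; the
    default thresholds mirror the 2 and 13 given in the description."""
    rows, cols = len(binary_image), len(binary_image[0])
    marker_regions = []
    for top in range(0, rows - frame_h + 1, frame_h):
        for left in range(0, cols - frame_w + 1, frame_w):
            window = [row[left:left + frame_w]
                      for row in binary_image[top:top + frame_h]]
            if low <= count_fn(window) <= high:
                marker_regions.append((top, left))
    return marker_regions
```

In practice, a counter of independent black and white regions, as described in the area learning strategy, would be passed as `count_fn`.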
Performing point location feature comparison on a marker region and marking the point location features whose comparison result falls within the second similar range as marker points comprises the following steps: extracting the point location features of the marker region to obtain the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first sector angle to obtain the point location comparison difference, and setting the point location feature as a marker point when the point location comparison difference falls within the second similar range;
when a plurality of marker points exist in the marker region, the region screening frame is moved so that only one marker point remains inside it.
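One hypothetical way to realize the frame-moving step is to re-centre the frame on each candidate point and keep the first placement that isolates exactly one marker point; the patent does not specify the movement rule, so this is only a sketch:

```python
def isolate_marker_point(points, frame_h, frame_w):
    """Try centring the region screening frame on each detected marker
    point (given as (row, col) pairs) and return the (top, left) corner
    of the first frame that contains exactly one point. Returns None
    when points are too close for this simple strategy to separate."""
    for (py, px) in points:
        top, left = py - frame_h // 2, px - frame_w // 2
        inside = [(y, x) for (y, x) in points
                  if top <= y < top + frame_h and left <= x < left + frame_w]
        if inside == [(py, px)]:   # exactly one marker point in the frame
            return (top, left)
    return None
```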
Determining the coordinates of the marker points comprises: establishing a plane coordinate system, placing the shooting area into it, obtaining the coordinates of the center point of the shooting area, and placing the image to be identified into the plane coordinate system according to those center point coordinates;
the coordinates of the marker points are then determined within the plane coordinate system.
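A minimal sketch of the coordinate placement, assuming the image is aligned so that its own centre coincides with the centre point of the shooting area (the patent states only that the image is placed according to those centre coordinates):

```python
def to_plane_coordinates(pixel, image_size, shooting_center):
    """Map a pixel position in the image to be identified into the
    plane coordinate system by aligning the image centre with the
    centre point coordinates of the shooting area."""
    (px, py), (w, h), (cx, cy) = pixel, image_size, shooting_center
    return (cx + (px - w / 2.0), cy + (py - h / 2.0))
```

Under this convention, every image from every shooting period lands in the same coordinate system, so marker point coordinates from different frames can be compared directly.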
Marking the point location features whose comparison result against the comparison features falls within a third similar range as points to be updated comprises: setting a third similar range: the third similar range lies between the second angle difference and a third angle difference, exclusive of both;
extracting the point location features of the marker region to obtain the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first sector angle to obtain the point location comparison difference, and marking the point location feature as a feature to be updated when the point location comparison difference falls within the third similar range.
In a second aspect, the application further provides a marker identification device based on deep learning. The identification device is in data connection with the management terminal and comprises a deep learning module, an image acquisition module and an identification module; the deep learning module and the image acquisition module are each in data connection with the identification module. The deep learning module comprises a deep learning database and a deep learning unit; the deep learning database stores a plurality of comparison images output by the management terminal, and the deep learning unit extracts the comparison features of the comparison images and marks the comparison images based on those features;
the image acquisition module acquires the image to be identified and outputs it to the identification module;
the identification module comprises a region identification unit, a point location identification unit, a coordinate determination unit and an updating unit. The region identification unit is configured with a region identification strategy: extracting regional image features from the image to be identified and marking those whose comparison result falls within the first similar range as marker regions. The point location identification unit is configured with a point location identification strategy: performing point location feature comparison on the marker regions, marking point location features whose comparison result against the comparison features falls within the second similar range as marker points, and eliminating marker regions without marker points. The coordinate determination unit determines the coordinates of the marker points. The updating unit is configured with an updating strategy: marking point location features whose comparison result against the comparison features falls within the third similar range as points to be updated, and outputting the points to be updated to the management terminal.
The beneficial effects of the application are as follows: the application extracts the comparison features of the comparison images and marks the comparison images based on those features; this design performs deep learning on the preset comparison images and improves comparison efficiency in subsequent feature extraction;
the application obtains the image to be identified, extracts regional image features from it, and marks those whose comparison result against the comparison features falls within the first similar range as marker regions; it then performs point location feature comparison on the marker regions, marks point location features whose comparison result falls within the second similar range as marker points, and eliminates marker regions without marker points. Because point location comparison requires a larger amount of data, extracting candidate regions first and performing secondary point location comparison only on successfully matched regions reduces the data processing load of the comparison process. The coordinates of the marker points are then determined, which improves the accuracy of point location coordinate positioning;
the application marks point location features whose comparison result against the comparison features falls within the third similar range as points to be updated and outputs them to the management terminal.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a flow chart of steps of an identification method of the present application;
FIG. 2 is a schematic block diagram of an identification device of the present application;
FIG. 3 is a schematic view of a plurality of alignment images according to the present application;
fig. 4 is a schematic diagram of a region screening frame according to the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 2-4, the present application provides a marker recognition device based on deep learning, which can improve the accuracy of target point location acquisition by setting markers, and can reduce the processing amount of image data and improve the accuracy of point location screening by performing region recognition and then point location recognition in the recognition process.
Specifically, the identification device is in data connection with the management terminal and comprises a deep learning module, an image acquisition module and an identification module. The management terminal is a manually monitored terminal, through which problems arising in the identification process can be corrected in a timely manner.
The deep learning module and the image acquisition module are each in data connection with the identification module; the deep learning module provides the identification module with deep learning templates, and the image acquisition module provides it with images acquired in real time.
Referring to fig. 3, the deep learning module comprises a deep learning database and a deep learning unit; the deep learning database stores a plurality of comparison images output by the management terminal. In practical applications, the management terminal can set corresponding comparison images according to the marker structure actually in use, and updated comparison images can be recognized after deep learning. The deep learning unit extracts the comparison features of the comparison images and marks the comparison images based on those features. The deep learning unit is configured with an area learning strategy, which comprises: performing grayscale processing on the comparison image and dividing it into a blank area and a characteristic area; the blank area is set to white, the characteristic area is set to black, and the blank area completely surrounds the characteristic area;
setting a first similar range: the first similar range lies between a first quantity threshold and a second quantity threshold, inclusive of both, where the first quantity threshold is smaller than the second quantity threshold;
setting the comparison features of the first similar range, which comprises: marking adjacent black-and-white areas and setting them as an area to be screened; obtaining the number of independent white areas and the number of independent black areas in the area to be screened, and adding the two numbers to obtain a first similar number. In general, the interval of the first similar range is set according to how the region screening frame is configured during identification, and the size of the region screening frame can be chosen according to the required data processing accuracy: if the accuracy requirement is low, the frame can be enlarged to speed up the preliminary screening of the whole area. The frame is configured so that at most two comparison images fall into one region screening frame. When a plurality of adjacent comparison images are present, the maximum first similar number in the image acquired through the frame is 4 white areas plus 8 black areas, i.e. 12; when only one marker image is present, it is 1 white area plus 2 black areas, i.e. 3. In this specific configuration, the first quantity threshold is therefore set to 2 and the second quantity threshold to 13, so the first similar range is between 2 and 13.
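The threshold choice in the paragraph above reduces to simple arithmetic on the region counts:

```python
# At most two adjacent comparison images in one screening frame:
max_first_similar = 4 + 8   # 4 white areas + 8 black areas = 12
# A single marker image in the frame:
min_first_similar = 1 + 2   # 1 white area + 2 black areas = 3
# The stated thresholds bracket these counts:
first_quantity_threshold = 2
second_quantity_threshold = 13
```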
The deep learning unit is further configured with a point location learning strategy, which comprises: the characteristic area consists of two sector regions arranged with central symmetry and sharing the same center point, and a first sector angle is set for the sector regions; in this specific configuration, the first sector angle is set to 45 degrees;
setting the comparison features of the second similar range, which comprises: extracting each independent black area in the area to be screened, obtaining the outline of the black area, and setting it as a black comparison outline;
segmenting the black comparison outline section by section into line segments and curves; when the segmented outline contains two groups of line segments and one group of curves, obtaining the included angle between the two groups of line segments and setting it as a comparison included angle;
extracting the comparison included angles of the black comparison outlines in the area to be screened one by one, and setting the number of comparison included angles as the point location comparison number;
when the point location comparison number is greater than 1, obtaining the distance between the vertices of two adjacent comparison included angles; when that distance is zero, marking a point location feature;
averaging the comparison included angles on the two sides of a point location feature to obtain a comparison reference angle, obtaining the difference between the comparison reference angle and the first sector angle, and setting it as the point location comparison difference;
setting a second similar range: the second similar range lies between a first angle difference and a second angle difference, inclusive of both, where the first angle difference is smaller than the second angle difference. With reference to the setting of the first sector angle, the first angle difference is set to 30 degrees and the second angle difference to 60 degrees, so the second similar range is between 30 and 60, inclusive of 30 and 60.
The image acquisition module acquires the image to be identified and outputs it to the identification module; in this implementation the image acquisition module is a camera, through which the image to be identified is collected.
The identification module comprises a region identification unit, a point location identification unit, a coordinate determination unit and an updating unit. The region identification unit is configured with a region identification strategy, which comprises: extracting regional image features from the image to be identified and marking those whose comparison result falls within the first similar range as marker regions. Referring to fig. 4, the region identification strategy further comprises: performing grayscale processing on the image to be identified, setting a region screening frame, and processing regional image features through the region screening frame;
the first similar number within the region image is obtained, and the regional image feature is marked as a marker region when the first similar number falls within the first similar range. Through this comparison, a number of marker regions can be acquired first, which reduces the base data amount in the point location identification process and improves overall data processing efficiency.
The point location identification unit is configured with a point location identification strategy, which comprises: performing point location feature comparison on the marker regions, marking point location features whose comparison result against the comparison features falls within the second similar range as marker points, and eliminating marker regions without marker points. The point location identification strategy further comprises: extracting the point location features of the marker region to obtain the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first sector angle to obtain the point location comparison difference, and setting the point location feature as a marker point when the point location comparison difference falls within the second similar range;
when a plurality of marker points exist in the marker region, the region screening frame is moved so that only one marker point remains inside it. If a plurality of marker points exist in one region screening frame, a plurality of markers are present within that frame, so the frame needs to be moved to a position where only one marker point exists; this ensures uniqueness in the subsequent determination of point coordinates.
The coordinate determination unit determines the coordinates of the marker points. It is configured with a coordinate determination strategy, which comprises: establishing a plane coordinate system, placing the shooting area into it, obtaining the coordinates of the center point of the shooting area, and placing the image to be identified into the plane coordinate system according to those center point coordinates; the coordinates of the marker points are then determined within the plane coordinate system. By placing all images into the same overall coordinate system, the coordinate positions of images from different shooting periods can be unified and made to correspond, which improves the accuracy of the positions associated with the marker points.
The updating unit is configured with an updating policy comprising: marking point location features whose comparison result with the comparison features falls within a third similar range as points to be updated, and outputting the points to be updated to the management terminal; the updating policy further comprises setting the third similar range: the third similar range is between the second angle difference value and the third angle difference value, excluding both endpoints;
extracting the point location features of the marker region to obtain the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first fan-shaped angle to obtain the point location comparison difference value, and marking the point location feature as a point to be updated when the point location comparison difference value belongs to the third similar range. In one implementation, the third angle difference value is set to 90 degrees, so the third similar range is between 60 and 90 degrees, excluding 60 and 90; regions that resemble marker point features during identification are output to the management terminal, and markers whose placement has drifted can be recalibrated after screening by management personnel.
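The two range tests above can be sketched in Python. This is an illustrative sketch, not the patented implementation: the 60- and 90-degree thresholds follow the worked example in the text, while the 0-degree lower bound of the second similar range is an assumption.

```python
def classify_point(diff, first_diff=0.0, second_diff=60.0, third_diff=90.0):
    """Classify a point location comparison difference (in degrees).

    Second similar range: inclusive of both endpoints -> marker point.
    Third similar range: exclusive of both endpoints -> point to be updated,
    matching the 60-90 degree example in the text. The 0-degree lower bound
    of the second range is an assumption made for this sketch.
    """
    if first_diff <= diff <= second_diff:
        return "marker"        # set the point location feature as a marker point
    if second_diff < diff < third_diff:
        return "to_update"     # output to the management terminal for review
    return "reject"            # eliminate the region
```

With these defaults, a difference of 30 degrees is accepted as a marker point, 75 degrees is flagged for review by the management terminal, and anything at or beyond 90 degrees is rejected.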
Referring to fig. 1, the present application further provides a marker identification method based on deep learning, where the identification method is configured with a deep learning database, and the deep learning database is used to store a plurality of comparison images output by a management terminal; the identification method comprises the following steps:
S1, extracting comparison features of the comparison images, and marking the comparison images based on the comparison features; step S1 further comprises:
step S111, carrying out gray processing on the comparison image, dividing the comparison image into a blank area and a characteristic area, setting the blank area to white and the characteristic area to black, with the blank area completely surrounding the characteristic area;
step S112, setting a first similar range: the first similar range is between the first quantity threshold value and the second quantity threshold value, inclusive of both, wherein the first quantity threshold value is smaller than the second quantity threshold value;
step S113, setting the comparison features of the first similar range; the comparison features of the first similar range include: step S1131, marking areas where black and white are adjacent, and setting them as areas to be screened;
step S1132, obtaining the number of independent white areas and the number of independent black areas in the area to be screened, and adding the number of independent white areas and the number of independent black areas to obtain a first similar number;
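A minimal sketch of the first-similar-number computation of steps S1131-S1132, counting independent regions in a binarized image. The 0 = white / 1 = black grid encoding and 4-connectivity are assumptions; the patent fixes neither.

```python
from collections import deque

def count_regions(grid, value):
    """Count independent (4-connected) regions of `value` in a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == value and not seen[r][c]:
                count += 1                      # found a new independent region
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the whole region
                    y, x = queue.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == value and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def first_similar_number(grid):
    """Step S1132: independent white regions plus independent black regions."""
    return count_regions(grid, 0) + count_regions(grid, 1)

# Toy area to be screened: one 2x2 black blob plus one isolated black pixel.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
```

Here the surrounding white is a single region and the black pixels form two regions, so the first similar number is 3; that value is then tested against the inclusive first similar range of step S112.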
step S1 further includes:
step S121, the characteristic area comprises two fan-shaped areas arranged with central symmetry and sharing the same central point; a first fan-shaped angle is set for the fan-shaped areas;
step S122, setting the comparison features of a second similar range; the comparison features of the second similar range include: step S1221, extracting the independent black areas in the areas to be screened, obtaining the outline of each black area, and setting it as a black comparison outline;
step S1222, dividing the black comparison outline segment by segment into line segments and curves; when the divided black comparison outline contains two groups of line segments and one group of curves, obtaining the included angle between the two groups of line segments and setting it as a comparison included angle;
step S1223, extracting comparison included angles of black comparison outlines in the area to be screened one by one, and setting the number of the comparison included angles as point location comparison numbers;
step S1224, when the point location comparison number is larger than 1, obtaining the distance between the vertexes of two adjacent comparison included angles, and marking a point location feature when the distance between the vertexes of the two adjacent comparison included angles is zero;
step S1225, averaging the comparison included angles on the two sides of a point location feature to obtain a comparison reference angle, obtaining the difference between the comparison reference angle and the first fan-shaped angle, and setting the difference as the point location comparison difference value;
step S123, setting a second similar range: the second similar range is between the first angle difference value and the second angle difference value, and comprises a first angle difference value and a second angle difference value, wherein the first angle difference value is smaller than the second angle difference value.
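Steps S1224-S1225 and the second-range test of step S123 can be sketched as follows. The exact-zero vertex test is relaxed to a small tolerance for floating-point input, and all function names and thresholds are illustrative assumptions.

```python
import math

def is_point_location_feature(vertex_a, vertex_b, tol=1e-9):
    """S1224: two adjacent comparison included angles form a point location
    feature when the distance between their vertexes is zero (within tol)."""
    return math.dist(vertex_a, vertex_b) <= tol

def point_comparison_difference(angle_a, angle_b, first_sector_angle):
    """S1225: average the comparison included angles on the two sides of the
    point location feature, then take the absolute difference from the
    first fan-shaped angle."""
    comparison_reference_angle = (angle_a + angle_b) / 2.0
    return abs(comparison_reference_angle - first_sector_angle)

def in_second_similar_range(diff, first_angle_diff, second_angle_diff):
    """S123: the second similar range includes both endpoint values."""
    return first_angle_diff <= diff <= second_angle_diff
```

For a marker whose first fan-shaped angle is 45 degrees, included angles of 40 and 52 degrees on the two sides of a shared vertex give a comparison reference angle of 46 degrees and a point location comparison difference of 1 degree.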
S2, acquiring an image to be identified, extracting regional image features from the image to be identified, and marking the regional image features of which the comparison results with the comparison features are in a first similar range as a marker region; step S2 further includes:
step S21, carrying out gray processing on the image to be identified, setting a region screening frame, and carrying out region image feature processing through the region screening frame;
step S22, acquiring the first similar number in the region image, and marking the region image feature as a marker region when the first similar number is within the first similar range.
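Steps S21-S22 amount to a sliding-window scan of the binarized image. A sketch under stated assumptions: a square screening frame, a fixed stride, a 0 = white / 1 = black grid, and the inclusive first similar range applied per window.

```python
def region_count(grid, value):
    """Count independent (4-connected) regions of `value` in a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])
    seen, count = set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == value and (r, c) not in seen:
                count += 1
                stack = [(r, c)]
                seen.add((r, c))
                while stack:                    # flood-fill the region
                    y, x = stack.pop()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == value and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
    return count

def find_marker_regions(grid, frame, step, low, high):
    """S21-S22: slide a square screening frame over the binarized image and
    keep windows whose first similar number lies in the inclusive first range."""
    hits = []
    for top in range(0, len(grid) - frame + 1, step):
        for left in range(0, len(grid[0]) - frame + 1, step):
            window = [row[left:left + frame] for row in grid[top:top + frame]]
            n = region_count(window, 0) + region_count(window, 1)
            if low <= n <= high:
                hits.append((top, left))
    return hits

# Demo: 6x8 white image containing one 2x2 black marker blob on the left.
IMG = [[0] * 8 for _ in range(6)]
for r, c in ((1, 1), (1, 2), (2, 1), (2, 2)):
    IMG[r][c] = 1

HITS = find_marker_regions(IMG, frame=4, step=2, low=2, high=2)
```

Each window containing the blob yields one white region plus one black region (first similar number 2) and is kept; the all-white windows on the right are rejected.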
S3, performing point location feature comparison on the marker region, marking point location features whose comparison result with the comparison features falls within a second similar range as marker points, and eliminating marker regions without marker points; step S3 further comprises:
step S31, extracting the point location features of the marker region, and obtaining the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first fan-shaped angle to obtain the point location comparison difference value, and setting the point location feature as a marker point when the point location comparison difference value belongs to the second similar range;
and S32, when a plurality of marker points exist in the marker region, moving the region screening frame to enable only one marker point to exist in the region screening frame.
S4, determining coordinates of the marker points; step S4 further includes:
step S41, a plane coordinate system is established, a shooting area is placed in the plane coordinate system, the center point position coordinates of the shooting area are obtained, and an image to be identified is placed in the plane coordinate system according to the center point position coordinates of the shooting area;
and S42, determining coordinates of the marker points according to the plane coordinate system.
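Steps S41-S42 reduce to a translation into the shared plane coordinate system. A sketch, assuming the convention that the image center is anchored to the center point coordinates of the shooting area (the patent does not fix the pixel-to-plane mapping):

```python
def to_plane_coords(pixel, image_size, area_center):
    """S41: map an image pixel into the plane coordinate system by anchoring
    the image center to the center point coordinates of the shooting area."""
    (px, py), (w, h), (cx, cy) = pixel, image_size, area_center
    return (cx + px - w / 2.0, cy + py - h / 2.0)

def marker_coordinates(marker_pixels, image_size, area_center):
    """S42: plane coordinates of every marker point found in one image."""
    return [to_plane_coords(p, image_size, area_center)
            for p in marker_pixels]
```

Because every image from every shooting period is mapped through the same shooting-area center, marker points detected in different periods land at comparable plane coordinates.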
S5, marking point location features with comparison results of the comparison features in a third similar range as point locations to be updated, and outputting the point locations to be updated to the management terminal; step S5 further includes:
step S51, setting a third similar range: the third similar range is between the second angle difference value and the third angle difference value, and does not comprise the second angle difference value and the third angle difference value;
step S52, extracting the point location features of the marker region, and obtaining the comparison reference angle of each point location feature; obtaining the difference between the comparison reference angle and the first fan-shaped angle to obtain the point location comparison difference value, and marking the point location feature as a point to be updated when the point location comparison difference value belongs to the third similar range.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the couplings, direct couplings, or communication connections shown or discussed between components may be indirect couplings or communication connections through some interface, device, or unit, and may be electrical, mechanical, or of other form.
Claims (6)
1. The marker identification method based on deep learning is characterized in that the identification method is configured with a deep learning database, and the deep learning database is used for storing a plurality of comparison images output by a management terminal; the identification method comprises the following steps: extracting comparison features of the comparison images, and marking the comparison images based on the comparison features;
obtaining an image to be identified, extracting regional image features from the image to be identified, and marking the regional image features of which the comparison results with the comparison features are in a first similar range as a marker region;
performing point location feature comparison on the marker region, marking point location features, which are within a second similar range to the comparison result of the comparison features, as marker points, and eliminating the marker region without the marker points;
determining coordinates of the marker points;
marking point location features of the comparison result with the comparison features in a third similar range as point locations to be updated, and outputting the point locations to be updated to the management terminal;
extracting comparison features of the comparison image, and marking the comparison image based on the comparison features comprises: carrying out gray processing on the comparison image, dividing the comparison image into a blank area and a characteristic area, setting the blank area to white and the characteristic area to black, with the blank area completely surrounding the characteristic area;
setting a first similar range: the first similar range is between the first quantity threshold value and the second quantity threshold value, inclusive of both, wherein the first quantity threshold value is smaller than the second quantity threshold value;
setting comparison characteristics of a first similar range, wherein the comparison characteristics of the first similar range comprise: marking the adjacent areas with black and white, and setting the areas as areas to be screened; obtaining the number of independent white areas and the number of independent black areas in an area to be screened, and adding the number of the independent white areas and the number of the independent black areas to obtain a first similar number;
extracting comparison features of the comparison image, and marking the comparison image based on the comparison features further comprises: the characteristic region comprises two fan-shaped regions arranged with central symmetry and sharing the same central point, and a first fan-shaped angle is set for the fan-shaped regions;
setting the comparison feature of the second similar range, wherein the comparison feature of the second similar range comprises: extracting independent black areas in the areas to be screened to obtain the outline of the black areas, and setting the outline as a black comparison outline;
dividing the black comparison outline segment by segment into line segments and curves; when the divided black comparison outline contains two groups of line segments and one group of curves, acquiring the included angle between the two groups of line segments and setting it as a comparison included angle;
extracting comparison included angles of black comparison outlines in the area to be screened one by one, and setting the number of the comparison included angles as point location comparison numbers;
when the point position comparison number is larger than 1, the distance between the vertexes of the two adjacent comparison included angles is obtained, and when the distance between the vertexes of the two adjacent comparison included angles is zero, the point position feature is marked;
averaging comparison included angles at two sides of a point feature to obtain a comparison reference angle, obtaining a difference value between the comparison reference angle and the first fan-shaped angle, and setting the difference value as a point comparison difference value;
setting a second similar range: the second similar range is between the first angle difference value and the second angle difference value, and comprises a first angle difference value and a second angle difference value, wherein the first angle difference value is smaller than the second angle difference value.
2. The method for identifying the marker based on the deep learning according to claim 1, wherein the step of extracting the regional image features from the image to be identified and marking the regional image features of which the comparison result with the comparison features is within the first similar range as the marker region comprises the steps of: carrying out graying treatment on the image to be identified, setting a region screening frame, and carrying out region image characteristic treatment through the region screening frame;
a first similar number in the region image is acquired, and the region image feature is marked as a marker region when the first similar number is within a first similar range.
3. The method for identifying the marker based on the deep learning according to claim 2, wherein the step of comparing the point location features of the marker region and the point location features of the comparison result of the comparison features within the second similar range are marked as the marker point locations comprises the steps of: extracting point location features of the marker region to obtain comparison reference angles of the point location features; obtaining a difference value between the comparison reference angle and the first fan angle to obtain a point location comparison difference value, and setting the point location characteristic as a marker point location when the point location comparison difference value belongs to a second similar range;
when a plurality of marker points exist in the marker region, the region screening frame is moved, so that only one marker point exists in the region screening frame.
4. A method of marker identification based on deep learning as claimed in claim 3, wherein determining coordinates of marker points comprises: establishing a plane coordinate system, placing a shooting area into the plane coordinate system, acquiring the central point position coordinates of the shooting area, and placing an image to be identified into the plane coordinate system according to the central point position coordinates of the shooting area;
and determining coordinates of the marker points according to the plane coordinate system.
5. The method for identifying a marker based on deep learning according to claim 4, wherein marking the point location feature having the comparison result with the comparison feature within the third similar range as the point location to be updated comprises: setting a third similar range: the third similar range is between the second angle difference value and the third angle difference value, and does not comprise the second angle difference value and the third angle difference value;
extracting point location features of the marker region to obtain comparison reference angles of the point location features; and obtaining a difference value between the comparison reference angle and the first fan-shaped angle to obtain a point location comparison difference value, and marking the point location feature as a point to be updated when the point location comparison difference value belongs to the third similar range.
6. The identification device suitable for the marker identification method based on deep learning as claimed in any one of claims 1-5, wherein the identification device is in data connection with a management terminal, and comprises a deep learning module, an image acquisition module and an identification module; the deep learning module and the image acquisition module are each in data connection with the identification module; the deep learning module comprises a deep learning database and a deep learning unit, wherein the deep learning database is used for storing a plurality of comparison images output by the management terminal; the deep learning unit is used for extracting comparison features of the comparison images and marking the comparison images based on the comparison features;
the image acquisition module is used for acquiring an image to be identified and outputting the image to be identified to the identification module;
the identification module comprises a region identification unit, a point location identification unit, a coordinate determination unit and an updating unit, wherein the region identification unit is configured with a region identification strategy, and the region identification strategy comprises the following steps: extracting regional image features from the image to be identified, and marking the regional image features of which the comparison results are in a first similar range as a marker region; the point location identification unit is configured with a point location identification strategy, and the point location identification strategy comprises: performing point location feature comparison on the marker region, marking point location features, which are within a second similar range to the comparison result of the comparison features, as marker points, and eliminating the marker region without the marker points; the coordinate determining unit is used for determining coordinates of the marker points; the updating unit is configured with an updating policy comprising: and marking the point position characteristics of the comparison result with the comparison characteristics within a third similar range as the point position to be updated, and outputting the point position to be updated to the management terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310990912.7A CN116704224B (en) | 2023-08-08 | 2023-08-08 | Marker identification method and identification device based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116704224A CN116704224A (en) | 2023-09-05 |
CN116704224B true CN116704224B (en) | 2023-11-17 |
Family
ID=87839727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310990912.7A Active CN116704224B (en) | 2023-08-08 | 2023-08-08 | Marker identification method and identification device based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116704224B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109711416A (en) * | 2018-11-23 | 2019-05-03 | 西安天和防务技术股份有限公司 | Target identification method, device, computer equipment and storage medium |
CN109978903A (en) * | 2019-03-13 | 2019-07-05 | 浙江大华技术股份有限公司 | A kind of identification point recognition methods, device, electronic equipment and storage medium |
CN110781822A (en) * | 2019-10-25 | 2020-02-11 | 重庆大学 | SAR image target recognition method based on self-adaptive multi-azimuth dictionary pair learning |
CN112883827A (en) * | 2021-01-28 | 2021-06-01 | 腾讯科技(深圳)有限公司 | Method and device for identifying designated target in image, electronic equipment and storage medium |
CN113220928A (en) * | 2020-01-21 | 2021-08-06 | 北京达佳互联信息技术有限公司 | Image searching method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5949955B2 (en) * | 2013-01-25 | 2016-07-13 | トヨタ自動車株式会社 | Road environment recognition system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107609557B (en) | Pointer instrument reading identification method | |
CN106778823B (en) | Automatic identification method for reading of pointer instrument | |
CN105787466B (en) | A kind of fine recognition methods and system of type of vehicle | |
MX2007008363A (en) | Method for improved image segmentation. | |
CN104537367B (en) | A kind of method of calibration of VIN codes | |
CN110189341B (en) | Image segmentation model training method, image segmentation method and device | |
CN105069395B (en) | Roadmarking automatic identifying method based on Three Dimensional Ground laser scanner technique | |
CN112289416B (en) | Method for evaluating guide needle placement accuracy | |
CN111539330B (en) | Transformer substation digital display instrument identification method based on double-SVM multi-classifier | |
CN110796135A (en) | Target positioning method and device, computer equipment and computer storage medium | |
CN105913069B (en) | A kind of image-recognizing method | |
CN113379777A (en) | Shape description and retrieval method based on minimum circumscribed rectangle vertical internal distance proportion | |
CN116704224B (en) | Marker identification method and identification device based on deep learning | |
CN113792718B (en) | Method for positioning face area in depth map, electronic device and storage medium | |
CN110634131A (en) | Crack image identification and modeling method | |
Xiong et al. | Learning cell geometry models for cell image simulation: An unbiased approach | |
CN112232209A (en) | Pointer type instrument panel reading identification method for transformer substation inspection robot | |
CN110751664B (en) | Brain tissue segmentation method based on hyper-voxel matching | |
CN113128378B (en) | Finger vein rapid identification method | |
CN112488062A (en) | Image identification method, device, equipment and medium | |
CN111539329A (en) | Self-adaptive substation pointer instrument identification method | |
CN110070110A (en) | A kind of adaptive threshold image matching method | |
CN110874850A (en) | Real-time unilateral grid feature registration method oriented to target positioning | |
CN115063613B (en) | Method and device for verifying commodity label | |
CN113392913B (en) | Planar graph matching degree evaluation method, device and system based on boundary feature point set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||