CN111652292B - Similar object real-time detection method and system based on NCS and MS - Google Patents
Similar object real-time detection method and system based on NCS and MS
- Publication number
- CN111652292B (application CN202010432326.7A)
- Authority
- CN
- China
- Prior art keywords
- detection
- data
- image
- real
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method and system for real-time detection of similar objects based on NCS and MS, which comprises the steps of collecting image data of similar objects for preprocessing, extracting edge characteristic parameters and constructing a sample data set; inputting the sample data set into a recognition model for training until a training precision threshold is met, then finishing training and outputting a recognition result; detecting and classifying the object numbers corresponding to the recognition result with a decision model, and outputting the corresponding detection results in combination with an AI accelerator that drives the detection process in real time; and importing the detection results into a Bayesian analysis model for secondary verification judgment, and displaying the verification result and the detection result in real time on a mobile terminal or a Raspberry Pi display. The method can rapidly process an image, detect the objects in it and identify which specific object each detection corresponds to, thereby solving the problem that the specific object cannot be determined when several similar objects are detected in one image, with good real-time performance, accuracy, applicability and economy.
Description
Technical Field
The invention relates to the technical field of image detection, in particular to a method and a system for detecting similar objects in real time based on NCS and MS.
Background
At present, in the field of image detection, object detection technologies based on image workstations or network services, magnetic field sensors and neural compute sticks are all very mature. However, in some specific industrial production scenarios the equipment is required to detect several similar, parallel objects in real time without an internet connection, to identify the specific numbered object that each detection corresponds to, and to store the process data of the objects; the traditional single-camera object detection technology cannot meet this requirement.
The traditional real-time detection of multiple objects requires a corresponding number of cameras, and the data are transmitted in real time over a network to an image workstation with a high-end GPU for real-time detection, which is costly in large-scale industrial applications.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the invention provides a similar object real-time detection method and system based on NCS and MS, which can solve the problem that a plurality of similar objects cannot be accurately identified in real time in certain specific industrial production scenes.
In order to solve the above technical problems, the invention provides the following technical scheme: collecting image data of similar objects for preprocessing, extracting edge characteristic parameters and constructing a sample data set; inputting the sample data set into a recognition model for training, finishing training once a training precision threshold is met, and outputting a recognition result; detecting and classifying the object numbers corresponding to the recognition result with a decision model, and outputting the corresponding detection results in combination with an AI accelerator that drives the detection process in real time; and importing the detection results into a Bayesian analysis model for secondary verification judgment, and displaying the verification result and the detection result in real time on a mobile terminal or a Raspberry Pi display.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: preprocessing the image data comprises eliminating noise points of the image data by setting a detection value and checking the pixels in the image data in sequence; comparing each pixel with the other pixels in its neighborhood to judge whether it is a noise point, and, if so, replacing the noise point by the average gray value of all pixels in the neighborhood, otherwise outputting its original gray value; sharpening the denoised image data with a Sobel operator, and enhancing the pixels on both sides of the image edges in combination with a weighted average; convolving the Sobel operator with the image data using a difference approximation of the derivative to complete the detection, and finding the set of pixels whose local brightness changes most markedly in the image; and separating image areas with different meanings according to the gray scale, color and geometric properties of the image data, selecting a threshold value, and performing binarization segmentation with the gray-scale frequency distribution information to obtain a preprocessed information characteristic image.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: extracting the edge characteristic parameters comprises introducing a selective search strategy based on the edge structure similarity to extract a characteristic parameter candidate region in the information characteristic image and carrying out normalization processing; dividing the candidate region by using a multi-feature strategy to generate a divided region set; and merging the segmentation region set based on the edge structure similarity, taking the merged segmentation region set as the input of a convolutional neural network, and extracting the edge characteristic parameters by using a seven-layer CNN network model.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: constructing the sample data set, wherein half of the edge characteristic parameters are randomly selected and defined as the training set, and the remaining half of the edge characteristic parameters are defined as the test set.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: training the recognition model includes selecting a radial basis function as the objective (kernel) function of the LSSVM, as follows:
K(x, y) = exp(-‖x - y‖² / (2σ²))
wherein x = {x_1; x_2; …; x_14} is the characteristic matrix formed by the amplitude-frequency characteristic vectors of the historical data influencing the identification factors in the training set, y is the amplitude-frequency characteristic vector influencing the identification factors in the training set, and σ is the target vector, i.e. the distribution or range characteristic of the training set; initializing the penalty parameters and the target vector, training the LSSVM with the training set, and testing with the test set; if the recognition model does not meet the precision threshold requirement, performing assignment optimization of the penalty parameters and the target vector according to the error until the precision threshold requirement is met, forming the recognition model, and outputting the recognition result; the recognition result indicates whether the object is a similar object or not.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: the detection classification performed by the decision model comprises constructing the decision model based on an SVM strategy, inputting the recognition result, and calculating the LBP of the edge characteristic parameters; constructing a sample matrix and a category matrix respectively according to the characteristics of the two cases of the recognition result, wherein the values in the category matrix are 1 and -1; the decision model performs an SVM classification solution on the sample matrix and the category matrix to obtain a solution vector, which is associated with the sample data set and the object numbers to obtain the final classification decision function; and detecting the input similar object pictures with the classification decision function, and outputting the corresponding detection results in combination with an AI accelerator that drives the detection process in real time.
As a preferred scheme of the NCS and MS-based real-time detection method for similar objects, the method comprises the following steps: the secondary verification judgment comprises inputting the detection result into the Bayesian analysis model; and performing fusion processing and error probability calculation on the feature vector, the target vector and the solution vector of the similar object picture, as follows,
P(B_i | A_i) = B_i / A_i,  i = 1, 2, …, n
wherein B_i is the number of correctly identified characteristic parameters in the i-th synergistic analysis factor and A_i is the number of characteristic parameters identified in the i-th synergistic analysis factor; if the verification result probability is greater than or equal to 0.5, the detection result is correct; and if the verification result probability is less than 0.5, the detection result is wrong.
As a preferred scheme of the NCS and MS-based similar object real-time detection system, the invention comprises the following steps: the system comprises an acquisition module for acquiring image data of the similar objects, which comprises a camera and a magnetic field sensor, wherein the camera is used for detecting and shooting pictures of the similar objects in real time, and the magnetic field sensor is used for capturing in real time the rotation angle at which the camera data are acquired; the image sensing module is connected with the camera and used for acquiring the shot similar object picture, performing characteristic processing on the picture, identifying and converting the picture into the image data and transmitting the image data to the core processing module; the core processing module is used for uniformly calculating and processing the image data and the rotation angle acquired by the acquisition module, and comprises a data operation unit, a database and an input/output management unit, wherein the data operation unit is used for calculating the acquired data information and giving an operation result, the database is used for providing a sample data set for the data operation unit and storing the input data information and the operation result, and the input/output management unit is used for connecting the units for information transmission and interaction; the AI accelerator is connected with the core processing module and is used for accelerating the operation processing speed of the core processing module and for controlling and reducing the energy consumption of the core processing module.
As a preferred scheme of the NCS and MS-based similar object real-time detection system, the invention comprises the following steps: the power supply module is connected with the core processing module and is used for supplying 5 V power to and charging the core processing module; the extension interface module is connected with the core processing module and is used for extending a serial port protocol interface for the core processing module.
As a preferred scheme of the NCS and MS-based similar object real-time detection system, the invention comprises the following steps: the cameras are USB cameras with 640 × 480 resolution, and four of them are arranged in the acquisition module; the data arithmetic unit uses the angle data acquired by the magnetic field sensor to look up the object data corresponding to that angle, calculates the angular offset from magnetic north, and finds the object closest to the current angle data, namely the nearest object near the centers of the four cameras; the AI accelerator is a Myriad X USB neural compute stick with a trained deep neural network object image detection algorithm built in, so that real-time detection operation can be provided in the absence of a network.
The invention has the following beneficial effects: the method performs model recognition through feature extraction, constructs a decision model to compute the classification decision, and obtains an accurate detection result through the secondary verification judgment; with the AI accelerator, the image can be processed rapidly to detect the objects and identify which specific object each one is, thereby solving the problem that the specific object cannot be determined when several similar objects are detected in an image, with good real-time performance, accuracy, applicability and economy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic flow chart of a real-time detection method for similar objects based on NCS and MS according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the accuracy of two comparative tests of a real-time detection method for similar objects based on NCS and MS according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of the recognition efficiency of two comparative tests of a real-time detection method for similar objects based on NCS and MS according to a first embodiment of the present invention;
fig. 4 is a schematic block structural distribution diagram of a real-time NCS and MS-based similar object detection system according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Also in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, which are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected" and "connected" in the present invention are to be construed broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
Example 1
Object detection is a core research problem in computer vision and is widely applied in visual environment perception, intelligent video analysis, image detection and other fields. Because the image acquisition process is affected by factors such as changes in illumination, viewing angle and scale, non-rigid deformation of the object, partial occlusion and complex backgrounds, the appearance characteristics of an object change greatly, which brings great challenges and uncertainty to object detection algorithms.
Referring to fig. 1, fig. 2 and fig. 3, a first embodiment of the present invention provides a method for real-time detecting similar objects based on NCS and MS, including:
s1: and acquiring image data of similar objects for preprocessing, extracting edge characteristic parameters and constructing a sample data set. It should be noted that preprocessing image data includes:
eliminating noise points of the image data, setting a detection value, and sequentially detecting pixels in the image data;
comparing the pixel with other pixels in the neighborhood, judging whether the pixel is a noise point, if so, replacing the noise point by the average value of the gray levels of all the pixels in the neighborhood, and if not, outputting the noise point by the original gray level value;
sharpening the denoised image data with a Sobel operator, and enhancing the pixels on both sides of the image edges in combination with a weighted average;
convolving the Sobel operator with the image data using a difference approximation of the derivative to complete the detection, and finding the set of pixels whose local brightness changes most markedly in the image;
and separating image areas with different meanings according to the gray scale, the color and the geometric properties of the image data, selecting a threshold value, and performing binarization segmentation by using gray scale frequency distribution information to obtain a preprocessed information characteristic image.
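The preprocessing steps above can be sketched as follows; this is a minimal sketch assuming OpenCV and NumPy, and the 3 × 3 neighborhood, the noise-point threshold and the use of Otsu's method to select the binarization threshold are illustrative choices that the text itself does not fix.

```python
import cv2
import numpy as np

def preprocess(gray, noise_thresh=40):
    """Denoise, Sobel-sharpen and binarize a grayscale image (illustrative values)."""
    # 1) Noise removal: a pixel far from its 3x3 neighborhood mean is treated as a
    #    noise point and replaced by that mean; otherwise it keeps its gray value.
    img = gray.astype(np.float32)
    mean3 = cv2.blur(img, (3, 3))
    denoised = np.where(np.abs(img - mean3) > noise_thresh, mean3, img)

    # 2) Edge enhancement with the Sobel operator (difference approximation of the
    #    derivative), combining the two directions by a weighted average.
    gx = cv2.Sobel(denoised, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(denoised, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.addWeighted(np.abs(gx), 0.5, np.abs(gy), 0.5, 0)

    # 3) Threshold selection from the gray-level frequency distribution and
    #    binarization segmentation (Otsu shown as one possible choice).
    edges8 = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(edges8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return binary
```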
Further, extracting the edge feature parameters includes:
introducing a selective search strategy based on the similarity of the edge structure to extract a characteristic parameter candidate region in the information characteristic image and carrying out normalization processing;
dividing the candidate regions by using a multi-feature strategy to generate a divided region set;
and merging the segmentation region set based on the edge structure similarity, taking the merged segmentation region set as the input of a convolutional neural network, and extracting edge characteristic parameters by using a seven-layer CNN network model.
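The text fixes only the depth of the feature extractor (a seven-layer CNN applied to the merged region set); one possible layer arrangement is sketched below in PyTorch, where the channel widths, the 64 × 64 input crop size and the 256-dimensional output are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class EdgeFeatureCNN(nn.Module):
    """Seven weighted layers: five convolutional layers plus two fully connected."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),   # 64x64 input -> 8x8 after pooling
            nn.Linear(512, feat_dim),
        )

    def forward(self, x):                            # x: (N, 1, 64, 64) region crops
        return self.head(self.features(x))

# Example: extract a feature vector for one merged candidate region.
crop = torch.randn(1, 1, 64, 64)
feat = EdgeFeatureCNN()(crop)                        # -> tensor of shape (1, 256)
```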
Note that extracting the candidate region includes:
(1) Initializing an image region similarity set M;
(2) Generating an initial set of small segmented regions with an image segmentation strategy, denoted N = {n_1, n_2, …, n_i};
(3) Calculating the similarity of all adjacent regions in the set N and storing the similarities in the similarity set M;
(4) Finding the element M(n_x, n_y) = max M with the highest similarity in the similarity set M, and merging the two corresponding regions into n_j = n_x ∪ n_y;
(5) Deleting from the similarity set M the elements related to n_x and n_y, calculating the similarity set M_j between the element n_j and its neighboring regions, M = M ∪ M_j, and updating the target region set N = N ∪ n_j;
(6) And repeating the calculation until all the candidate areas are obtained.
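A compact sketch of this merging loop is shown below; the initial over-segmentation, the pairwise similarity function and the representation of a region as a set of pixel coordinates are placeholders (assumptions), and the restriction of merging to spatially adjacent regions is omitted for brevity.

```python
def merge_regions(regions, similarity):
    """Greedy merging of an initial over-segmentation into candidate regions."""
    regions = list(regions)
    active = set(range(len(regions)))
    # (3) pairwise similarities of the initial regions
    sims = {(i, j): similarity(regions[i], regions[j])
            for i in sorted(active) for j in sorted(active) if i < j}
    candidates = list(regions)
    while len(active) > 1:
        # (4) pick the most similar pair and merge it: n_j = n_x ∪ n_y
        i, j = max(sims, key=sims.get)
        merged = regions[i] | regions[j]
        # (5) drop similarities involving the merged-away regions, add new ones
        sims = {k: v for k, v in sims.items() if i not in k and j not in k}
        active -= {i, j}
        new_idx = len(regions)
        regions.append(merged)
        for k in active:
            sims[(k, new_idx)] = similarity(regions[k], merged)
        active.add(new_idx)
        candidates.append(merged)                 # N = N ∪ n_j
    return candidates                             # (6) all candidate regions

# Toy example: regions as pixel-coordinate sets, size-based similarity (illustrative).
toy = [{(0, 0)}, {(0, 1)}, {(5, 5), (5, 6)}]
cands = merge_regions(toy, lambda a, b: -len(a | b))
```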
Specifically, the image segmentation needs to satisfy a certain segmentation condition, which includes:
there is a significant difference between adjacent regions;
the boundary of the divided region is ensured to be complete, and the spatial positioning precision of the edge is ensured;
the segmented region has uniformity and connectivity, wherein the uniformity means that all pixel points in the region meet certain similarity based on gray scale, texture and color, and the connectivity means that a path connecting any two points exists in the region.
Constructing a sample data set comprises:
randomly selecting half of edge characteristic parameters and defining the parameters as a training set;
the remaining half of the edge feature parameters are defined as the test set.
S2: inputting the sample data set into the recognition model for training, finishing training once the training precision threshold is met, and outputting a recognition result. It should be noted in this step that training the recognition model includes:
the radial basis function is chosen as the objective (kernel) function of the LSSVM, as follows:
K(x, y) = exp(-‖x - y‖² / (2σ²))
wherein x = {x_1; x_2; …; x_14} is the characteristic matrix formed by the amplitude-frequency characteristic vectors of the historical data influencing the recognition factors in the training set, y is the amplitude-frequency characteristic vector influencing the recognition factors in the training set, and σ is the target vector, i.e. the distribution or range characteristic of the training set;
initializing punishment parameters and target vectors, training the LSSVM by using a training set, and testing by using a test set;
if the recognition model does not meet the precision threshold requirement, performing assignment optimization on the punishment parameters and the target vectors according to errors until the precision threshold requirement is met, forming a recognition model, and outputting a recognition result;
the recognition result indicates whether or not the object is a similar object.
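The following sketch illustrates one way such an LSSVM with an RBF kernel could be trained and evaluated; the closed-form linear system is the standard least-squares SVM formulation, and the 14-dimensional features, the ±1 labels and the parameter values (penalty γ and kernel width σ) are assumptions used only for illustration.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x, y) = exp(-||x - y||^2 / (2 * sigma^2)) for all row pairs of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LSSVM linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                            # bias b, coefficients alpha

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b

# 14-dimensional amplitude-frequency feature vectors with labels +1 / -1 (assumed data).
X = np.random.randn(40, 14)
y = np.sign(np.random.randn(40))
b, alpha = lssvm_train(X, y)
pred = np.sign(lssvm_predict(X, alpha, b, X))         # training-set check of the fit
```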
S3: detecting and classifying the object numbers corresponding to the recognition results by using a decision model, and outputting the corresponding detection results in combination with an AI accelerator that drives the detection process in real time. It should be further noted that the detection and classification performed by the decision model include:
constructing a decision model based on an SVM strategy, inputting a recognition result, and calculating an LBP (local binary pattern) of an edge characteristic parameter;
respectively constructing a sample matrix and a category matrix according to the characteristics of the two cases of the recognition result, wherein the values in the category matrix are 1 and -1;
the decision model carries out SVM classification solution on the sample matrix and the class matrix to obtain a solution vector which corresponds to the sample data set and the object number to obtain a final classification decision function;
and detecting the input similar object pictures by using a classification decision function, and outputting a corresponding detection result by combining with an AI accelerator to advance a detection process in real time.
Specifically, the method comprises the following steps:
comparing the gray value of a pixel point in the image with the adjacent pixel points around it, recording the comparison result (0 or 1) to obtain the local binarization of the pixel point, and reading the results in circumferential order to obtain the LBP, as follows:
LBP(l_c) = Σ_{i=0}^{a-1} s(l_i - l_c) · 2^i,  with s(t) = 1 for t ≥ 0 and s(t) = 0 otherwise
wherein l_c is a pixel point in the similar-object image, l_i is a point in the neighborhood centered on l_c, a is the number of sample points selected in the neighborhood, and b is the distance from the neighboring points l_i within the neighborhood to the central point l_c;
introducing LBP into SVM to solve to obtain state classification support vector coefficients, and respectively constructing linear separable matrix sample sets according to the characteristics of two conditions of the recognition result as follows
(x_1, y_1), …, (x_n, y_n),  x_i ∈ R^n,  y_i ∈ {1, -1},  i = 1, 2, …, n
wherein x_i is the sample matrix and y_i is the category matrix;
performing the SVM solution on the two matrices with the optimal classification plane strategy, so as to maximize the classification margin between the two classes and obtain the optimal classification plane parameter vector (i.e. the solution vector), as follows:
f(x) = sgn( Σ_{i=1}^{n} α_i y_i M(x_i, x_j) + b_w )
wherein α_i is the solution product factor, with 0 < α_i; M(x_i, x_j) is the classification decision function; f(x) is the optimal class correspondence; α_w and b_w are the final classification parameter vectors; and n is the object number;
when α_w > 0 and b_w > 0, the f(x) classification margin is larger and the detection results are dissimilar;
when α_w < 0 and b_w > 0, or α_w > 0 and b_w < 0, the f(x) classification margin is smaller and the detection results are similar.
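A short sketch of the LBP computation and the SVM decision step follows; the 8-neighbor ring at radius 1, the 64-bin histogram feature and the scikit-learn SVC call are illustrative assumptions, since the text itself only fixes the 0/1 comparison, the circumferential reading order and the ±1 category matrix.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_code(img, r, c):
    """8-bit LBP of pixel (r, c): compare the centre with its ring of neighbors."""
    centre = img[r, c]
    ring = [img[r - 1, c - 1], img[r - 1, c], img[r - 1, c + 1], img[r, c + 1],
            img[r + 1, c + 1], img[r + 1, c], img[r + 1, c - 1], img[r, c - 1]]
    return sum((1 if p >= centre else 0) << i for i, p in enumerate(ring))

def lbp_histogram(img):
    """Histogram of LBP codes over the image, used as the per-object feature vector."""
    codes = [lbp_code(img, r, c)
             for r in range(1, img.shape[0] - 1)
             for c in range(1, img.shape[1] - 1)]
    hist, _ = np.histogram(codes, bins=64, range=(0, 256), density=True)
    return hist

# Sample matrix x_i and category matrix y_i in {1, -1}, then an SVM decision function.
imgs = [np.random.randint(0, 256, (32, 32)) for _ in range(20)]   # placeholder crops
X = np.array([lbp_histogram(im) for im in imgs])
y = np.array([1, -1] * 10)
clf = SVC(kernel="rbf").fit(X, y)
label = clf.predict(X[:1])                      # +1 / -1 decision for one object
```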
S4: importing the detection result into a Bayesian analysis model for secondary verification judgment, and displaying the verification result and the detection result in real time on a mobile terminal or a Raspberry Pi display. It should be further noted that the secondary verification judgment includes:
inputting the detection result into a Bayesian analysis model;
the feature vector, the target vector and the solution vector of the similar object picture are subjected to fusion processing and error probability calculation, as follows,
P(B_i | A_i) = B_i / A_i,  i = 1, 2, …, n
wherein B_i is the number of correctly identified characteristic parameters in the i-th synergistic analysis factor and A_i is the number of characteristic parameters identified in the i-th synergistic analysis factor;
if the probability of the verification result is greater than or equal to 0.5, the detection result is correct;
and if the probability of the verification result is less than 0.5, the detection result is wrong.
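The verification rule can be sketched as follows; averaging the per-factor probabilities into a single verification probability is an assumption, since the text only specifies the per-factor ratio B_i / A_i and the 0.5 threshold.

```python
def verify(correct_counts, recognised_counts, threshold=0.5):
    """Secondary verification: per-factor P(B_i | A_i) = B_i / A_i, then threshold."""
    probs = [b / a if a else 0.0
             for b, a in zip(correct_counts, recognised_counts)]
    p = sum(probs) / len(probs)                  # fused verification probability (assumed fusion)
    return ("correct" if p >= threshold else "wrong"), p

result, p = verify([8, 9, 7], [10, 10, 10])      # -> ("correct", 0.8)
```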
Preferably, in order to better understand and explain the technical problem and the technical effect achieved by the present invention, two conventional image detection methods are introduced for comparison. First, the conventional template matching detection method mainly trains a model of the target object and slides the template image as a window over the acquired image; the best matching position is found on the basis of a similarity measurement method, so as to achieve matching detection of similar objects. Second, the conventional LINE-MOD distance measurement detection method mainly uses the difference between a distance measurement template and the contour of the test image to perform coarse-to-fine binary edge image distance measurement in shape and parameter space, and measures the maximum of all distances from each edge point in the image to its nearest neighbor in the template, so as to achieve image detection.
Preferably, in order to verify and explain the technical effect of the method of the present invention, this embodiment selects the traditional template matching detection method for a comparison test against the method of the present invention, and compares the test results by means of a scientific demonstration to verify the real effect of the method of the present invention; in order to verify that the method of the present invention has higher accuracy, economy, applicability and practicability than the conventional method, this embodiment performs real-time measurement and comparison on similar data from the PASCAL VOC 2007 data set with the conventional method and with the method of the present invention, respectively.
Test environment: (1) Core i5-700, clock frequency 2.66 GHz, 4 GB of memory;
(2) The image frame is 320 pixels by 240 pixels, and 2000 frame images are obtained as sample data;
(3) Considering changes in illumination and viewing angle, cluttered backgrounds and partial occlusion in the pictures, the two detection methods preprocess the pictures with their respective processing methods;
(4) Starting the automatic test equipment, running the simulation tests of the two detection methods in MATLAB, and outputting the comparison diagrams of accuracy and recognition efficiency.
Referring to the schematic diagrams of fig. 2 and fig. 3, it can be seen intuitively that the solid line (the method of the present invention) follows a more stable trend than the dotted line (the conventional method), which is steep (i.e. unstable, suddenly high or suddenly low); under both comparison conditions, accuracy and recognition efficiency, the curve output by the method of the present invention lies far above the curve output by the conventional method, which verifies the authenticity of the technical effect achieved by the method of the present invention.
Example 2
Referring to fig. 4, a second embodiment of the present invention is different from the first embodiment in that it provides a real-time NCS and MS-based system for detecting similar objects, comprising:
the acquisition module 100 is used for acquiring image data of similar objects and comprises a camera 101 and a magnetic field sensor 102, wherein the camera 101 is used for detecting and taking pictures of the similar objects in real time, and the magnetic field sensor 102 is used for capturing the rotation angle of the data acquired by the camera 101 in real time.
The image sensing module 200 is connected to the camera 101, and is configured to obtain a picture of a similar object, perform feature processing on the picture, recognize and convert the picture into image data, and transmit the image data to the core processing module 300.
The core processing module 300 is configured to calculate and process the image data and the rotation angle acquired by the acquisition module 100 in a unified manner, and includes a data operation unit 301, a database 302, and an input/output management unit 303, where the data operation unit 301 is configured to calculate the acquired data information and provide an operation result, the database 302 is configured to provide a sample data set for the data operation unit 301 and store the input data information and the operation result, and the input/output management unit 303 is configured to connect the units for information transmission interaction.
The AI accelerator 400 is connected to the core processing module 300, and is configured to accelerate the operation processing speed of the core processing module 300 and control to reduce the energy consumption of the core processing module 300.
The power supply module 500 is connected to the core processing module 300 and is configured to supply 5 V power to and charge the core processing module 300.
The expansion interface module 600 is connected to the core processing module 300, and is configured to expand a serial port protocol interface for the core processing module 300.
Preferably, the cameras 101 in this embodiment are USB cameras with 640 × 480 resolution, and four of them are arranged in the acquisition module 100; the data operation unit 301 uses the angle data acquired by the magnetic field sensor 102 to look up the object data corresponding to that angle, calculates the angular offset from magnetic north, and finds the object closest to the current angle data, namely the nearest object near the centers of the four cameras 101; the AI accelerator 400 is a Myriad X USB neural compute stick with a trained deep neural network object image detection algorithm built in, and can provide real-time detection operation in the absence of a network.
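A sketch of the angle-matching step performed by the data operation unit 301 is given below; the object table, the stored angles and the wrap-around angular distance in degrees are assumptions, and the neural compute stick inference itself is device-specific and is not shown.

```python
def nearest_object(current_angle, object_angles):
    """object_angles: {object_number: stored angular offset from magnetic north, in degrees}."""
    def circular_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)                 # wrap-around distance on the circle
    return min(object_angles,
               key=lambda k: circular_diff(current_angle, object_angles[k]))

objects = {1: 10.0, 2: 95.0, 3: 182.5, 4: 268.0}  # hypothetical numbered objects
print(nearest_object(100.0, objects))             # -> 2 (closest to the current heading)
```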
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, or the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.
Claims (6)
1. A similar object real-time detection method based on NCS and MS, characterized in that it comprises:
acquiring image data of similar objects for preprocessing, wherein preprocessing the image data comprises eliminating noise points of the image data by setting detection values and sequentially checking the pixels in the image data; comparing each pixel with the other pixels in its neighborhood to judge whether it is a noise point, and, if so, replacing the noise point by the gray average value of all pixels in the neighborhood, otherwise outputting its original gray value; sharpening the de-noised image data by using a Sobel operator and enhancing the pixels on both sides of the image edges in combination with a weighted average; performing convolution of the Sobel operator with the image data by using a difference approximation of the derivative to complete the detection and finding the set of pixels with the most significant local brightness change in the image; separating image areas with different meanings according to the gray level, color and geometric properties of the image data, selecting a threshold value, performing binary segmentation by using the gray level frequency distribution information, and obtaining a preprocessed information characteristic image; extracting edge characteristic parameters to construct a sample data set, wherein the extraction of the edge characteristic parameters comprises introducing a selective search strategy based on edge structure similarity to extract characteristic parameter candidate regions in the information characteristic image and carrying out normalization processing; dividing the candidate regions by using a multi-feature strategy to generate a divided region set; merging the divided region set based on the edge structure similarity, taking the merged set as the input of a convolutional neural network and extracting the edge characteristic parameters by using a seven-layer CNN network model; and constructing the sample data set, wherein half of the edge characteristic parameters are randomly selected and defined as a training set, and the remaining half of the edge characteristic parameters are defined as a test set;
inputting the sample data set into a recognition model for training, the training of the recognition model comprising,
the radial basis function is selected as the objective (kernel) function of the LSSVM, as follows:
K(x, y) = exp(-‖x - y‖² / (2σ²))
wherein x = {x_1; x_2; …; x_14} is a characteristic matrix formed by the amplitude-frequency characteristic vectors of the historical data influencing the identification factors in the training set, y is the amplitude-frequency characteristic vector influencing the identification factors in the training set, and σ is a target vector, i.e. the distribution or range characteristic of the training set; initializing penalty parameters and the target vector, training the LSSVM with the training set, and testing with the test set; if the recognition model does not meet the precision threshold requirement, performing assignment optimization of the penalty parameters and the target vector according to the error until the precision threshold requirement is met, forming the recognition model, and outputting a recognition result, wherein the recognition result indicates whether the object is a similar object or not;
detecting and classifying the corresponding object numbers of the recognition results by using a decision model, and outputting corresponding detection results by combining with an AI accelerator to advance a detection process in real time;
and importing the detection result into a Bayesian analysis model for secondary verification judgment, and displaying the verification result and the detection result in real time on a mobile terminal or a Raspberry Pi display.
2. The NCS and MS-based real-time detection method for similar objects according to claim 1, characterized in that: the decision-making model performs the detection classification including,
constructing the decision model based on an SVM strategy, inputting the recognition result, and calculating the LBP of the edge characteristic parameter;
respectively constructing a sample matrix and a category matrix according to the characteristics of the two cases of the recognition result, wherein the values in the category matrix are 1 and -1;
the decision model carries out SVM classification solution on the sample matrix and the category matrix to obtain a solution vector which corresponds to the sample data set and the object number to obtain a final classification decision function;
and detecting the input similar object pictures by using the classification decision function, and outputting corresponding detection results by combining with an AI accelerator to advance a detection process in real time.
3. The NCS and MS-based real-time detection method for similar objects according to claim 2, characterized in that: the secondary verification judgment comprises,
inputting the detection result into the Bayesian analysis model;
and performing fusion processing and error probability calculation on the feature vector, the target vector and the solution vector of the similar object picture, as follows,
P(B_i | A_i) = B_i / A_i,  i = 1, 2, …, n
wherein B_i is the number of correctly recognized characteristic parameters in the i-th synergistic analysis factor and A_i is the number of characteristic parameters identified in the i-th synergistic analysis factor;
if the probability of the verification result is greater than or equal to 0.5, the detection result is correct;
and if the verification result probability is less than 0.5, the detection result is wrong.
4. A real-time detection system for similar objects based on NCS and MS, implementing the method as claimed in claim 1, characterized in that it comprises:
the acquisition module (100) is used for acquiring the image data of the similar objects and comprises a camera (101) and a magnetic field sensor (102), the camera (101) is used for detecting and taking pictures of the similar objects in real time, and the magnetic field sensor (102) is used for capturing the rotation angle of the data acquired by the camera (101) in real time;
the image sensing module (200) is connected with the camera (101) and is used for acquiring the shot similar object picture, performing characteristic processing on the picture, identifying and converting the picture into the image data and transmitting the image data to the core processing module (300);
the core processing module (300) is used for calculating and processing the image data and the rotation angle acquired by the acquisition module (100) in a unified manner, and comprises a data operation unit (301), a database (302) and an input/output management unit (303), wherein the data operation unit (301) is used for calculating acquired data information and giving an operation result, the database (302) is used for providing a sample data set for the data operation unit (301) and storing the input data information and the operation result, and the input/output management unit (303) is used for connecting each unit for information transmission interaction;
the AI accelerator (400) is connected to the core processing module (300) and is used for accelerating the operation processing speed of the core processing module (300) and controlling and reducing the energy consumption of the core processing module (300).
5. The NCS and MS based similar object real-time detection system according to claim 4, wherein the system further comprises:
the power supply module (500) is connected to the core processing module (300) and is used for supplying and charging 5v power to the core processing module (300);
the expansion interface module (600) is connected with the core processing module (300) and is used for expanding a serial port protocol interface for the core processing module (300).
6. The NCS and MS based similar object real-time detection system according to claim 5, wherein:
the cameras (101) are USB cameras with 640 × 480 resolutions, and four cameras are arranged in the acquisition module (100);
the data arithmetic unit (301) searches the angle-corresponding object data acquired by the magnetic field sensor (102) by using the angle data, calculates the offset angle with the north pole, and finds out the object closest to the current angle data, namely the closest object near the centers of the four cameras (101);
the AI accelerator (400) is a Myriad X USB neural compute stick with a trained object image detection deep neural network algorithm built in, and provides real-time detection operation when no network is available.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010432326.7A CN111652292B (en) | 2020-05-20 | 2020-05-20 | Similar object real-time detection method and system based on NCS and MS |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010432326.7A CN111652292B (en) | 2020-05-20 | 2020-05-20 | Similar object real-time detection method and system based on NCS and MS |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111652292A CN111652292A (en) | 2020-09-11 |
CN111652292B true CN111652292B (en) | 2022-12-06 |
Family
ID=72346683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010432326.7A Active CN111652292B (en) | 2020-05-20 | 2020-05-20 | Similar object real-time detection method and system based on NCS and MS |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652292B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112116028B (en) * | 2020-09-29 | 2024-04-26 | 联想(北京)有限公司 | Model decision interpretation realization method and device and computer equipment |
CN112116595A (en) * | 2020-10-27 | 2020-12-22 | 河北农业大学 | End-to-end automatic plant root system characteristic segmentation system |
CN112396066B (en) * | 2020-11-27 | 2024-04-30 | 广东电网有限责任公司肇庆供电局 | Feature extraction method suitable for hyperspectral image |
CN112560748A (en) * | 2020-12-23 | 2021-03-26 | 安徽高哲信息技术有限公司 | Crop shape analysis subsystem and method |
CN112818738A (en) * | 2020-12-28 | 2021-05-18 | 贵州电网有限责任公司 | Real-time identification equipment and identification method for distribution network transformer based on neural computing rod |
CN114313851A (en) * | 2022-01-11 | 2022-04-12 | 浙江柯工智能系统有限公司 | Modular chemical fiber material transferring platform and method |
CN114625315B (en) * | 2022-01-21 | 2024-07-09 | 南华大学 | Cloud storage similar data detection method and system based on meta-semantic embedding |
CN115424189B (en) * | 2022-08-17 | 2024-01-23 | 扬州市职业大学(扬州开放大学) | Image recognition system and method capable of recognizing object state and preventing detection leakage |
CN118279923B (en) * | 2024-05-29 | 2024-08-23 | 天津市天益达科技发展有限公司 | Picture character recognition method, system and storage medium based on deep learning training |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103984955A (en) * | 2014-04-23 | 2014-08-13 | 浙江工商大学 | Multi-camera object identification method based on salience features and migration incremental learning |
CN105260749A (en) * | 2015-11-02 | 2016-01-20 | 中国电子科技集团公司第二十八研究所 | Real-time target detection method based on oriented gradient two-value mode and soft cascade SVM |
CN108921218A (en) * | 2018-06-29 | 2018-11-30 | 炬大科技有限公司 | A kind of target object detection method and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5937469B2 (en) * | 2012-09-13 | 2016-06-22 | 国立大学法人 東京大学 | Object recognition apparatus, object recognition method, and object recognition program |
CN104751198B (en) * | 2013-12-27 | 2018-04-27 | 华为技术有限公司 | The recognition methods of object in image and device |
CN105574063B (en) * | 2015-08-24 | 2019-02-22 | 西安电子科技大学 | The image search method of view-based access control model conspicuousness |
KR102457029B1 (en) * | 2016-09-20 | 2022-10-24 | 이노비즈 테크놀로지스 엘티디 | LIDAR systems and methods |
WO2018102190A1 (en) * | 2016-11-29 | 2018-06-07 | Blackmore Sensors and Analytics Inc. | Method and system for classification of an object in a point cloud data set |
CN107688829A (en) * | 2017-08-29 | 2018-02-13 | 湖南财政经济学院 | A kind of identifying system and recognition methods based on SVMs |
CN110070557A (en) * | 2019-04-07 | 2019-07-30 | 西北工业大学 | A kind of target identification and localization method based on edge feature detection |
CN110111362A (en) * | 2019-04-26 | 2019-08-09 | 辽宁工程技术大学 | A kind of local feature block Similarity matching method for tracking target |
CN111178405A (en) * | 2019-12-18 | 2020-05-19 | 浙江工业大学 | Similar object identification method fusing multiple neural networks |
- 2020-05-20: CN CN202010432326.7A patent/CN111652292B/en, status: active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103984955A (en) * | 2014-04-23 | 2014-08-13 | 浙江工商大学 | Multi-camera object identification method based on salience features and migration incremental learning |
CN105260749A (en) * | 2015-11-02 | 2016-01-20 | 中国电子科技集团公司第二十八研究所 | Real-time target detection method based on oriented gradient two-value mode and soft cascade SVM |
CN108921218A (en) * | 2018-06-29 | 2018-11-30 | 炬大科技有限公司 | A kind of target object detection method and device |
Also Published As
Publication number | Publication date |
---|---|
CN111652292A (en) | 2020-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111652292B (en) | Similar object real-time detection method and system based on NCS and MS | |
CN105844669B (en) | A kind of video object method for real time tracking based on local Hash feature | |
CN111723721A (en) | Three-dimensional target detection method, system and device based on RGB-D | |
CN110082821B (en) | Label-frame-free microseism signal detection method and device | |
CN105354578B (en) | A kind of multiple target object image matching method | |
CN110991389B (en) | Matching method for judging appearance of target pedestrian in non-overlapping camera view angles | |
CN105956560A (en) | Vehicle model identification method based on pooling multi-scale depth convolution characteristics | |
CN105069457B (en) | Image recognition method and device | |
CN111368683A (en) | Face image feature extraction method and face recognition method based on modular constraint CentreFace | |
CN109299664B (en) | Reordering method for pedestrian re-identification | |
CN111009005A (en) | Scene classification point cloud rough registration method combining geometric information and photometric information | |
CN108073940B (en) | Method for detecting 3D target example object in unstructured environment | |
CN116229189B (en) | Image processing method, device, equipment and storage medium based on fluorescence endoscope | |
CN111753119A (en) | Image searching method and device, electronic equipment and storage medium | |
JP2018142189A (en) | Program, distance measuring method, and distance measuring device | |
CN112364881B (en) | Advanced sampling consistency image matching method | |
CN110942473A (en) | Moving target tracking detection method based on characteristic point gridding matching | |
Le et al. | Circle detection on images by line segment and circle completeness | |
Seib et al. | Object recognition using hough-transform clustering of surf features | |
CN110910497B (en) | Method and system for realizing augmented reality map | |
CN109344758B (en) | Face recognition method based on improved local binary pattern | |
CN109241932B (en) | Thermal infrared human body action identification method based on motion variance map phase characteristics | |
CN114627424A (en) | Gait recognition method and system based on visual angle transformation | |
CN117870659A (en) | Visual inertial integrated navigation algorithm based on dotted line characteristics | |
CN103577826A (en) | Target characteristic extraction method, identification method, extraction device and identification system for synthetic aperture sonar image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |