CN108038849A - A robot vision system with excellent recognition performance - Google Patents

A robot vision system with excellent recognition performance Download PDF

Info

Publication number
CN108038849A
CN108038849A CN201711289370.1A
Authority
CN
China
Prior art keywords
target
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711289370.1A
Other languages
Chinese (zh)
Inventor
梁金凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201711289370.1A
Publication of CN108038849A
Legal status: Withdrawn


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a robot vision system with excellent recognition performance, comprising an image acquisition module, an image preprocessing module, a feature extraction module, an image recognition module and a recognition evaluation module. The image acquisition module acquires images through a camera; the image preprocessing module preprocesses the acquired images; the feature extraction module extracts features from the preprocessed images to obtain a target feature vector; the image recognition module matches the target feature vector against a target template to recognize the target; and the recognition evaluation module evaluates the recognition performance of the image recognition module. The beneficial effect of the present invention is that the robot can both recognize targets and evaluate its recognition performance.

Description

A robot vision system with excellent recognition performance
Technical field
The present invention relates to the field of robotics, and in particular to a robot vision system with excellent recognition performance.
Background technology
As one of the main means of acquiring environmental information, machine vision can enhance the intelligence of industrial robots and improve their flexibility. How an industrial robot uses vision to extract workpiece feature parameters correctly and in real time from the acquired image information and to determine the position of the workpiece is one of the key technologies for applying machine vision in industry.
There are many image recognition methods; they can generally be grouped into statistical image recognition, structural image recognition, fuzzy-set image recognition and image-matching recognition. Image matching is the most representative and most widely used among them and has been applied in fields such as moving-target tracking, remote-sensing image recognition and robot vision. However, existing recognition methods have poor recognition performance and cannot recognize targets effectively.
Summary of the invention
In view of the above problems, the present invention aims to provide a robot vision system with excellent recognition performance.
The purpose of the present invention is achieved by the following technical scheme:
A robot vision system with excellent recognition performance is provided, comprising an image acquisition module, an image preprocessing module, a feature extraction module, an image recognition module and a recognition evaluation module. The image acquisition module acquires images through a camera; the image preprocessing module preprocesses the acquired images; the feature extraction module extracts features from the preprocessed images to obtain a target feature vector; the image recognition module matches the target feature vector against a target template to recognize the target; and the recognition evaluation module evaluates the recognition performance of the image recognition module.
The beneficial effect of the present invention is that the robot can both recognize targets and evaluate its recognition performance.
Brief description of the drawings
The invention is further described below with reference to the accompanying drawing. The embodiment shown in the drawing does not limit the present invention in any way; on the basis of the drawing, those of ordinary skill in the art can obtain other drawings without creative effort.
Fig. 1 is a structural diagram of the present invention;
Reference numerals:
image acquisition module 1, image preprocessing module 2, feature extraction module 3, image recognition module 4, recognition evaluation module 5.
Detailed description of the embodiments
The invention is further described with reference to the following embodiment.
Referring to Fig. 1, the robot vision system with excellent recognition performance of this embodiment comprises an image acquisition module 1, an image preprocessing module 2, a feature extraction module 3, an image recognition module 4 and a recognition evaluation module 5. The image acquisition module 1 acquires images through a camera; the image preprocessing module 2 preprocesses the acquired images; the feature extraction module 3 extracts features from the preprocessed images to obtain a target feature vector; the image recognition module 4 matches the target feature vector against a target template to recognize the target; and the recognition evaluation module 5 evaluates the recognition performance of the image recognition module 4.
This embodiment enables the robot both to recognize targets and to evaluate its recognition performance.
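To make the cooperation of these modules concrete, the following is a minimal Python sketch of such a pipeline. It is an illustration under stated assumptions, not the patented implementation: the class and method names are invented for this sketch, the camera object is assumed to expose a read() method, and template matching is shown as a simple nearest-feature-vector comparison.

```python
import numpy as np

class RobotVisionSystem:
    """Illustrative pipeline: acquisition -> preprocessing -> feature extraction -> recognition."""

    def __init__(self, camera, templates):
        self.camera = camera        # assumed to provide read() returning an image
        self.templates = templates  # dict mapping target name -> template feature vector (NumPy array)

    def acquire(self):
        # image acquisition module: grab one image from the camera
        return self.camera.read()

    def recognize(self, feature_vector):
        # image recognition module: match the target feature vector against the target templates
        best_name, best_dist = None, float("inf")
        for name, template in self.templates.items():
            dist = np.linalg.norm(feature_vector - template)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name

    def run(self, preprocess, extract_features):
        image = self.acquire()
        targets = preprocess(image)                        # image preprocessing module
        features = [extract_features(t) for t in targets]  # feature extraction module
        return [self.recognize(f) for f in features]       # one label per detected target
```

The recognition evaluation module, which scores the recognizer rather than taking part in recognition itself, is sketched separately after the evaluation formulas below.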
Preferably, the image preprocessing module 2 comprises a first target-segmentation submodule and a second preprocessing submodule. The first target-segmentation submodule extracts the outer edges of the targets in the image with the Canny operator and segments the multiple targets in the image into single-target images; the second preprocessing submodule applies grayscale transformation and filtering to each single-target image.
Because different targets differ from one another, in this preferred embodiment the first target-segmentation submodule splits the multiple targets in the image into several single targets so that each target image can be preprocessed on its own, which facilitates the subsequent extraction of features for the different targets. Because random noise is inevitably introduced by various kinds of interference during image acquisition, the second preprocessing submodule applies grayscale conversion and filtering to the image, which helps reduce recognition error and better preserves the edge information of the target.
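A minimal sketch of this preprocessing stage is given below, assuming OpenCV (4.x) is used for the Canny edge detection, contour-based segmentation and Gaussian filtering; the threshold values, the minimum-area cutoff and the choice of a Gaussian filter are illustrative assumptions, since the patent does not fix them.

```python
import cv2

def segment_and_preprocess(image_bgr, canny_lo=50, canny_hi=150, min_area=100):
    """Split a multi-target image into single-target crops, then grayscale and filter each crop."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)  # outer edges of the targets
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    single_targets = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:                      # drop tiny noise contours (illustrative threshold)
            continue
        crop = image_bgr[y:y + h, x:x + w]
        crop_gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)       # grayscale transformation
        crop_filtered = cv2.GaussianBlur(crop_gray, (5, 5), 0)   # filtering against random noise
        single_targets.append(crop_filtered)
    return single_targets
```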
Preferably, the feature extraction module 3 comprises a first feature-extraction submodule, a second feature-extraction submodule and a feature-vector generation submodule. The first feature-extraction submodule extracts the first feature of the target, the second feature-extraction submodule extracts the second feature of the target, and the feature-vector generation submodule generates the target feature vector from the first feature and the second feature.
The first feature-extraction submodule extracts the first feature of the target as follows: extract the inner and outer edges of the target and obtain the coordinates of the pixels on the outer contour and on each inner contour of the target; the first feature of the target is

$$T_1 = [W, N_1, \ldots, N_L]$$

In the above formula, T_1 denotes the first feature of the target, W denotes the characteristic value of the outer contour of the target, N_l (l = 1, 2, ..., L) denotes the characteristic value of the l-th inner contour of the target, and L denotes the number of inner contours of the target;
The characteristic value of the outer contour of the target is obtained as follows: binarize the image, setting the gray value of the outer-contour pixels to 1 and the gray value of all other pixels to 0, and then compute the characteristic value W of the outer contour:

$$W=\sqrt{\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} i\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}+\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} j\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}}$$

In the above formula, I(i, j) denotes the gray value of the pixel at position (i, j), n and m denote the width and height of the target image respectively, and $\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)$ equals the number of outer-contour pixels;
The characteristic value of the l-th inner contour of the target is obtained as follows: binarize the image, setting the gray value of the pixels on the l-th inner contour of the target to 1 and the gray value of all other pixels to 0, and then compute the characteristic value N_l of the l-th inner contour:

$$N_l=\sqrt{\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} i\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}+\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} j\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}}$$
The second feature-extraction submodule extracts the second feature of the target as follows: extract the outer edge of the target, obtain the pixels of the region enclosed by the outer contour, binarize the image, setting the gray value of the pixels in the region enclosed by the outer contour to 1 and the gray value of all other pixels to 0, and compute the second characteristic value of the target:

$$T_2=e^{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}+2\pi\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)$$

In the above formula, T_2 denotes the second feature of the target;
The feature-vector generation submodule generates the target feature vector from the first feature and the second feature as follows: the first feature and the second feature of the target form the feature vector T = [T_1, T_2], where T denotes the feature vector of the target.
In this preferred embodiment the feature extraction module extracts the target features, laying the foundation for the subsequent target matching and recognition; by building the target feature vector, the target can be recognized more accurately. Specifically, the first feature fully accounts for the outer contour and the inner contours of the target, the second feature fully accounts for all pixels inside the outer contour, and the target feature vector combines the first and second features, giving a more complete description of the target.
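The feature computation can be sketched in NumPy as follows. The contour characteristic value is interpreted here as the distance of the contour centroid from the image origin, which is one reading of the formulas above (an assumption, since the published formulas are partly garbled), and the masks are assumed to be binary images with value 1 on the relevant pixels. Note that the exponential term in T_2 overflows for large enclosed regions, so the sketch is illustrative only.

```python
import numpy as np

def contour_characteristic(mask):
    """Characteristic value of a contour: distance of the contour centroid from the image origin.

    `mask` is a binary (0/1) image whose 1-pixels lie on the contour."""
    rows, cols = np.nonzero(mask)
    count = mask.sum()                      # number of contour pixels
    return float(np.hypot(rows.sum() / count, cols.sum() / count))

def second_feature(region_mask):
    """T2 = exp(sum of I) + 2*pi*(sum of I) over the region enclosed by the outer contour."""
    s = float(region_mask.sum())
    return np.exp(s) + 2.0 * np.pi * s      # overflows for large regions; illustrative only

def target_feature_vector(outer_mask, inner_masks, region_mask):
    """Feature vector T = [T1, T2] with T1 = [W, N_1, ..., N_L]."""
    W = contour_characteristic(outer_mask)
    N = [contour_characteristic(m) for m in inner_masks]
    T2 = second_feature(region_mask)
    return np.array([W] + N + [T2])
```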
Preferably, the recognition evaluation module 5 comprises a first evaluation submodule, a second evaluation submodule and a comprehensive evaluation submodule. The first evaluation submodule obtains a first evaluation value of the recognition performance, the second evaluation submodule obtains a second evaluation value of the recognition performance, and the comprehensive evaluation submodule comprehensively evaluates the target recognition performance from the first evaluation value and the second evaluation value;
The first evaluation submodule obtains the first evaluation value of the recognition performance as follows:

$$S_1=\frac{A_1+A_2}{A+A_1}\times\lg\left(\frac{A_1}{A}+\frac{A_2}{A_1}+2\right)$$

In the above formula, S_1 denotes the first evaluation value, A denotes the number of targets contained in the image, A_1 denotes the number of targets that can be recognized, and A_2 denotes the number of targets that are recognized correctly;
The second evaluation submodule obtains the second evaluation value of the recognition performance as follows:

$$S_2=\left(\frac{B_1}{B}+\frac{B_1}{B+B_1}\right)\times\sqrt{\frac{B_1}{B}+\frac{B_1}{B+B_1}}$$

In the above formula, S_2 denotes the second evaluation value, B denotes the number of images to be recognized, and B_1 denotes the number of images whose first evaluation value exceeds a set threshold;
The comprehensive evaluation submodule comprehensively evaluates the target recognition performance from the first evaluation value and the second evaluation value as follows: compute the comprehensive evaluation factor

$$S=1+\sqrt{e^{S_1\times S_2}+S_1\times S_2}$$

In the above formula, S denotes the comprehensive evaluation factor; the larger the comprehensive evaluation factor, the better the target recognition performance.
In this preferred embodiment the recognition evaluation module evaluates the recognition performance of the image recognition module and thereby safeguards the quality of target recognition. Specifically, the first evaluation value accounts for the recognition accuracy of the targets, the second evaluation value accounts for the recognition stability, and the comprehensive evaluation factor computed from the first and second evaluation values provides a concise overall measure of the recognition performance.
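A minimal sketch of these evaluation formulas follows, under the assumption that the expressions reconstructed above are correct; the counts A, A_1, A_2, B and B_1 would be gathered from a labelled test set.

```python
import math

def first_evaluation(A, A1, A2):
    """S1 from the total, recognizable and correctly recognized target counts of one image."""
    return (A1 + A2) / (A + A1) * math.log10(A1 / A + A2 / A1 + 2)

def second_evaluation(B, B1):
    """S2 from the number of test images B and the number B1 whose S1 exceeds the set threshold."""
    ratio = B1 / B + B1 / (B + B1)
    return ratio * math.sqrt(ratio)

def comprehensive_factor(S1, S2):
    """Comprehensive evaluation factor S; a larger value indicates better recognition performance."""
    return 1.0 + math.sqrt(math.exp(S1 * S2) + S1 * S2)
```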
Targets were recognized with the robot vision system with excellent recognition performance of the present invention. Five targets to be recognized, denoted target 1 to target 5, were selected for the test, and the recognition efficiency and recognition accuracy were recorded. Compared with existing robot vision systems, the beneficial effects produced are as shown in the following table:
Target to be recognized    Recognition efficiency improvement    Recognition accuracy improvement
Target 1    29%    27%
Target 2    27%    26%
Target 3    26%    26%
Target 4    25%    24%
Target 5    24%    22%
Finally, it should be noted that the above embodiment merely illustrates the technical solution of the present invention and does not limit its scope of protection. Although the present invention has been explained with reference to a preferred embodiment, those of ordinary skill in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the essence and scope of the technical solution of the present invention.

Claims (7)

1. A robot vision system with excellent recognition performance, characterized by comprising an image acquisition module, an image preprocessing module, a feature extraction module, an image recognition module and a recognition evaluation module, wherein the image acquisition module acquires images through a camera, the image preprocessing module preprocesses the acquired images, the feature extraction module extracts features from the preprocessed images to obtain a target feature vector, the image recognition module matches the target feature vector against a target template to recognize the target, and the recognition evaluation module evaluates the recognition performance of the image recognition module.
2. The robot vision system with excellent recognition performance according to claim 1, characterized in that the image preprocessing module comprises a first target-segmentation submodule and a second preprocessing submodule, the first target-segmentation submodule extracts the outer edges of the targets in the image with the Canny operator and segments the multiple targets in the image into single-target images, and the second preprocessing submodule applies grayscale transformation and filtering to each single-target image.
3. The robot vision system with excellent recognition performance according to claim 2, characterized in that the feature extraction module comprises a first feature-extraction submodule, a second feature-extraction submodule and a feature-vector generation submodule, the first feature-extraction submodule extracts the first feature of the target, the second feature-extraction submodule extracts the second feature of the target, and the feature-vector generation submodule generates the target feature vector from the first feature and the second feature.
4. The robot vision system with excellent recognition performance according to claim 3, characterized in that the first feature-extraction submodule extracts the first feature of the target as follows: extract the outer edge of the target and obtain the coordinates of the pixels on the outer contour and on each inner contour of the target; the first feature of the target is

$$T_1 = [W, N_1, \ldots, N_L]$$

In the above formula, T_1 denotes the first feature of the target, W denotes the characteristic value of the outer contour of the target, N_l (l = 1, 2, ..., L) denotes the characteristic value of the l-th inner contour of the target, and L denotes the number of inner contours of the target;
The characteristic value of the outer contour of the target is obtained as follows: binarize the image, setting the gray value of the outer-contour pixels to 1 and the gray value of all other pixels to 0, and then compute the characteristic value W of the outer contour:
$$W=\sqrt{\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} i\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}+\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} j\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}}$$
In the above formula, I(i, j) denotes the gray value of the pixel at position (i, j), n and m denote the width and height of the target image respectively, and $\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)$ equals the number of outer-contour pixels;
The characteristic value of the l-th inner contour of the target is obtained as follows: binarize the image, setting the gray value of the pixels on the l-th inner contour of the target to 1 and the gray value of all other pixels to 0, and then compute the characteristic value N_l of the l-th inner contour:
$$N_l=\sqrt{\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} i\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}+\left[\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} j\,I(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}\right]^{2}}$$
5. The robot vision system with excellent recognition performance according to claim 4, characterized in that the second feature-extraction submodule extracts the second feature of the target as follows: extract the outer edge of the target, obtain the pixels of the region enclosed by the outer contour, binarize the image, setting the gray value of the pixels in the region enclosed by the outer contour to 1 and the gray value of all other pixels to 0, and compute the second characteristic value of the target:
$$T_2=e^{\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)}+2\pi\sum_{i=1}^{n}\sum_{j=1}^{m} I(i,j)$$
In the above formula, T_2 denotes the second feature of the target;
The feature-vector generation submodule generates the target feature vector from the first feature and the second feature as follows: the first feature and the second feature of the target form the feature vector T = [T_1, T_2], where T denotes the feature vector of the target.
6. The robot vision system with excellent recognition performance according to claim 5, characterized in that the recognition evaluation module comprises a first evaluation submodule, a second evaluation submodule and a comprehensive evaluation submodule, the first evaluation submodule obtains a first evaluation value of the recognition performance, the second evaluation submodule obtains a second evaluation value of the recognition performance, and the comprehensive evaluation submodule comprehensively evaluates the target recognition performance from the first evaluation value and the second evaluation value.
7. The robot vision system with excellent recognition performance according to claim 6, characterized in that the first evaluation submodule obtains the first evaluation value of the recognition performance as follows:
$$S_1=\frac{A_1+A_2}{A+A_1}\times\lg\left(\frac{A_1}{A}+\frac{A_2}{A_1}+2\right)$$
In the above formula, S_1 denotes the first evaluation value, A denotes the number of targets contained in the image, A_1 denotes the number of targets that can be recognized, and A_2 denotes the number of targets that are recognized correctly;
The second evaluation submodule obtains the second evaluation value of the recognition performance as follows:
$$S_2=\left(\frac{B_1}{B}+\frac{B_1}{B+B_1}\right)\times\sqrt{\frac{B_1}{B}+\frac{B_1}{B+B_1}}$$
In the above formula, S_2 denotes the second evaluation value, B denotes the number of images to be recognized, and B_1 denotes the number of images whose first evaluation value exceeds a set threshold;
The comprehensive evaluation submodule comprehensively evaluates the target recognition performance from the first evaluation value and the second evaluation value as follows: compute the comprehensive evaluation factor
$$S=1+\sqrt{e^{S_1\times S_2}+S_1\times S_2}$$
In the above formula, S denotes the comprehensive evaluation factor; the larger the comprehensive evaluation factor, the better the target recognition performance.
CN201711289370.1A 2017-12-07 2017-12-07 A robot vision system with excellent recognition performance Withdrawn CN108038849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711289370.1A CN108038849A (en) 2017-12-07 2017-12-07 A robot vision system with excellent recognition performance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711289370.1A CN108038849A (en) 2017-12-07 2017-12-07 A robot vision system with excellent recognition performance

Publications (1)

Publication Number Publication Date
CN108038849A (en) 2018-05-15

Family

ID=62096025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711289370.1A Withdrawn CN108038849A (en) 2017-12-07 2017-12-07 A kind of excellent robotic vision system of recognition performance

Country Status (1)

Country Link
CN (1) CN108038849A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108839032A (en) * 2018-06-27 2018-11-20 深圳大图科创技术开发有限公司 A kind of intelligent robot
CN111679661A (en) * 2019-02-25 2020-09-18 北京奇虎科技有限公司 Semantic map construction method based on depth camera and sweeping robot


Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN108335331B (en) Binocular vision positioning method and equipment for steel coil
CN104268519B (en) Image recognition terminal and its recognition methods based on pattern match
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN103198477B (en) Apple fruitlet bagging robot visual positioning method
CN107464252A Visible-light and infrared heterologous image recognition method based on composite features
CN102722731A (en) Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN106408597A (en) Neighborhood entropy and consistency detection-based SAR (synthetic aperture radar) image registration method
CN103530599A (en) Method and system for distinguishing real face and picture face
CN104134208B Coarse-to-fine infrared and visible light image registration method using geometric structure features
CN108257155B (en) Extended target stable tracking point extraction method based on local and global coupling
CN104268602A Shielded workpiece identification method and device based on binary feature matching
CN104715251B Salient target detection method based on histogram linear fitting
CN108447016B (en) Optical image and SAR image matching method based on straight line intersection point
CN106023187A (en) Image registration method based on SIFT feature and angle relative distance
CN103136525A High-accuracy positioning method for heterogeneous extended targets using the generalized Hough transform
CN104574401A (en) Image registration method based on parallel line matching
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
Han et al. An improved corner detection algorithm based on harris
CN102446356A (en) Parallel and adaptive matching method for acquiring remote sensing images with homogeneously-distributed matched points
CN110335280A Financial document image segmentation and correction method based on mobile terminal
CN106127258A A target matching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180515