CN109101982A - Method and device for recognizing a target object - Google Patents

Method and device for recognizing a target object

Info

Publication number
CN109101982A
CN109101982A (application CN201810835895.9A; granted as CN109101982B)
Authority
CN
China
Prior art keywords
edge point
similarity measure
images
recognized
direction vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810835895.9A
Other languages
Chinese (zh)
Other versions
CN109101982B (en)
Inventor
杨智慧
覃道赞
宋明岑
张天翼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Gree Intelligent Equipment Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Gree Intelligent Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Gree Intelligent Equipment Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201810835895.9A
Publication of CN109101982A
Application granted
Publication of CN109101982B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Abstract

The invention discloses a method and device for recognizing a target object. The method comprises: obtaining the direction vector of each first edge point in a first identification region of a sample image, wherein the first identification region contains the target object; obtaining the direction vector of each second edge point in a second identification region of an image to be recognized; obtaining a similarity measure for each second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region; and obtaining a recognition result for the image to be recognized based on the similarity measures of the second edge points, wherein the recognition result characterizes whether the image to be recognized contains the target object. The invention solves the technical problem in the prior art that target-object recognition methods are affected by ambient illumination, resulting in low recognition accuracy and low efficiency.

Description

Method and device for recognizing a target object
Technical field
The present invention relates to the field of image recognition, and in particular to a method and device for recognizing a target object.
Background technique
In the field of automated object recognition and positioning, the introduction of machine vision makes some work stable, reliable, and efficient as well as more intelligent, eliminates the drawbacks of manual operation, and frees workers from dull, repetitive tasks. In practical applications, however, machine vision often produces unstable results for various reasons. In actual production, for example, the environment rarely reaches an ideal state, and large illumination variations strongly affect image processing. In addition, the recognized objects themselves are not highly standardized and exhibit irregularities, which introduces various kinds of interference and lowers the recognition rate. In such cases, more complex image processing methods are often used, which greatly reduces processing speed and makes it difficult to meet the demands of stations with high efficiency requirements. During the assembly of a four-way valve, the front and back sides of a stiffening plate must be distinguished; the front side carries a stamped "L" mark. However, the manufacturing process of the stiffening plate leaves various scratches on its surface, which interfere with recognizing the mark, and because the station space is limited, ambient light cannot be blocked, so variations in ambient illumination substantially reduce the recognition rate.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a method and device for recognizing a target object, so as to at least solve the technical problem in the prior art that target-object recognition methods are affected by ambient illumination, resulting in low recognition accuracy and low efficiency.
According to one aspect of the embodiments of the present invention, a method for recognizing a target object is provided, comprising: obtaining the direction vector of each first edge point in a first identification region of a sample image, wherein the first identification region contains the target object; obtaining the direction vector of each second edge point in a second identification region of an image to be recognized; obtaining a similarity measure for each second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region; and obtaining a recognition result for the image to be recognized based on the similarity measures of the second edge points, wherein the recognition result characterizes whether the image to be recognized contains the target object.
According to another aspect of the embodiments of the present invention, a device for recognizing a target object is also provided, comprising: a first obtaining module, configured to obtain the direction vector of each first edge point in a first identification region of a sample image, wherein the first identification region contains the target object; a second obtaining module, configured to obtain the direction vector of each second edge point in a second identification region of an image to be recognized; a first processing module, configured to obtain a similarity measure for each second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region; and a second processing module, configured to obtain a recognition result for the image to be recognized based on the similarity measures of the second edge points, wherein the recognition result characterizes whether the image to be recognized contains the target object.
According to another aspect of the embodiments of the present invention, a storage medium is also provided. The storage medium comprises a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the above method for recognizing a target object.
According to another aspect of the embodiments of the present invention, a processor is also provided. The processor is configured to run a program, wherein, when running, the program executes the above method for recognizing a target object.
In the embodiments of the present invention, the direction vector of each first edge point in the first identification region of the sample image is obtained, and the direction vector of each second edge point in the second identification region of the image to be recognized is obtained; a similarity measure is then obtained for each second edge point from its direction vector and the direction vectors of all first edge points in the first identification region, so that the recognition result of the image to be recognized can be obtained from the similarity measures. Compared with the prior art, obtaining the direction vectors of the first edge points from the first identification region, obtaining the direction vectors of the second edge points from the second identification region, and computing a similarity measure for each second edge point from these direction vectors avoids the influence of occlusion, clutter, and nonlinear illumination changes. This achieves the technical effects of improving the recognition rate, increasing recognition speed, and making the vision system more robust, thereby solving the technical problem in the prior art that target-object recognition methods are affected by ambient illumination, resulting in low recognition accuracy and low efficiency.
Detailed description of the invention
The drawings described herein are provided for further understanding of the present invention and constitute part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of a method for recognizing a target object according to an embodiment of the present invention; and
Fig. 2 is a schematic diagram of a device for recognizing a target object according to an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the scope protected by the present invention.
It should be noted that the terms "first", "second", etc. in the description, claims, and drawings of this specification are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to that process, method, product, or device.
Embodiment 1
According to an embodiment of the present invention, an embodiment of a method for recognizing a target object is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, the steps shown or described may, in some cases, be executed in an order different from that shown herein.
Fig. 1 is a flowchart of a method for recognizing a target object according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: obtain the direction vector of each first edge point in a first identification region of a sample image, wherein the first identification region contains the target object.
Specifically, the sample image may be an image of a stiffening plate, and the target object may be the "L" mark on the front side of the stiffening plate. In the sample image, the target object can be marked manually. The front and back sides of the stiffening plate are distinguished by identifying whether an image contains the "L" mark; therefore, only a front-side image of the stiffening plate is used as the sample image.
To avoid processing the entire sample image in subsequent steps and to improve recognition efficiency, the first identification region can be determined in advance according to the position of the "L" mark on the front side of the stiffening plate, and the sample image is cropped to obtain the region where the "L" mark is located. For an image to be recognized, if the "L" mark can be identified in the corresponding region, the image can be determined to be a front-side image of the stiffening plate, and therefore the front side of the plate; if the "L" mark cannot be identified in the corresponding region, the image can be determined to be a back-side image, and therefore the back side of the plate.
In an optional scheme, for the sample image, the region of interest containing the "L" mark is extracted from the sample image, and the direction vector of each first edge point is then obtained by an edge filter. After the direction vectors of the first edge points are obtained, the template can be expressed as a point set p_i = (r_i, c_i)^T with associated direction vectors d_i = (t_i, u_i)^T, i = 1, ..., n.
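As an illustration of this step, the following Python sketch extracts template edge points and their direction vectors; the patent does not specify the edge filter, so the central-difference derivative and the magnitude threshold used here are assumptions.

```python
def template_edge_points(img, mag_thresh=1.0):
    """Collect template edge points p_i = (r_i, c_i) and direction
    vectors d_i = (t_i, u_i) from a grayscale image given as a list
    of rows. Points are kept where the gradient magnitude reaches
    mag_thresh (filter and threshold are assumptions, not the
    patent's exact edge filter)."""
    points, dirs = [], []
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            t = (img[r][c + 1] - img[r][c - 1]) / 2.0  # column derivative
            u = (img[r + 1][c] - img[r - 1][c]) / 2.0  # row derivative
            if (t * t + u * u) ** 0.5 >= mag_thresh:
                points.append((r, c))
                dirs.append((t, u))
    return points, dirs
```

On a synthetic vertical step edge, every extracted direction vector points horizontally, as expected for a vertical contour.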
Step S104 obtains the direction vector of each second edge point in the second identification region in images to be recognized.
Specifically, the image to be recognized may be an image of a stiffening plate captured by an image-capture device; it may show the front or the back side. To improve recognition efficiency, the second identification region can be determined from the first identification region; it can be identical to the first identification region, or it can be adjusted according to the recognition result.
In an optional scheme, to ensure that the similarity measure is not affected by occlusion and clutter in subsequent processing, the image to be recognized is filtered with the same edge filter as the template, yielding the direction vector of each second edge point, e_{r,c} = (v_{r,c}, w_{r,c})^T.
Step S106: obtain a similarity measure for each second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region.
Specifically, the similarity measure characterizes the degree to which the image to be recognized matches the sample image. For example, if the "L" mark in the image to be recognized is 50% occluded, the similarity measure will not exceed 0.5. The larger the similarity measure, the better the fit, and the more likely it is that the image to be recognized contains the "L" mark.
It should be noted that existing similarity measures compute the sum of absolute differences (SAD) or the sum of squared differences (SSD) between the sample image and the image to be recognized, or alternatively the normalized cross-correlation (NCC). The first two schemes can only be used when the illumination does not change, and NCC is applicable only under linear illumination changes; none of them is robust to occlusion, clutter, and nonlinear illumination changes.
To solve the above problems, the present invention proceeds as follows: for any point q = (r, c)^T in the image to be recognized, the dot product of the direction vector of each first edge point in the template with the direction vector of the corresponding image point is computed, and the average of these dot products gives the similarity measure s at that point. Since direction vectors are affected by image brightness, a similarity measure s' can be computed from the normalized direction vectors. Further, to accommodate nonlinear illumination changes, the absolute value of s' can be taken, giving the similarity measure s'' that is ultimately used to determine the recognition result.
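The three measures s, s', and s'' described above can be sketched as follows in Python; the template and image direction vectors are passed as already-matched pairs, which is a simplification of the per-point template sweep.

```python
def similarity_measures(template_dirs, image_dirs):
    """Return (s, s_prime, s_pp): the raw mean dot product, the mean
    of normalized dot products, and its absolute value. Pairs with a
    zero-length vector contribute 0, so occluded or flat regions
    simply lower the score instead of breaking the computation."""
    n = len(template_dirs)
    s = s_prime = 0.0
    for (t1, u1), (t2, u2) in zip(template_dirs, image_dirs):
        dot = t1 * t2 + u1 * u2
        s += dot / n
        norm = ((t1 * t1 + u1 * u1) * (t2 * t2 + u2 * u2)) ** 0.5
        if norm > 0:
            s_prime += dot / (norm * n)
    return s, s_prime, abs(s_prime)
```

With identical direction fields s'' = 1; with a globally inverted image (a contrast reversal, i.e., a nonlinear illumination change) s' = -1 but s'' is still 1, which is exactly the invariance the absolute value provides.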
Step S108: obtain a recognition result for the image to be recognized based on the similarity measures of the second edge points, wherein the recognition result characterizes whether the image to be recognized contains the target object.
Specifically, to obtain the final recognition result, the similarity measure of every second edge point in the image to be recognized is compared with a threshold. If a similarity measure exceeds the threshold, the image to be recognized is determined to be consistent with the sample image and to contain the "L" mark, i.e., it is a front-side image of the stiffening plate. If all similarity measures are below the threshold, the image to be recognized is determined to be inconsistent with the sample image and not to contain the "L" mark, i.e., it is a back-side image of the stiffening plate.
With the above embodiment of the present invention, the direction vector of each first edge point in the first identification region of the sample image is obtained, and the direction vector of each second edge point in the second identification region of the image to be recognized is obtained; a similarity measure is then obtained for each second edge point from its direction vector and the direction vectors of all first edge points in the first identification region, so that the recognition result of the image to be recognized can be obtained from the similarity measures. Compared with the prior art, this avoids the influence of occlusion, clutter, and nonlinear illumination changes, achieves the technical effects of improving the recognition rate, increasing recognition speed, and making the vision system more robust, and thereby solves the technical problem in the prior art that target-object recognition methods are affected by ambient illumination, resulting in low recognition accuracy and low efficiency.
Optionally, in the above embodiment of the present invention, before the similarity measure of each second edge point is obtained in step S106 according to its direction vector and the direction vectors of all first edge points in the first identification region, the method further comprises: applying a translation transformation to the direction vector of each first edge point to obtain a transformed vector for each first edge point; and obtaining the similarity measure of each second edge point according to its direction vector and the transformed vectors of the first edge points.
Specifically, an affine transformation can be applied to the template, moving it across all points of the image to be recognized; that is, a translation transformation is applied to the direction vectors of the first edge points, the transformed template is compared with the image to be recognized, and a similarity measure is computed at each edge-point position. The translation transformation can be realized by multiplying a translation matrix with the direction vectors.
Optionally, in the above embodiment of the present invention, obtaining the similarity measure of each second edge point according to its direction vector and the transformed vectors of the first edge points comprises: obtaining the dot product of the direction vector of each second edge point with the transformed vector of each first edge point; and obtaining the average of the dot products of the direction vector of each second edge point with the transformed vectors of all first edge points in the first identification region, which yields the similarity measure of that second edge point.
Specifically, for any point q = (r, c)^T in the image to be recognized, the dot product of the transformed direction vector of each first edge point in the template with the direction vector of the corresponding point in the image to be recognized is computed, and the average of these dot products is taken as the similarity measure s of the transformed template at that point:
s = (1/n) · Σ_{i=1}^{n} ⟨d'_i, e_{q+p'_i}⟩
where n is the number of first edge points, d'_i is the transformed vector of the i-th first edge point, and e_{q+p'_i} is the direction vector of the corresponding point in the image.
Optionally, in the above embodiment of the present invention, before the dot product of the direction vector of each second edge point with the transformed vector of each first edge point is obtained, the method further comprises: normalizing the direction vector of each second edge point and the transformed vector of each first edge point to obtain a normalized vector for each second edge point and for each first edge point; and obtaining the dot product of the normalized vector of each second edge point with the normalized vector of each first edge point.
Specifically, since the obtained direction vectors are affected by image brightness, the similarity measure s is not completely immune to illumination changes. To improve this, the direction vectors can be normalized and a new similarity measure s' computed:
s' = (1/n) · Σ_{i=1}^{n} ⟨d'_i, e_{q+p'_i}⟩ / (‖d'_i‖ · ‖e_{q+p'_i}‖)
where ‖·‖ denotes the norm of a vector.
Optionally, in the above embodiment of the present invention, obtaining the average of the dot products of the direction vector of each second edge point with the transformed vectors of all first edge points in the first identification region to obtain the similarity measure of each second edge point comprises: obtaining the average of the dot products of the normalized vector of each second edge point with the normalized vectors of the first edge points, which yields an initial similarity measure for each second edge point; and taking the absolute value of the initial similarity measure, which yields the similarity measure of each second edge point.
Specifically, to accommodate nonlinear illumination changes, the absolute value of the previously computed similarity measure s' is taken to obtain the final similarity measure s'', which also remains unaffected by occlusion and clutter:
s'' = | (1/n) · Σ_{i=1}^{n} ⟨d'_i, e_{q+p'_i}⟩ / (‖d'_i‖ · ‖e_{q+p'_i}‖) |
where |·| denotes the absolute value.
It should be noted that the similarity measure s'' is a value no greater than 1 that indicates the percentage to which the sample image and the image to be recognized agree: s'' = 1 indicates that the template fits the image perfectly, and the closer s'' is to 0, the more the two disagree.
Optionally, in the above embodiment of the present invention, step S108 of obtaining the recognition result of the image to be recognized based on the similarity measures of the second edge points comprises: judging whether the second identification region contains a second edge point whose similarity measure is greater than or equal to a preset threshold; if such a second edge point exists, judging whether its similarity measure is a local maximum within a preset area, wherein the second edge point is the center point of the preset area, and a local maximum means that the similarity measure of that second edge point is greater than the similarity measures of the other second edge points in the preset area; if the similarity measure of the second edge point is a local maximum within the preset area, determining that the recognition result is that the image to be recognized contains the target object; and if no second edge point in the second identification region has a similarity measure greater than or equal to the preset threshold, or the similarity measure of the second edge point is not a local maximum within the preset area, determining that the recognition result is that the image to be recognized does not contain the target object.
Specifically, the preset threshold s''_min can be a threshold, set in advance according to experiments, for determining whether the image to be recognized is consistent with the sample image.
In an optional scheme, the similarity measure s'' of each second edge point in the second identification region can be computed in turn and compared with the threshold s''_min. If, at some second edge point position in the image to be recognized, s'' > s''_min and s'' is also a local maximum, it can be determined that the "L" mark has been recognized, i.e., the image to be recognized contains the "L" mark and is a front-side image of the stiffening plate. If, at no second edge point position in the image to be recognized, s'' > s''_min holds as a local maximum, it can be determined that the "L" mark has not been recognized, i.e., the image to be recognized does not contain the "L" mark and is a back-side image of the stiffening plate.
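The threshold-plus-local-maximum decision can be sketched as follows; the 3×3 neighborhood stands in for the "preset area", whose actual size the patent leaves open.

```python
def is_match(scores, r, c, s_min):
    """True if scores[r][c] reaches the threshold s_min and is a
    strict local maximum in its 3x3 neighborhood (the neighborhood
    size is an assumption; the patent only requires a preset area
    centered on the point)."""
    s = scores[r][c]
    if s < s_min:
        return False
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and scores[r + dr][c + dc] >= s:
                return False
    return True
```

The strict comparison (`>=` rejects ties) matches the requirement that the center score be greater than every other score in the area.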
Optionally, in the above embodiment of the present invention, after the direction vector of each second edge point in the second identification region of the image to be recognized is obtained in step S104, the method further comprises: obtaining the similarity measure of any one second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region; judging whether the similarity measure of that second edge point is greater than or equal to the preset threshold; if it is, judging whether it is a local maximum within the preset area; if the similarity measure is below the preset threshold, or is not a local maximum within the preset area, taking the next second edge point as the current second edge point and returning to the step of obtaining its similarity measure from its direction vector and the direction vectors of all first edge points, until the next second edge point is the last second edge point in the second identification region; if the similarity measure of the current second edge point is a local maximum within the preset area, determining that the recognition result is that the image to be recognized contains the target object; and if the similarity measure of the last second edge point is below the preset threshold or is not a local maximum within the preset area, determining that the recognition result is that the image to be recognized does not contain the target object.
Specifically, since computing the similarity measures is expensive, recognition speed can be improved by starting from the first second edge point and computing the similarity measure s_j at each second edge point in turn. If the computed s_j satisfies s_j > s''_min and s_j is also a local maximum, the similarity measures of all remaining second edge points need not be computed, and it is determined that the "L" mark has been recognized, i.e., the image to be recognized contains the "L" mark and is a front-side image of the stiffening plate. If s_j does not satisfy s_j > s''_min, or is not a local maximum, the similarity measure of the next second edge point is computed; if the similarity measure of the last second edge point also fails to satisfy s_j > s''_min or is not a local maximum, it is determined that the "L" mark has not been recognized, i.e., the image to be recognized does not contain the "L" mark and is a back-side image of the stiffening plate.
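The early-termination scan can be sketched as follows; `point_score` is a hypothetical callback that computes s_j at a candidate point and reports whether it is a local maximum.

```python
def scan_until_match(candidates, point_score, s_min):
    """Evaluate candidate positions in order and stop at the first one
    whose similarity exceeds s_min and is a local maximum. Returns the
    matching position and how many candidates were evaluated, or
    (None, len(candidates)) when no candidate qualifies."""
    for j, q in enumerate(candidates):
        s_j, local_max = point_score(q)
        if s_j > s_min and local_max:
            return q, j + 1  # remaining candidates are skipped
    return None, len(candidates)
```

Stopping at the first qualifying point is what saves the cost of scoring every remaining second edge point.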
Optionally, in the above embodiment of the present invention, the image to be recognized is any one layer image of an image pyramid built from the original image, wherein each layer of the image pyramid has a different resolution, and each layer has its own second identification region and its own preset threshold.
Specifically, the original image can be an image captured of the stiffening plate. To search for the target object efficiently, an image pyramid can be created from the original image, and the direction vectors of the second edge points are obtained in every layer of the pyramid. The above recognition process can be carried out on every layer, so each layer of the pyramid is treated as an image to be recognized. Since the resolutions of the layers differ, to ensure the recognition rate, the second identification region extracted from each layer differs, and the preset threshold used to obtain the recognition result also differs.
Optionally, in the above embodiment of the present invention, after the recognition result of the image to be recognized is obtained in step S108 based on the similarity measures of the second edge points, the method further comprises: when the recognition result of any one layer image is that it contains the target object, taking the next layer image as the image to be recognized, obtaining the second identification region corresponding to the next layer according to the position of the target object, and returning to the step of obtaining the direction vector of each second edge point in the second identification region, thereby obtaining the recognition result of the next layer image, until the image to be recognized is the last layer of the image pyramid; when the recognition result of the last layer image is that it contains the target object, determining that the original image contains the target object; and when the recognition result of any one layer image is that it does not contain the target object, determining that the original image does not contain the target object.
In an optional scheme, since the higher layers of the image pyramid have lower resolution, the threshold s''_min can be set lower in the higher layers. The second identification region of the next layer can be determined from the position where the "L" mark was found in the higher layer image, and the similarity measures are then computed within that region. This continues downward layer by layer until the "L" mark is no longer found or the bottom of the image pyramid is reached, so that an accurate "L" mark can be located efficiently.
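The coarse-to-fine pyramid search can be sketched as follows; the 2× mean-pooling construction and the way a coarse match position is scaled into the next layer's search region are assumptions, since the patent does not fix either detail.

```python
def build_pyramid(img, levels):
    """Level 0 is the original image; each further level halves the
    resolution by 2x2 mean pooling (an assumed construction)."""
    pyr = [img]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        pyr.append([[(prev[2 * r][2 * c] + prev[2 * r][2 * c + 1]
                      + prev[2 * r + 1][2 * c] + prev[2 * r + 1][2 * c + 1]) / 4.0
                     for c in range(w)] for r in range(h)])
    return pyr

def refine_region(coarse_pos, margin=2):
    """Map a match found on a coarser layer to a small search window
    (r0, c0, r1, c1) on the next finer layer; the margin is an
    assumption."""
    r, c = coarse_pos
    return (2 * r - margin, 2 * c - margin, 2 * r + margin, 2 * c + margin)
```

Restricting each finer layer to a small window around the coarse match is what keeps the full-resolution search cheap.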
Optionally, in the above embodiment of the present invention, step S102, obtaining the direction vector of each first edge point in the first identification region of the sample image, includes: filtering the first identification region with an edge filter to obtain the direction vector of each first edge point, where no threshold segmentation is performed during the filtering of the first identification region. Step S104, obtaining the direction vector of each second edge point in the second identification region of the image to be recognized, includes: filtering the second identification region with the edge filter to obtain the direction vector of each second edge point.
Specifically, since threshold segmentation is strongly affected by illumination, no threshold segmentation is applied when edge filtering is performed on the first identification region extracted from the sample image; only the direction vector of each first edge point is obtained. Likewise, no threshold segmentation is applied when edge filtering is performed on the second identification region extracted from the image to be recognized; only the direction vector of each second edge point is obtained.
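As a sketch of what edge filtering without threshold segmentation can look like, the following computes Sobel-style gradient direction vectors for every pixel and never binarizes them. The function name and the NumPy-only convolution are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def sobel_direction_vectors(image):
    """Return a per-pixel direction vector (row derivative, col derivative).

    No threshold segmentation is applied: every pixel keeps its raw
    gradient vector, avoiding illumination-dependent binarization errors.
    """
    img = np.asarray(image, dtype=np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal derivative
    ky = kx.T                                            # vertical derivative
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            window = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.stack([gy, gx], axis=-1)  # (row, col) component ordering
```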
Through the above scheme, a fast and stable image-processing method is provided. Even when the surface of the reinforcing plate is scratched (cluttered), the mark is unclear, or the ambient illumination varies greatly, the "L" mark on the stiffening plate can be located quickly and stably, distinguishing the front and back sides of the stiffening plate, improving the recognition rate, increasing the recognition speed, and making the vision system more robust.
Embodiment 2
According to an embodiment of the present invention, an embodiment of an identification device for a target object is provided.
Fig. 2 is a schematic diagram of an identification device for a target object according to an embodiment of the present invention. As shown in Fig. 2, the device includes:
a first obtaining module 22, configured to obtain the direction vector of each first edge point in a first identification region of a sample image, where the first identification region contains the target object.
Specifically, the sample image may be an image of a stiffening plate, and the target object may be the "L" mark on the front of the stiffening plate. In the sample image, the target object can be marked manually. Since the front and back sides of the reinforcing part are distinguished by identifying whether the image contains the "L" mark, only a front image of the reinforcing part can be used as the sample image.
To avoid processing the entire sample image and thereby improve recognition efficiency, the first identification region can be determined in advance from the position of the "L" mark on the front of the reinforcing part, and the sample image cropped to it, i.e., the region where the "L" mark is located. For an image to be recognized, if the "L" mark can be identified in the corresponding region, the image is determined to be a front image of the reinforcing part, and the front side can be further determined; if the "L" mark is not identified in the corresponding region, the image is determined to be a back image of the reinforcing part, and the back side can be further determined.
In one optional scheme, for the sample image, the region of interest containing the "L" mark is extracted from the sample image, and the direction vector of each first edge point is then obtained with an edge filter. After the direction vectors of the first edge points are obtained, the template can be expressed as a point set p_i = (r_i, c_i)^T with direction vectors d_i = (t_i, u_i)^T, i = 1, …, n.
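A minimal sketch of reducing the region of interest to such a template is shown below. The function name `create_template`, the central-difference gradients standing in for the edge filter, and the `keep_fraction` quantile rule for selecting edge points are all illustrative assumptions.

```python
import numpy as np

def create_template(model_region, keep_fraction=0.2):
    """Reduce a model image region to edge points p_i = (r_i, c_i)^T
    with direction vectors d_i = (t_i, u_i)^T."""
    gy, gx = np.gradient(np.asarray(model_region, dtype=np.float64))
    mag = np.hypot(gy, gx)
    # Keep the strongest keep_fraction of gradients as template edge points.
    thresh = np.quantile(mag, 1.0 - keep_fraction)
    rows, cols = np.nonzero(mag >= thresh)
    # Coordinates relative to the region centre, so the template can be
    # translated to any candidate point q in a search image.
    cr = (model_region.shape[0] - 1) / 2.0
    cc = (model_region.shape[1] - 1) / 2.0
    points = np.stack([rows - cr, cols - cc], axis=1)
    directions = np.stack([gy[rows, cols], gx[rows, cols]], axis=1)
    return points, directions
```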
a second obtaining module 24, configured to obtain the direction vector of each second edge point in a second identification region of an image to be recognized.
Specifically, the image to be recognized may be an image of the reinforcing part captured by an imaging device, showing either the front or the back. To improve recognition efficiency, the second identification region can be determined from the first identification region; it may be identical to the first identification region, or adjusted according to the recognition result.
In one optional scheme, for the image to be recognized, to ensure that the similarity measure is not affected by occlusion and clutter in subsequent processing, the image is filtered with the same edge filter as the template, giving the direction vector of each second edge point e_{r,c} = (v_{r,c}, w_{r,c})^T.
a first processing module 26, configured to obtain the similarity measure of each second edge point from the direction vector of the second edge point and the direction vectors of all first edge points in the first identification region.
Specifically, the similarity measure characterizes the percentage of the image to be recognized that matches the sample image. For example, if 50% of the "L" mark in the image to be recognized is occluded, the similarity measure does not exceed 0.5. The larger the similarity measure, the better the match, and hence the more likely the image to be recognized contains the "L" mark.
It should be noted that existing similarity measures compute the sum of absolute differences (SAD) or the sum of squared differences (SSD) between the sample image and the image to be recognized, or alternatively the normalized cross-correlation (NCC). The first two are usable only when the illumination does not change, and the last tolerates only linear illumination changes; none of them is robust to occlusion, clutter, and nonlinear illumination variation.
To solve the above problems, the present invention proceeds as follows. For any point q = (r, c)^T in the image to be recognized, the dot products of the direction vectors of all first edge points in the template with the direction vectors of the corresponding image points are computed and averaged, giving the similarity measure s of that point. Since direction vectors are affected by image brightness, the similarity measure s′ can be computed from the normalized direction vectors. Further, to adapt to nonlinear illumination changes, the absolute value can be taken, giving the similarity measure s″ that is ultimately used to determine the recognition result.
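The three measures s, s′ and s″ can be sketched as follows for a single candidate point q. The function name and (n, 2) array layout are assumptions, and the per-term absolute value follows the usual construction of this kind of measure.

```python
import numpy as np

def similarity_measure(template_dirs, image_dirs):
    """Compute s, s' and s'' for one candidate point q.

    template_dirs holds the template direction vectors d_i and image_dirs
    the direction vectors e_{q+p_i} of the corresponding image points;
    both are (n, 2) arrays.
    """
    d = np.asarray(template_dirs, dtype=np.float64)
    e = np.asarray(image_dirs, dtype=np.float64)
    dots = np.sum(d * e, axis=1)
    s = dots.mean()                                   # raw dot-product average
    norms = np.linalg.norm(d, axis=1) * np.linalg.norm(e, axis=1)
    norms[norms == 0] = 1.0                           # guard against zero vectors
    s_prime = (dots / norms).mean()                   # invariant to brightness scaling
    s_double_prime = (np.abs(dots) / norms).mean()    # also tolerates contrast reversal
    return s, s_prime, s_double_prime
```

Because each normalized term lies in [-1, 1], s″ lies in [0, 1], which is what makes a fixed threshold such as s″min meaningful across images.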
a second processing module 28, configured to obtain the recognition result of the image to be recognized from the similarity measures of the second edge points, where the recognition result characterizes whether the image to be recognized contains the target object.
Specifically, to obtain the final recognition result, the similarity measures of all second edge points in the image to be recognized can be compared with a threshold. When a similarity measure exceeds the threshold, the image to be recognized is determined to be consistent with the sample image and to contain the "L" mark, i.e., it is a front image of the reinforcing part. If all similarity measures are below the threshold, the image to be recognized is determined to be inconsistent with the sample image and not to contain the "L" mark, i.e., it is a back image of the reinforcing part.
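The threshold-plus-local-maximum decision described here and in the claims can be sketched as follows. The function name `recognize` and the 3×3 neighbourhood are illustrative assumptions (the claims only require some preset region centred on the edge point).

```python
import numpy as np

def recognize(similarity_map, threshold=0.8):
    """Scan a map of s'' values over the second identification region.

    Report the target present if some point reaches the preset threshold
    and is the maximum of its 3x3 neighbourhood; otherwise absent.
    """
    sm = np.asarray(similarity_map, dtype=np.float64)
    h, w = sm.shape
    for r in range(h):
        for c in range(w):
            if sm[r, c] < threshold:
                continue
            patch = sm[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if sm[r, c] >= patch.max():  # local maximum of its neighbourhood
                return True, (r, c)
    return False, None
```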
With the above embodiment of the present invention, the direction vector of each first edge point in the first identification region of the sample image is obtained, the direction vector of each second edge point in the second identification region of the image to be recognized is obtained, and the similarity measure of each second edge point is then obtained from its direction vector and the direction vectors of all first edge points in the first identification region, so that the recognition result of the image to be recognized can be obtained from the similarity measures. Compared with the prior art, obtaining the direction vectors of the first and second edge points from the two identification regions and computing the similarity measures from them avoids the influence of occlusion, clutter, and nonlinear illumination variation, achieving the technical effects of improving the recognition rate, increasing the recognition speed, and making the vision system more robust, and thereby solving the technical problem in the prior art that target-object recognition methods are affected by ambient illumination, resulting in low recognition accuracy and low efficiency.
Embodiment 3
According to an embodiment of the present invention, an embodiment of a storage medium is provided. The storage medium includes a stored program, where, when the program runs, the device on which the storage medium is located is controlled to execute the target-object recognition method of Embodiment 1 above.
Embodiment 4
According to an embodiment of the present invention, an embodiment of a processor is provided. The processor is configured to run a program, where the program, when running, executes the target-object recognition method of Embodiment 1 above.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be realized in other ways. The device embodiments described above are merely illustrative. For example, the division into units may be a division by logical function; in actual implementation there may be other divisions, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention.

Claims (13)

1. A method for recognizing a target object, comprising:
obtaining a direction vector of each first edge point in a template, wherein the template is a first identification region extracted from a sample image and the template contains a target object;
obtaining a direction vector of each second edge point in a second identification region of an image to be recognized;
obtaining a similarity measure of the image to be recognized according to the direction vector of each first edge point and the direction vector of the corresponding second edge point, wherein the similarity measure characterizes the similarity between the image to be recognized and the template; and
obtaining a recognition result of the image to be recognized based on the similarity measure, wherein the recognition result characterizes whether the image to be recognized contains the target object.
2. The method according to claim 1, wherein, before obtaining the similarity measure of the image to be recognized according to the direction vector of each first edge point and the direction vector of the corresponding second edge point, the method further comprises:
performing a translation transformation on the direction vector of each first edge point to obtain a transformed vector of each first edge point; and
obtaining the similarity measure according to the transformed vector of each first edge point and the direction vector of the corresponding second edge point.
3. The method according to claim 2, wherein obtaining the similarity measure according to the transformed vector of each first edge point and the direction vector of the corresponding second edge point comprises:
obtaining a dot product of the transformed vector of each first edge point and the direction vector of the corresponding second edge point, to obtain a dot product corresponding to each first edge point; and
obtaining an average of the dot products corresponding to all first edge points in the template, to obtain the similarity measure.
4. The method according to claim 3, wherein, before obtaining the dot product corresponding to each first edge point, the method further comprises:
normalizing the transformed vector of each first edge point and normalizing the direction vector of the corresponding second edge point, to obtain a normalized vector of each first edge point and a normalized vector of the corresponding second edge point; and
obtaining a dot product of the normalized vector of each first edge point and the normalized vector of the corresponding second edge point, to obtain the dot product corresponding to each first edge point.
5. The method according to claim 4, wherein obtaining the average of the dot products corresponding to all first edge points in the template, to obtain the similarity measure, comprises:
obtaining an absolute value of the dot product corresponding to each first edge point, to obtain an absolute value corresponding to each first edge point; and
obtaining an average of the absolute values corresponding to the first edge points, to obtain the similarity measure.
6. The method according to claim 1, wherein obtaining the recognition result of the image to be recognized based on the similarity measure comprises:
determining whether the second identification region contains a second edge point whose similarity measure is greater than or equal to a preset threshold;
if the second identification region contains a second edge point whose similarity measure is greater than or equal to the preset threshold, determining whether the similarity measure of that second edge point is a local maximum in a preset region, wherein the second edge point is the center of the preset region, and the local maximum indicates that the similarity measure of the second edge point is greater than the similarity measures of the other second edge points in the preset region;
if the similarity measure of the second edge point is the local maximum in the preset region, determining that the recognition result is that the image to be recognized contains the target object; and
if the second identification region contains no second edge point whose similarity measure is greater than or equal to the preset threshold, or the similarity measure of the second edge point is not the local maximum in the preset region, determining that the recognition result is that the image to be recognized does not contain the target object.
7. The method according to claim 1, wherein, after obtaining the direction vector of each second edge point in the second identification region of the image to be recognized, the method further comprises:
obtaining a similarity measure of any one second edge point according to the direction vector of the second edge point in the second identification region and the direction vectors of all first edge points in the first identification region;
determining whether the similarity measure of the second edge point is greater than or equal to a preset threshold;
if the similarity measure of the second edge point is greater than or equal to the preset threshold, determining whether the similarity measure of the second edge point is a local maximum in a preset region;
if the similarity measure of the second edge point is less than the preset threshold, or the similarity measure of the second edge point is not the local maximum in the preset region, taking the next second edge point as the second edge point under consideration and returning to the step of obtaining the similarity measure of the second edge point according to its direction vector and the direction vectors of all first edge points in the first identification region, until the next second edge point is the last second edge point in the second identification region;
if the similarity measure of the second edge point is the local maximum in the preset region, determining that the recognition result is that the image to be recognized contains the target object; and
if the similarity measure of the last second edge point is less than the preset threshold, or the similarity measure of the last second edge point is not the local maximum in the preset region, determining that the recognition result is that the image to be recognized does not contain the target object.
8. The method according to claim 1, wherein the image to be recognized is any layer of an image pyramid built from an original image, wherein the resolution of each layer of the image pyramid differs, the second identification region corresponding to each layer differs, and the preset threshold corresponding to each layer differs.
9. The method according to claim 8, wherein, after obtaining the recognition result of the image to be recognized based on the similarity measure of each second edge point, the method further comprises:
when the recognition result of any layer is that the image to be recognized contains the target object, taking the next layer as the image to be recognized, obtaining the second identification region of the next layer according to the position of the target object, and returning to the step of obtaining the direction vector of each second edge point in the second identification region of the image to be recognized, to obtain the recognition result of the next layer, until the image to be recognized is the last layer of the image pyramid;
when the recognition result of the last layer is that any layer contains the target object, determining that the original image contains the target object; and
when the recognition result of any layer is that it does not contain the target object, determining that the original image does not contain the target object.
10. The method according to claim 1, wherein
obtaining the direction vector of each first edge point in the first identification region of the sample image comprises: filtering the first identification region with an edge filter to obtain the direction vector of each first edge point, wherein no threshold segmentation is performed during the filtering of the first identification region; and
obtaining the direction vector of each second edge point in the second identification region of the image to be recognized comprises: filtering the second identification region with the edge filter to obtain the direction vector of each second edge point.
11. An identification device for a target object, comprising:
a first obtaining module, configured to obtain a direction vector of each first edge point in a first identification region of a sample image, wherein the first identification region contains a target object;
a second obtaining module, configured to obtain a direction vector of each second edge point in a second identification region of an image to be recognized;
a first processing module, configured to obtain a similarity measure of each second edge point according to the direction vector of the second edge point and the direction vectors of all first edge points in the first identification region; and
a second processing module, configured to obtain a recognition result of the image to be recognized based on the similarity measures of the second edge points, wherein the recognition result characterizes whether the image to be recognized contains the target object.
12. A storage medium, comprising a stored program, wherein, when the program runs, the device on which the storage medium is located is controlled to execute the target-object recognition method according to any one of claims 1 to 9.
13. A processor, configured to run a program, wherein the program, when running, executes the target-object recognition method according to any one of claims 1 to 10.
CN201810835895.9A 2018-07-26 2018-07-26 Target object identification method and device Active CN109101982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810835895.9A CN109101982B (en) 2018-07-26 2018-07-26 Target object identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810835895.9A CN109101982B (en) 2018-07-26 2018-07-26 Target object identification method and device

Publications (2)

Publication Number Publication Date
CN109101982A true CN109101982A (en) 2018-12-28
CN109101982B CN109101982B (en) 2022-02-25

Family

ID=64847715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810835895.9A Active CN109101982B (en) 2018-07-26 2018-07-26 Target object identification method and device

Country Status (1)

Country Link
CN (1) CN109101982B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090304285A1 (en) * 2006-07-17 2009-12-10 Panasonic Corporation Image processing device and image processing method
CN105261012A (en) * 2015-09-25 2016-01-20 上海瑞伯德智能系统科技有限公司 Template matching method based on Sobel vectors
CN105760842A (en) * 2016-02-26 2016-07-13 北京大学 Station caption identification method based on combination of edge and texture features
CN105930858A (en) * 2016-04-06 2016-09-07 吴晓军 Fast high-precision geometric template matching method enabling rotation and scaling functions
CN106485701A (en) * 2016-09-26 2017-03-08 成都交大光芒科技股份有限公司 Based on the whether anti-loaded detection method of the railway overhead contact system catenary seat of image
CN107671896A (en) * 2017-05-19 2018-02-09 重庆誉鸣科技有限公司 Fast vision localization method and system based on SCARA robots


Also Published As

Publication number Publication date
CN109101982B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN111784685B (en) Power transmission line defect image identification method based on cloud edge cooperative detection
CN108596277B (en) Vehicle identity recognition method and device and storage medium
CN108898047B (en) Pedestrian detection method and system based on blocking and shielding perception
CN110738101A (en) Behavior recognition method and device and computer readable storage medium
CN109359666A (en) A kind of model recognizing method and processing terminal based on multiple features fusion neural network
CN106650615B (en) A kind of image processing method and terminal
CN105005565B (en) Live soles spoor decorative pattern image search method
CN108205676B (en) The method and apparatus for extracting pictograph region
CN109919002B (en) Yellow stop line identification method and device, computer equipment and storage medium
CN106373128B (en) Method and system for accurately positioning lips
CN112418360B (en) Convolutional neural network training method, pedestrian attribute identification method and related equipment
WO2019197021A1 (en) Device and method for instance-level segmentation of an image
CN110543838A (en) Vehicle information detection method and device
CN105426899A (en) Vehicle identification method and device and client side
CN108921162A (en) Licence plate recognition method and Related product based on deep learning
Duan et al. Image classification of fashion-MNIST data set based on VGG network
CN109389105A (en) A kind of iris detection and viewpoint classification method based on multitask
CN109977941A (en) Licence plate recognition method and device
CN115049954B (en) Target identification method, device, electronic equipment and medium
CN111444816A (en) Multi-scale dense pedestrian detection method based on fast RCNN
CN111382638B (en) Image detection method, device, equipment and storage medium
CN115272691A (en) Training method, recognition method and equipment for steel bar binding state detection model
CN113436162B (en) Method and device for identifying weld defects on surface of hydraulic oil pipeline of underwater robot
CN109101982A (en) The recognition methods of target object and device
CN113128308B (en) Pedestrian detection method, device, equipment and medium in port scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant