CN109410229A - Method for identifying the positions and concave/convex faces of multiple lenses - Google Patents

Method for identifying the positions and concave/convex faces of multiple lenses

Info

Publication number
CN109410229A
CN109410229A (application CN201810978342.9A)
Authority
CN
China
Prior art keywords
lens
concave surface
convex surface
multiple targets
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810978342.9A
Other languages
Chinese (zh)
Inventor
姚红兵
谢智烜
范宁
唐旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Yunyang Instrument Equipment Co.,Ltd.
Original Assignee
Nanjing Ke Ren Hai Photoelectric Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Ke Ren Hai Photoelectric Technology Co Ltd
Priority to CN201810978342.9A
Publication of CN109410229A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/64 Analysis of geometric attributes of convexity or concavity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Abstract

The invention discloses a method for identifying the positions and concave/convex faces of multiple lenses. An area light source illuminates the lenses on a platform, so that convex-up and concave-up lenses form images with bright spots at different positions. A target-recognition algorithm, comprising gray-scale conversion, median filtering, binarization, and morphological operations, then identifies the position and face orientation of each lens. The invention comprises an illumination part and an image-recognition part; the device includes a planar light source, a platform holding the lenses, the lens samples, a CCD camera, and a PC. The method has the advantages of fast recognition, a large number of recognizable targets, and simultaneous identification of position and face orientation.

Description

Method for identifying the positions and concave/convex faces of multiple lenses
Technical field
The invention belongs to the field of image recognition, and in particular relates to a method for identifying the positions and concave/convex faces of multiple lenses.
Background technique
Assembly is a downstream stage of production and occupies an important position in manufacturing. On a traditional assembly line, assembly robots are strictly pre-programmed and can only perform a set of fixed motions; such robots are controlled through various sensors and are known as sensor-controlled robots. Before an assembly operation, all motions must be preset, and the positions and orientations of the parts and their packing cases must be fixed very precisely. This requires expensive fixtures or fixing mechanisms, as well as carefully designed dedicated conveyor belts. In practice, for various reasons, part positions are rarely fixed exactly, which causes errors when the assembly robot picks up a part; the manipulator must then adjust its grasping motion dynamically according to the actual position of the workpiece. For small parts such as lenses, even a tiny positioning error may cause the robot to miss the part entirely. At present, most factories still sort lenses manually before assembly, and assembly efficiency is low.
Introducing an image-recognition system into an industrial robot greatly extends the robot's capabilities and range of application, giving it more adaptability in completing its assigned tasks.
Summary of the invention
The purpose of the present invention is to propose a method for identifying lens positions and concave/convex faces. From an image captured by a CCD camera, the method applies a sequence of image-enhancement, filtering, connected-domain identification, and face-orientation judgment steps to identify the exact position and surface state (concave or convex side up) of multiple target lenses.
To achieve this goal, the technical scheme of the invention is as follows:
The multi-target lens position and concave/convex face identification method first obtains the center point of each lens from the acquired lens image. It then takes the two points on the lens's inner contour that lie on the same horizontal line as the center point, one on each side, and computes the distance from each of the two points to the center point. If the point on the left is closer to the center point, the lens is convex side up; if the point on the right is closer, the lens is concave side up. Finally, the position and face orientation of each lens are output.
Further, the specific steps of the recognition method comprise:
Step 1: separate the acquired color image into its R, G, and B channels, then convert it into a gray-scale image;
Step 2: apply median filtering to the gray-scale image;
Step 3: binarize the filtered image;
Step 4: apply morphological operations to the binary image to remove noise, and identify the lens positions and face orientations.
Further, in step 1 the gray-scale image is obtained by the following formulas:
Y=0.299R+0.587G+0.114B
Cb=0.568 (B-Y)+128=-0.172R-0.399G+0.511B+128
Cr=0.713 (R-Y)+128=0.511R-0.428G-0.083B+128.
Further, the median filtering in step 2 is a nonlinear smoothing method based on order statistics, specifically: for each pixel to be processed, a template consisting of its neighboring pixels is selected, the pixels in the template are sorted in ascending order, and the pixel's original value is replaced by the median of the template; a 3*3 template is used.
Further, the median filtering uses the following formula:
I1(i, j) = median{ I(i + m, j + n) : m, n ∈ {-1, 0, 1} }
where I(i, j) is the pixel value of the gray-scale image at the corresponding position and I1(i, j) is the pixel value of the filtered image.
Further, the binarization in step 3 uses a fixed-threshold method.
Further, the fixed-threshold method specifically comprises: first determining, from the analysis of a large amount of data, the most suitable threshold TH for this environment, and then binarizing according to the following formula:
I2(i, j) = 1 if I1(i, j) ≥ TH, and 0 otherwise.
Further, the morphological operations in step 4 include dilation and erosion;
The dilation fills small holes in the image: a 3*3 template is taken and its nine values are combined with a logical OR:
I4(i, j) = OR of the nine values I2(i + m, j + n), m, n ∈ {-1, 0, 1};
The erosion removes isolated, meaningless elements: a 3*3 template is taken and its nine values are combined with a logical AND:
I3(i, j) = AND of the nine values I2(i + m, j + n), m, n ∈ {-1, 0, 1}.
Further, the method in step 4 for identifying lens position and face orientation is:
Connected domains are labeled by a progressive-scan method, each individual connected region is distinguished, and the area S of each connected domain is computed as Si = ni, where ni is the number of white pixels in the i-th connected domain; connected domains whose area is below a fixed value are considered noise and discarded.
Next, the inner contour within each connected domain is used to determine the number of lenses and their position and face-orientation information. During connected-domain labeling, the extreme values Ximin, Ximax, Yimin, Yimax of the X and Y of each inner contour are recorded, and the center of the bounding rectangle of each inner contour is computed as X0 = (Ximin + Ximax)/2, Y0 = (Yimin + Yimax)/2; this center can be taken as an approximation of the lens center point.
After the inner-contour centers are obtained, let the center point of one inner contour be (X0, Y0). Starting from the center point, the first white pixel is searched for on each side while keeping Y = Y0 during the search; when the first white point on the left is met, its coordinate (XL, Y0) is recorded, and similarly (XR, Y0) is recorded on the right. Finally X0-XL is compared with XR-X0: if X0-XL < XR-X0, the lens corresponding to this contour is convex side up; if X0-XL > XR-X0, it is concave side up.
Beneficial effects of the invention:
The method reduces a factory's labor requirements, improves production efficiency, and lowers production cost.
Detailed description of the invention
Fig. 1 shows the lens samples: the one on the left is convex side up, the one on the right is concave side up.
Fig. 2 is the schematic diagram of the device of the invention.
Fig. 3 shows the images formed when the light source illuminates lenses of the two orientations.
Fig. 4 is the pre-processed binary image.
Fig. 5 shows a case in which two lenses are packed closely together.
Fig. 6 is the pre-processed binary image of Fig. 5.
Fig. 7 is the inner-contour image of the lenses in Fig. 5.
Fig. 8 shows the distribution of the inner contours.
Fig. 9 illustrates the coordinates XL, XR, YU, and YD.
In the figures: 1 is the planar beam generated by the area light source, 2 is the platform holding the lenses, 3 are the lenses (of unknown quantity), 4 is the CCD camera, and 5 is the PC.
Specific embodiment
The solution of the present invention consists of two parts: an illumination part and an image-recognition part. In Fig. 2, 1 is the planar beam generated by the area light source, 2 is the platform holding the lenses, 3 are the lenses (of unknown quantity), 4 is the CCD camera, and 5 is the PC; components 1, 2, and 3 form the illumination part, while 4 and 5 form the image-recognition part. The illumination part provides lighting conditions that expose the position and face-orientation characteristics of the lenses. An area light source is placed on the left side of the platform; because of specular reflection, a convex-up lens reflects a large bright spot near its edge on the left side of its center point, while a concave-up lens reflects it on the right side, as shown in Fig. 3. The image-recognition part processes the image provided by the CCD and outputs the lens positions and face orientations. It first obtains the center point of each lens, then takes the two points on the inner contour on either side of the center point, on the same horizontal line, and computes the distance from each of the two points to the center point. If the point on the left is closer to the center point, the lens is convex side up; if the point on the right is closer, it is concave side up. Finally, the PC outputs the position and face orientation of each lens.
The present invention will be further explained below with reference to the attached drawings.
First, suitable illumination is provided. As shown in Fig. 2, lenses of unknown quantity and position are placed on the platform, and the CCD is mounted directly above it. Apart from the area light source on one specified side (the left side in Fig. 2), no other light source or strongly emitting object is placed around the platform. Under these conditions, a convex-up lens reflects a large bright spot near its edge on the left side of its center point, and a concave-up lens on the right side, as shown in Fig. 3, where the lens on the left of Fig. 3 is convex side up and the one on the right is concave side up. The CCD captures the image and sends it to the PC.
The image is first pre-processed on the PC, as follows:
1. Gray-scale conversion. The color image acquired by the camera is separated into its R, G, and B channels and then converted into a gray-scale image using the gray-scale conversion formulas:
Y=0.299R+0.587G+0.114B
Cb=0.568 (B-Y)+128=-0.172R-0.399G+0.511B+128
Cr=0.713 (R-Y)+128=0.511R-0.428G-0.083B+128    (1)
where Y is the luminance (luma), i.e. the gray-scale value; it is built from the RGB input signal by superimposing weighted portions of the R, G, and B components.
Cb reflects the difference between the blue part of the RGB input signal and the RGB luminance value; Cr reflects the difference between the red part of the RGB input signal and the RGB luminance value.
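As an illustration, the channel separation and gray-scale conversion of formula (1) can be sketched in Python as follows. This is only a sketch; the function name and the use of NumPy are my own choices, not part of the patent.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an H x W x 3 uint8 RGB image to Y, Cb, Cr planes
    using the coefficients of formula (1); Y is the gray level."""
    r = img[..., 0].astype(np.float64)
    g = img[..., 1].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    y  = 0.299 * r + 0.587 * g + 0.114 * b           # luminance = gray value
    cb = -0.172 * r - 0.399 * g + 0.511 * b + 128.0  # blue-difference chroma
    cr = 0.511 * r - 0.428 * g - 0.083 * b + 128.0   # red-difference chroma
    return y, cb, cr
```

Only the Y plane is carried forward by the pipeline; Cb and Cr are computed here simply because formula (1) defines them.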
2. Median filtering of the gray-scale image. Real-time image acquisition inevitably introduces noise, in particular interference noise and salt-and-pepper noise, and its presence seriously degrades edge detection. Median filtering is a nonlinear smoothing method based on order statistics that smooths such noise effectively. The median filter of the invention works as follows: for each pixel to be processed, a template consisting of its neighboring pixels is selected, the pixels in the template are sorted in ascending order, and the pixel's original value is replaced by the median of the template. With the 3*3 template used here, the filter is
I1(i, j) = median{ I(i + m, j + n) : m, n ∈ {-1, 0, 1} }
where I(i, j) is the pixel value of the gray-scale image at the corresponding position and I1(i, j) is the pixel value of the filtered image.
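A minimal sketch of the 3*3 median filter described above. The border handling (leaving edge pixels unchanged) is my own assumption; the patent does not specify it.

```python
import numpy as np

def median_filter_3x3(gray):
    """Replace each interior pixel by the median of its 3x3
    neighbourhood; border pixels are left unchanged here."""
    out = gray.copy()
    h, w = gray.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = gray[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.median(window)  # 5th of 9 sorted values
    return out
```

With nine values the median is the fifth sorted element, so a single salt-and-pepper impulse surrounded by uniform pixels is always removed.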
3. Binarization of the median-filtered image. The image processing of the invention requires the gray-scale image to be converted into a 0-1 binary image for the subsequent steps. Because the invention operates in a closed environment, a fixed-threshold method is used: the most suitable threshold TH for this environment (the one that best separates the pixel values of foreground and background) is first determined from the analysis of a large amount of data, and binarization is then performed according to
I2(i, j) = 1 if I1(i, j) ≥ TH, and 0 otherwise,
where I2(i, j) is the pixel value of the binary image at the corresponding position.
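The fixed-threshold binarization can be sketched as below. Mapping foreground to 1 follows the 0-1 binary image mentioned above; the threshold TH itself must be calibrated offline for the specific environment, as the text describes.

```python
import numpy as np

def binarize(gray, th):
    """Fixed-threshold binarisation: pixels at or above th become 1
    (white foreground), all others become 0 (black background)."""
    return (gray >= th).astype(np.uint8)
```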
4. Morphological operations on the binary image. The morphological operations of the invention compensate for the shortcomings of median filtering. There are four operations: erosion, dilation, opening, and closing. Erosion removes isolated, meaningless elements: a 3*3 template is taken and its nine values are combined with a logical AND,
I3(i, j) = AND of the nine values I2(i + m, j + n), m, n ∈ {-1, 0, 1},
where I3(i, j) is the pixel value at the corresponding position after the erosion operation.
Dilation fills small holes in the image: a 3*3 template is taken and its nine values are combined with a logical OR,
I4(i, j) = OR of the nine values I2(i + m, j + n), m, n ∈ {-1, 0, 1},
where I4(i, j) is the pixel value at the corresponding position after the dilation operation.
Opening is erosion followed by dilation; closing is dilation followed by erosion.
In the present invention a single opening operation is enough to obtain a reliable binary image with the salt-and-pepper noise filtered out, as shown in Fig. 4.
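The erosion (AND of the nine template values), dilation (OR of the nine values), and the single opening operation can be sketched as follows. Leaving border pixels at 0 is my own assumption; the patent does not address borders.

```python
import numpy as np

def erode_3x3(b):
    """Erosion: a pixel stays 1 only if all nine values in its 3x3
    neighbourhood are 1 (logical AND); removes isolated specks."""
    out = np.zeros_like(b)
    h, w = b.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = b[i - 1:i + 2, j - 1:j + 2].all()
    return out

def dilate_3x3(b):
    """Dilation: a pixel becomes 1 if any of the nine values in its
    3x3 neighbourhood is 1 (logical OR); fills small holes."""
    out = np.zeros_like(b)
    h, w = b.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = b[i - 1:i + 2, j - 1:j + 2].any()
    return out

def opening_3x3(b):
    """Opening = erosion followed by dilation, as used in the text
    to suppress salt-and-pepper noise."""
    return dilate_3x3(erode_3x3(b))
```

Opening removes specks smaller than the 3*3 template while restoring the size of larger solid regions.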
The connected domains are then labeled by a progressive-scan method, each individual connected region is distinguished, and the area S of each connected domain is computed as Si = ni, where ni is the number of white pixels in the i-th connected domain. Connected domains whose area is below a fixed value can be considered noise and discarded. However, certain lenses that abut tightly against each other cannot be separated even after pre-processing and connected-domain labeling, as shown in Figs. 5 and 6. In Fig. 5 there are four lenses: the one in the lower right is concave side up, the other three are convex side up, and the two convex-up lenses on the left are packed closely together. Fig. 6 is the connected-domain-labeled image of Fig. 5; the connected domains of the two lenses on the left merge together and cannot be separated.
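The connected-domain labeling with small-area noise rejection described above can be sketched as follows. A BFS flood fill is used here in place of the progressive two-pass scan mentioned in the text; it yields the same labeling. The function and parameter names are mine.

```python
from collections import deque
import numpy as np

def label_components(binary, min_area):
    """Label 4-connected white regions; discard regions whose area
    (white-pixel count n_i) is below min_area, treating them as noise.
    Returns the label image and a dict {label: area} of kept regions."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    areas = {}
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and labels[si, sj] == 0:
                next_label += 1
                labels[si, sj] = next_label
                queue = deque([(si, sj)])
                count = 0
                while queue:
                    i, j = queue.popleft()
                    count += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and binary[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = next_label
                            queue.append((ni, nj))
                areas[next_label] = count
    for lab, area in areas.items():
        if area < min_area:          # S_i = n_i below threshold: discard
            labels[labels == lab] = 0
    return labels, {l: a for l, a in areas.items() if a >= min_area}
```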
The next step therefore uses the inner contour within each connected domain to determine the number of lenses and their position and face-orientation information. Fig. 7 shows the lens inner contours: unlike the connected domains, the inner contours of the lenses never interconnect, and concave-up and convex-up lenses are clearly distinguished by their inner-contour regions. The number of inner contours is therefore exactly the number of lenses.
During connected-domain labeling, the extreme values Ximin, Ximax, Yimin, Yimax of each inner contour's X (the horizontal image coordinate) and Y (the vertical image coordinate) are recorded, and the center of the bounding rectangle of each inner contour is computed as X0 = (Ximin + Ximax)/2, Y0 = (Yimin + Yimax)/2. This center can be taken as an approximation of the lens center point, as shown in Fig. 8. With the inner-contour center (X0, Y0) obtained, the first white pixel is searched for on each side of the center, keeping Y = Y0 during the search. When the first white point on the left is met, its coordinate (XL, Y0) is recorded; similarly (XR, Y0) is recorded on the right. Finally X0-XL is compared with XR-X0: if X0-XL < XR-X0, the lens corresponding to this contour is convex side up; if X0-XL > XR-X0, it is concave side up.
The above applies when the light source is on the left side of the platform. If the light source is placed on the right, above, or below instead, only the decision condition needs to be changed according to the light-source direction. The specific rules are as follows:
Light source on the left: convex side up when X0-XL < XR-X0, concave side up when X0-XL > XR-X0;
Light source on the right: concave side up when X0-XL < XR-X0, convex side up when X0-XL > XR-X0;
Light source above: convex side up when Y0-YU < YD-Y0, concave side up when Y0-YU > YD-Y0;
Light source below: concave side up when Y0-YU < YD-Y0, convex side up when Y0-YU > YD-Y0;
where YD is the Y coordinate of the first white point below the center point and YU is the Y coordinate of the first white point above the center point, as shown in Fig. 9.
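The four decision rules above can be collected into one function. The names, and the handling of the tie case X0-XL = XR-X0 (which the patent leaves unspecified and here falls to the second branch), are my own.

```python
def face_orientation(x0, y0, xl, xr, yu, yd, source_side):
    """Decide convex-up vs concave-up from the contour centre (x0, y0),
    the first white points left/right of it (xl, xr) or above/below it
    (yu, yd), and the side of the platform holding the light source."""
    if source_side == 'left':
        return 'convex up' if x0 - xl < xr - x0 else 'concave up'
    if source_side == 'right':
        return 'concave up' if x0 - xl < xr - x0 else 'convex up'
    if source_side == 'top':
        return 'convex up' if y0 - yu < yd - y0 else 'concave up'
    if source_side == 'bottom':
        return 'concave up' if y0 - yu < yd - y0 else 'convex up'
    raise ValueError("source_side must be 'left', 'right', 'top' or 'bottom'")
```

For example, with the source on the left, a bright spot whose left edge lies 5 pixels from the centre while the right edge lies 10 pixels away indicates a convex-up lens.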
Finally, the recognition result is output at the PC.
The detailed descriptions listed above are only specific illustrations of feasible embodiments of the invention; they are not intended to limit its scope of protection, and all equivalent implementations or modifications that do not depart from the technical spirit of the invention shall be included within its scope of protection.

Claims (9)

1. A multi-target lens position and concave/convex face identification method, characterized in that a lens image is first acquired and the center point of each lens is obtained; the two points on the inner contour that lie on the same horizontal line as the center point, one on each side, are then taken, and the distance from each of the two points to the center point is computed; if the point on the left is closer to the center point, the lens is convex side up; if the point on the right is closer, the lens is concave side up; and the position and face orientation of each lens are output.
2. The multi-target lens position and concave/convex face identification method according to claim 1, characterized in that the specific steps of the recognition method comprise:
Step 1: separating the acquired color image into its R, G, and B channels, then converting it into a gray-scale image;
Step 2: applying median filtering to the gray-scale image;
Step 3: binarizing the filtered image;
Step 4: applying morphological operations to the binary image to remove noise, and identifying the lens positions and face orientations.
3. The multi-target lens position and concave/convex face identification method according to claim 2, characterized in that in step 1 the gray-scale image is obtained by the following formulas:
Y=0.299R+0.587G+0.114B
Cb=0.568 (B-Y)+128=-0.172R-0.399G+0.511B+128
Cr=0.713 (R-Y)+128=0.511R-0.428G-0.083B+128.
4. The multi-target lens position and concave/convex face identification method according to claim 2, characterized in that the median filtering in step 2 is a nonlinear smoothing method based on order statistics, specifically: for each pixel to be processed, a template consisting of its neighboring pixels is selected, the pixels in the template are sorted in ascending order, and the pixel's original value is replaced by the median of the template, a 3*3 template being used.
5. The multi-target lens position and concave/convex face identification method according to claim 4, characterized in that the median filtering uses the following formula:
I1(i, j) = median{ I(i + m, j + n) : m, n ∈ {-1, 0, 1} }
where I(i, j) is the pixel value of the gray-scale image at the corresponding position and I1(i, j) is the pixel value of the filtered image.
6. The multi-target lens position and concave/convex face identification method according to claim 2, characterized in that the binarization in step 3 uses a fixed-threshold method.
7. The multi-target lens position and concave/convex face identification method according to claim 6, characterized in that the fixed-threshold method specifically comprises: first determining, from the analysis of a large amount of data, the most suitable threshold TH for this environment (the one that best separates the pixel values of foreground and background), and then binarizing according to the following formula:
I2(i, j) = 1 if I1(i, j) ≥ TH, and 0 otherwise.
8. The multi-target lens position and concave/convex face identification method according to claim 2, characterized in that the morphological operations in step 4 include dilation and erosion;
the erosion removes isolated, meaningless elements: a 3*3 template is taken and its nine values are combined with a logical AND;
the dilation fills small holes in the image: a 3*3 template is taken and its nine values are combined with a logical OR.
9. The multi-target lens position and concave/convex face identification method according to claim 2, characterized in that the method in step 4 for identifying lens position and face orientation comprises:
labeling connected domains by a progressive-scan method, distinguishing each individual connected region, and computing the area S of each connected domain as Si = ni, where ni is the number of white pixels in the i-th connected domain; connected domains whose area is below a fixed value are considered noise and discarded;
next using the inner contour within each connected domain to determine the number of lenses and their position and face-orientation information; during connected-domain labeling, recording the extreme values Ximin, Ximax, Yimin, Yimax of the X and Y of each inner contour, then computing the center of the bounding rectangle of each inner contour as X0 = (Ximin + Ximax)/2, Y0 = (Yimin + Yimax)/2, which can be taken as an approximation of the lens center point;
after obtaining the inner-contour centers, letting the center point of one inner contour be (X0, Y0), searching left and right from the center point for the first white pixel while keeping Y = Y0; recording the coordinate (XL, Y0) when the first white point on the left is met, and similarly recording (XR, Y0) on the right; and finally comparing X0-XL with XR-X0: if X0-XL < XR-X0, the lens corresponding to this contour is convex side up; if X0-XL > XR-X0, it is concave side up.
CN201810978342.9A 2018-08-27 2018-08-27 Method for identifying the positions and concave/convex faces of multiple lenses Pending CN109410229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810978342.9A CN109410229A (en) 2018-08-27 2018-08-27 Method for identifying the positions and concave/convex faces of multiple lenses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810978342.9A CN109410229A (en) 2018-08-27 2018-08-27 Method for identifying the positions and concave/convex faces of multiple lenses

Publications (1)

Publication Number Publication Date
CN109410229A true CN109410229A (en) 2019-03-01

Family

ID=65464413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810978342.9A Pending CN109410229A (en) 2018-08-27 2018-08-27 Method for identifying the positions and concave/convex faces of multiple lenses

Country Status (1)

Country Link
CN (1) CN109410229A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1575524A * 2001-08-23 2005-02-02 University of Washington Image acquisition with depth enhancement
CN1842701A * 2004-06-16 2006-10-04 Toudai TLO, Ltd. (The University of Tokyo) Optical tactile sensor
CN101055324A * 2006-04-14 2007-10-17 Sony Corporation Optical sheet, backlight device and liquid crystal display device
CN102620783A * 2012-04-23 2012-08-01 Shanghai Qiyang Information Technology Co., Ltd. Image-recognition-based electronic soap-film gas flow meter
CN103212539A * 2013-03-29 2013-07-24 Liaocheng Renhe Precision Bearing Co., Ltd. Automatic sorting machine for concave and convex faces of bearing-retainer workpieces
CN103317241A * 2013-06-19 2013-09-25 Huazhong University of Science and Technology Laser welding butt-seam measuring system and method based on a plano-convex cylindrical lens
CN103752534A * 2014-01-14 2014-04-30 Wenzhou Zhongbo Electric Co., Ltd. Machine-vision-based intelligent image recognition and sorting device and method
CN204197942U * 2014-10-31 2015-03-11 Chongqing Nanchuan Jinxin Paper Co., Ltd. Conveyor for outputting bottle caps in a uniform orientation
CN104458748A * 2013-09-25 2015-03-25 Shenyang Institute of Automation, Chinese Academy of Sciences Machine-vision-based surface defect detection method for aluminum profiles
CN104636701A * 2014-12-12 2015-05-20 Zhejiang University of Technology Laser two-dimensional code identification method based on image restoration


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211183A * 2019-06-13 2019-09-06 Guangzhou Panyu Polytechnic Multi-target positioning system and method for large-field-of-view LED lens placement based on single imaging
CN110211183B * 2019-06-13 2022-10-21 Guangzhou Panyu Polytechnic Multi-target positioning system based on single-imaging large-field-of-view LED lens placement
CN110653016A * 2019-11-05 2020-01-07 Inventec Appliances (Shanghai) Co., Ltd. Pipetting system and calibration method thereof
CN112233052A * 2020-10-15 2021-01-15 Beijing NavInfo Technology Co., Ltd. Dilated convolution processing method, image processing device and storage medium
CN112233052B * 2020-10-15 2024-04-30 Beijing NavInfo Technology Co., Ltd. Dilated convolution processing method, image processing method, apparatus and storage medium

Similar Documents

Publication Publication Date Title
CN104749184B (en) Automatic optical detection method and system
CN112347887B (en) Object detection method, object detection device and electronic equipment
JP2686274B2 (en) Cell image processing method and apparatus
CN101882034B (en) Device and method for discriminating color of touch pen of touch device
CN108053449A (en) Three-dimensional rebuilding method, device and the binocular vision system of binocular vision system
CN105493141B (en) Unstructured road border detection
CN105865329B Vision-based acquisition system and method for the end-face center coordinates of bundled round steel
CN109001212A Machine-vision-based defect inspection method for stainless steel ladles
CN111915704A Apple grading identification method based on deep learning
US7295686B2 (en) Method of processing red eye in digital images
CN110008968B (en) Automatic triggering method for robot settlement based on image vision
CN109410229A Method for identifying the positions and concave/convex faces of multiple lenses
CN108480223A Workpiece sorting system and control method thereof
CN107976447A Machine-vision-based part detection method and system
CN107833843A Defect source analysis method, analysis system, and defect detection device
CN111783693A (en) Intelligent identification method of fruit and vegetable picking robot
CN109284759A Magic cube color identification method based on support vector machines (SVM)
CN108830908A Magic cube color identification method based on artificial neural networks
CN111242057A (en) Product sorting system, method, computer device and storage medium
CN110570412A (en) part error vision judgment system
CN110009609A Method for rapidly detecting yellow rice kernels
CN108198226B (en) Ceramic color identification method, electronic equipment, storage medium and device
KR20100121250A (en) Vision system for inspection of qfn semiconductor package and classifying method
CN116721039A (en) Image preprocessing method applied to automatic optical defect detection
CN201440274U (en) Target detection equipment and image acquisition device used by same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221024

Address after: 2008-9, Floor 2, No. 108 Daguang Road, Qinhuai District, Nanjing, Jiangsu Province, 210007

Applicant after: Jiangsu Yunyang Instrument Equipment Co.,Ltd.

Address before: No. 6, Yongzhi Road, Qinhuai District, Nanjing, Jiangsu 210008

Applicant before: NANJING KEHAIREN PHOTOELECTRIC TECHNOLOGY Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190301