CN112834519A - Detection system based on machine vision - Google Patents


Info

Publication number
CN112834519A
CN112834519A (Application CN202110178858.7A)
Authority
CN
China
Prior art keywords
vision
detection
unit
detection target
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110178858.7A
Other languages
Chinese (zh)
Inventor
徐升
熊登
Current Assignee
Shenzhen Vlan Intelligent Technology Co ltd
Original Assignee
Shenzhen Vlan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Vlan Intelligent Technology Co ltd
Priority to CN202110178858.7A
Publication of CN112834519A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8806 Specially adapted optical and illumination features
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9515 Objects of complex shape, e.g. examined with use of a surface follower device


Abstract

The application provides a machine vision-based detection system. The detection system comprises a support unit, at least one vision unit, and a data processing unit. The support unit is used for supporting a detection target. Each of the at least one vision unit includes an illumination unit and an image acquisition unit. The illumination unit may generate a shadowless region in a target space around the detection target, the shadowless region having an isotropic light field intensity such that a detection target placed within the shadowless region is in a shadowless state. The image acquisition unit is configured to acquire an image of the detection target. The data processing unit is connected with the image acquisition unit; in operation it receives the image of the detection target acquired by the image acquisition unit and performs target detection on the detection target based on the image. The illumination unit of the detection system provides an improved illumination environment, thereby improving detection accuracy.

Description

Detection system based on machine vision
Technical Field
The application relates to the technical field of detection, in particular to a detection system based on machine vision.
Background
With the spread of mobile networks, smartphones hold an ever larger share of the market. Accordingly, mobile phone manufacturers impose strict requirements on the quality inspection of phone components. Taking defect detection of mobile phone shells as an example, most component manufacturers currently rely on manual visual inspection to find defects. Manual inspection, however, is prone to false detection and missed detection. There is therefore a need for a device or system that can automatically detect surface defects on a mobile phone shell.
Disclosure of Invention
In order to solve the above technical problem, the present application discloses a detection system based on machine vision, including: a supporting unit for supporting a detection target; at least one vision unit, each of which comprises an illumination unit configured to generate a shadowless region in a target space around the detection target, the shadowless region having an isotropic light field intensity so that a detection target placed within it is in a shadowless state, and an image acquisition unit configured to acquire an image of the detection target; and a data processing unit connected with the image acquisition unit, which in operation receives the image acquired by the image acquisition unit and performs target detection on the detection target based on the image.
In some embodiments, the lighting unit comprises: a light source that does not directly illuminate the target space; and a light shield configured to diffusely reflect the light emitted by the light source so as to form the shadowless area in the target space.
In some embodiments, the lighting unit further comprises: a light shielding plate arranged between the light source and the target space, which blocks light emitted by the light source directly toward the target space.
In some embodiments, the inner wall of the light shield comprises a reflective area with a curved shape.
In some embodiments, the curved surface comprises a spherical surface, the reflective region comprises a circular edge, and the light source is disposed within an annular band-shaped region corresponding to the circular edge.
In some embodiments, the curved surface comprises a cylindrical surface.
In some embodiments, the reflective region includes a first edge and a second edge that are linear, the first edge and the second edge being parallel to a generatrix of the cylindrical surface, the first edge and the second edge being opposite, the light sources being disposed in two strip-like regions corresponding to the first edge and the second edge.
In some embodiments, the lighting unit comprises: a plurality of light sources arranged around the target space, wherein the plurality of light sources illuminate directly into the target space forming the shadowless area.
In some embodiments, the image acquisition unit comprises: a plurality of cameras surrounding and facing the target space, the fields of view of any two adjacent cameras in the plurality of cameras partially overlapping within the target space.
In some embodiments, the image acquisition unit further comprises: a mounting frame; and a camera adjusting device, one end of which is mounted on the mounting frame and the other end of which is connected with the camera, the camera adjusting device being configured to adjust at least one of the shooting position and the shooting angle of the camera.
In some embodiments, the camera adjustment device comprises a first connecting rod, a second connecting rod, and a third connecting rod, wherein: one end of the first connecting rod is hinged to the mounting frame through a first spherical hinge, the other end of the first connecting rod is hinged to one end of the second connecting rod through a second spherical hinge, a through hole is formed in the other end of the second connecting rod, one end of the third connecting rod penetrates through the through hole, the other end of the third connecting rod is connected with the camera, and the third connecting rod can slide along the through hole.
In some embodiments, the supporting unit includes a transfer device configured to convey the detection target along a conveying path that passes through the target space, the transfer device supporting the detection target and conveying it into the target space.
In some embodiments, the transfer device comprises: a drive wheel; a conveyor belt wound on the drive wheel, which moves along the conveying path under the drive of the drive wheel and is configured to support the detection target and convey it into the target space; and a motor connected with the drive wheel and configured to drive the drive wheel to rotate.
In some embodiments, the machine vision-based inspection system further comprises a flipping device configured to flip the inspection target on the conveyor by a preset angle; and the at least one vision unit comprises: a first vision unit arranged on one side of the flipping device along the conveying path to detect a first detection area of the detection target before flipping, and a second vision unit arranged on the other side of the flipping device along the conveying path to detect a second detection area of the detection target after flipping.
In some embodiments, the flipping device comprises: a flipping support comprising a mounting seat and a pivoting mechanism, the pivoting mechanism comprising a rotating shaft, a first rotating part and a second rotating part, wherein the first rotating part and the second rotating part are sleeved on the rotating shaft and connected to it through revolute pairs; in operation, the first rotating part pivots about the axis of the rotating shaft in a first direction and the second rotating part pivots about the axis in a second direction opposite to the first, the axis being perpendicular to the conveying path; the mounting seat comprises shaft holes in which the two ends of the rotating shaft are mounted, the rotating shaft being connected to the shaft holes through a revolute pair; and a clamping device comprising a first clamping plate rigidly connected with the first rotating part and a second clamping plate opposite the first clamping plate and rigidly connected with the second rotating part.
In some embodiments, the machine vision-based inspection system further comprises: an automatic feeding unit disposed upstream of the vision unit in a moving direction of the conveyor, configured to automatically move the detection target onto the conveyor.
In some embodiments, the automatic feeding unit comprises: a base; a magazine mounted on the base and configured to hold the detection targets, wherein the lower end of the magazine is provided with a first opening and an opposite second opening; a first clamping portion provided at a position corresponding to the first opening and configured to clamp, through the first opening, one end of a detection target located in the magazine; and a second clamping portion provided at a position corresponding to the second opening and configured to clamp the other end of the detection target through the second opening.
In some embodiments, the machine vision-based inspection system further comprises: a first position adjusting device, provided upstream of the vision unit in a moving direction of the conveyor, configured to adjust a position of a detection target located on the conveyor in a direction perpendicular to the moving direction.
In some embodiments, the machine vision-based inspection system further comprises: a sorting unit disposed downstream of the vision unit in a moving direction of the conveyor, configured to sort the detection target according to a detection result of the target detection by the data processing unit.
In some embodiments, the vision unit further comprises a second position adjustment device configured to adjust the position of the detection target into the shadowless area.
In some embodiments, the detection target includes a first corner and a second corner opposite to the first corner, and the second position adjustment device includes: a first limiting part arranged at a position corresponding to the first corner and comprising two mutually perpendicular limiting surfaces configured to adjust the positions of the two sides of the first corner; and a second limiting part arranged at a position corresponding to the second corner and comprising two mutually perpendicular limiting surfaces configured to adjust the positions of the two sides of the second corner.
In summary, the machine vision-based detection system provided by the present application:
First, the illumination unit generates a shadowless area in a target space around the detection target, so that a detection target placed within it is in a shadowless state. When the detection target is inspected, it is placed in the shadowless area and no shadows appear in the image acquired by the image acquisition unit; the image acquisition unit can therefore capture the details of every part of the detection target, improving detection accuracy and reducing the missed-detection and false-detection rates.
Second, the image acquisition unit arranges a plurality of cameras around the target space surrounding the detection target. In a manner similar to a surgical shadowless lamp, details of every region of the detection-target surface can be captured, further improving detection accuracy and reducing the false-detection and missed-detection rates.
Furthermore, two vision units are arranged along the conveying path S to detect the front and back sides of the mobile phone shell respectively, so that every area of the shell surface can be covered by a vision unit, reducing the false-detection and missed-detection rates. A flipping device arranged between the two vision units turns the shell from front to back (or back to front), improving detection efficiency.
In addition, the automatic feeding unit provided in the detection system increases the feeding speed and further improves the detection efficiency of the detection system.
Drawings
FIG. 1 is a schematic diagram illustrating an overall layout of a machine vision-based inspection system according to an embodiment of the present application;
FIG. 2 illustrates a perspective view of a vision unit provided in accordance with an embodiment of the present application;
fig. 3 shows a schematic diagram of an operation principle of a lighting unit according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the structure of a light shield with a cylindrical reflective area according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a light shield with a spherical reflective area according to an embodiment of the present application;
FIGS. 6A, 6B and 6C respectively show a front view, a top view and a perspective view of an image acquisition unit provided according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a second position adjustment apparatus according to an embodiment of the present application;
FIGS. 8A and 8B illustrate a front view and a perspective view, respectively, of a flipping device provided in accordance with an embodiment of the present application; and
fig. 9 shows a schematic structural diagram of an automatic feeding unit according to an embodiment of the present application.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present application. Thus, the present application is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting.
These and other features of the present application, as well as the operation and function of the related elements of structure and the combination of parts and economies of manufacture, may be significantly improved upon consideration of the following description, all of which forms a part of this application, with reference to the accompanying drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the application. It should also be understood that the drawings are not drawn to scale.
The application provides a machine vision-based detection system (hereinafter referred to as detection system). As an example, fig. 1 shows an overall layout schematic diagram of a machine vision-based detection system 001 provided according to an embodiment of the present application. The detection system 001 may be used to detect the detection target a. The detection target a may be various articles. As an example, the detection target a may be a mobile phone case. For convenience of description, in the following description of the present application, the detection target is represented by a mobile phone shell, and the detection system 001 is described by taking a surface defect of the mobile phone shell as a detection index. In particular, the detection system 001 may comprise a support unit 002, a vision unit 003, and a data processing unit (not shown in fig. 1). The supporting unit 002 is used to support the detection target. The vision unit 003 can acquire an image of the detection target in a machine vision manner. The data processing unit may perform object detection on the detection target based on the image of the detection target acquired by the vision unit 003. In some embodiments, the detection system 001 may further include a flipping device 005, an automatic feeding unit 006, a first position adjusting device 007, and/or a sorting unit 008.
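For illustration only, the division of labour described above (image capture by the vision unit, analysis by the data processing unit) can be sketched in Python. All names and the toy defect rule below are invented for this sketch and do not come from the patent:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Image:
    pixels: List[List[int]]  # placeholder for raw image data

@dataclass
class DetectionResult:
    defect_found: bool
    detail: str

def inspect(images: List[Image],
            detect: Callable[[Image], DetectionResult]) -> DetectionResult:
    """Run the detector on every captured view; report the first defect found."""
    for img in images:
        result = detect(img)
        if result.defect_found:
            return result
    return DetectionResult(False, "all views clean")

def toy_detector(img: Image) -> DetectionResult:
    """Toy rule: flag an image if any pixel deviates strongly from its row mean."""
    for row in img.pixels:
        mean = sum(row) / len(row)
        if any(abs(p - mean) > 50 for p in row):
            return DetectionResult(True, "surface anomaly")
    return DetectionResult(False, "clean")
```

A real data processing unit would of course use a trained vision model rather than a threshold rule; the sketch only shows the capture-then-detect flow.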
As an example, fig. 2 shows a perspective view of a vision unit 003 provided according to an embodiment of the present application. Specifically, the vision unit 003 may include an illumination unit 300 and an image acquisition unit 400. The image acquisition unit 400 is configured to acquire an image of the detection target A, and the illumination unit 300 provides a suitable lighting environment for that acquisition. In some embodiments, the vision unit 003 may further include a second position adjustment device 500, which can adjust the position of the detection target A into the shadowless area.
The illumination unit 300 may generate a shadowless area in the target space around the detection target a. The light field intensity of the shadowless area is isotropic, so that the detection target A placed in the shadowless area is in a shadowless state. As an example, fig. 3 shows a schematic diagram of an operation principle of a lighting unit 300 provided according to an embodiment of the present application. The lighting unit 300 may include a light source 310 and a light shield 320. In some embodiments, the lighting unit 300 may further include a light shield 330.
The light source 310 emits visible light, but this light does not directly illuminate the target space. The light source 310 may be an artificial illumination source, including but not limited to a point light source or a collimated light source. As an example, the point light source may be a fluorescent bulb, and the collimated light source may be a fluorescent tube or a fluorescent strip. By way of example, the light source may be a thermal radiation source, a gas discharge source, an electroluminescent source, or the like.
A light shielding plate 330 may be disposed between the light source 310 and the target space P to block light directed from the light source 310 toward the target space P. As an example, the light shielding plate 330 may be made of an opaque material.
The light shield 320 diffusely reflects the light emitted from the light source 310 into the light shield 320, forming the shadowless area in the target space P. In some embodiments, the inner wall of the light shield 320 includes a curved reflective area. By way of example, the curved surface comprises a cylindrical surface. As an example, fig. 4 shows a schematic structural diagram of a light shield 320 with a cylindrical reflective area according to an embodiment of the present application. Referring to fig. 4, the light shield 320 includes a cylindrical reflective region. The reflective area includes a first edge 321 and a second edge 322 that are linear. The first edge 321 and the second edge 322 are parallel to the generatrix of the cylindrical surface and opposite one another. The light sources 310 may be disposed in two strip regions corresponding to the first edge 321 and the second edge 322. For example, in fig. 4, two light source mounting grooves 340 may be disposed at positions corresponding to the first edge 321 and the second edge 322, and the light sources may be installed in them. The surface of each light source mounting groove 340 close to the target space shields the light emitted from the light source directly toward the target space.
In some embodiments, the curved surface comprises a spherical surface, and the reflective area may be a portion of a sphere. As an example, fig. 5 shows a schematic diagram of a light shield 320 with a spherical reflective area according to an embodiment of the present application. Referring to fig. 5, the reflective area includes a circular edge 323. The light source may be disposed in an annular band-shaped region corresponding to the circular edge 323, for example in a ring-shaped light source mounting groove 340.
Of course, the reflective area of the light shield 320 may take shapes other than cylindrical and spherical without affecting the core spirit of the present application.
With continued reference to fig. 3, in some embodiments the lighting unit 300 can form the shadowless area even without the light shield 320. For example, the lighting unit 300 may include a plurality of light sources 310 arranged around the target space P, which illuminate the target space P directly and together form the shadowless area.
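As a rough numerical illustration of the direct-illumination variant (not taken from the patent; the ring geometry, source count and inverse-square model are assumptions of this sketch), summing point-source contributions from a ring shows that the irradiance near the centre is almost independent of position and direction, which approximates the isotropic light field that the shadowless area relies on:

```python
import math

def irradiance(point, num_sources=24, radius=1.0, power=1.0):
    """Sum inverse-square contributions at `point` from `num_sources`
    identical point sources evenly spaced on a circle of `radius`."""
    px, py = point
    total = 0.0
    for k in range(num_sources):
        ang = 2 * math.pi * k / num_sources
        sx, sy = radius * math.cos(ang), radius * math.sin(ang)
        d2 = (px - sx) ** 2 + (py - sy) ** 2
        total += power / d2
    return total

# Uniformity over a small central region: sample a circle of radius 0.1
# and compare the extreme values; the max/min ratio stays very close to 1.
samples = [irradiance((0.1 * math.cos(a), 0.1 * math.sin(a)))
           for a in [i * math.pi / 8 for i in range(16)]]
ratio = max(samples) / min(samples)
```

Increasing `num_sources` drives the ratio even closer to 1, mirroring the intuition that many surrounding sources leave no direction from which a shadow can be cast.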
In summary, the illumination unit 300 is designed to form a shadowless area in the target space P. When the detection target A is inspected, it is placed in the shadowless area, so no shadows appear in the image acquired by the image acquisition unit 400; the image acquisition unit 400 can therefore capture the details of each part of the detection target A, improving detection accuracy.
With continued reference to fig. 2, the image acquisition unit 400 is configured to acquire images of the detection target A. As an example, figs. 6A, 6B and 6C respectively show a front view, a top view and a perspective view of an image acquisition unit 400 provided according to an embodiment of the present application. Specifically, the image acquisition unit 400 may include cameras 410 and camera adjusting devices 420. In some embodiments, the image acquisition unit 400 may further include a mounting frame 430.
The cameras 410 may be used to capture images of the inspection target A. By way of example, a camera 410 may be a macro camera. There may be a plurality of cameras 410, which surround the detection target A, each directed toward the target space P. The fields of view of any two adjacent cameras 410 partially overlap within the target space P. In this way, the combined field of view of the plurality of cameras 410 covers all areas on the surface of the mobile phone shell under test. Arranging a plurality of cameras around the target space, in a manner similar to a surgical shadowless lamp, allows details of every region of the detection-target surface to be captured, further improving detection accuracy and reducing the false-detection and missed-detection rates.
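A minimal numeric sanity check of the overlap condition described above (the camera counts and field-of-view values are assumed examples, not figures from the patent): for cameras spaced evenly around the target, adjacent fields of view overlap when each camera's FOV exceeds the angular spacing 360/N degrees.

```python
def adjacent_fovs_overlap(num_cameras: int, fov_deg: float) -> bool:
    """True if adjacent cameras on an evenly spaced ring share some view."""
    spacing = 360.0 / num_cameras
    return fov_deg > spacing

def overlap_angle(num_cameras: int, fov_deg: float) -> float:
    """Angular overlap (degrees) between two adjacent cameras; 0 if none."""
    return max(0.0, fov_deg - 360.0 / num_cameras)
```

For example, eight cameras with a 60-degree FOV overlap by 15 degrees each side pair, while six cameras with a 50-degree FOV would leave gaps.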
The camera adjusting device 420 is used to adjust the position and/or angle of a camera 410. One end of the camera adjusting device 420 is fixed on the mounting frame 430 and the other end is connected to the camera 410; it is configured to adjust at least one of the shooting position and the shooting angle of the camera 410. Correspondingly, the number of camera adjusting devices 420 equals the number of cameras 410, each device adjusting the position and/or angle of one camera 410. The camera adjusting device 420 may provide multiple degrees of freedom for the camera 410 and may include at least one joint, each joint providing at least one degree of freedom. By way of example, the joints may include, but are not limited to, rotational joints, translational joints, helical joints, cylindrical joints, universal joints, ball joints, and the like. Taking the camera adjusting device 420 shown in fig. 6C as an example: the camera adjusting device 420 includes a first connecting rod 421, a second connecting rod 422, and a third connecting rod 423. One end of the first connecting rod 421 is hinged to the mounting frame 430 by a first spherical hinge Q1, and the other end is hinged to one end of the second connecting rod 422 by a second spherical hinge Q2. Each of the spherical hinges Q1 and Q2 provides three rotational degrees of freedom. The other end of the second connecting rod 422 is provided with a through hole; one end of the third connecting rod 423 passes through this hole, forming a sliding joint Q3 with it. The sliding joint Q3 gives the third connecting rod 423 one translational degree of freedom: it can slide along the through hole. The other end of the third connecting rod 423 is connected with the camera.
Thus, the camera adjusting device 420 can adjust the distance between the camera and the inspection target and/or the angle of the camera toward the inspection target by means of the first spherical hinge Q1, the second spherical hinge Q2 and the sliding joint Q3. In some embodiments, the third connecting rod 423 has a hollow tube structure through which the camera cable can be routed to keep the wiring tidy.
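The chain mount, spherical hinge Q1, rod 1, spherical hinge Q2, rod 2, sliding joint Q3 can be sketched as simple forward kinematics. This is a simplified model under an extra assumption of this sketch (the third rod is taken to slide collinearly with the second rod); it is not a statement of the patented mechanism's exact geometry:

```python
import math

def unit(az_deg: float, el_deg: float):
    """Unit vector from azimuth/elevation angles in degrees."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def camera_position(mount, len1, az1, el1, len2, az2, el2, slide):
    """Camera-end position: mount -> ball joint (rod 1 direction set by
    az1/el1) -> ball joint (rod 2 direction set by az2/el2) -> sliding
    joint, modelled here as extending rod 2 by `slide`."""
    u1 = unit(az1, el1)
    u2 = unit(az2, el2)
    return tuple(m + len1 * a + (len2 + slide) * b
                 for m, a, b in zip(mount, u1, u2))
```

With unit-length rods, the first pointing along x, the second along y, and a 0.5 slide, the camera end lands at roughly (1.0, 1.5, 0.0), showing how the three joints trade off distance and angle to the target.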
With continued reference to fig. 6C, the mounting frame 430 is configured to support the camera adjusting devices 420, which may be secured to it by threaded connections. A wire groove 431 may also be provided on the mounting frame 430; the cables connecting the cameras can be placed in the wire groove 431 to keep the wiring tidy.
The second position adjustment device 500 can adjust the position of the detection target A so that it lies within the shadowless area. As an example, fig. 7 shows a schematic structural diagram of a second position adjustment device 500 provided according to an embodiment of the present application.
Referring to fig. 7, taking the detection target as a mobile phone shell as an example, the detection target A may include a first corner W1 and a second corner W2 opposite to the first corner W1. The second position adjustment device 500 includes a first position-limiting portion 510 and a second position-limiting portion 520. The first position-limiting portion 510 is disposed at a position corresponding to the first corner W1 and may include a first limiting block 511. The first limiting block 511 comprises two mutually perpendicular limiting surfaces, 511-1 and 511-2, configured to adjust the positions of the two sides of the first corner W1. The first position-limiting portion 510 is further provided with a sliding pair 512 and a motor 513. The sliding pair 512 guides the movement of the first limiting block 511. The motor 513 drives the driving rod 512-1 of the sliding pair 512, which in turn moves the first limiting block 511 fixed to one end of the driving rod 512-1, so as to limit the two sides of the first corner W1 of the detection target A. The second position-limiting portion 520 is disposed at a position corresponding to the second corner W2 and likewise comprises two mutually perpendicular limiting surfaces, 521-1 and 521-2, configured to adjust the positions of the two sides of the second corner W2. The structure of the second position-limiting portion 520 is similar to that of the first position-limiting portion 510 and, for brevity, is not repeated.
In summary, the second position adjustment device 500 adjusts the detection target A so that it is reliably located within the shadowless area.
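The effect of the two corner stops can be illustrated with a little geometry (the coordinates below are assumed for illustration and do not come from the patent): once both stops close, the rectangular shell is squared up into the axis-aligned box between the two stop corners, so its centre is simply their midpoint.

```python
def adjusted_pose(stop1, stop2):
    """stop1/stop2: (x, y) inner-corner coordinates of the two closed,
    diagonally opposite corner stops. Returns (centre, width, height)
    of the rectangle they constrain."""
    cx = (stop1[0] + stop2[0]) / 2.0
    cy = (stop1[1] + stop2[1]) / 2.0
    return (cx, cy), abs(stop2[0] - stop1[0]), abs(stop2[1] - stop1[1])
```

For a 150 x 75 opening with one stop corner at the origin, the constrained centre lands at (75.0, 37.5), which is where the shadowless area would be aligned.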
Referring to fig. 3, as described above, the supporting unit 002 is used to support the detection target A. In some embodiments, while supporting the detection target A, the supporting unit 002 may also convey the detection target A, enabling continuous in-line detection and thereby improving detection efficiency. As shown in fig. 1, the supporting unit 002 may include a conveying device. The conveying device is configured to support the detection target A and convey it along a conveying path S that passes through the target space P. As an example, the conveying device may include a motor, a driving wheel, and a conveyor belt. The motor is connected with the driving wheel and is configured to drive the driving wheel to rotate. The conveyor belt is wound around the driving wheel and is configured to support the detection target A. Driven by the driving wheel, the conveyor belt moves along the conveying path S to convey the detection target A into the target space P.
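As an illustration only — the class, the station names, and the API below are assumptions for the sketch, not part of the patent — the stepwise in-line conveyance described above can be modeled as targets advancing one station per motor step:

```python
from dataclasses import dataclass, field

@dataclass
class StepConveyor:
    """Toy model of stepwise conveyance along path S.

    Station names are illustrative; real stations correspond to the
    feeding, vision, flipping, and sorting units of the system.
    """
    stations: tuple = ("load", "vision_front", "flip", "vision_back", "sort")
    slots: list = field(default_factory=list)  # targets currently on the belt

    def load(self, target):
        # New targets enter at the upstream end of the conveying path.
        self.slots.append({"target": target, "station": 0})

    def step(self):
        # One motor step advances every target to the next station;
        # targets past the last station leave the belt.
        for s in self.slots:
            s["station"] += 1
        done = [s["target"] for s in self.slots if s["station"] >= len(self.stations)]
        self.slots = [s for s in self.slots if s["station"] < len(self.stations)]
        return done
```

A target loaded upstream therefore reaches the downstream end only after one step per station, which is the behavior the pause-and-advance conveyor relies on.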
With continued reference to fig. 1, the flipping device 005 is configured to flip the detection target A on the conveying device by a preset angle. The at least one vision unit 003 may include a first vision unit 003-1 and a second vision unit 003-2, disposed on opposite sides of the flipping device 005. The first vision unit 003-1 may be disposed on one side of the flipping device 005 along the conveying path S to detect a first detection area of the detection target A before flipping. Taking a mobile phone housing as an example, the first detection area may include the front surface of the mobile phone housing. The second vision unit 003-2 may be disposed on the other side of the flipping device 005 along the conveying path S to detect a second detection area of the detection target A after flipping. Taking the mobile phone housing as an example, the second detection area may include the reverse side of the mobile phone housing.
Arranging two vision units (the first vision unit 003-1 and the second vision unit 003-2) along the conveying path S to detect the front and reverse sides of the mobile phone housing, respectively, ensures that all areas of the housing surface are covered by the vision units, reducing the false detection rate and the missed detection rate. Arranging the flipping device 005 between the two vision units 003 to turn the mobile phone housing from the front side to the reverse side (or from the reverse side to the front side) improves detection efficiency.
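The two-station decision rule above can be sketched as follows; the function name and the defect-list inputs are illustrative assumptions, not an API from the patent:

```python
def inspect_target(front_defects, back_defects):
    """Sketch: a target passes only if both the first vision unit (front
    side, before flipping) and the second vision unit (reverse side, after
    flipping) report no defects, so no surface region escapes inspection."""
    return {
        "front": list(front_defects),
        "back": list(back_defects),
        "pass": not front_defects and not back_defects,
    }
```

Because both sides are always imaged, a defect visible on either surface fails the target, which is how covering all regions reduces the missed detection rate.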
As an example, figs. 8A and 8B illustrate a front view and a perspective view, respectively, of a flipping device 005 provided according to an embodiment of the present application. Specifically, the flipping device 005 may include a roll-over stand 500 and a clamping device 600. In some embodiments, a cover may be disposed outside the roll-over stand 500 and the clamping device 600 to protect them.
Referring to fig. 8B, the roll-over stand 500 may include a pivoting mechanism 510, a mounting seat 520, and a driving unit 530.
The mounting seat 520 provides support for the pivoting mechanism 510 and the driving unit 530. The mounting seat 520 is provided with shaft holes 521.
The pivoting mechanism 510 may include a rotating shaft 511, a first rotating member 512, and a second rotating member 513. Both ends of the rotating shaft 511 are mounted in the shaft holes 521 of the mounting seat 520. The rotating shaft 511 and the shaft holes 521 may be connected by a bearing pair, allowing the rotating shaft 511 to rotate about its axis. The rotating shaft 511 may rotate about its axis through a range of 360°.
The first rotating member 512 is sleeved on the rotating shaft 511. The first rotating member 512 is connected with the rotating shaft 511 by a revolute pair to allow the first rotating member 512 to rotate around the rotating shaft 511. The range of angles through which the first rotating member 512 rotates about the rotating shaft 511 may be limited.
The second rotating member 513 is sleeved outside the rotating shaft 511. The second rotating member 513 may have the same or similar structure as the first rotating member 512. The mounting positions of the second rotating member 513 and the first rotating member 512 may be symmetrical. The second rotating member 513 may rotate about the rotating shaft 511.
The clamping device 600 may include a first clamping plate 610 and a second clamping plate 620. The first clamping plate 610 is rigidly connected to the first rotating member 512. As an example, the first clamping plate 610 may be fixed to the first rotating member 512 by a screw connection. The first clamping plate 610 can rotate around the axis of the rotating shaft 511 under the driving of the first rotating member 512. The second clamping plate 620 is opposite to the first clamping plate 610 and is rigidly connected to the second rotating member 513. The second clamping plate 620 can rotate around the axis of the rotating shaft 511 by the second rotating member 513.
The clamping process of the clamping device 600 and the flipping process of the flipping device 005 will be described below. For convenience, in the following description, the direction in which the first rotating member 512 rotates about the rotating shaft 511 is denoted as a first direction, and the direction in which the second rotating member 513 rotates about the rotating shaft 511 is denoted as a second direction. The first direction and the second direction may be opposite.
Referring to fig. 8A, take as the initial state the case where the first clamping plate 610 and the second clamping plate 620 are open at an included angle. After the detection target A reaches the pre-flipping position (shown by the solid line), the first clamping plate 610 and the second clamping plate 620 are driven by the first rotating member 512 and the second rotating member 513 to rotate in the directions M and N shown in the figure, respectively, so as to clamp the detection target A. Fig. 8A shows the clamped state of the detection target A, with its reverse side facing upward. Thereafter, the driving unit 530 drives the rotating shaft 511 to rotate 180° about its axis, so that the detection target A is turned to the post-flipping position (shown by the dotted line) with its front side facing upward. The first clamping plate 610 and the second clamping plate 620 are then driven by the first rotating member 512 and the second rotating member 513 to rotate in the opposite directions, opening at an included angle so that no force is applied to the detection target A; the detection target A is again supported by the conveying device and continues to move along the conveying path S. In this way, the flipping device 005 turns the detection target A from the reverse side to the front side. The whole flipping process does not interfere with the operation of other stations on the production line, effectively improving efficiency.
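The clamp-flip-release cycle just described can be summarized as a fixed sequence of actuator commands. The function and event names below are stand-ins for the real clamp and shaft drivers, assumed only for this sketch:

```python
def flip_sequence(angle_deg=180):
    """Sketch of one flipping cycle of the flipping device (fig. 8A).

    Returns the ordered actuator events; a real controller would issue
    these to the rotating members, the driving unit, and the conveyor.
    """
    events = []
    events.append("clamp_close")                # plates 610/620 rotate along M and N to grip the target
    events.append(f"rotate_shaft_{angle_deg}")  # driving unit 530 turns shaft 511 by the preset angle
    events.append("clamp_open")                 # plates release; the belt carries the target onward along S
    return events
```

Keeping the cycle self-contained is what lets the flip run without stalling the other stations on the line.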
With continued reference to fig. 1, an automatic feeding unit 006 is provided upstream of the vision unit 003 in the conveying direction of the conveying device, and is configured to automatically move the detection target A onto the conveying device.
As an example, fig. 9 illustrates a schematic structural diagram of an automatic feeding unit provided according to an embodiment of the present application. The automatic feeding unit 700 may include a base 710, a magazine 720, a first clamping portion 730, and a second clamping portion 740.
The base 710 may support the magazine 720, the first clamping portion 730, and the second clamping portion 740.
The magazine 720 is mounted on the base 710 and is configured to load detection targets A to be tested. The inner wall of the magazine 720 may be lined with a soft material to prevent it from scratching the detection targets A (such as mobile phone housings). The magazine 720 may be slightly larger than the detection target A. A plurality of detection targets A are stacked vertically in the magazine 720. The lower end of the magazine 720 may be provided with a first opening 721 and a second opening 722, opposite to each other. The first clamping portion 730 and the second clamping portion 740 may pass through the first opening 721 and the second opening 722, respectively.
The first clamping portion 730 is disposed at a position corresponding to the first opening 721. The first clamping portion 730 may clamp one end of the detection target a through the first opening 721.
The second clamping portion 740 is disposed at a position corresponding to the second opening 722. The second clamping portion 740 may clamp the other end of the detection target a through the second opening 722.
The conveyor belt may move along the conveying path S in a stepwise manner. After the detection target A reaches the vision station and its position is adjusted, the conveyor belt stops moving while the vision unit 003 acquires images of the detection target A. During this pause, the first clamping portion 730 moves leftward under the driving of its driver while the second clamping portion 740 moves rightward under the driving of its driver, and the stack of detection targets to be tested in the magazine 720 falls onto the conveyor belt under gravity. Then the first clamping portion 730 moves rightward while the second clamping portion 740 moves leftward, so that the two clamping portions grip the two ends of the second-from-bottom detection target A. While being clamped, this detection target A is lifted slightly, separating it from the lowermost detection target A so that it applies no pressure on the lowermost one. When the conveyor belt starts to move again, it carries the lowermost detection target downstream along the conveying path S.
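One feed cycle of the magazine reduces to a simple invariant: the bottom target leaves on the belt while the rest of the stack is re-clamped. The function below is an illustrative sketch; its name and the bottom-first list representation are assumptions, not the patent's interface:

```python
def feed_cycle(magazine):
    """Sketch of one automatic-feed cycle (fig. 9).

    Both clamping portions retract so the stack drops onto the belt, then
    re-grip the second-from-bottom target and lift it slightly; only the
    bottom target rides away when the belt resumes.

    `magazine` is a bottom-first list of targets; returns the dispensed
    target and the remaining (re-clamped) stack.
    """
    if not magazine:
        return None, magazine
    dispensed = magazine[0]   # lowermost target, left resting on the belt
    held = magazine[1:]       # re-clamped stack, lifted off the dispensed one
    return dispensed, held
```

Repeating the cycle once per belt pause dispenses targets one at a time without any pressure from the stack on the target being conveyed.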
With continued reference to fig. 1, a first position adjusting device 007 is provided upstream of the vision unit 003 in the conveying direction S, and is configured to adjust the position of the detection target A on the conveying device in the direction perpendicular to the conveying direction S.
A sorting unit 008 is provided downstream of the vision unit 003 in the conveying direction S, and is configured to sort the detection targets A according to the detection results produced by the data processing unit.
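The sorting step can be sketched as routing each target to a bin by the data processing unit's verdict; the bin names and the (target, verdict) input format are illustrative assumptions:

```python
def sort_targets(results):
    """Sketch of the sorting unit downstream of the vision unit:
    each detection target is routed according to its pass/fail verdict.

    `results` is an iterable of (target, passed) pairs.
    """
    bins = {"pass": [], "fail": []}
    for target, passed in results:
        bins["pass" if passed else "fail"].append(target)
    return bins
```

A real sorter might use more categories (e.g., per defect type), but the routing logic is the same table lookup on the verdict.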
In summary, the present application provides a machine-vision-based detection system. The detection system comprises a supporting unit, a vision unit, and a data processing unit. The vision unit comprises an illumination unit and an image acquisition unit. The illumination unit may generate a shadowless area in a target space around a detection target, such that a detection target placed within the shadowless area is in a shadowless state. Therefore, when the detection target is inspected, it is placed in the shadowless area and no shadow appears in the images acquired by the image acquisition unit, so the image acquisition unit can capture the details of every part of the detection target, improving detection precision and reducing the missed detection rate and the false detection rate. Taking mobile phone housing defect detection as an example, the edge of a housing includes curved surfaces; because of their complex curvature and specular reflections, surface defects on curved surfaces are harder to detect than ordinary planar defects, and the illumination unit of the present detection system provides a better illumination environment for them. At the same time, the image acquisition unit of the present application arranges a plurality of cameras around the detection target in the target space, in a manner similar to a surgical shadowless lamp, so that details of every region of the detection target surface can be captured, further improving detection precision and reducing the false detection rate and the missed detection rate.
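The benefit of shadow-free illumination can be illustrated with a deliberately simple check — under near-uniform lighting, any dark pixel is a candidate defect rather than a shadow. The function, the 2-D-list image format, and the threshold value are assumptions for this sketch only; the patent does not specify the data processing unit's algorithm:

```python
def detect_dark_defects(image, threshold):
    """Sketch: flag candidate defects in a shadow-free image.

    Because the shadowless area yields a near-uniform field intensity,
    pixels far below the expected brightness can be attributed to surface
    defects rather than to shadows cast by the target's geometry.

    `image` is a 2-D list of gray levels (rows of ints, 0-255);
    returns (x, y) coordinates of pixels below `threshold`.
    """
    defects = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:
                defects.append((x, y))
    return defects
```

With shadows present, the same thresholding would misreport every shadowed region as a defect; removing shadows at the illumination stage is what makes even this naive rule workable.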
Arranging two vision units (the first vision unit 003-1 and the second vision unit 003-2) along the conveying path S to detect the front and reverse sides of the mobile phone housing, respectively, ensures that all areas of the housing surface are covered by the vision units, reducing the false detection rate and the missed detection rate. Arranging the flipping device 005 between the two vision units 003 to turn the mobile phone housing from the front side to the reverse side (or from the reverse side to the front side) improves detection efficiency.
By arranging the automatic feeding unit upstream along the conveying path, the feeding speed can be further increased, improving the detection efficiency of the detection system.
In conclusion, upon reading this detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure is presented by way of example only and is not limiting. Although not explicitly stated herein, those skilled in the art will understand that the present application is intended to cover various reasonable variations, adaptations, and modifications of the embodiments described herein. Such alterations, improvements, and modifications are suggested by this application and are within the spirit and scope of its exemplary embodiments.
Furthermore, certain terminology has been used in this application to describe embodiments of the application. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the application.
It should be appreciated that in the foregoing description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of those features. Alternatively, various features may be dispersed throughout several embodiments of the application. This is not to be taken as implying that every such feature is essential to the claims; when reading the present application, a person skilled in the art may well extract some of these features as separate embodiments. That is, embodiments in the present application may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment may be valid with less than all the features of a single foregoing disclosed embodiment.
In some embodiments, numbers expressing quantities or properties useful for describing and claiming certain embodiments of the present application are to be understood as being modified in certain instances by the terms "about", "approximately" or "substantially". For example, "about", "approximately" or "substantially" may mean a ± 20% variation of the value it describes, unless otherwise specified. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference, except for any prosecution file history associated with the same, any of the same that is inconsistent with or in conflict with this document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and that associated with this document, the description, definition, and/or use of the term in this document shall prevail.
Finally, it should be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present application. Other modified embodiments are also within the scope of the present application. Accordingly, the disclosed embodiments are presented by way of example only, and not limitation. Those skilled in the art may implement the present application in alternative configurations according to the embodiments herein. Thus, the embodiments of the present application are not limited to those precisely as described in the application.

Claims (21)

1. A machine vision-based inspection system, comprising:
a supporting unit for supporting a detection target;
at least one visual unit, each visual unit of the at least one visual unit comprising:
an illumination unit that generates a shadowless region in a target space around the detection target, the shadowless region having an isotropic light field intensity such that the detection target placed within the shadowless region is in a shadowless state,
an image acquisition unit configured to acquire an image of the detection target; and
a data processing unit connected with the image acquisition unit, wherein, in operation, the data processing unit receives the image of the detection target acquired by the image acquisition unit and performs target detection on the detection target based on the image.
2. The machine-vision-based detection system of claim 1, wherein the illumination unit comprises:
a light source that does not directly illuminate the target space; and
a light shield configured to diffuse light emitted from the light source into the light shield to form the shadowless area in the target space.
3. The machine-vision-based detection system of claim 2, wherein the illumination unit further comprises:
a light shielding plate disposed between the light source and the target space, blocking light emitted from the light source directly toward the target space.
4. The machine-vision based inspection system of claim 2, wherein the inner wall of the light shield includes a reflective area that is a curved surface.
5. The machine-vision based detection system of claim 4, wherein the curved surface comprises a spherical surface,
the reflective area comprises a circular edge, and the light source is arranged in an annular band-shaped region corresponding to the circular edge.
6. The machine-vision based detection system of claim 4, wherein the curved surface comprises a cylindrical surface.
7. The machine-vision based detection system of claim 6, wherein the reflective area includes a first edge and a second edge that are linear, the first edge and the second edge being parallel to a generatrix of the cylindrical surface, the first edge and the second edge being opposite,
the light sources are disposed in two strip-shaped regions corresponding to the first edge and the second edge.
8. The machine-vision-based detection system of claim 1, wherein the illumination unit comprises:
a plurality of light sources arranged around the target space, wherein the plurality of light sources illuminate directly into the target space forming the shadowless area.
9. The machine-vision based inspection system of claim 1, wherein the image acquisition unit comprises:
a plurality of cameras surrounding and facing the target space, the fields of view of any two adjacent cameras in the plurality of cameras partially overlapping within the target space.
10. The machine-vision based inspection system of claim 9, wherein the image acquisition unit further comprises:
a mounting frame; and
a camera adjusting device, one end of which is mounted on the mounting frame and the other end of which is connected with the camera, the camera adjusting device being configured to adjust at least one of a shooting position and a shooting angle of the camera.
11. The machine-vision based detection system of claim 10, wherein the camera adjustment device comprises a first connecting rod, a second connecting rod, and a third connecting rod, wherein:
one end of the first connecting rod is hinged on the mounting frame through a first spherical hinge, the other end of the first connecting rod is hinged with one end of the second connecting rod through a second spherical hinge,
the other end of the second connecting rod is provided with a through hole,
one end of the third connecting rod penetrates through the through hole, the other end of the third connecting rod is connected with the camera, and the third connecting rod can slide along the through hole.
12. The machine-vision-based inspection system of claim 1, wherein the support unit comprises:
a conveying device configured to convey the detection target along a conveying path that passes through the target space, the conveying device being configured to support the detection target and convey the detection target into the target space.
13. The machine-vision-based inspection system of claim 12, wherein the conveyor comprises:
a drive wheel;
a conveyor belt wound around the drive wheel, the conveyor belt moving along the conveying path under the driving of the drive wheel and being configured to support the detection target and convey the detection target into the target space; and
a motor coupled to the drive wheel and configured to drive the drive wheel to rotate.
14. The machine-vision-based inspection system of claim 12, further comprising a flipping device configured to flip the inspection target from the conveyor by a preset angle; and
the at least one vision unit comprises:
a first vision unit disposed on one side of the flipping device along the conveying path, detecting a first detection area of the detection target before flipping; and
a second vision unit disposed on the other side of the flipping device along the conveying path, detecting a second detection area of the detection target after flipping.
15. The machine-vision-based inspection system of claim 14, wherein the flipping device comprises:
a roll-over stand comprising a mounting seat and a pivoting mechanism, wherein the pivoting mechanism comprises a rotating shaft, a first rotating part, and a second rotating part; the first rotating part and the second rotating part are sleeved on the rotating shaft and connected with the rotating shaft through revolute pairs; in operation, the first rotating part pivots about the axis of the rotating shaft in a first direction and the second rotating part pivots about the axis of the rotating shaft in a second direction, the first direction being opposite to the second direction; the axis is perpendicular to the conveying path; the mounting seat includes shaft holes, the two ends of the rotating shaft are mounted in the shaft holes, and the rotating shaft is connected with the shaft holes through a revolute pair; and
the clamping device comprises a first clamping plate and a second clamping plate, wherein the first clamping plate is rigidly connected with the first rotating part, and the second clamping plate is opposite to the first clamping plate and is rigidly connected with the second rotating part.
16. The machine-vision-based inspection system of claim 12, further comprising:
an automatic feeding unit provided upstream of the vision unit in a conveying direction of the conveying device, configured to automatically move the detection target onto the conveying device.
17. The machine-vision-based inspection system of claim 16, wherein the automated loading unit comprises:
a base;
a magazine mounted on the base and configured to load the detection target, wherein a lower end of the magazine is provided with a first opening and a second opening, and the first opening and the second opening are opposite;
a first clamping portion provided at a position corresponding to the first opening and configured to clamp one end of a detection target located in the cartridge through the first opening; and
a second clamping portion provided at a position corresponding to the second opening and configured to clamp the other end of the detection target through the second opening.
18. The machine-vision-based inspection system of claim 12, further comprising:
a first position adjusting device provided upstream of the vision unit in a conveying direction of the conveying device, configured to adjust a position of a detection target located on the conveying device in a direction perpendicular to the conveying direction.
19. The machine-vision-based inspection system of claim 12, further comprising:
a sorting unit disposed downstream of the vision unit in a conveying direction of the conveying device, configured to sort the detection target according to a detection result of the target detection by the data processing unit.
20. The machine-vision based detection system of claim 1, wherein the detection system further comprises a second position adjustment device configured to adjust the position of the detection target within the shadowless area.
21. The machine-vision-based detection system of claim 20, wherein the detection target includes a first corner and a second corner opposite to the first corner, and the second position adjustment device includes:
the first limiting part is arranged at a position corresponding to the first corner and comprises two limiting surfaces which are perpendicular to each other, and the two limiting surfaces on the first limiting part are configured to adjust the positions of the two sides of the first corner; and
the second limiting part is arranged at a position corresponding to the second corner and comprises two limiting surfaces which are perpendicular to each other, and the two limiting surfaces on the second limiting part are configured to adjust the positions of the two edges of the second corner.
CN202110178858.7A 2021-02-09 2021-02-09 Detection system based on machine vision Pending CN112834519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110178858.7A CN112834519A (en) 2021-02-09 2021-02-09 Detection system based on machine vision


Publications (1)

Publication Number Publication Date
CN112834519A true CN112834519A (en) 2021-05-25

Family

ID=75933271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110178858.7A Pending CN112834519A (en) 2021-02-09 2021-02-09 Detection system based on machine vision

Country Status (1)

Country Link
CN (1) CN112834519A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106248681A (en) * 2016-07-18 2016-12-21 南通大学 Solid object multiclass defect detecting device based on machine vision and method
KR20180087090A (en) * 2017-01-24 2018-08-01 주식회사 래온 Machine vision system of automatic light setting using inspection standard image
CN111595852A (en) * 2020-05-29 2020-08-28 江西绿萌科技控股有限公司 Shadowless LED light source system device for visual inspection of fruits and vegetables
CN111766245A (en) * 2020-05-18 2020-10-13 广州市讯思视控科技有限公司 Button battery negative electrode shell defect detection method based on machine vision
CN215415094U (en) * 2021-02-09 2022-01-04 深圳市微蓝智能科技有限公司 Detection cabinet based on machine vision


Similar Documents

Publication Publication Date Title
KR101960913B1 (en) Vision bearing inspection apparatus
CN109459441B (en) Detection device, system and method
CN107764834B (en) Device for automatically detecting surface defects of transparent part and detection method thereof
CN215415094U (en) Detection cabinet based on machine vision
JP3203237B2 (en) Automatic detection of printing defects on metallized strips or any other printing substrate consisting mostly of specularly colored surfaces
JP2008131025A5 (en)
CN115825078A (en) Resin lens defect detection device and method
CN216081976U (en) Screen detection device
CN112834519A (en) Detection system based on machine vision
CN108613691B (en) Backlight imaging method and device for separated reflective element
CN219512084U (en) Product appearance detection equipment
JPH06341963A (en) Machine for checking bottom part of container made of glass
JP2006258778A (en) Method and device for inspecting surface defect
CN217542892U (en) Multi-surface detection light source device and multi-surface detection equipment
CN217425232U (en) Bright dark field visual device applied to surface detection and detection equipment
CN211014053U (en) High-precision automatic object surface flaw image capturing device
US11119052B2 (en) Dynamic backlighting system and method for inspecting a transparency
JP2002257743A (en) Inspection device for spherical body
CN113252688A (en) Device and method for detecting R corner defect of mobile phone
TW202008000A (en) Detection device for column shape battery
JP2009085883A (en) Defect inspection device
CN218766694U (en) Liquid crystal display panel appearance detection device
KR102493209B1 (en) Image detecting apparatus for visual inspection system based on ai
CN212845011U (en) Near infrared light source
CN221768122U (en) Detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination