WO2023005321A1 - Detection system and method, computer device, and computer-readable storage medium - Google Patents

Detection system and method, computer device, and computer-readable storage medium

Info

Publication number
WO2023005321A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
laser
measured object
camera group
Prior art date
Application number
PCT/CN2022/091409
Other languages
English (en)
French (fr)
Inventor
朱二
朱壹
Original Assignee
江西绿萌科技控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江西绿萌科技控股有限公司 filed Critical 江西绿萌科技控股有限公司
Priority to CA3224219A priority Critical patent/CA3224219A1/en
Priority to AU2022316746A priority patent/AU2022316746A1/en
Priority to IL309828A priority patent/IL309828A/en
Priority to KR1020247004068A priority patent/KR20240027123A/ko
Priority to EP22847936.6A priority patent/EP4365578A1/en
Publication of WO2023005321A1 publication Critical patent/WO2023005321A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8806 Specially adapted optical and illumination features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/02 Food
    • G01N33/025 Fruits or vegetables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N2021/8466 Investigation of vegetal material, e.g. leaves, plants, fruits
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854 Grading and classifying of flaws
    • G01N2021/8861 Determining coordinates of flaws
    • G01N2021/8864 Mapping zones of defects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Definitions

  • the present application relates to the technical field of detection, and in particular, to a detection system, method, computer equipment, and computer-readable storage medium.
  • fruits and vegetables are easily damaged by impact, compression, vibration, and the like, which not only reduces their appearance quality but also leaves them vulnerable to fungal or bacterial invasion and rot (such as late blight, dry rot, and soft rot), affecting their food safety.
  • Optical detection technology usually uses multiple cameras to image the fruit and then manually calibrates and stitches the captured images to obtain pictures of the fruit surface. This approach produces misaligned stitched images, so the full fruit surface cannot be displayed, the positions of surface defects cannot be accurately located and identified, and subsequent fruit sorting is hampered, reducing sorting accuracy.
  • the present application provides a detection system, method, computer device, and computer-readable storage medium, which are used to obtain a complete surface image of the measured object and to provide a basis for subsequently locating and identifying surface defect positions accurately.
  • Some embodiments of the present application provide a detection system including a laser, a camera group, and a computer device. The camera group is installed in the area above the measured object; the laser is installed directly above the measured object, with the emission port of the laser facing the measured object. The laser is configured to project a laser plane that intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest. The camera group is used to collect images of the measured object from different shooting angles, where each image includes part or all of each region of interest. The computer device is configured to cut and splice all the images according to the region of interest contained in each image, obtaining a target image of the surface.
  • In this way, the laser calibrates the surface of the object to be measured and divides it into regions of interest, and the camera group then collects images from different angles. Because the laser line divides the surface into non-overlapping regions, the computer device can stitch the images by analyzing the proportion of each region of interest in each image, obtaining a complete surface image of the object to be measured and ensuring that surface defects can subsequently be accurately located and identified.
  • the computer device may be specifically configured to: cut the image to be processed according to the position of the laser line in it to obtain a plurality of cut images, the image to be processed being any one of all the images; take the cut image with the largest proportion of its region of interest among the multiple cut images as the image to be stitched corresponding to the image to be processed; traverse all the images to obtain the image to be stitched corresponding to each image; and, according to a preset reference coordinate system, unfold and stitch each of the images to be stitched to obtain the target image.
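The cut-select-traverse-splice steps above can be sketched roughly as follows. This is a hypothetical numpy illustration, not the patent's actual implementation: it assumes a single, roughly vertical laser line whose column position and per-region coverage fractions are already known.

```python
import numpy as np

def cut_at_laser_line(image, col):
    """Step 1: cut the image into the two pieces on either side of a
    (roughly vertical) laser line at column `col`."""
    return [image[:, :col], image[:, col:]]

def image_to_stitch(image, col, roi_fractions):
    """Step 2: keep the cut whose region of interest has the largest
    fractional coverage in this image."""
    cuts = cut_at_laser_line(image, col)
    return cuts[int(np.argmax(roi_fractions))]

# toy data: two 4x10 views of the same surface, laser line at column 5
view_2 = np.full((4, 10), 2)   # e.g. the second camera group's image
view_3 = np.full((4, 10), 3)   # e.g. the third camera group's image
# hypothetical coverage: view_2 sees 30% of region A and 70% of region B,
# view_3 sees 70% of region A and 30% of region B
strip_b = image_to_stitch(view_2, 5, [0.3, 0.7])  # keeps the B-side cut
strip_a = image_to_stitch(view_3, 5, [0.7, 0.3])  # keeps the A-side cut
# steps 3-4: after traversing all images, splice the kept strips
# in a common reference frame
target = np.hstack([strip_a, strip_b])
print(target.shape)  # (4, 10)
```

The real system would also need to undo perspective distortion before splicing; this sketch only shows the selection logic.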
  • a complete surface image of the measured object can be obtained, which can improve the accuracy of locating and identifying the position of surface defects.
  • the camera group may contain at least one camera, and when there are multiple cameras, the multiple cameras may be installed side by side.
  • the coordinate system alignment problem caused by different camera orientations can be avoided, and the difficulty of subsequent image processing can be reduced.
  • the camera group can be one group, and the camera group can be moved to different shooting positions to collect images at different shooting angles.
  • the camera groups may be in three groups, namely a first camera group, a second camera group, and a third camera group; the first camera group may be located directly above the measured object, and its viewing direction may be parallel to the laser; the angle between the normal of the second camera group and the normal of the first camera group may be the same as the angle between the normal of the third camera group and the normal of the first camera group.
  • the value range of the included angle may be 30 degrees to 50 degrees.
  • the value of the included angle may be 40 degrees.
  • the two lasers are distributed on both sides of the first camera group.
  • the surface area of the measured object can be divided more finely by using multiple lasers, and the accuracy of subsequent image stitching can be improved.
  • the width of the laser line may be less than 2 mm.
  • by keeping the width of the laser line within a reasonable range, the laser line can be prevented from covering small defects on the surface of the target object, which would otherwise make it impossible to capture all the defects on that surface accurately.
  • the detection system may further include a rotating device; the rotating device may be configured to drive the measured object to rotate.
  • the rotating device can be composed of a cup, a bracket, and a cup wheel; the cup can hold the measured object, the bracket can support the cup, and the cup wheel can be positioned in the middle of the bracket; the cup wheel can rotate around the cup axis, thereby driving the measured object to rotate.
  • each surface of the measured object can be detected in real time, and more surface information can be provided for precise positioning and identification of surface defect positions.
  • the detection system may also include a conveyor belt, which may be in contact with the cup wheel of the rotating device and driven by a motor to move cyclically; the friction between the conveyor belt and the cup wheel drives the cup wheel to rotate, thereby driving the measured object to rotate.
  • the measured object may be an object having a circular shape or an elliptical shape.
  • the surface of a circular or elliptical object is smoothly curved, so the laser line formed on it is a smooth curve; the surface can therefore be divided evenly, reducing the difficulty of subsequent image processing.
  • the detection method may include: acquiring images of the measured object captured by the camera group at different shooting angles, where each image includes part or all of each region of interest, the regions of interest being formed by dividing the surface of the measured object with the laser line at which the laser plane, projected by a laser located directly above the measured object, intersects that surface; and cutting and splicing all the images according to the region of interest contained in each image to obtain the target image of the surface.
  • In this way, the laser calibrates the surface of the object to be measured and divides it into regions of interest, and the camera group then collects images from different angles. Because the laser line divides the surface into non-overlapping regions, image stitching can be carried out based on the proportion of each region of interest in each image, obtaining a complete surface image of the measured object and ensuring that surface defects can subsequently be accurately located and identified.
  • cutting and splicing all the images to obtain the target image of the surface may include: cutting the image to be processed according to the position of the laser line in it to obtain a plurality of cut images, the image to be processed being any one of all the images; taking the cut image with the largest proportion of its region of interest among the multiple cut images as the image to be stitched corresponding to the image to be processed; traversing all the images to obtain the image to be stitched corresponding to each image; and, according to the preset reference coordinate system, unfolding and stitching each of the images to be stitched to obtain the target image.
  • a complete surface image of the measured object can be obtained, which can improve the accuracy of locating and identifying the position of surface defects.
  • Other embodiments of the present application provide a computer device, which may include a processor and a memory; the memory may store a computer program executable by the processor, and the processor may execute the computer program to implement the detection method described in the embodiments of the present application.
  • Still other embodiments of the present application provide a computer-readable storage medium, on which a computer program may be stored, and when the computer program is executed by a processor, the detection method as described in the embodiment of the present application is implemented.
  • the detection system may include a laser, a camera group, and a computer device; the camera group is installed in the area above the measured object; the laser is installed directly above the measured object, with the emission port of the laser facing the measured object; the laser is configured to project a laser plane that intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest;
  • the camera group is configured to collect images of the measured object from different shooting angles, each image including part or all of each region of interest; the computer device is configured to cut and splice all the images according to the region of interest contained in each image to obtain the target image of the surface.
  • the related optical detection technology manually calibrates and stitches the obtained images, and the surface images obtained cannot be aligned.
  • the resulting inability to accurately locate and identify defects hampers subsequent fruit sorting and reduces sorting accuracy.
  • the laser is used to calibrate the surface of the measured object, the surface is divided into regions of interest by the laser line, and the camera group is then used to collect images from different angles. Because the laser line divides the surface into non-overlapping areas, the computing device analyzes the proportion of the region of interest in each image to stitch the images, obtaining a complete surface image of the object under test and ensuring that the locations of surface defects can be accurately located and identified later.
  • Fig. 1 is a schematic diagram of a related fruit optical detection technology
  • Fig. 2 is a structure diagram of a detection system provided by the embodiment of the present application.
  • Fig. 3 is a schematic diagram of a laser plane projected by a laser
  • FIG. 4 is a schematic diagram of a laser line provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a region of interest provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of photographing the measured object with one camera group according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of the implementation of three camera groups provided in the embodiment of the present application.
  • FIG. 8 is a three-view diagram of a detection system with a laser provided in an embodiment of the present application.
  • FIGS. 9A to 9C are schematic diagrams of shooting angles of the three camera groups provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of another region of interest provided by the embodiment of the present application.
  • FIG. 11 is a three-view diagram of a detection system including two lasers provided in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a reference coordinate system provided by the embodiment of the present application.
  • Figure 13 is a three-view diagram of another detection system provided by the embodiment of the present application.
  • Fig. 14 is a schematic flow chart of a detection method provided by the embodiment of the present application.
  • FIG. 15 is a structural block diagram of a computer device provided by an embodiment of the present application.
  • Optical detection technology usually uses multiple cameras to image the fruit and then manually calibrates and stitches the obtained images to obtain a picture of the fruit surface; a specific implementation is shown in Figure 1, a schematic diagram of a related fruit optical detection technology.
  • Related fruit optical detection technology uses multiple cameras to collect fruit surface images. For example, in some possible embodiments, one camera is set directly above the object to be measured, with one camera on each side of it, and the fruit is imaged by all cameras at the same shooting moment. The collected images are shown as (a) and (b) in Figure 1, and they are then manually combined to obtain the fruit surface image shown in (c) of Figure 1.
  • image-processing personnel first manually determine the calibration line (the black straight line in the figure) in (a) and (b) of Figure 1; they then cut image (a) along its calibration line and retain the image area above the black line. Similarly, they cut (b) along its black line and retain the image area below it. The retained parts of (a) and (b) are then spliced to obtain the image shown in (c) of Figure 1.
  • FIG. 2 is a schematic structural diagram of a detection system provided by the embodiment of the present application.
  • the detection system 10 may include a laser 11, a camera group 12, and a computer device 13.
  • the camera group 12 can be installed in the area above the measured object 14; the laser 11 can be installed directly above the measured object 14, and the emission port of the laser 11 can face the measured object 14.
  • the laser 11 can be configured to project a laser plane.
  • the laser plane can intersect with the surface of the measured object 14 to form a laser line.
  • the laser line can divide the surface into a plurality of different regions of interest.
  • the above-mentioned region of interest may refer to the non-overlapping region on both sides of the laser line on the surface of the measured object.
  • the region of interest may be the visible area of the measured object in the image.
  • the camera group 12 can be configured to collect images of the measured object 14 from different shooting angles; wherein, each image can include part or all of each region of interest.
  • the computer device 13 may be configured to perform cutting and splicing processing on all the images according to the region of interest contained in each image to obtain a target image of the surface.
  • the difference from the related technology is that the related optical detection technology manually calibrates and stitches the obtained images; the resulting surface images are misaligned, with missing or overlapping areas, so the fruit surface is displayed incompletely or with repetition.
  • the location of surface defects cannot be accurately located and identified, which brings troubles to the subsequent fruit sorting work and leads to a decrease in sorting accuracy.
  • the laser is used to calibrate the surface of the measured object, and the surface is divided into regions of interest by the laser line, and then the camera group is used to collect images from different angles.
  • the The computing device analyzes the proportion of the region of interest in each image to stitch the images, and obtains a complete surface image of the object under test, ensuring that the location of surface defects can be accurately located and identified in the future.
  • the above-mentioned laser 11 may be, but not limited to, a linear laser generator, and the laser 11 may emit fan-shaped laser light in one direction, as shown in FIG. 3 , which is a schematic diagram of a laser plane projected by a laser.
  • FIG. 4 is a schematic diagram of a laser line provided by an embodiment of the present application; the position of the laser line is shown in FIG. 4.
  • the width of the above-mentioned laser lines may be less than 2 millimeters.
  • if the laser light is emitted in a divergent manner and the laser line becomes too wide, the area the laser line occupies on the surface of the target object is too large, and the laser line may cover small defects on that surface.
  • the width of the laser line has a greater impact on small fruits. The smaller the fruit, the smaller the width of the laser line should be.
  • the above-mentioned region of interest is the region on both sides of the laser line.
  • the following takes the shooting angle of a camera group directly above the measured object 14 as an example to provide a schematic diagram of the region of interest.
  • FIG. 5 is a schematic diagram of a region of interest provided by the embodiment of the present application.
  • the camera group 12 and the laser 11 are located directly above the measured object, and the laser line divides the measured object into two regions of interest, namely, area A and area B.
  • the image captured by the upper camera group includes all of area A and area B. If the camera group is at a certain angle to the normal direction of the measured object, the image captured by the camera group can include part of area A and part of area B.
  • if there is one laser, the surface of the measured object 14 can be divided into 2 regions of interest; if there are at least 2 lasers, the surface of the measured object 14 can be divided into multiple regions of interest; that is, the number of regions of interest is the number of lasers plus 1. Foreseeably, the more lasers there are, the more finely the surface of the measured object is divided, which can improve the accuracy of subsequent image stitching.
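The region-of-interest division presupposes locating the laser line in each image. A minimal sketch, assuming a bright, roughly vertical line in a grayscale image (taking the brightest pixel per row; real systems often refine this to sub-pixel precision with a centre-of-mass fit):

```python
import numpy as np

def laser_line_columns(gray):
    """Estimate the laser line's column in each image row as the
    brightest pixel of that row (crude but illustrative)."""
    return np.argmax(gray, axis=1)

# toy grayscale image: dark surface with one bright laser stripe
img = np.zeros((5, 8))
img[:, 3] = 255          # laser line at column 3
cols = laser_line_columns(img)
# one laser line splits the surface into (number of lasers + 1) regions
n_lasers = 1
n_regions = n_lasers + 1
print(cols, n_regions)
```

With two lasers the same per-row search would return two line positions, splitting the surface into three regions, as in the multi-laser case above.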
  • the above-mentioned camera group 12 may include but is not limited to one camera.
  • when there are multiple cameras in the camera group 12, they are installed side by side and can capture images simultaneously.
  • capturing the same surface of the measured object in this way provides more image resources for determining its surface image and facilitates the subsequent precise positioning and identification of surface defects.
  • the above-mentioned camera group 12 can be one group, and the camera group 12 can be moved to different shooting positions, so as to achieve the effect of capturing images at different shooting angles.
  • when a single camera group is used for image acquisition, the measured object 14 is kept in a static state, so that the camera group collects images of the same surface of the measured object 14 from different angles.
  • FIG. 6 is a schematic diagram of an embodiment of the present application using a camera group to photograph a measured object.
  • the initial position (1) of the camera group 12 is directly above the measured object 14; after the camera group 12 collects an image there, it can be controlled to move clockwise or counterclockwise by a preset angle to reach the other shooting positions.
  • the initial installation position of the above-mentioned camera group 12 can be implemented in the following several ways, which are not limited here.
  • in the process of image acquisition using one camera group 12: in the first scenario, the initial position of the camera group 12 can be position (1) in FIG. 6, and the shooting position changes as described above; in the second scenario, the initial position of the camera group 12 can be position (2) shown in FIG. 6, and the shooting process is: after the image is taken at position (2), control the camera group to move counterclockwise by the preset angle θ to reach position (1), and after shooting at position (1), control the camera group 12 to move counterclockwise by the preset angle θ to reach position (3) for shooting; in the third scenario, the initial position of the camera group 12 can be position (3) shown in FIG. 6, and the shooting process is: after the image is taken at position (3), control the camera group to move clockwise by the preset angle θ to reach position (1), and after shooting at position (1), control the camera group 12 to move clockwise by the preset angle θ to reach position (2) for shooting.
  • the above-mentioned camera groups 12 can also be three groups, namely the first camera group 121 , the second camera group 122 and the third camera group 123 , and the installation positions of the three camera groups are fixed.
  • the measured object 14 can be in a static state, and the three camera groups can start shooting simultaneously, ensuring that the three camera groups capture images of the same surface of the measured object at the same moment.
  • FIG. 7 is a schematic diagram of an implementation manner of three camera groups provided in an embodiment of the present application.
  • the first camera group 121 may be located directly above the measured object 14 , and the viewing direction of the first camera group may be parallel to the laser.
  • the angle between the normal of the second camera group 122 and the normal of the third camera group 123 and the normal of the first camera group 121 may be the same, wherein the above normal may be perpendicular to the shooting plane of the camera group.
  • the value range of the above-mentioned included angle is 30 degrees to 50 degrees; in a preferred embodiment, the included angle is 40 degrees.
  • Figure 8 is a three-view drawing of a detection system with one laser provided in an embodiment of the present application, including (1) a front view, (2) a right view, and (3) a top view. In combination with the three views shown in Figure 8, schematic diagrams of the shooting angle of each camera group are also given; please refer to FIGS. 9A to 9C, which are schematic diagrams of the shooting angles of the three camera groups provided by the embodiment of the present application.
  • there is one laser 11 and the laser line divides the measured object 14 into two regions of interest, namely region A and region B.
  • the shooting range of the first camera group 121 can include all of area A and all of area B; that is, the image of the first camera group 121 contains all of region A and region B.
  • the shooting range of the second camera group 122 may include part of area A and part of area B, where the included part of area A may be smaller than the included part of area B; that is, the image of the second camera group 122 may include a partial area of area A and a partial area of area B, with the partial area of area A smaller than the partial area of area B;
  • the shooting range of the third camera group 123 may also include part of area A and part of area B, where the included part of area A may be larger than the included part of area B; that is, the image of the third camera group 123 may include a partial area of area A and a partial area of area B, with the partial area of area A larger than the partial area of area B.
  • if the image of the first camera group 121 includes all of area A and area B, this can be expressed as containing 100% of area A and 100% of area B; the image of the second camera group 122 includes 30% of area A and 70% of area B; and the image of the third camera group 123 includes 70% of area A and 30% of area B. In the process of determining the target image, the most ideal reference areas are then the 70% of area B in the image of the second camera group 122 and the 70% of area A in the image of the third camera group 123.
  • the target surface information captured by the three camera groups may overlap.
  • the laser 11 can be installed near the position of the first camera group 121 .
  • the effect of obtaining a complete surface image can be achieved by collecting images of the measured object from different shooting angles with three camera groups; foreseeably, the technical effect achieved with four or more camera groups should be the same as that achieved with three.
  • the two lasers may be distributed on both sides of the first camera group 121, and the two lasers 11 may be parallel to the first camera group 121.
  • FIG. 10 is a schematic diagram of another region of interest provided by the embodiment of the present application.
  • two laser lines can be formed on the surface of the measured object 14, and the measured object is divided into three regions of interest, namely region A, region B and region C.
  • if the camera group is directly above, the image includes all of area A, area B, and area C; if the camera group 12 is at a certain angle to the normal direction of the measured object 14, the image taken by the camera group 12 may include part of area A, part of area B, and part of area C.
  • Fig. 11 shows three views of a detection system containing two lasers provided by an embodiment of the present application, including (1) a front view, (2) a right side view, and (3) a top view.
  • the shooting range of the first camera group 121 may include all of area A, all of area B, and all of area C.
  • the shooting range of the second camera group 122 may include part of area A, part of area B, and part of area C.
  • the shooting range of the third camera group 123 may likewise include part of area A, part of area B, and part of area C.
  • the image of the first camera group 121 may contain 100% of area A, 100% of area B, and 100% of area C
  • the image of the second camera group 122 may contain 10% of area A, 30% of area B, and 60% of area C
  • the image of the third camera group 123 may contain 60% of area A, 30% of area B, and 60% of area C
  • the most ideal reference regions are then the 100% of area B contained in the image of the first camera group 121, the 60% of area C in the image of the second camera group 122, and the 60% of area A in the image of the third camera group 123.
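A minimal sketch of how the "most ideal reference regions" above could be chosen programmatically. The camera names, the coverage fractions, and the rule that the top camera supplies the middle region are illustrative assumptions of this sketch, not the application's actual algorithm.

```python
def pick_reference_regions(side_coverage, middle_region="B", top_camera="cam1"):
    # The top camera supplies the middle region; every other region is
    # taken from the side camera whose image covers it most.
    picks = {middle_region: top_camera}
    regions = {r for cov in side_coverage.values() for r in cov} - {middle_region}
    for r in sorted(regions):
        picks[r] = max(side_coverage, key=lambda cam: side_coverage[cam].get(r, 0.0))
    return picks

# Hypothetical coverage fractions matching the two-laser example above.
side_coverage = {
    "cam2": {"A": 0.10, "C": 0.60},
    "cam3": {"A": 0.60, "C": 0.60},
}
picks = pick_reference_regions(side_coverage)
```

With these numbers, area A comes from the third camera group and area C from the second, mirroring the example; on a tie, Python's `max` keeps the first candidate encountered.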
  • the computer device 13 shown in FIG. 2 above can be configured to: cut the image to be processed according to the position of the laser line in that image, obtaining multiple cut images, where the image to be processed is any one of all the images.
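The cutting step can be sketched as follows. The sketch assumes the laser line appears as the single brightest row of a grayscale image; a real system would locate a curved, sub-2 mm line rather than a straight row, so this is only an illustration of the idea.

```python
def find_laser_line(gray):
    # gray: a grayscale image as a list of pixel rows; the laser line is
    # assumed to show up as the brightest row (a simplification).
    row_sums = [sum(row) for row in gray]
    return row_sums.index(max(row_sums))

def cut_at_laser_line(gray):
    # Split the image into the regions on either side of the laser line,
    # dropping the line itself.
    r = find_laser_line(gray)
    return gray[:r], gray[r + 1:]

# Toy 6x4 image with a bright laser line on row 2.
img = [[0, 0, 0, 0] for _ in range(6)]
img[2] = [255, 255, 255, 255]
upper, lower = cut_at_laser_line(img)
```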
  • the cut image with the largest proportion of the region of interest may be used as the image to be spliced corresponding to the image to be processed.
  • the image of the second camera group 122 includes 30% of area A and 70% of area B
  • the image of the third camera group 123 includes 70% of area A and 30% of area B
  • in the image to be processed taken by the second camera group, the image area containing 70% of area B is used as the image to be stitched
  • in the image to be processed taken by the third camera group, the image area containing 70% of area A is used as the image to be stitched.
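The selection rule above (keep the cut whose region of interest occupies the largest proportion) can be sketched as follows; the region names and fractions are the illustrative numbers from the example, not measured values.

```python
def pick_to_be_stitched(cuts):
    # cuts: list of (region_name, roi_fraction) pairs produced by cutting
    # one camera's image at the laser line; keep the cut whose region of
    # interest occupies the largest proportion.
    return max(cuts, key=lambda c: c[1])

cam2_cuts = [("A", 0.30), ("B", 0.70)]  # second camera group's image
cam3_cuts = [("A", 0.70), ("B", 0.30)]  # third camera group's image
best2 = pick_to_be_stitched(cam2_cuts)
best3 = pick_to_be_stitched(cam3_cuts)
```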
  • each image to be stitched is unfolded and stitched to obtain the target image.
  • the above-mentioned preset reference coordinate system can be a schematic diagram of the reference coordinate system shown in FIG. 12 , and the image processing program can expand the image to be stitched by referring to the latitude and longitude of the globe according to the laser line marking position.
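The globe-style unfolding can be illustrated by mapping a surface point to planar longitude/latitude coordinates. This assumes a spherical measured object and is only a sketch of the latitude-and-longitude idea, not the application's actual unfolding routine.

```python
import math

def unfold_point(x, y, z, radius):
    # Map a point on a sphere of the given radius to (longitude, latitude),
    # the planar coordinates of a globe-style (equirectangular) unfolding.
    lon = math.atan2(y, x)                            # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, z / radius)))  # -pi/2 .. pi/2
    return lon, lat

# A point on the equator and a point at the pole of a unit sphere.
equator = unfold_point(1.0, 0.0, 0.0, 1.0)
pole = unfold_point(0.0, 0.0, 1.0, 1.0)
```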
  • the images captured by the second camera group 122 and the third camera group 123 may be stitched, and the images captured by the first camera group 121 are not involved in the image stitching process.
  • the images taken by the first camera group 121, the second camera group 122, and the third camera group 123 all need to participate in the image stitching process: from the images taken by the second camera group 122 and the third camera group 123, the cut image with the largest region-of-interest proportion is selected for stitching, while the first camera group 121 contributes the cut image containing the middle region of interest.
  • the detection system in each of the above embodiments may also include a rotating device configured to drive the measured object to rotate, so that the camera group can collect images of each surface in real time; this can improve the accuracy of subsequent locating and identification of surface defects and bring higher precision to subsequent sorting work.
  • the above-mentioned rotating device may consist of a cup, a bracket, and a cup wheel; the cup may hold the measured object, the bracket may support the cup, the cup wheel may be located in the middle of the bracket, and the cup wheel may rotate around the cup axis to drive the measured object to rotate, ensuring 360° detection of the measured object with no dead angles.
  • the above detection system may also include a conveyor belt
  • Figure 13 shows three views of another detection system provided by an embodiment of the present application, in which the measured object is placed in the cup of the above-mentioned rotating device (the rotating device is omitted in Figure 13). The conveying direction of the conveyor belt can be consistent with the rotating direction of the cup wheel, the cup wheel of the rotating device can be in contact with the conveyor belt, and the conveyor belt can be driven by a motor to move cyclically; the friction between the conveyor belt and the cup wheel drives the cup wheel to rotate, thereby driving the object to rotate.
  • the detected object in each of the above embodiments may be, but not limited to, a circular or elliptical object, for example, the detected object may be, but not limited to, a fruit or vegetable.
  • the surface of a circular or elliptical object has a smooth curve, and the laser line formed on the surface is a smooth curve, so that the surface can be evenly divided, and the difficulty of subsequent image processing can be reduced.
  • FIG. 14 is a schematic flow chart of a detection method provided by an embodiment of this application; the method may include:
  • the region of interest is cut by a laser line formed by the intersection of a laser plane projected by a laser located directly above the measured object and a surface of the measured object.
  • step S32 may include the following sub-steps:
  • sub-step 321 according to the position of the laser line in the image to be processed, the image to be processed is cut to obtain a plurality of cut images, and the image to be processed is any one of all the images;
  • Sub-step 322 the cut image with the largest ratio of the region of interest among the multiple cut images is used as the image to be spliced corresponding to the image to be processed;
  • Sub-step 323, traversing all the images to obtain the image to be spliced corresponding to each image;
  • each image to be stitched is unfolded and stitched to obtain the target image.
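Sub-steps 321 to 324 above can be sketched as a single loop. Here `cut_fn` and `unfold_fn` stand in for the laser-line cutting and coordinate-unfolding routines and are assumptions of this sketch, not functions defined by the application.

```python
def build_target_image(images, cut_fn, unfold_fn):
    # Sub-step 321: cut each image at the laser line (cut_fn returns a
    # list of (cut, roi_fraction) pairs); sub-step 322: keep the cut with
    # the largest region-of-interest proportion; sub-step 323: traverse
    # all images; sub-step 324: unfold each kept cut into the preset
    # reference frame and stitch (here simply concatenated).
    target = []
    for image in images:
        cuts = cut_fn(image)
        best_cut, _ = max(cuts, key=lambda c: c[1])
        target.extend(unfold_fn(best_cut))
    return target

# Dummy stand-ins: each "image" cuts into two labelled halves.
cut = lambda img: [([f"{img}-small"], 0.3), ([f"{img}-large"], 0.7)]
unfold = lambda cut_img: cut_img  # identity "unfolding"
target = build_target_image(["img1", "img2"], cut, unfold)
```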
  • the detection device may include:
  • the acquiring module can be configured to acquire the images of the measured object captured by the camera group at different shooting angles, wherein each image contains part or all of each region of interest; the regions of interest are cut out by the laser line formed by the intersection of the laser plane projected by the laser located in the area directly above the measured object and the surface of the measured object.
  • the processing module may be configured to perform cutting and splicing processing on all the images according to the region of interest existing in each image to obtain the target image of the surface.
  • the processing module may be specifically configured to: cut the image to be processed according to the position of the laser line in the image to be processed to obtain a plurality of cut images, the image to be processed is any one of all images; The cut image with the largest proportion of the region of interest in the cut images is used as the image to be spliced corresponding to the image to be processed; all images are traversed to obtain the image to be spliced corresponding to each image; according to the preset reference coordinate system, each image to be spliced is The stitched image is unfolded and spliced to obtain the target image.
  • the embodiment of the present application also provides a computer device, as shown in FIG. 15 , which is a structural block diagram of a computer device provided in the embodiment of the present application.
  • the computer device 13 may include a communication interface 131 , a processor 132 and a memory 133 .
  • the processor 132 , the memory 133 and the communication interface 131 may be electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, these components can be electrically connected to each other through one or more communication buses or signal lines.
  • the memory 133 can be used to store software programs and modules, such as the program instructions/modules corresponding to the detection method provided in the embodiment of the present application, and the processor 132 can perform various functions by executing the software programs and modules stored in the memory 133 applications and data processing.
  • the communication interface 131 can be used for signaling or data communication with other node devices.
  • the computer device 13 may have a plurality of communication interfaces 131 .
  • the memory 133 can be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
  • the processor 132 may be an integrated circuit chip with signal processing capability.
  • the processor can be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the above modules can be stored in the memory shown in Figure 15 in the form of software or firmware (Firmware) or solidified in the operating system (Operating System, OS) of the computer device, and can be executed by the processor in Figure 15 . Meanwhile, data necessary for executing the above-mentioned modules, codes of programs, etc. may be stored in the memory.
  • An embodiment of the present application provides a computer-readable storage medium, on which a computer program can be stored, and when the computer program is executed by a processor, any detection method in the foregoing implementation manners can be implemented.
  • the computer-readable storage medium may be, but not limited to, various mediums capable of storing program codes such as U disk, mobile hard disk, ROM, RAM, PROM, EPROM, EEPROM, magnetic disk or optical disk.
  • the application discloses a detection system, a method, a computer device and a computer-readable storage medium.
  • the detection system includes a laser, a camera group and computer equipment.
  • the camera group is installed in the upper area of the measured object; the laser is installed directly above the measured object, and the emission port of the laser faces the measured object; the laser is configured to project a laser plane, the laser plane intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest;
  • the camera group is configured to collect images of the measured object from different shooting angles; each Each image contains part or all of each region of interest;
  • the computer device is configured to cut and stitch all the images according to the region of interest contained in each image to obtain a target image of the surface.
  • This application can obtain a complete surface image after splicing, so as to ensure accurate positioning and identification of surface defects in the future.
  • the detection system, method, computer equipment and computer-readable storage medium of the present application are reproducible and can be applied in various industrial applications.
  • the detection system of the present application can be applied in the field of detection.

Abstract

Provided are a detection system (10), a method, a computer device (13), and a computer-readable storage medium. The detection system (10) includes a laser (11), a camera group (12), and a computer device (13). The camera group (12) is installed in the area above a measured object (14); the laser (11) is installed directly above the measured object (14), with its emission port facing the measured object (14). The laser (11) is configured to project a laser plane; the laser plane intersects the surface of the measured object (14) to form a laser line, and the laser line divides the surface into a plurality of different regions of interest. The camera group (12) is configured to collect images of the measured object (14) from different shooting angles, each image containing part or all of each region of interest. The computer device (13) is configured to cut and stitch all the images according to the regions of interest contained in each image to obtain a target image of the surface. A complete stitched surface image can thus be obtained, ensuring that surface defect positions can subsequently be located and identified accurately.

Description

Detection system, method, computer device, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202110870046.9, entitled "Detection system, method, computer device, and computer-readable storage medium", filed with the China National Intellectual Property Administration on July 30, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of detection technology, and in particular to a detection system, a method, a computer device, and a computer-readable storage medium.
BACKGROUND
Fruits and vegetables are easily damaged by collision, squeezing, and vibration during picking, grading, packaging, and transport. Such damage not only degrades their appearance quality but also makes them prone to fungal or bacterial invasion and rot (e.g., late blight, dry rot, soft rot), affecting food safety.
At present, the related art uses optical detection technology to inspect fruit and vegetable surfaces. Optical detection typically images the fruit with multiple cameras and then manually calibrates and stitches the acquired images to obtain a picture of the fruit surface. This approach leads to misalignment in the stitched image, so the full fruit surface cannot be displayed completely, defect positions on the fruit surface cannot be accurately located and identified, subsequent fruit sorting is complicated, and sorting accuracy drops.
SUMMARY
In view of this, this application provides a detection system, a method, a computer device, and a computer-readable storage medium for obtaining a complete surface image of a measured object, providing a basis for subsequent accurate locating and identification of surface defect positions.
The technical solution of this application may be implemented as follows:
Some embodiments of this application provide a detection system including a laser, a camera group, and a computer device. The camera group is installed in the area above the measured object; the laser is installed directly above the measured object, with its emission port facing the measured object. The laser is configured to project a laser plane; the laser plane intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest. The camera group is used to collect images of the measured object from different shooting angles, wherein each image contains part or all of each region of interest. The computer device is configured to cut and stitch all the images according to the regions of interest contained in each image to obtain a target image of the surface.
In embodiments of this application, the laser calibrates the surface of the measured object and divides it into regions of interest, and the camera group then captures images from different angles. Because the laser line divides the surface into mutually non-overlapping regions, the computer device can analyze the proportion of each region of interest in each image to stitch the images, obtain a complete surface image of the measured object, and ensure that surface defect positions can subsequently be located and identified accurately.
Optionally, the computer device may be specifically configured to: cut a to-be-processed image according to the position of the laser line in it to obtain a plurality of cut images, the to-be-processed image being any one of all the images; take the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image; traverse all the images to obtain the to-be-stitched image corresponding to each image; and unfold and then stitch each to-be-stitched image according to a preset reference coordinate system to obtain the target image.
In embodiments of this application, a complete surface image of the measured object can be obtained, which can improve the accuracy of locating and identifying surface defect positions.
Optionally, the camera group may contain at least one camera; when there are multiple cameras, the multiple cameras may be installed side by side.
In embodiments of this application, installing multiple cameras side by side avoids coordinate-system alignment problems caused by differing camera orientations and reduces the difficulty of subsequent image processing.
Optionally, there may be one camera group, and the camera group may move to different shooting positions so as to collect images at different shooting angles.
In embodiments of this application, moving a single camera group to different shooting positions to capture surface images provides more reference information about the surface of the measured object and improves the accuracy of subsequent image stitching.
Optionally, there may be three camera groups: a first camera group, a second camera group, and a third camera group. The first camera group may be located directly above the measured object, and its viewing direction may be parallel to the laser; the angle between the normal of the second camera group and the normal of the first camera group may be the same as the angle between the normal of the third camera group and the normal of the first camera group.
In embodiments of this application, capturing surface images with multiple camera groups provides more reference information for obtaining a complete surface image of the measured object and improves the accuracy of subsequent image stitching.
Optionally, the angle may range from 30 degrees to 50 degrees.
Optionally, the angle may be 40 degrees.
In embodiments of this application, keeping the above angle within a reasonable range avoids excessive overlap, or gaps, between the shooting areas of the camera groups caused by an angle that is too small or too large, improving the accuracy of subsequent image stitching.
Optionally, there may be at least one laser; when there are two lasers, the two lasers are distributed on both sides of the first camera group.
In embodiments of this application, multiple lasers can divide the surface of the measured object more finely, improving the accuracy of subsequent image stitching.
Optionally, the width of the laser line may be less than 2 millimeters.
In embodiments of this application, keeping the laser-line width within a reasonable range prevents the laser line from covering small defects on the surface of the target object, which would make it impossible to obtain a complete and accurate picture of all surface defects.
Optionally, the detection system may further include a rotating device; the rotating device may be configured to drive the measured object to rotate.
Optionally, the rotating device may consist of a cup, a bracket, and a cup wheel. The cup may hold the measured object, the bracket may support the cup, the cup wheel may be located in the middle of the bracket, and the cup wheel may rotate around the cup axis, thereby driving the measured object to rotate.
In embodiments of this application, every surface of the measured object can be detected in real time, providing more surface information for accurate locating and identification of surface defect positions.
Optionally, the detection system may further include a conveyor belt. The conveyor belt may be in contact with the cup wheel of the rotating device and may move cyclically under motor drive; friction between the conveyor belt and the cup wheel drives the cup wheel to rotate, thereby driving the measured object to rotate.
In embodiments of this application, this facilitates real-time detection of every surface of the measured object, providing more surface information for accurate locating and identification of surface defect positions.
Optionally, the measured object may be an object with a circular or elliptical shape.
In embodiments of this application, the surface of a circular or elliptical object is smoothly curved, and the laser line formed on the surface is a smooth curve, so the surface can be divided evenly, reducing the difficulty of subsequent image processing.
Other embodiments of this application provide a detection method. The detection method may include: acquiring images of a measured object collected by a camera group at different shooting angles, wherein each image contains part or all of each region of interest, the regions of interest being cut out by a laser line formed where a laser plane, projected by a laser located in the area directly above the measured object, intersects the surface of the measured object; and cutting and stitching all the images according to the regions of interest present in each image to obtain a target image of the surface.
In embodiments of this application, the laser calibrates the surface of the measured object and divides it into regions of interest, and the camera group then captures images from different angles. Because the laser line divides the surface into mutually non-overlapping regions, the images can be stitched according to the proportion of each region of interest in each image, obtaining a complete surface image of the measured object and ensuring that surface defect positions can subsequently be located and identified accurately.
Optionally, cutting and stitching all the images according to the regions of interest present in each image to obtain the target image of the surface may include: cutting a to-be-processed image according to the position of the laser line in it to obtain a plurality of cut images, the to-be-processed image being any one of all the images; taking the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image; traversing all the images to obtain the to-be-stitched image corresponding to each image; and unfolding and then stitching each to-be-stitched image according to a preset reference coordinate system to obtain the target image.
In embodiments of this application, a complete surface image of the measured object can be obtained, which can improve the accuracy of locating and identifying surface defect positions.
Still other embodiments of this application provide a computer device, which may include a processor and a memory. The memory may store a computer program executable by the processor, and the processor may execute the computer program to implement the detection method according to the embodiments of this application.
Yet other embodiments of this application provide a computer-readable storage medium on which a computer program may be stored; when executed by a processor, the computer program implements the detection method according to the embodiments of this application.
According to the detection system, method, computer device, and computer-readable storage medium provided by this application, the detection system may include a laser, a camera group, and a computer device. The camera group is installed in the area above the measured object; the laser is installed directly above the measured object, with its emission port facing the measured object; the laser is configured to project a laser plane, the laser plane intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest; the camera group is configured to collect images of the measured object from different shooting angles, each image containing part or all of each region of interest; the computer device is configured to cut and stitch all the images according to the regions of interest contained in each image to obtain a target image of the surface. The difference from the related art is that related optical detection techniques manually calibrate and stitch the acquired images, so the resulting surface images suffer from misalignment, with missing or overlapping regions; the full fruit surface cannot be displayed completely, defect positions on the fruit surface cannot be accurately located and identified, subsequent fruit sorting is complicated, and sorting accuracy drops. In this application, by contrast, the laser calibrates the surface of the measured object, the laser line divides the surface into regions of interest, and the camera group captures images from different angles; because the laser line divides the surface into mutually non-overlapping regions, the computer device can analyze the proportion of each region of interest in each image to stitch the images, obtain a complete surface image of the measured object, and ensure that surface defect positions can subsequently be located and identified accurately.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of this application and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a related optical fruit-detection technique;
FIG. 2 is an architecture diagram of a detection system provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a laser plane projected by a laser;
FIG. 4 is a schematic diagram of a laser line provided by an embodiment of this application;
FIG. 5 is a schematic diagram of regions of interest provided by an embodiment of this application;
FIG. 6 is a schematic diagram of shooting a measured object with one camera group according to an embodiment of this application;
FIG. 7 is a schematic diagram of an implementation with three camera groups provided by an embodiment of this application;
FIG. 8 shows three views of a detection system with one laser provided by an embodiment of this application;
FIGS. 9A to 9C are schematic diagrams of the shooting angles of three camera groups provided by an embodiment of this application;
FIG. 10 is a schematic diagram of another set of regions of interest provided by an embodiment of this application;
FIG. 11 shows three views of a detection system containing two lasers provided by an embodiment of this application;
FIG. 12 is a schematic diagram of a reference coordinate system provided by an embodiment of this application;
FIG. 13 shows three views of another detection system provided by an embodiment of this application;
FIG. 14 is a schematic flow chart of a detection method provided by an embodiment of this application;
FIG. 15 is a structural block diagram of a computer device provided by an embodiment of this application.
DETAILED DESCRIPTION
To make the purposes, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. The components of the embodiments of this application, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of this application provided in the drawings is not intended to limit the scope of the claimed application but merely represents selected embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
In the description of this application, it should be noted that terms such as "upper", "lower", "inner", and "outer", where they appear, indicate orientations or positional relationships based on those shown in the drawings, or on the orientation in which the product of the invention is customarily placed in use. They are used only to facilitate and simplify the description of this application, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting this application.
In addition, terms such as "first" and "second", where they appear, are used only to distinguish descriptions and cannot be understood as indicating or implying relative importance.
It should be noted that the features in the embodiments of this application may be combined with one another in the absence of conflict.
At present, in order to accurately locate and identify defect positions on fruit surfaces, the related art uses optical detection technology to inspect fruit and vegetable surfaces. Optical detection typically images the fruit with multiple cameras and then manually calibrates and stitches the acquired images to obtain a picture of the fruit surface. A specific implementation is shown in FIG. 1, which is a schematic diagram of a related optical fruit-detection technique.
The related optical fruit-detection technique uses multiple cameras to collect fruit-surface images. For example, in some possible embodiments, one camera is placed directly above the measured object and one camera is placed on each side of it, and the fruit is imaged at the same shooting time; the collected images may be as shown in FIG. 1(a) and (b), which are then manually integrated to obtain the fruit-surface image shown in FIG. 1(c).
As shown in FIG. 1(a), the image-processing staff first manually determine calibration lines (the black straight lines in the figure) in FIG. 1(a) and (b). The image in (a) is then cut along its calibration line and the image region above the black line is retained, while in FIG. 1(b) the image is cut along the black line and the image region below it is retained. The retained portions cut from (a) and (b) are then stitched to obtain the image shown in FIG. 1(c). It can be seen that the stitched image suffers from misalignment, with missing or overlapping regions, so the full fruit surface cannot be displayed completely, defect positions on the fruit surface cannot be accurately located and identified, subsequent fruit sorting is complicated, and sorting accuracy drops.
To solve the above technical problem, an embodiment of this application provides a detection system. Referring to FIG. 2, which shows the architecture of a detection system provided by an embodiment of this application, the detection system 10 may include a laser 11, a camera group 12, and a computer device 13.
The camera group 12 may be installed in the area above the measured object 14; the laser 11 may be installed directly above the measured object 14, and the emission port of the laser 11 may face the measured object 14.
The laser 11 may be configured to project a laser plane; the laser plane may intersect the surface of the measured object 14 to form a laser line, and the laser line may divide the surface into a plurality of different regions of interest.
In embodiments of this application, a region of interest may refer to one of the mutually non-overlapping regions on either side of the laser line on the surface of the measured object; in an image captured by the camera group, the region of interest may be the visible region of the measured object in that image.
The camera group 12 may be configured to collect images of the measured object 14 from different shooting angles, wherein each image may contain part or all of each region of interest.
The computer device 13 may be configured to cut and stitch all the images according to the regions of interest contained in each image to obtain a target image of the surface.
The difference from the related art is that related optical detection techniques manually calibrate and stitch the acquired images, so the resulting surface images suffer from misalignment, with missing or overlapping regions, and the full fruit surface cannot be displayed completely, or is displayed repeatedly; defect positions on the fruit surface cannot be accurately located and identified, subsequent fruit sorting is complicated, and sorting accuracy drops. In this application, the laser calibrates the surface of the measured object, the laser line divides the surface into regions of interest, and the camera group captures images from different angles; because the laser line divides the surface into mutually non-overlapping regions, the computer device can analyze the proportion of each region of interest in each image to stitch the images, obtain a complete surface image of the measured object, and ensure accurate subsequent locating and identification of surface defect positions.
Optionally, the laser 11 may be, but is not limited to, a line laser generator; the laser 11 may emit a fan-shaped beam in one direction, as shown in FIG. 3, which is a schematic diagram of a laser plane projected by a laser.
In this embodiment of this application, the emission port of the laser 11 faces the measured object 14, so the laser plane projected by the laser 11 can form a laser line where it intersects the surface of the measured object 14, and this laser line can divide the surface of the measured object 14 into a plurality of different regions of interest. For ease of understanding, see FIG. 4, a schematic diagram of a laser line provided by an embodiment of this application; the position of the laser line is as shown in FIG. 4.
In a preferred implementation, the width of the above laser line may be less than 2 millimeters.
It can be understood that the laser beam diverges as it exits; if the laser line is too wide, it occupies too large an area of the target object's surface when projected onto it and may cover small defects on that surface, making it impossible to obtain a complete and accurate picture of all surface defects. Foreseeably, the laser-line width has a greater influence on small fruit: the smaller the fruit, the narrower the laser line should be.
It can be understood that the above regions of interest are the regions on both sides of the laser line. For ease of understanding, taking the shooting angle of a camera group directly above the measured object 14 as an example, FIG. 5 gives a schematic diagram of regions of interest provided by an embodiment of this application.
It can be seen that the camera group 12 and the laser 11 are both located directly above the measured object, and the laser line divides the measured object into two regions of interest, namely region A and region B. Obviously, the image collected by the camera group directly above the measured object 14 contains all of region A and region B; if a camera group is at an angle to the normal direction of the measured object, the image it captures may contain part of region A and part of region B.
It can also be understood that when there is one laser 11, the surface of the measured object 14 can be divided into two regions of interest; with at least two lasers, the surface can be divided into more regions of interest, the number of regions of interest being the number of lasers plus one. Foreseeably, the more lasers there are, the more finely the surface of the measured object is divided, which can improve the accuracy of subsequent image stitching.
Optionally, the above camera group 12 may, but need not, contain a single camera. In some scenarios, when there are multiple cameras in the camera group 12, the cameras are installed side by side and can simultaneously capture images of the same surface of the measured object, providing more image resources for determining the surface image and facilitating subsequent accurate locating and identification of surface defect positions.
In some possible implementations, there may be one camera group 12, and this camera group 12 may move to different shooting positions, thereby achieving the effect of collecting images at different shooting angles.
It should be noted that when one camera group performs image collection, the measured object 14 remains stationary so that the camera group collects images of the same surface of the measured object 14 from different angles.
For example, see FIG. 6, a schematic diagram of shooting the measured object with one camera group according to an embodiment of this application. As shown in FIG. 6, assuming the initial position (1) of the camera group 12 is directly above the measured object 14, then after the camera group 12 collects an image from directly above, it can be moved clockwise by a preset angle θ (for example 40 degrees, or any angle in the range of 30 to 50 degrees) to position (2). After an image is collected at that shooting angle, the camera group 12 is moved counterclockwise by the preset angle θ back to position (1), then moved further counterclockwise by the preset angle θ to position (3), where an image is collected; after shooting at position (3) is complete, the camera group 12 is moved clockwise by the preset angle θ back to the initial position (1), completing the surface-image collection of the measured object 14.
It should be noted that the initial installation position of the camera group 12 may be implemented in several ways, which are not limited here.
Continuing with FIG. 6, in the process of collecting images with one camera group 12: in a first scenario, the initial position of the camera group 12 may be position (1) in FIG. 6, and the shooting positions are changed as described above. In a second scenario, the initial position may be position (2) in FIG. 6; the shooting process is then: after shooting at position (2), move the camera group counterclockwise by the preset angle θ to position (1); after shooting at position (1) is complete, move the camera group 12 counterclockwise by the preset angle θ to position (3) for shooting. In a third scenario, the initial position may be position (3) in FIG. 6; the shooting process is then: after shooting at position (3), move the camera group clockwise by the preset angle θ to position (1); after shooting at position (1) is complete, move the camera group 12 clockwise by the preset angle θ to position (2) for shooting.
In another preferred implementation, there may be three camera groups 12: a first camera group 121, a second camera group 122, and a third camera group 123, with fixed installation positions.
It should be noted that when three camera groups perform image collection, the measured object 14 may remain stationary, and the three camera groups may start shooting at the same time, ensuring that the three camera groups collect images of the same surface of the measured object at the same time.
For an illustration of the installation positions of the three camera groups, see FIG. 7, a schematic diagram of an implementation with three camera groups provided by an embodiment of this application.
As shown in FIG. 7, the first camera group 121 may be located directly above the measured object 14, with its viewing direction parallel to the laser. The angle between the normal of the second camera group 122 and the normal of the first camera group 121 may be the same as the angle between the normal of the third camera group 123 and the normal of the first camera group 121, where each normal is perpendicular to the camera group's shooting plane.
In some possible embodiments, if the above angle is too small or too large, the shooting areas of the second camera group 122 and the third camera group 123 will overlap too much with, or fall short of, the shooting area of the first camera group 121. Therefore, the angle ranges from 30 to 50 degrees; in a preferred implementation, the angle is preferably 40 degrees.
For example, with three camera groups, the whole detection system may be viewed as in FIG. 8, which shows three views of a detection system with one laser provided by an embodiment of this application, including (1) a front view, (2) a right side view, and (3) a top view. In combination with the three views of FIG. 8, schematic diagrams of each camera group's shooting angle are given in FIGS. 9A to 9C, which illustrate the shooting angles of the three camera groups provided by an embodiment of this application. In FIGS. 9A to 9C there is one laser 11, and the laser line divides the measured object 14 into two regions of interest, namely region A and region B.
First, taking the shooting angle of the first camera group 121 in FIG. 9A as an example, it can be seen that the shooting range of the first camera group 121 may contain all of region A and all of region B; that is, the image of the first camera group 121 contains all of region A and region B.
Optionally, at the shooting angle of the second camera group 122 in FIG. 9B, the shooting range of the second camera group 122 may contain part of region A and part of region B, where the included part of region A may be smaller than the included part of region B; that is, the image of the second camera group 122 may contain a partial area of region A and a partial area of region B, and the partial area of region A may be smaller than the partial area of region B.
Optionally, at the shooting angle of the third camera group 123 in FIG. 9C, the shooting range of the third camera group 123 may likewise contain part of region A and part of region B, where the included part of region A may be larger than the included part of region B; that is, the image of the third camera group 123 may contain a partial area of region A and a partial area of region B, and the partial area of region A may be larger than the partial area of region B.
For ease of understanding, an example is given below. Assuming the image of the first camera group 121 contains all of region A and region B, this can be expressed as containing 100% of region A and 100% of region B; the image of the second camera group 122 contains 30% of region A and 70% of region B, and the image of the third camera group 123 contains 70% of region A and 30% of region B. Then, in determining the target image, the most ideal reference regions are the 70% of region B in the image of the second camera group 122 and the 70% of region A in the image of the third camera group 123.
It should be noted that, in order for as much of the target object's surface information as possible to be included in the fields of view of the three camera groups, the target surface information captured by the three camera groups may overlap. To locate the overlapping regions and guarantee the uniqueness of the image information of the first camera group 121, the second camera group 122, and the third camera group 123, the laser 11 may be installed near the position of the first camera group 121.
It should be noted that in this embodiment of this application, the effect of obtaining a complete surface image is achieved by collecting images of the measured object from different shooting angles with three camera groups; foreseeably, the technical effect achieved with four or more camera groups should be the same as that achieved with three.
In some possible embodiments, there may be at least one laser 11; when there are two lasers 11, the two lasers may be distributed on both sides of the first camera group 121, and both lasers 11 may be parallel to the first camera group 121.
For example, with two lasers, taking the shooting angle of a camera group directly above the measured object 14 as an example, FIG. 10 gives a schematic diagram of another set of regions of interest provided by an embodiment of this application.
As shown in FIG. 10, two laser lines may be formed on the surface of the measured object 14, dividing the measured object into three regions of interest, namely region A, region B, and region C. Obviously, the image collected by the camera group 12 contains all of region A, region B, and region C; if the camera group 12 is at an angle to the normal direction of the measured object 14, the image it captures may contain part of region A, part of region B, and part of region C.
On the basis of FIG. 8, three views of a detection system containing two lasers are also given below; see FIG. 11, which shows (1) a front view, (2) a right side view, and (3) a top view of a detection system containing two lasers provided by an embodiment of this application.
For example, combining FIG. 10 and FIG. 11, it can be seen that, taking the shooting angle of the first camera group 121 as an example, the shooting range of the first camera group 121 may contain all of region A, all of region B, and all of region C; at the shooting angle of the second camera group 122, its shooting range may contain part of region A, part of region B, and part of region C; and at the shooting angle of the third camera group 123, its shooting range may likewise contain part of region A, part of region B, and part of region C. In one possible example, the image of the first camera group 121 may contain 100% of region A, 100% of region B, and 100% of region C; the image of the second camera group 122 may then contain 10% of region A, 30% of region B, and 60% of region C; and the image of the third camera group 123 may contain 60% of region A, 30% of region B, and 60% of region C. In that case, in determining the target image, the most ideal reference regions are the 100% of region B contained in the image of the first camera group 121, the 60% of region C in the image of the second camera group 122, and the 60% of region A in the image of the third camera group 123.
Optionally, in order to finally obtain a complete surface image, an implementation is given below: the computer device 13 shown in FIG. 2 above may be configured to cut a to-be-processed image according to the position of the laser line in it, obtaining a plurality of cut images, the to-be-processed image being any one of all the images; and to take the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image.
For example, continuing the example above, the image of the second camera group 122 contains 30% of region A and 70% of region B, and the image of the third camera group 123 contains 70% of region A and 30% of region B. In the to-be-processed image taken by the second camera group, the image region containing 70% of region B is then taken as the to-be-stitched image, and in the to-be-processed image taken by the third camera group, the image region containing 70% of region A is taken as the to-be-stitched image.
All the images can then be traversed to obtain the to-be-stitched image corresponding to each image, and each to-be-stitched image is unfolded and then stitched according to a preset reference coordinate system to obtain the target image.
In some possible embodiments, the preset reference coordinate system may be the one shown in the schematic diagram of FIG. 12; according to the laser-line calibration positions, the image-processing program may unfold the to-be-stitched images in the manner of the latitude and longitude lines of a globe.
It should be noted that in the one-laser scenario, only the images taken by the second camera group 122 and the third camera group 123 may be stitched, and the image taken by the first camera group 121 does not participate in the stitching process. In the two-laser scenario, the images taken by the first camera group 121, the second camera group 122, and the third camera group 123 all need to participate in the stitching process: from the images taken by the second camera group 122 and the third camera group 123, the cut image with the largest region-of-interest proportion is selected for stitching, while the first camera group 121 contributes the cut image containing the middle region of interest.
Optionally, to ensure that all surface information of the measured object can be obtained, the detection system in each of the above embodiments may further include a rotating device configured to drive the measured object to rotate, so that the camera group can collect images of each surface in real time; this can improve the accuracy of subsequent locating and identification of surface defect positions and bring higher precision to subsequent sorting work.
In one possible embodiment, the above rotating device may consist of a cup, a bracket, and a cup wheel. The cup may hold the measured object, the bracket may support the cup, the cup wheel may be located in the middle of the bracket, and the cup wheel may rotate around the cup axis, thereby driving the measured object to rotate and ensuring 360° detection of the measured object with no dead angles.
In some possible embodiments, to make the rotating device rotate, the above detection system may further include a conveyor belt. See FIG. 13, which shows three views of another detection system provided by an embodiment of this application; the measured object in the figure is placed in the cup of the above rotating device (the rotating device is omitted in FIG. 13). The conveying direction of the conveyor belt may be consistent with the rotation direction of the cup wheel, the cup wheel of the rotating device may be in contact with the conveyor belt, and the conveyor belt may move cyclically under motor drive; friction between the conveyor belt and the cup wheel drives the cup wheel to rotate, thereby driving the object to rotate.
Optionally, the detected object in each of the above embodiments may be, but is not limited to, a circular or elliptical object; for example, the measured object may be, but is not limited to, fruit or vegetables.
In embodiments of this application, the surface of a circular or elliptical object is smoothly curved, and the laser line formed on the surface is a smooth curve, so the surface can be divided evenly, reducing the difficulty of subsequent image processing.
Based on the same inventive concept, an embodiment of this application further provides a detection method, which can be applied to the computer device shown in FIG. 2. See FIG. 14, a schematic flow chart of a detection method provided by an embodiment of this application; the method may include:
S31: acquiring images of the measured object collected by the camera group at different shooting angles, wherein each image contains part or all of each region of interest;
wherein the regions of interest are cut out by the laser line formed where the laser plane projected by the laser located in the area directly above the measured object intersects the surface of the measured object.
S32: cutting and stitching all the images according to the regions of interest present in each image to obtain a target image of the surface.
Optionally, in one possible implementation, step S32 may include the following sub-steps:
Sub-step 321: cutting the to-be-processed image according to the position of the laser line in it to obtain a plurality of cut images, the to-be-processed image being any one of all the images;
Sub-step 322: taking the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image;
Sub-step 323: traversing all the images to obtain the to-be-stitched image corresponding to each image;
Sub-step 324: unfolding and then stitching each to-be-stitched image according to a preset reference coordinate system to obtain the target image.
To carry out the steps of the detection method in the above embodiments, an implementation of a detection device is given below. It should be noted that the basic principle and resulting technical effects of the detection device provided by this embodiment are the same as those of the above embodiments; for brevity, where this embodiment is silent, refer to the corresponding content of the above embodiments. The detection device may include:
an acquisition module, which may be configured to acquire images of the measured object collected by the camera group at different shooting angles, wherein each image contains part or all of each region of interest, the regions of interest being cut out by the laser line formed where the laser plane projected by the laser located in the area directly above the measured object intersects the surface of the measured object; and
a processing module, which may be configured to cut and stitch all the images according to the regions of interest present in each image to obtain a target image of the surface.
Optionally, the processing module may be specifically configured to: cut the to-be-processed image according to the position of the laser line in it to obtain a plurality of cut images, the to-be-processed image being any one of all the images; take the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image; traverse all the images to obtain the to-be-stitched image corresponding to each image; and unfold and then stitch each to-be-stitched image according to a preset reference coordinate system to obtain the target image.
An embodiment of this application further provides a computer device; see FIG. 15, a structural block diagram of a computer device provided by an embodiment of this application. The computer device 13 may include a communication interface 131, a processor 132, and a memory 133. The processor 132, the memory 133, and the communication interface 131 may be electrically connected to one another, directly or indirectly, to realize data transmission or interaction; for example, these components may be electrically connected to one another through one or more communication buses or signal lines. The memory 133 may be used to store software programs and modules, such as the program instructions/modules corresponding to the detection method provided by the embodiments of this application, and the processor 132 performs various functional applications and data processing by executing the software programs and modules stored in the memory 133. The communication interface 131 may be used for signaling or data communication with other node devices. In this application, the computer device 13 may have multiple communication interfaces 131.
The memory 133 may be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 132 may be an integrated circuit chip with signal-processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
Optionally, the above modules may be stored in the memory shown in FIG. 15 in the form of software or firmware, or solidified in the operating system (Operating System, OS) of the computer device, and may be executed by the processor in FIG. 15. Meanwhile, the data needed to execute the above modules, the code of the programs, etc., may be stored in the memory.
An embodiment of this application provides a computer-readable storage medium on which a computer program may be stored; when executed by a processor, the computer program can implement any of the detection methods in the foregoing implementations. The computer-readable storage medium may be, but is not limited to, a USB flash drive, a removable hard disk, ROM, RAM, PROM, EPROM, EEPROM, a magnetic disk, an optical disk, or any other medium that can store program code.
The above are only specific implementations of this application, but the scope of protection of this application is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by this application shall fall within the scope of protection of this application. Therefore, the scope of protection of this application shall be subject to the scope of protection of the claims.
INDUSTRIAL APPLICABILITY
This application discloses a detection system, a method, a computer device, and a computer-readable storage medium. The detection system includes a laser, a camera group, and a computer device; the camera group is installed in the area above the measured object; the laser is installed directly above the measured object, with its emission port facing the measured object; the laser is configured to project a laser plane, the laser plane intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest; the camera group is configured to collect images of the measured object from different shooting angles, each image containing part or all of each region of interest; the computer device is configured to cut and stitch all the images according to the regions of interest contained in each image to obtain a target image of the surface. This application can obtain a complete stitched surface image, ensuring that surface defect positions can subsequently be located and identified accurately.
In addition, it can be understood that the detection system, method, computer device, and computer-readable storage medium of this application are reproducible and can be applied in a variety of industrial applications. For example, the detection system of this application can be applied in the field of detection.

Claims (17)

  1. A detection system, characterized by comprising a laser, a camera group, and a computer device;
    the camera group is installed in the area above a measured object; the laser is installed directly above the measured object, and the emission port of the laser faces the measured object;
    the laser is configured to project a laser plane, the laser plane intersects the surface of the measured object to form a laser line, and the laser line divides the surface into a plurality of different regions of interest;
    the camera group is configured to collect images of the measured object from different shooting angles, wherein each of the images contains part or all of each of the regions of interest;
    the computer device is configured to cut and stitch all the images according to the regions of interest contained in each of the images to obtain a target image of the surface.
  2. The detection system according to claim 1, characterized in that the computer device is specifically configured to:
    cut a to-be-processed image according to the position of the laser line in the to-be-processed image to obtain a plurality of cut images, the to-be-processed image being any one of all the images;
    take the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image;
    traverse all the images to obtain the to-be-stitched image corresponding to each of the images;
    unfold and then stitch each of the to-be-stitched images according to a preset reference coordinate system to obtain the target image.
  3. The detection system according to claim 1 or 2, characterized in that the camera group contains at least one camera, and when there are multiple cameras, the multiple cameras are installed side by side.
  4. The detection system according to any one of claims 1 to 3, characterized in that there is one camera group, and the camera group moves to different shooting positions so as to collect images at different shooting angles.
  5. The detection system according to any one of claims 1 to 3, characterized in that there are three camera groups, namely a first camera group, a second camera group, and a third camera group;
    the first camera group is located directly above the measured object, and the viewing direction of the first camera group is parallel to the laser;
    the angle between the normal of the second camera group and the normal of the first camera group is the same as the angle between the normal of the third camera group and the normal of the first camera group.
  6. The detection system according to claim 5, characterized in that the angle ranges from 30 degrees to 50 degrees.
  7. The detection system according to claim 5, characterized in that the angle is 40 degrees.
  8. The detection system according to any one of claims 5 to 7, characterized in that there is at least one laser, and when there are two lasers, the two lasers are distributed on both sides of the first camera group.
  9. The detection system according to any one of claims 1 to 8, characterized in that the width of the laser line is less than 2 millimeters.
  10. The detection system according to any one of claims 1 to 9, characterized by further comprising a rotating device; the rotating device is configured to drive the measured object to rotate.
  11. The detection system according to claim 10, characterized in that the rotating device consists of a cup, a bracket, and a cup wheel; the cup holds the measured object, the bracket supports the cup, the cup wheel is located in the middle of the bracket, and the cup wheel rotates around the cup axis, thereby driving the measured object to rotate.
  12. The detection system according to claim 11, characterized by further comprising a conveyor belt; the conveyor belt is in contact with the cup wheel of the rotating device and moves cyclically under motor drive, and the friction between the conveyor belt and the cup wheel drives the cup wheel to rotate, thereby driving the measured object to rotate.
  13. The detection system according to any one of claims 1 to 12, characterized in that the measured object is an object with a circular or elliptical shape.
  14. A detection method, characterized in that the detection method comprises:
    acquiring images of a measured object collected by a camera group at different shooting angles, wherein each of the images contains part or all of each region of interest;
    wherein the regions of interest are cut out by a laser line formed where a laser plane, projected by a laser located in the area directly above the measured object, intersects the surface of the measured object;
    cutting and stitching all the images according to the regions of interest present in each of the images to obtain a target image of the surface.
  15. The detection method according to claim 14, characterized in that cutting and stitching all the images according to the regions of interest present in each of the images to obtain the target image of the surface comprises:
    cutting a to-be-processed image according to the position of the laser line in the to-be-processed image to obtain a plurality of cut images, the to-be-processed image being any one of all the images;
    taking the cut image with the largest region-of-interest proportion among the plurality of cut images as the to-be-stitched image corresponding to the to-be-processed image;
    traversing all the images to obtain the to-be-stitched image corresponding to each of the images;
    unfolding and then stitching each of the to-be-stitched images according to a preset reference coordinate system to obtain the target image.
  16. A computer device, characterized by comprising a processor and a memory, the memory storing a computer program executable by the processor, the processor executing the computer program to implement the detection method according to claim 14 or 15.
  17. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the detection method according to claim 14 or 15.
PCT/CN2022/091409 2021-07-30 2022-05-07 检测系统、方法、计算机设备及计算机可读存储介质 WO2023005321A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CA3224219A CA3224219A1 (en) 2021-07-30 2022-05-07 Detection system and method, computer device, and computer readable storage medium
AU2022316746A AU2022316746A1 (en) 2021-07-30 2022-05-07 Detection system and method, computer device, and computer readable storage medium
IL309828A IL309828A (en) 2021-07-30 2022-05-07 System and method for locating, computer device, and computer readable storage medium
KR1020247004068A KR20240027123A (ko) 2021-07-30 2022-05-07 검출 시스템, 검출 방법, 컴퓨터 기기 및 컴퓨터 판독 가능 저장 매체
EP22847936.6A EP4365578A1 (en) 2021-07-30 2022-05-07 Detection system and method, computer device, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110870046.9 2021-07-30
CN202110870046.9A CN113567453B (zh) 2021-07-30 2021-07-30 检测系统、方法、计算机设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023005321A1 true WO2023005321A1 (zh) 2023-02-02

Family

ID=78169372

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091409 WO2023005321A1 (zh) 2021-07-30 2022-05-07 检测系统、方法、计算机设备及计算机可读存储介质

Country Status (7)

Country Link
EP (1) EP4365578A1 (zh)
KR (1) KR20240027123A (zh)
CN (1) CN113567453B (zh)
AU (1) AU2022316746A1 (zh)
CA (1) CA3224219A1 (zh)
IL (1) IL309828A (zh)
WO (1) WO2023005321A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113567453B (zh) * 2021-07-30 2023-12-01 绿萌科技股份有限公司 检测系统、方法、计算机设备及计算机可读存储介质

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008016309A1 (en) * 2006-08-04 2008-02-07 Sinvent As Multi-modal machine-vision quality inspection of food products
DE102011001127A1 (de) * 2011-03-07 2012-09-13 Miho Holding-Gmbh Inspektionsvorrichtung für Leergut und Verfahren zur Inspektion von Leergut
CN103234905A (zh) * 2013-04-10 2013-08-07 浙江大学 用于球状水果无冗余图像信息获取的方法和装置
JP2013231668A (ja) * 2012-04-27 2013-11-14 Shibuya Seiki Co Ltd 農産物検査装置及び農産物検査方法
CN109186491A (zh) * 2018-09-30 2019-01-11 南京航空航天大学 基于单应性矩阵的平行多线激光测量系统及测量方法
US20200182696A1 (en) * 2017-06-09 2020-06-11 The New Zealand Institute For Plant And Food Research Limited Method and system for determining internal quality attribute(s) of articles of agricultural produce
CN210906973U (zh) * 2019-08-22 2020-07-03 江西绿萌科技控股有限公司 一种小水果分级输送果杯
CN112567230A (zh) * 2018-05-15 2021-03-26 克朗斯股份公司 用于借助位置确定来检查容器的方法
CN112884880A (zh) * 2021-01-20 2021-06-01 浙江大学 一种基于线激光的蜜柚三维建模装置和方法
CN113567453A (zh) * 2021-07-30 2021-10-29 江西绿萌科技控股有限公司 检测系统、方法、计算机设备及计算机可读存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063129B (zh) * 2017-05-25 2019-06-07 西安知象光电科技有限公司 一种阵列式并行激光投影三维扫描方法
CN107830813B (zh) * 2017-09-15 2019-10-29 浙江理工大学 激光线标记的长轴类零件图像拼接及弯曲变形检测方法

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008016309A1 (en) * 2006-08-04 2008-02-07 Sinvent As Multi-modal machine-vision quality inspection of food products
DE102011001127A1 (de) * 2011-03-07 2012-09-13 Miho Holding-Gmbh Inspektionsvorrichtung für Leergut und Verfahren zur Inspektion von Leergut
JP2013231668A (ja) * 2012-04-27 2013-11-14 Shibuya Seiki Co Ltd 農産物検査装置及び農産物検査方法
CN103234905A (zh) * 2013-04-10 2013-08-07 浙江大学 用于球状水果无冗余图像信息获取的方法和装置
US20200182696A1 (en) * 2017-06-09 2020-06-11 The New Zealand Institute For Plant And Food Research Limited Method and system for determining internal quality attribute(s) of articles of agricultural produce
CN112567230A (zh) * 2018-05-15 2021-03-26 Krones AG Method for inspecting containers by means of position determination
CN109186491A (zh) * 2018-09-30 2019-01-11 Nanjing University of Aeronautics and Astronautics Parallel multi-line laser measurement system and measurement method based on a homography matrix
CN210906973U (zh) * 2019-08-22 2020-07-03 Jiangxi Reemoon Technology Holdings Co., Ltd. Fruit cup for grading and conveying small fruit
CN112884880A (zh) * 2021-01-20 2021-06-01 Zhejiang University Line-laser-based three-dimensional modeling device and method for honey pomelo
CN113567453A (zh) * 2021-07-30 2021-10-29 Jiangxi Reemoon Technology Holdings Co., Ltd. Detection system and method, computer device, and computer-readable storage medium

Also Published As

Publication number Publication date
IL309828A (en) 2024-02-01
AU2022316746A1 (en) 2024-01-25
CN113567453B (zh) 2023-12-01
CA3224219A1 (en) 2023-02-02
EP4365578A1 (en) 2024-05-08
CN113567453A (zh) 2021-10-29
KR20240027123A (ko) 2024-02-29

Similar Documents

Publication Publication Date Title
CN111435162B (zh) Lidar and camera synchronization method, apparatus, device, and storage medium
US9074878B2 (en) Laser scanner
CN107707810B (zh) Heat source tracking method, apparatus, and system based on an infrared thermal imager
US11587219B2 (en) Method and apparatus for detecting pixel defect of optical module, and device
US20140078519A1 (en) Laser Scanner
WO2023005321A1 (zh) Detection system and method, computer device, and computer-readable storage medium
JPWO2011070927A1 (ja) Point cloud data processing device, point cloud data processing method, and point cloud data processing program
JP2006030157A (ja) Transmissive calibration instrument for camera calibration and calibration method therefor
CN110753167B (zh) Time synchronization method, apparatus, terminal device, and storage medium
US9736468B2 (en) Calibration method of an image capture system
WO2021226716A1 (en) System and method for discrete point coordinate and orientation detection in 3d point clouds
US9273954B2 (en) Method and system for analyzing geometric parameters of an object
CN109982074B (zh) Method and apparatus for obtaining the tilt angle of a TOF module, and assembly method
CN204944450U (zh) Depth data measurement system
CN111638227A (zh) Method and apparatus for detecting picture defects of a VR optical module
CN109060308B (zh) Delay measurement device and method for an image fusion system
US20160274366A1 (en) Device, System And Method For The Visual Alignment Of A Pipettor Tip And A Reference Point Marker
US11943539B2 (en) Systems and methods for capturing and generating panoramic three-dimensional models and images
WO2022084977A1 (en) Parallax-tolerant panoramic image generation
JP6255819B2 (ja) Measurement computer program, measurement apparatus, and measurement method
JPS6256814A (ja) Camera calibration system for three-dimensional position measurement
US10551187B2 (en) Method and device for determining the leading edges of two overlapping image captures of a surface
JP2005106820A (ja) Coordinated polarization for glossy surface measurement
US20210407134A1 (en) System and method of calibrating a directional light source relative to a camera's field of view
WO2023167984A1 (en) System and method for use of polarized light to image transparent materials applied to objects

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22847936; Country of ref document: EP; Kind code of ref document: A1)
WWE WIPO information: entry into national phase (Ref document number: 3224219; Country of ref document: CA)
WWE WIPO information: entry into national phase (Ref document number: 309828; Country of ref document: IL)
WWE WIPO information: entry into national phase (Ref document numbers: 2022316746 and AU2022316746; Country of ref document: AU)
WWE WIPO information: entry into national phase (Ref document number: 807236; Country of ref document: NZ)
WWE WIPO information: entry into national phase (Ref document number: 000103-2024; Country of ref document: PE)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112024000697; Country of ref document: BR)
WWE WIPO information: entry into national phase (Ref document number: 2401000498; Country of ref document: TH)
ENP Entry into the national phase (Ref document number: 2022316746; Country of ref document: AU; Date of ref document: 20220507; Kind code of ref document: A)
WWE WIPO information: entry into national phase (Ref document number: 2022847936; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 20247004068; Country of ref document: KR; Kind code of ref document: A)
WWE WIPO information: entry into national phase (Ref document number: 1020247004068; Country of ref document: KR)
ENP Entry into the national phase (Ref document number: 2022847936; Country of ref document: EP; Effective date: 20240130)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 112024000697; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20240112)