CN115111970B - Firework forming detection device integrating 2D and 3D visual perception and detection method thereof - Google Patents

Firework forming detection device integrating 2D and 3D visual perception and detection method thereof

Info

Publication number
CN115111970B
CN115111970B CN202210781973.8A CN202210781973A
Authority
CN
China
Prior art keywords
bright bead
image
bright
firework
particles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210781973.8A
Other languages
Chinese (zh)
Other versions
CN115111970A (en)
Inventor
吴鑫
余绍黔
周慧
魏建好
周博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University of Technology
Original Assignee
Hunan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University of Technology filed Critical Hunan University of Technology
Priority to CN202210781973.8A priority Critical patent/CN115111970B/en
Publication of CN115111970A publication Critical patent/CN115111970A/en
Application granted granted Critical
Publication of CN115111970B publication Critical patent/CN115111970B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F42: AMMUNITION; BLASTING
    • F42B: EXPLOSIVE CHARGES, e.g. FOR BLASTING, FIREWORKS, AMMUNITION
    • F42B4/00: Fireworks, i.e. pyrotechnic devices for amusement, display, illumination or signal purposes
    • F42B4/30: Manufacture
    • F42B35/00: Testing or checking of ammunition
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00: Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications

Abstract

The invention discloses a firework forming detection device integrating 2D and 3D visual perception, in which an RGBD camera is fixedly mounted on a light source camera adjusting seat and positioned at an opening of a surface light source; the light source camera adjusting seat is mounted on a detection device bracket; and an industrial control computer is connected to the RGBD camera and the granulator by wires. The invention also discloses a detection method using the detection device. Intelligent detection of the firework bright bead forming process is realized, operators are kept separated from the pyrotechnic composition, potential safety hazards in firework bright bead production are eliminated, and the quality of the finished firework bright beads is improved.

Description

Firework forming detection device integrating 2D and 3D visual perception and detection method thereof
Technical Field
The invention belongs to the technical field of automatic detection, and relates to a firework forming detection device integrating 2D and 3D visual perception and a detection method thereof.
Background
At present, China is the world's largest producer, exporter and consumer of fireworks, with a firework output accounting for 90% of global output and nearly 80% of world trade volume. However, firework production is a typical labor-intensive, high-risk manufacturing industry: pyrotechnic compositions are easily set off by unexpected energy input during production, causing explosion accidents and great loss of life and property. In recent years, through the efforts of government and enterprises, mechanization of the firework production industry has been greatly advanced, separation of operators from the pyrotechnic composition has been promoted, and serious casualty accidents have been effectively reduced. However, the firework production process still suffers from a lack of on-line quality detection and control, poor product consistency and a low degree of automation, and there is an urgent need for safe, efficient and reliable on-line detection and control technology.
Firework bright bead forming is a link in the firework production process that involves a large amount of pyrotechnic composition, carries a high risk coefficient and sees frequent safety accidents. At present, the specific process of firework bright bead forming is as follows: powder mixed in a specific proportion in a storage bin is fed through a discharge hole into a granulator that is placed obliquely (inclination angle 40-60 degrees) and rotates around a central shaft at relatively high speed (diameter 1-1.5 m, rotating speed 15-30 revolutions per minute). In the inclined, rotating granulator the loose powder is carried around under the combined action of gravity, centrifugal force and friction; when the powder reaches a certain height it slides down under gravity, and during this process a spraying device intermittently sprays an atomized adhesive (mainly alcohol) onto the surface of the powder. Under the action of the adhesive the powder particles adhere to one another and roll, gradually growing into spherical bright beads. When the particle size of the bright beads meets the requirement, powder feeding and slurry spraying are stopped, but the bright beads continue to roll in the granulator for a certain time to be polished, improving their mechanical strength and roundness. Finally, the bright beads with qualified particle size are conveyed by a belt into the drying process.
At present, working condition identification and control in the firework bright bead forming process mainly depend on observation by field operators, who judge the bright bead state (particle diameter, growth rate, surface smoothness and the like) by experience and manually adjust the process parameters (slurry spraying amount, powder feeding speed, polishing time and the like) so that the quality of the produced bright beads meets requirements. However, this mode of manually controlling the bright bead forming process suffers from high labor intensity, strong subjectivity, large error and low efficiency; objective evaluation and a unified understanding of the working conditions are difficult to achieve, working conditions fluctuate greatly during the process, and the quality and yield of the bright beads cannot meet production requirements. Moreover, workers are in direct contact with the pyrotechnic composition during production, and improper operation can easily cause serious production safety accidents. In view of the foregoing, an automatic detection method and apparatus for firework bright bead forming are needed to solve these problems.
Disclosure of Invention
In order to achieve this purpose, the invention provides a firework forming detection device integrating 2D and 3D visual perception and a detection method thereof, which realize intelligent detection of the firework bright bead forming process, keep operators separated from the pyrotechnic composition, eliminate potential safety hazards in firework bright bead production and improve the quality of finished firework bright beads.
The technical scheme adopted by the embodiment of the invention is a firework forming detection device integrating 2D and 3D visual perception, comprising: an RGBD camera, a surface light source, a light source camera adjusting seat, a detection device bracket and an industrial control computer; the RGBD camera is fixedly mounted on the light source camera adjusting seat and positioned at an opening of the surface light source; the light source camera adjusting seat is mounted on the detection device bracket; the industrial control computer is connected to the RGBD camera and the granulator respectively by wires.
The other technical scheme adopted by the embodiment of the invention is a detection method using the above firework forming detection device integrating 2D and 3D visual perception, comprising the following steps:
Step S1, acquiring 2D and 3D images during firework bright bead forming: an RGBD camera (101) is used to collect bright bead particle images in a granulator (106) during firework bright bead forming and send them to an industrial control computer (105);
Step S2, making bright bead particle image edge contour training samples and test samples: training samples and test samples are made from the collected 2D firework bright bead particle images;
Step S3, training and testing the edge contour extraction network for bright bead particle images: a Transformer-UNet network model is trained with the prepared bright bead particle image training samples, and after training is completed the 2D bright bead particle images of the test samples are input into the network model to extract their edge contours;
Step S4, bright bead particle occlusion identification and judgment: on the basis of the extracted bright bead particle edge contours, the collected 3D image information is used to identify and judge occlusion between bright bead particles, and the interference of occluded bright bead particles with the detection result is removed;
Step S5, bright bead particle image segmentation and feature extraction: after edge contour extraction and occlusion judgment, the extracted edge contour information is used to segment the image of each individual bright bead particle, and its shape features and gray features are extracted from the 2D image;
Step S6, judging the working condition of the bright bead forming process according to statistics of the bright bead particle image feature information and adjusting parameters in time.
The beneficial effects of the invention are as follows: by combining 2D and 3D, 3D technology is introduced into firework bright bead forming, rapid and intelligent detection of the bright bead forming process is realized, the existing manual detection mode is replaced, changes in the working conditions during bright bead forming can be fed back quickly, and the quality of the finished firework bright beads is improved. The detection method and device keep operators separated from the pyrotechnic composition during the bright bead forming link of firework production, eliminating potential safety hazards in firework bright bead production and preventing major production accidents.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a detection apparatus according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a Transformer-UNet network according to an embodiment of the present invention.
Fig. 3 is a schematic height (elevation) diagram used for the occlusion judgment in an embodiment of the present invention.
Fig. 4 is a flowchart of a detection method according to an embodiment of the present invention.
Fig. 5 is a diagram of the result of identifying and judging occluded bright bead particles by fusing 2D and 3D information (contoured regions mark the occluded bright bead particles).
FIG. 6 is a graph comparing the bright bead particle size distribution detected by the method of the present invention with that obtained by manual measurement.
Fig. 7 is a graph comparing the bright bead particle size distribution detected using only a 2D image detection method with that obtained by manual measurement.
In Fig. 1: 101. RGBD camera; 102. light source camera adjusting seat; 103. surface light source; 104. detection device bracket; 105. industrial control computer; 106. granulator.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in Fig. 1, the invention provides a firework forming detection device integrating 2D and 3D visual perception, comprising: an RGBD camera 101, a surface light source 103, a light source camera adjusting seat 102, a detection device bracket 104 and an industrial control computer 105. The RGBD camera 101 acquires images of the bright bead particles in the granulator 106 and sends the acquired image information, which includes both 2D and 3D information of the bright bead particles, to the industrial control computer 105. The surface light source 103 is used to increase the gray level intensity of the collected bright bead particle image, overcome interference from external ambient light and ensure the stability of the collected image; in addition, a hole is opened in the center of the surface light source 103 and the RGBD camera 101 is mounted in this hole, so that the camera does not block the light and cast shadows in the image. The light source camera adjusting seat 102 is used to adjust the height and inclination angle of the surface light source 103 and the RGBD camera 101, so as to match firework granulators 106 of different heights and keep the shooting direction of the RGBD camera perpendicular to the horizontal plane of the granulator 106. The detection device bracket 104 mainly fixes the light source camera adjusting seat 102. The industrial control computer 105 mainly processes the collected bright bead particle image information, forms decision information according to the processing result, and sends it to the control system of the granulator 106 to adjust the working conditions of the bright bead forming process in time.
The invention also provides a detection method using the above firework forming detection device integrating 2D and 3D visual perception, comprising the following steps:
s1, acquiring 2D and 3D images in the firework bright bead forming process: collecting bright bead particle images in a granulator 106 in the process of forming the bright beads of the fireworks;
s2, manufacturing a bright bead particle image edge profile training sample and a test sample: manufacturing a training sample and a test sample by using the collected 2D firework bright bead particle images;
s3, bright bead particle image edge contour extraction network training and testing: training a Transformer-UNet network model by using the prepared bright bead particle image training samples, and after the training of the network model is completed, inputting the 2D bright bead particle images in the test samples into the network model to obtain their edge contours;
s4, bright bead particle shielding identification judgment: on the basis of extracting the edge contour of the bright bead particle image, the collected 3D image information is utilized to identify and judge the shielding existing among the bright bead particles, and the interference of the shielded bright bead particles on the detection result is removed;
s5, bright bead particle image segmentation and feature extraction: after edge contour extraction and shielding identification judgment are carried out on the bright bead particle images, the extracted edge contour information can be utilized to segment the single bright bead particle images, and shape characteristics and gray characteristic information of the single bright bead particle images are respectively extracted from the 2D images;
s6, judging the working condition of the bright bead forming process according to statistics of the bright bead particle image characteristic information, and adjusting parameters in time.
In some embodiments, in step S1, the RGBD camera 101 is used to collect the RGB image (2D) and the depth image (3D) of the bright bead particles at the same time, so as to lay a foundation for extracting the bright bead particle features in the subsequent 2D and 3D visual perception information fusion.
In some embodiments, in step S2, the training sample is a 2D bright bead particle edge contour image; during sample making, the bright bead particle edge contour pixels are marked with a gray value of 255 and all other pixels are marked with a gray value of 0.
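As a minimal sketch of how such a label image might be produced (assuming Python with OpenCV, and assuming the annotated bead edges are available as contour point lists; none of this tooling is specified by the patent):

    import numpy as np
    import cv2

    def make_edge_label(image_shape, contours):
        """Build an edge contour label image: contour pixels 255, all other pixels 0.

        image_shape: (height, width) of the collected 2D bright bead particle image.
        contours:    list of (N, 1, 2) integer arrays of (x, y) points, e.g. from
                     manual annotation of the bead edges.
        """
        label = np.zeros(image_shape[:2], dtype=np.uint8)
        # thickness=1 marks only the contour line itself with gray value 255
        cv2.drawContours(label, contours, contourIdx=-1, color=255, thickness=1)
        return label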
Further, in step S3, because the bright bead particles block and squeeze one another during bright bead forming, the edges between particles are blurred, dim and even partly missing, and traditional image processing algorithms based on edge detection or threshold segmentation have difficulty accurately extracting these weak particle edges, easily causing over-segmentation and under-segmentation. The invention therefore provides an edge contour point extraction method based on a deep learning network. The network is shown in Fig. 2 and consists mainly of two parts: the dashed box on the right is the UNet network structure, which serves as the skeleton of the whole Transformer-UNet network; the dashed box on the left is the Transformer feature extraction part, which comprises 3 stages with 6 Transformer layers in total, each stage consisting of two Transformer layers connected in sequence.
The Transformer-UNet network has the advantage that the UNet used as the skeleton adopts a fully convolutional structure in place of fully connected layers, so the network can be trained with small samples and both training and testing take little time. However, UNet cannot model long-range contextual relations between features in the image and therefore cannot extract global feature information. The invention therefore uses the strong self-attention mechanism of the Transformer to extract global features in the image: the convolutional layers of the UNet network and the Transformer layers are used as encoders to extract local feature information and global feature information of the image respectively, the extracted local and global feature information are fused (Fusion) by addition, the fused features are input to the decoder of the UNet skeleton network to recover the image feature information, and finally the output image of bright bead particle edge contour points is obtained.
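A schematic PyTorch sketch of this additive fusion of local convolutional features and global Transformer features is given below. The channel width, patch size, number of attention heads and the requirement that the input height and width be divisible by the patch size are illustrative assumptions, not the patented configuration, and the UNet decoder that follows the fusion is omitted:

    import torch
    import torch.nn as nn

    class FusionEncoder(nn.Module):
        """Additive fusion of local (convolutional) and global (Transformer) features."""

        def __init__(self, in_ch=3, dim=64, patch=16):
            super().__init__()
            self.patch = patch
            self.local = nn.Sequential(                      # UNet-style convolutional branch
                nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True))
            self.embed = nn.Conv2d(in_ch, dim, patch, stride=patch)   # patch embedding
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=6)
            self.up = nn.Upsample(scale_factor=patch, mode='bilinear', align_corners=False)

        def forward(self, x):                                # x: B x in_ch x H x W, H and W divisible by patch
            local_feat = self.local(x)                       # local features, B x dim x H x W
            tokens = self.embed(x).flatten(2).transpose(1, 2)  # B x (H/p * W/p) x dim
            global_feat = self.transformer(tokens)           # global self-attention over all patches
            h, w = x.shape[-2] // self.patch, x.shape[-1] // self.patch
            global_feat = global_feat.transpose(1, 2).reshape(x.size(0), -1, h, w)
            global_feat = self.up(global_feat)               # back to B x dim x H x W
            return local_feat + global_feat                  # fusion by element-wise addition

The fused feature map would then pass through the UNet decoder (upsampling with skip connections) and a final 1x1 convolution to give the edge contour point output image.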
Further, in step S4, on the basis of the bright bead particle edge contours extracted in step S3, the depth image (3D) is used to identify and judge particle occlusion, addressing the errors that occlusion between particles would otherwise introduce into subsequent feature extraction. Specifically, as shown in Fig. 3, a fixed number of edge points are extracted on the edge contour of the target bright bead particle S1. Taking an edge point as the tangent point, a circle Rg with a radius of 5 pixels is made along the normal vector direction pointing to the target bright bead particle S1, where R1 = S1 ∩ Rg is the intersection region of the target bright bead particle S1 and Rg. Likewise, taking the same edge point as the tangent point, a circle Rs with a radius of 5 pixels is made along the normal vector direction pointing to the adjacent bright bead particle S2, where R2 = S2 ∩ Rs is the intersection region of the adjacent bright bead particle S2 and Rs. Using the acquired depth image (3D) information, the depth information is converted into the height of the bright bead particles, and the average height values H1 and H2 of the pixels in the intersection regions R1 and R2 are calculated, where H1 is the average height of the pixels in R1 and H2 is the average height of the pixels in R2. If H1 ≥ H2, the target bright bead particle is not occluded by the adjacent particle at this edge point; if H1 ≥ H2 holds at all edge contour points of the target bright bead particle, the target bright bead particle is judged to be unoccluded. Otherwise, if H1 < H2, the target bright bead particle is occluded by the adjacent particle.
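A minimal numerical sketch of this height comparison (assuming NumPy, and assuming the contour points and their outward unit normals have already been obtained from the extracted edge contours, e.g. from the fitted circle center) might look as follows; the 5 pixel radius follows the text:

    import numpy as np

    def is_occluded(height_map, contour_pts, normals, r=5):
        """Occlusion test for one target bead following the H1/H2 comparison of step S4.

        height_map : 2D array of per-pixel heights converted from the depth image.
        contour_pts: (N, 2) array of (row, col) edge contour points of the target bead.
        normals    : (N, 2) outward unit normals at those points.
        Returns True if the neighbouring bead is higher (H1 < H2) at any contour point.
        """
        rows, cols = height_map.shape
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        disc = (yy ** 2 + xx ** 2) <= r ** 2                 # circular window of radius r

        def mean_height(centre):
            cy, cx = int(round(centre[0])), int(round(centre[1]))
            ys, xs = yy[disc] + cy, xx[disc] + cx
            ok = (ys >= 0) & (ys < rows) & (xs >= 0) & (xs < cols)
            vals = height_map[ys[ok], xs[ok]]
            return vals.mean() if vals.size else np.nan

        for p, n in zip(contour_pts, normals):
            h1 = mean_height(p - r * n)                      # disc R1 on the target side
            h2 = mean_height(p + r * n)                      # disc R2 on the neighbour side
            if h1 < h2:                                      # H1 < H2: adjacent bead sits on top
                return True
        return False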
In step S5, on the basis of the edge contour extraction and occlusion judgment of step S4, the occluded bright bead particles are removed to eliminate errors in the subsequent statistics, and the edge contour points of the unoccluded bright bead particles are extracted. Based on the prior knowledge that bright bead particles are round, a least-squares circle is first fitted to the edge contour points and its radius is calculated; then the gray level co-occurrence matrix of the connected region enclosed by the edge contour points is calculated; finally the radius distribution values (shape features) and the gray level co-occurrence matrix mean values (gray features) of all unoccluded bright bead particles are counted, completing the statistical extraction of image feature information.
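The following sketch illustrates these shape and gray feature computations (assuming Python with NumPy and scikit-image; the Kasa algebraic fit is used as one concrete form of least-squares circle fitting, and the co-occurrence distance and angle are assumed values, since the patent does not specify them):

    import numpy as np
    from skimage.feature import graycomatrix   # scikit-image >= 0.19

    def fit_circle(pts):
        """Least-squares (Kasa) circle fit. pts: (N, 2) array of (x, y) edge points.
        Returns (cx, cy, radius)."""
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
        b = x ** 2 + y ** 2
        cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
        return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

    def glcm_mean(gray_region):
        """Mean of the gray level co-occurrence matrix of one bead's connected region.
        gray_region: uint8 2D crop around a single unoccluded bead;
        distance 1 and angle 0 are assumed parameters."""
        glcm = graycomatrix(gray_region, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        return float(glcm.mean())

    def bead_statistics(beads):
        """beads: list of (edge_pts, gray_region) pairs, one per unoccluded bead.
        Returns the radius distribution and the per-bead GLCM mean values."""
        radii = np.array([fit_circle(p)[2] for p, _ in beads])
        glcm_means = np.array([glcm_mean(g) for _, g in beads])
        return radii, glcm_means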
Further, in step S6, on the basis of the characteristic information of the bright bead particle image counted in step S5, the characteristic information of the bright bead particle image is sent to a control decision system of the granulator 106, and the working condition of the bright bead forming process is determined and the parameters are adjusted in time, so that the bright bead particle meets the quality requirement.
Example 1
As shown in fig. 4, the following steps are performed:
s1, preparation work before detection comprises the following steps: 1.RGBD camera 101 parameter adjustment, physical and pixel coordinate calibration; 2. the height and angle of the area light source 103 are adjusted, so that the area light source is parallel to the surface layer of the bright bead particles, and shadows are not generated among the bright bead particles; 3. brightness of the area light source 103 is adjusted, gray level intensity of the collected bright bead particle image is improved, interference of external environment light is overcome, and stability of the collected bright bead particle image is guaranteed; 4. and acquiring a bright bead particle 2D image, manufacturing an edge contour training sample image, and training a transducer-UNet network model by using a training sample to finish the training of an edge contour extraction network.
S2, powering up and starting the detection device.
S3, the RGBD camera 101 collects 2D and 3D image information of the bright bead particles.
S4, extracting edge contour information of the 2D bright bead particle image by using the trained Transformer-UNet network.
S5, identifying occluded bright bead particles by using the collected 3D bright bead particle image information combined with the 2D edge contour point information.
S6, after removing the occluded bright bead particles, calculating the edge contour radius of each unoccluded bright bead particle by least-squares circle fitting, and calculating the gray level co-occurrence matrix of the enclosed connected region.
S7, counting the bright bead particle image information and sending it to the granulator control decision system to adjust the working condition parameters.
s8, finishing detection, and preparing to enter the next batch detection.
Fig. 5 shows the result of identifying occluded bright bead particles with the method fusing 2D and 3D image information provided by the present invention; the contoured regions are judged to contain occluded bright bead particles. Fig. 6 and Fig. 7 compare, respectively, the detection method of the present invention and a detection method using only 2D images against the manual detection method for bright bead particle size distribution. It can be seen that, after the detection method of the present invention identifies and removes the occluded bright bead particles, the obtained particle size distribution percentages and cumulative particle size distribution percentages are closer to the results of the manual detection method than the measurements obtained with the 2D-only image detection method.
The method has the characteristics of non-contact detection, high detection precision and high detection speed. It can replace the existing manual detection method, rapidly detect the firework bright bead forming process and feed back working condition changes in time, so that the granulator control system can adjust and control the bright bead forming process parameters and the quality of the finished firework bright beads is improved. It also keeps operators separated from the pyrotechnic composition during firework bright bead production, thereby eliminating potential safety hazards in the firework bright bead production process and avoiding major safety accidents.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (7)

1. The detection method of the firework forming detection device integrating 2D and 3D visual perception is characterized by comprising the following steps of:
step S1, acquiring 2D and 3D images in the firework bright bead forming process: an RGBD camera (101) is used for collecting bright bead particle images in a granulator (106) in the process of forming the bright beads of fireworks and sending the bright bead particle images to an industrial control computer (105);
s2, manufacturing a bright bead particle image edge profile training sample and a test sample: manufacturing a training sample and a test sample by using the collected 2D firework bright bead particle images;
step S3, extracting network training and testing edge contours of bright bead particle images: training a Transformer-UNet network model by using the prepared bright bead particle image training sample, and inputting a 2D bright bead particle image in the test sample into the network model to extract the edge contour of the 2D bright bead particle image after the training of the network model is completed;
s4, bright bead particle shielding identification judgment: on the basis of extracting the edge contour of the bright bead particle image, the collected 3D image information is utilized to identify and judge the shielding existing among the bright bead particles, and the interference of the shielded bright bead particles on the detection result is removed;
step S5, bright bead particle image segmentation and feature extraction: after edge contour extraction and shielding identification judgment are carried out on the bright bead particle images, the extracted edge contour information is utilized to segment the single bright bead particle images, and shape characteristics and gray characteristic information of the single bright bead particle images are respectively extracted from the 2D images;
s6, judging the working condition of the bright bead forming process according to statistics of the bright bead particle image characteristic information, and timely adjusting parameters;
the method for identifying and judging the shielding existing between the bright bead particles in the step S4 comprises the following steps: at the target bright bead particles S 1 Extracting a fixed number of edge points from the edge contour, taking the edge points as tangent points, and pointing to the target bright bead particles S along the edge direction 1 Is a circle R in the normal vector direction g Wherein R is 1 =S 1 ∩R g To target bright bead particles S 1 And R is R g Is a cross region of (2); also take the edge point as the tangential point and point to the adjacent bright bead particles S along the edge direction 2 Radius in the normal vector direction of (d) and circle R g Circle R with same radius s Wherein R is 2 =S 2 ∩R s Is adjacent bright bead particles S 2 And R is R s Is a cross region of (2); converting depth information into height information of bright bead particles by using acquired 3D image information, and respectively aiming at the crossing region R 1 And R is R 2 The average height value H of the middle pixel point is calculated 1 And H 2 If H 1 ≥H 2 Then it is indicated that the target bright bead particles are not blocked by the adjacent particles, and all the edge contour points of the target bright bead particles are searched and calculated to meet H 1 ≥H 2 Judging that the target bright bead particles are not shielded, otherwise if H 1 <H 2 It is indicated that the target bright bead particle is occluded by the adjacent particle.
2. The method of claim 1, wherein the bright bead image in step S1 comprises a 2D image and a 3D image.
3. The method for detecting the firework forming detection device by utilizing the fusion of 2D and 3D visual perception according to claim 1, wherein the training sample in the step S2 is a 2D bright bead particle edge contour image, the bright bead particle edge contour pixel points are marked with gray values 255 in the manufacturing process, and the rest pixel points are marked with gray values 0.
4. The method for detecting the firework forming detection device by using the 2D and 3D visual perception according to claim 1, wherein the Transformer-UNet network model in the step S3 comprises a Transformer network and a UNet network structure, and UNet is the skeleton structure of the whole Transformer-UNet network; the Transformer network has 3 layers, each layer being formed by sequentially connecting two Transformer layers; the UNet network adopts a fully convolutional network to replace the fully connected layer.
5. The method for detecting a firework molding detection device by using 2D and 3D visual perception according to claim 1 or 4, wherein the method for extracting the edge profile in step S3 is as follows: the convolutional layer of the UNet network is used as an encoder to extract the local feature information of the image, the Transformer layer is used as an encoder to extract the global feature information of the image, the extracted local feature information and global feature information are fused by addition, the fused image features are input into the decoder layer of the UNet skeleton network to recover the image feature information, and finally the output image of the bright bead particle edge contour points is obtained.
6. The method for detecting the firework molding detection device by using the 2D and 3D visual perception fusion method according to claim 1, wherein the method for extracting the shape feature and gray feature information in the step S5 is as follows: removing the blocked bright bead particles, extracting edge contour points of the non-blocked bright bead particles, firstly carrying out least square circle fitting on the edge contour points based on priori knowledge that the bright bead particles are in a circular shape, calculating the radius of the edge contour points, then calculating gray level co-occurrence matrixes of connected areas surrounded by the edge contour points, and finally counting radius distribution values and gray level co-occurrence matrix average values of all the non-blocked bright bead particles to finish the statistical extraction of image characteristic information.
7. The method for detecting the firework forming detection device by utilizing the 2D and 3D visual perception according to claim 1, wherein the step S6 is specifically: on the basis of the image characteristic information of the bright bead particles counted in the step S5, the image characteristic information of the bright bead particles is sent to a control decision system of a granulator (106) through an industrial control computer (105), and the working conditions of the bright bead forming process are judged and parameters are timely adjusted, so that the bright bead particles meet the quality requirements.
CN202210781973.8A 2022-07-05 2022-07-05 Firework forming detection device integrating 2D and 3D visual perception and detection method thereof Active CN115111970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210781973.8A CN115111970B (en) 2022-07-05 2022-07-05 Firework forming detection device integrating 2D and 3D visual perception and detection method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210781973.8A CN115111970B (en) 2022-07-05 2022-07-05 Firework forming detection device integrating 2D and 3D visual perception and detection method thereof

Publications (2)

Publication Number Publication Date
CN115111970A CN115111970A (en) 2022-09-27
CN115111970B (en) 2023-11-10

Family

ID=83330590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210781973.8A Active CN115111970B (en) 2022-07-05 2022-07-05 Firework forming detection device integrating 2D and 3D visual perception and detection method thereof

Country Status (1)

Country Link
CN (1) CN115111970B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058218A (en) * 2023-07-13 2023-11-14 湖南工商大学 Image-depth-based online measurement method for filling rate of disc-type pelletizing granule powder

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2225017A1 (en) * 1997-01-09 1998-07-09 The Boeing Company Method and apparatus for rapidly rendering computer generated images of complex structures
KR100974413B1 (en) * 2009-03-18 2010-08-05 부산대학교 산학협력단 A method for estimation 3d bounding solid based on monocular vision
EP2680228A1 (en) * 2012-06-25 2014-01-01 Softkinetic Software Improvements in or relating to three dimensional close interactions.
CN108920643A (en) * 2018-06-26 2018-11-30 大连理工大学 Weight the fine granularity image retrieval algorithm of multiple features fusion
CN109934297A (en) * 2019-03-19 2019-06-25 广东省农业科学院农业生物基因研究中心 A kind of rice species test method based on deep learning convolutional neural networks
JP2019174931A (en) * 2018-03-27 2019-10-10 日本製鉄株式会社 Contour extraction device and contour extraction method
CN210163356U (en) * 2017-09-29 2020-03-20 浏阳市鸿安机械制造有限公司 Detection control device for firework granulating and medicine wrapping machine
RU2726160C1 (en) * 2019-04-29 2020-07-09 Самсунг Электроникс Ко., Лтд. Repeated synthesis of image using direct deformation of image, pass discriminator and coordinate-based remodelling
WO2021198343A1 (en) * 2020-04-03 2021-10-07 Kverneland Group Nieuw-Vennep B.V. Method and measurement system for determining characteristics of particles of a bulk material

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2967774C (en) * 2014-11-12 2023-03-28 Covar Applied Technologies, Inc. System and method for measuring characteristics of cuttings and fluid front location during drilling operations with computer vision

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2225017A1 (en) * 1997-01-09 1998-07-09 The Boeing Company Method and apparatus for rapidly rendering computer generated images of complex structures
KR100974413B1 (en) * 2009-03-18 2010-08-05 부산대학교 산학협력단 A method for estimation 3d bounding solid based on monocular vision
EP2680228A1 (en) * 2012-06-25 2014-01-01 Softkinetic Software Improvements in or relating to three dimensional close interactions.
CN210163356U (en) * 2017-09-29 2020-03-20 浏阳市鸿安机械制造有限公司 Detection control device for firework granulating and medicine wrapping machine
JP2019174931A (en) * 2018-03-27 2019-10-10 日本製鉄株式会社 Contour extraction device and contour extraction method
CN108920643A (en) * 2018-06-26 2018-11-30 大连理工大学 Weight the fine granularity image retrieval algorithm of multiple features fusion
CN109934297A (en) * 2019-03-19 2019-06-25 广东省农业科学院农业生物基因研究中心 A kind of rice species test method based on deep learning convolutional neural networks
RU2726160C1 (en) * 2019-04-29 2020-07-09 Самсунг Электроникс Ко., Лтд. Repeated synthesis of image using direct deformation of image, pass discriminator and coordinate-based remodelling
WO2021198343A1 (en) * 2020-04-03 2021-10-07 Kverneland Group Nieuw-Vennep B.V. Method and measurement system for determining characteristics of particles of a bulk material

Also Published As

Publication number Publication date
CN115111970A (en) 2022-09-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant