CN110602355A - Image acquisition method

Info

Publication number
CN110602355A
Authority
CN
China
Prior art keywords
camera
image
target object
glass
image acquisition
Prior art date
Legal status
Pending
Application number
CN201810966719.9A
Other languages
Chinese (zh)
Inventor
张磊
王玉国
王天雄
童磊
孙叠
Current Assignee
Shanghai Neighborhood Information Technology Co Ltd
Original Assignee
Shanghai Neighborhood Information Technology Co Ltd
Application filed by Shanghai Neighborhood Information Technology Co Ltd
Publication of CN110602355A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 21/89 Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
    • G01N 21/8901 Optical details; Scanning details
    • G01N 21/8903 Optical details; Scanning details using a multiple detector array
    • G01N 2021/8887 Scan or image signal processing based on image processing techniques

Landscapes

  • Biochemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Textile Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an image acquisition method, which comprises the following steps: driving the camera and/or the target object so that they move relative to each other, allowing each part of the target object to be photographed to appear in the camera's clear imaging plane at different times; capturing images of the target object; and processing the images captured by the camera to identify the portion where the camera's clear imaging plane coincides with the surface of the target object. The invention has the advantage of providing an image acquisition system and method that can obtain a sharp image of the surface of a target object without first adjusting the spatial position of the object to be measured or of the camera.

Description

Image acquisition method
Technical Field
The invention relates to an image acquisition system and method.
Background
When an image of a target object is captured by an image capturing device such as a camera or a video camera, it is often necessary to adjust parameters such as the focal length and aperture, or relative quantities such as the direction and distance between the device and the target object, according to the optical imaging relationship. Such adjustment steps make the task of acquiring images of the target object unusually cumbersome.
Although existing cameras have auto-zoom functions, the depth-of-field range of the camera itself is limited, so that, particularly in macro photography, a camera cannot acquire a complete, sharp image of an object surface of a certain length through adjustment operations such as auto-zoom alone.
Disclosure of Invention
An image acquisition method comprising the steps of: driving the camera and/or the target object so that they move relative to each other, allowing each part of the target object to be photographed to appear in the camera's clear imaging plane at different times; capturing images of the target object; and processing the images captured by the camera to identify the portion where the camera's clear imaging plane coincides with the surface of the target object.
Further, the camera and/or the object are driven to rotate around a rotation axis, and the rotation axis is parallel to a clear imaging plane of the camera.
Further, the camera and/or the target are driven to move along a straight line.
Further, processing the image captured by the camera includes: calculating the peak of the gradient values of an image parameter over the pixels/pixel groups in the image.
Further, processing the image captured by the camera includes: denoising the image.
Further, the image acquisition method further comprises: acquiring position information of the contour of a target object; acquiring position information of a clear imaging surface of a camera; and calculating the coincidence position of the surface of the target object and the clear imaging surface of the camera according to the position information of the contour of the target object and the position information of the clear imaging surface of the camera.
Further, the position information includes coordinate information.
Further, the image acquisition method further comprises: emitting laser, wherein the region formed by the laser comprises a clear imaging surface of a camera; and acquiring the position information of the laser spot in the image, and matching the position information of the laser spot with the image.
Further, acquiring the laser spot position information in the image includes: filtering the image by adopting a channel corresponding to the laser color in the multi-color camera to obtain a filtered image; and acquiring the position information of the laser spots in the filtered image.
Further, the image acquisition method further comprises: collecting a plurality of images of the target object; and fusing the portions of the plurality of images where the camera's clear imaging plane coincides with the surface of the target object.
The invention has the advantage of providing an image acquisition system and method that can obtain a sharp image of the surface of a target object without first adjusting the spatial position of the object to be measured or of the camera.
Drawings
FIGS. 1A-1C are schematic diagrams of various image acquisition systems of the present invention;
FIG. 2 is a schematic diagram of the operation of a scanning camera;
FIG. 3A is a system block diagram of an industrial detection system;
FIG. 3B is a schematic diagram of an industrial detection system;
FIG. 4 is a schematic view of an encoder installation;
FIG. 5A is a perspective view of a camera with an offset configuration;
FIG. 5B is an internal schematic view of a camera with an offset configuration;
FIG. 6A is a schematic diagram of an image acquisition subsystem comprising 3 cameras;
FIGS. 6B-6D are schematic diagrams of the positional relationship of the clear imaging planes P of the respective cameras of the image acquisition subsystem comprising 3 cameras;
FIG. 7A is a schematic illustration of a flaw portion image of edge-frosted glass;
FIG. 7B is a schematic view of an alternative flaw portion image of edge-frosted glass;
FIG. 8A is a schematic view of a strip light source of an industrial inspection system;
FIGS. 8B-8D are schematic diagrams of an annular light source of an industrial inspection system;
FIG. 9A is a flow chart of an industrial inspection system including a position learning subsystem, using pre-fusion images to determine edge flaws in edge-frosted glass;
FIG. 9B is a flow chart of an industrial inspection system including a position learning subsystem, using fused images to determine edge flaws in edge-frosted glass.
Detailed Description
Clear image plane P
For an image capturing device such as a camera, video camera, or scanner, it follows from optical principles that the distance and range over which the device can image clearly are fixed, provided the physical conditions of the device (the optics, and the model and position of its electronic components) remain unchanged.
As shown in fig. 1A to 1C, an area that can be clearly photographed by the camera 101 is defined as a clear imaging plane P. In general, the shape of the clear imaging plane P depends on the hardware of the camera 101, particularly the shape of the image sensor, which is illustrated as a rectangle in fig. 1A-1C for illustrative purposes only, and in practice, the clear imaging plane P may be any shape. The size of the clear image plane P and the distance and angle between the clear image plane P and the camera 101 are determined by the relative relationship between the lens and the image sensor.
Image acquisition system
The image acquisition system comprises at least an image acquisition device and a processor; in the following, a camera is taken as the image acquisition device.
The clear image plane P of the camera is parallel to the surface of the target object
In the case of macro imaging, the subject 104 often needs to be imaged by the camera at a short distance.
As shown in fig. 1A, 102 is a carrier device for placing a target object 104. 103 is a positioning block for fixing the position of the target object 104. If the part of the object 104 to be shot is exactly planar and exactly in the sharp imaging plane P, a complete and sharp image of the shot part can be obtained.
However, if the position of the object 104 changes as shown in fig. 1B (or the object has no planar structure at all), then when the object 104 is captured by the camera 101, the small depth of field in close-range imaging means that no clear image can be obtained of the parts of the object 104 lying outside the clear imaging plane P.
In this case, a complete and clear image of the object 104 cannot be obtained even by adjusting the focal length of the camera 101 or the relative distance between the camera 101 and the object 104.
The clear image plane P of the camera is not parallel to the surface of the target object
The invention provides an image acquisition system comprising: a camera 101, a driving device 107, and a processor (not shown in fig. 1A and 1B).
The camera 101 is used to capture an image of a subject. The camera 101 may be a zoom camera or a fixed focus camera.
The driving device 107 can drive the camera 101 or the subject 104, or drive both at the same time. In this embodiment, the driving device 107 is a motor. In a specific embodiment, the driving device includes a carrying device 102 for contacting, holding, or transporting the object 104; the driving device moves the object 104 by driving the carrying device 102. This has the advantage that the camera 101 does not have to be adjusted frequently.
As shown in figs. 1B and 1C, when the camera's clear imaging plane P is not parallel to the surface of the object, the clear imaging plane P cannot completely coincide with the object surface no matter how the focal length or object distance is adjusted. Geometrically, the clear imaging plane P and the object surface then share only a single intersection line, so only one part of the picture taken by the camera is sharp: the part at the intersection of the clear imaging plane P and the object surface.
Consider, as an example, acquiring an image of the surface of a cube whose surface is not parallel to the camera's clear imaging plane P; the clear imaging plane P is then taken to intersect the surface of the target object.
Identifying sharp portions in an image by peaks of image parameter gradients
As an alternative embodiment, the step of collecting and judging the clear part in the cube image is as follows:
and acquiring an original image of the surface of the cube. In the image, the image of the intersection of the sharp imaging plane P of the camera and the surface of the object is sharp, and the image of the other part is blurred. The image acquired in this step may be a grayscale or color image.
And denoising the original image of the cubic surface to obtain a denoised image. The denoising process can reduce the influence of noise generated by hardware or environment on the subsequent image processing work.
The absolute value of the parameter value gradient between adjacent pixels/pixel groups in the denoised image is calculated. The parameter may be a gray scale value, a luminance value, or other parameters such as contrast and saturation.
Judging the peak point x of the gradient by adopting a Gaussian function according to the absolute value of the gradient value calculated in the previous stepmax. In the same picture, a "blurred" pixel can be seen as taking the average of the pixels around the blurred pixel, reducing the contrast of the pixel with the surrounding pixels, thereby creating a blurred visual effect. The contrast between the 'clear' pixel and the surrounding pixels is more vivid, so that the details of the pixel can be reflected more, and the visual effect is clearer. Excluding the influence of noise, the larger the absolute value of the gradient in the same picture is, the sharper the image is. Alternatively, the manner of determining the gradient value peak point may be other calculation methods besides the gaussian function. Peak point XmaxThe place is the clear part in the picture.
In the process of calculating the gradient values, the gradient values may be arranged in order according to the positions of the pixels/pixel groups in the picture, that is, the calculation result of each gradient value corresponds to a specific position in the picture. At this time, the peak point X is obtainedmaxMeanwhile, the position of the peak point in the picture, that is, the specific position of the clear part in the image, can also be obtained.
This peak determination may be implemented using a gradient processing unit in the processor.
This method of identifying the sharp part through the peak of an image parameter gives good results when the photographed object has rich surface texture.
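For illustration, the following is a minimal sketch of this procedure in Python with OpenCV, NumPy, and SciPy (none of which the patent specifies). Grouping gradients by pixel column and using one-dimensional Gaussian smoothing in place of the Gaussian-function fit are assumptions made here for simplicity.

    import cv2
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def find_sharp_column(image_bgr, blur_ksize=5, smooth_sigma=7.0):
        # Denoise first so hardware/environment noise does not dominate the gradients.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        denoised = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        # Absolute gradient between adjacent pixels (Sobel approximation).
        gx = cv2.Sobel(denoised, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(denoised, cv2.CV_64F, 0, 1, ksize=3)
        magnitude = np.abs(gx) + np.abs(gy)
        # One score per pixel column, ordered by column position in the picture,
        # so the peak index is also the peak's position in the image.
        profile = magnitude.sum(axis=0)
        smoothed = gaussian_filter1d(profile, sigma=smooth_sigma)
        return int(np.argmax(smoothed))  # x_max: the sharp part of the picture

A call such as find_sharp_column(cv2.imread("frame_0001.png")) would return the column index of the sharpest region, together with which the position of the sharp part in the picture is known.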
Identifying sharp portions in an image by calculating intersections
Establishing a coordinate system
As an alternative embodiment, a line segment representing the clear imaging plane P of the camera and the surface contour of the target object are represented in the same coordinate system. The coordinate system may be a two-dimensional coordinate system, the plane of the coordinate system being perpendicular to the clear imaging plane P of the camera.
The position of the camera is fixed, and the parameters of its lens and image sensor are unchanged. The position in the coordinate system of the line segment p corresponding to the camera's clear imaging plane P is then also fixed. Once the x axis, the y axis, and the coordinate origin are selected, each point on the line segment p has a fixed coordinate value, and p can be represented by a linear function over a certain interval. The intersection of the line segment p with the surface contour of the target object corresponds to the position of the sharp portion in the image.
Obtaining coordinate information of surface contour of target object
As shown in fig. 2, coordinate information of the surface profile of the object is acquired using a scanning camera 201. The scanning camera is positioned above, typically directly above, the target object. The motion mechanism causes relative motion between the scanning camera and the target object to enable the scanning camera to traverse the object surface. In this schematic view, the scanning camera 201 is mounted on a rail 202 by a motion mechanism. The shooting direction of the scanning camera is perpendicular to the plane of the coordinate system so as to obtain the projection profile of the target object in the coordinate system. During the relative movement of the two, the distance between the scanning camera 201 and the plane of the coordinate system is kept constant. The scanning camera 201 may employ a line camera or an area camera. The direction of relative motion between the camera and the target object is defined as the x-direction, perpendicular to which is the y-direction.
Linear array camera for acquiring object surface contour coordinate information
A line-scan camera is a camera that uses a linear-array image sensor. Compared with an area-array camera, it offers higher image resolution, but each exposure captures only a single line, so subsequent processing is needed to obtain a complete image of the object.
The method comprises the following steps of acquiring coordinate information of the surface profile of an object by adopting a linear array camera:
scanning by a linear array camera: the linear array camera has a fixed motion range, continuously scans at fixed interval distances, and outputs images, wherein each image is a line.
Threshold processing: subtract the unoccluded background image from the image obtained by the line-scan camera, and compare the difference (or its absolute value) with a preset threshold. When the difference lies within a certain range, the line image is judged not to contain the target object, and its gray value is set to a fixed value. When the difference exceeds that range, the line is regarded as part of the target object image; its gray value is set to another fixed value, and the two end points of that line image of the target object are recorded. This operation yields a binary image of the area covered by the object, which is more convenient to process. Since only the two end points of each line image are recorded for forming the object contour, other useless information is discarded and the system's storage requirements are reduced.
Obtaining the x coordinate: the motion mechanism produces relative motion between the line-scan camera and the target object, and a displacement acquisition device such as an encoder records the displacement of the motion mechanism, and hence of the target object, at every moment. The encoder outputs the displacement corresponding to each line image. Since the shooting direction of the line-scan camera is perpendicular to the plane of the coordinate system, the encoder position corresponding to a line image is that line image's x coordinate.
Obtaining the y coordinate: the scan line of the line-scan camera has a fixed length. Setting the y coordinate of one end of the scan line to 0, the y coordinates of the two end points in a line image are obtained by calculating their distances from that end of the scan line.
Smoothing: stitch the successive frames obtained by the line-scan camera and apply smoothing filtering to the stitched contour information to obtain the edge curve of the whole object. Smoothing eliminates errors introduced by single-frame processing, so that the contour of the target object reflects the real information more accurately, which facilitates subsequent processing.
Area-array camera for obtaining object surface contour coordinate information
The area-array camera adopts an area-array image sensor. The imaging area is a plane, and a complete image of an object to be shot can be obtained through one-time shooting.
The method comprises the following steps of acquiring coordinate information of the surface contour of an object by using an area-array camera, wherein the specific working steps are as follows:
shooting by an area-array camera: the target object is placed in the shooting range of the area array camera to ensure that the area array camera can obtain a complete image of the target object by shooting at one time. If the image collected by the area-array camera is not a complete image, the complete image of the target object can be collected by referring to the working steps of the linear array camera.
Threshold processing: the steps of thresholding are similar to line cameras.
Obtaining coordinates in the x direction and the y direction: the position of the area-array camera is fixed, and the shooting range is also fixed. When the coordinates of a certain boundary point defining the imaging range of the area-array camera are (0,0), the x and y coordinate information of the contour point represented by the binary image can be obtained by calculating the distance between the contour point and the boundary point.
Smoothing treatment: the smoothing process is similar to the line camera described above.
Calculating intersections to obtain sharp locations in the image
As an alternative embodiment, the step of collecting and judging the clear part in the cube image is as follows:
an original image of the cube surface is acquired. The acquired image can be a gray scale image or a color image.
The scanning camera scans the image of the target object to acquire the coordinate information of the edge profile of the target object. The coordinate information of a line segment P projected by a clear imaging plane P of the camera in a coordinate system is calibrated in advance.
The intersection point between the line segment p and the edge contour of the target object is calculated. The image at the intersection is the sharp part of the original image. At this time, the processor knows the relative position information of the clearly imaged portion in the target object.
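A minimal sketch of the intersection calculation, assuming the contour is given as an ordered list of (x, y) points in the same coordinate system as the pre-calibrated segment p:

    import numpy as np

    def segment_intersections(p0, p1, contour):
        # Intersections of segment p (p0 -> p1) with a polyline contour.
        hits = []
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        for a, b in zip(contour[:-1], contour[1:]):
            a, b = np.asarray(a, float), np.asarray(b, float)
            e = b - a
            denom = d[0] * e[1] - d[1] * e[0]
            if abs(denom) < 1e-12:
                continue  # parallel segments: no unique intersection
            # Solve p0 + t*d = a + s*e for parameters t, s in [0, 1].
            w = a - p0
            t = (w[0] * e[1] - w[1] * e[0]) / denom
            s = (w[0] * d[1] - w[1] * d[0]) / denom
            if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
                hits.append(tuple(p0 + t * d))
        return hits

Each returned point corresponds to a position at which the original image is sharp.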
Laser marking of sharp imaging surfaces to identify sharp portions in images
When the camera's clear imaging plane is not parallel to the target object surface, if the image acquired by the camera still contains a sharp part, the clear imaging plane P and the object surface can be regarded, to a good approximation, as meeting along a geometric intersection line.
Mark the position of the clear imaging plane with a laser: at any time, the position on the target object surface illuminated by the laser is the intersection line of the camera's clear imaging plane P with the object surface. In the image shot by the camera, the laser-lit position is therefore the position of the sharp part of the image.
As shown in fig. 1B, in one embodiment the laser emitting device 108 is disposed to the side of the camera's clear imaging plane P and includes a plurality of laser emitters; the beams they emit are parallel to each other and lie in the same plane, forming a laser region that covers the camera's clear imaging plane P. In another embodiment, shown in fig. 1C, the laser emitting device 108 is disposed above the clear imaging plane P and emits planar laser light downward; because the laser has a certain width, the part of the target object surface coinciding with the laser region still shows a laser spot, owing to diffuse reflection from the surface itself, and is not completely blocked by the object. The planar laser region covers the camera's clear imaging plane P. In this embodiment the laser is red with a wavelength of 650 nm; in other embodiments the laser color may be green or another color.
As one way of judging the sharp portion of an image from the laser spots, the camera is a multi-color camera comprising a first channel and a second channel. In this embodiment the camera is a color camera with three RGB channels; the laser is a red laser with a wavelength of 650 nm, the first channel is the R channel, and the second channel is the G or B channel. Note that the first and second channels as defined here may each comprise several channels; for example, the second channel may be the GB channels. The processor comprises a channel selection unit for selecting a channel of the multi-color camera. The processor extracts the first-channel image acquired by the multi-color camera and determines the laser spot area in it. It then extracts the second-channel image and matches the laser spot area of the first-channel image against it: the laser spot area in the first-channel image is the sharp image area in the second-channel image. As an alternative embodiment, when the laser color is green, the first channel is the G channel. Using a multi-color camera to judge the laser spot area and the sharp image position separately prevents the laser illumination from interfering with the image information, which facilitates subsequent operations on the image such as flaw detection.
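A sketch of this channel-based matching for the red-laser case, assuming an 8-bit BGR camera image and an illustrative intensity threshold for the spots (the patent does not specify the thresholding or cleanup steps):

    import cv2
    import numpy as np

    def laser_sharp_mask(image_bgr, spot_threshold=200):
        # Split the color image; the R channel is the first channel
        # for a 650 nm red laser.
        b, g, r = cv2.split(image_bgr)
        _, spot_mask = cv2.threshold(r, spot_threshold, 255, cv2.THRESH_BINARY)
        # Remove isolated bright pixels so only coherent laser spots remain.
        kernel = np.ones((3, 3), np.uint8)
        spot_mask = cv2.morphologyEx(spot_mask, cv2.MORPH_OPEN, kernel)
        # The spot area marks the sharp region of the second-channel image
        # (here the G channel; the B channel or both would serve equally).
        return g, spot_mask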
As an alternative embodiment, the light intensity distribution of the laser region is made uniform by using an optical element such as a grating. The uniform laser light intensity distribution does not bring excessive interference to the surface information of the target object. In this case, there is no need to filter the laser light with a multi-color camera.
Marking the clear imaging plane with a laser adapts to different shooting scenes and dispenses with equipment such as a scanning camera.
Image fusion
In order to obtain a complete image of the surface of the object, clear images corresponding to different positions of the surface of the object are fused by a processor to form a complete clear image of the surface of the object. Before image fusion, the processor has acquired the relative positional relationship of each sharply imaged portion with respect to other sharply imaged portions in the same image, in accordance with the above-described systems and methods.
In one embodiment, the camera is mounted on the driving device, so its shooting position moves with the driving device. In one embodiment, the driving device comprises a mounting seat, a stepping motor, a transmission mechanism, wheels, and a guide rail. The camera is fixedly connected to the mounting seat. The stepping motor may be arranged inside or outside the mounting seat. One end of the transmission mechanism is connected to the wheels and the other end to the stepping motor; the stepping motor drives the transmission mechanism, which drives the wheels to rotate. The wheels move along the guide rail, which determines their trajectory. During one shooting pass, the driving device moves in one direction along the guide-rail track, and the stepping motor advances the camera by the same distance each time. The camera photographs the object surface at the successive positions along the driving device's path. The processor receives the pictures, obtains the sharp part of each by judging the gradient peak or calculating the intersection, and stitches the sharp parts together in order. (D in the drawing indicates the direction of relative movement between the object and the camera.)
As an alternative embodiment, the motor of the drive device may be a servo motor.
As another embodiment, the driving device may drive the target object or the camera to rotate around a rotation axis, which is parallel to the clear imaging plane P of the camera. In this way, images of different parts of the surface of the target object can also be acquired.
As another alternative, the camera is fixed in position, and the driving device drives the object to be photographed to move. The camera takes multiple pictures at different positions of the target object. The processor receives the pictures, obtains clear parts in the pictures by judging gradient peak values or calculating intersection points, and splices the clear parts in the pictures in sequence. In this case, the driving device may be an industrial line, a roller, a transfer cart, or the like.
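The capture sequence described above can be summarized in a hedged sketch; camera.capture() and stage.move_fixed_step() are placeholders for whatever camera and stepper-motor interfaces the system actually exposes, and find_sharp_column is the gradient-peak routine sketched earlier:

    def acquire_sharp_strips(camera, stage, n_positions, halfwidth):
        strips = []
        for _ in range(n_positions):
            frame = camera.capture()          # placeholder camera interface
            x_max = find_sharp_column(frame)  # gradient-peak sharp position
            strips.append(frame[:, x_max - halfwidth : x_max + halfwidth])
            stage.move_fixed_step()           # placeholder stepper-motor interface
        return strips                         # sharp strips in shooting order

The returned strips, kept in shooting order, are what the processor stitches into a complete sharp image.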
In summary, the driving device drives the camera and/or the target object so that each portion of the target object to be photographed appears in the camera's clear imaging plane P at different times. As an alternative embodiment, the image capturing system may further include a positioning device in data connection with the driving device and the processor, so that the processor can match the sharp portion of the image to the corresponding portion of the target object.
As an alternative embodiment, the camera may be connected to the processor. The camera transmits the captured picture to a processor, which processes the image to obtain information about the surface of the object contained in the image. The information of the object surface comprises flaw information of the object surface, characters of the object surface, mark information, color, texture, pattern and the like of the object surface. Wherein the defect information at least includes: information of whether a flaw exists in the image; location information of defects in the image.
Judging the sharp part of the image by calculating the position of the intersection between the clear imaging plane and the object surface adapts to a wide range of shooting scenes and is not affected by illumination conditions or object surface textures.
Multiple cameras shoot together
The image acquisition system may include several cameras to improve shooting efficiency. For example, when the object is rotated by the rotary driving device 107, a single camera usually requires the driving device to rotate 360° to acquire a complete image of the object surface. With two oppositely arranged cameras, the target object placed between them and the clear imaging planes of the two cameras perpendicular to the same plane, the driving device only needs to rotate 180°, halving the time. With three cameras whose center lines are at 120° to one another, the planes of their clear imaging planes intersecting obliquely in pairs, the driving device only needs to rotate 120° to complete the shooting.
On-line detection system
Industrial inspection tends to demand high detection standards. If image-based inspection is to replace manual visual inspection, the camera must shoot the object to be inspected at close range so as to acquire finer image information of its surface.
As shown in fig. 3A and 3B, an industrial inspection system based on image acquisition includes an industrial production line 30, a position learning subsystem 31, an image acquisition subsystem 32, and a processor 33.
In the following, an on-line detection system for glass edge is taken as an example, and an on-line detection system including the image acquisition system of the present invention is specifically described, in this embodiment, the glass 39 to be detected is sheet glass. As an alternative embodiment, the detection system can also detect surface flaws of other objects besides glass, such as surface flaws of wood, steel, stone, and the like.
The position learning subsystem 31 is used for acquiring coordinate information of the edge of the glass to be detected, and the position learning subsystem can be omitted when clear parts in images are identified by calculating gradient peak values of image parameters or marking clear imaging surfaces by using laser. The image acquisition subsystem 32 includes one or more cameras for acquiring images of the glass edges. The processor 33 is at least in data connection with the position-learning subsystem 31 and the image-capturing subsystem 32, and is used for performing various image and data processing operations.
The industrial inspection system obtains a clear image of the surface of the object to be inspected by machine, saving labor and improving detection precision. With this system, the object to be inspected can be placed at any position and in any orientation on the industrial production line.
Industrial production line
The industrial line 30 is used for placing the photographed glass. The industrial production line can convey an object to be detected along a conveying straight line at least in a conveying direction on a conveying surface. In this embodiment, the industrial production line moves the glass placed on the industrial production line.
The industrial production line comprises a conveying unit used for conveying a product to be detected.
As an optional implementation mode, the industrial production line comprises a roller frame and a plurality of conveying rollers, the conveying rollers are rotatably connected with the roller frame, and the axes of the conveying rollers are located on the same plane.
As other alternative embodiments, the industrial production line may also include a conveyor such as a conveyor belt or a transfer car.
The surface of the industrial production line can be made of anti-skid rubber or fitted with suction cups to increase the friction between the line and the conveyed objects, such as the glass, thereby improving the line's conveying efficiency.
Position learning subsystem 31
The position learning subsystem is used for acquiring coordinate information of a product to be detected, and can be omitted when clear parts in the image are identified by calculating gradient peak values of image parameters or marking clear imaging surfaces by using laser.
And establishing a coordinate system by taking the conveying surface of the industrial production line as a plane where the coordinate system is located. The coordinate system may be a two-dimensional coordinate system. The contour of an object placed on the industrial production line is projected into a closed figure in the coordinate system. The conveying direction of the industrial production line is taken as the x direction, and the direction perpendicular to the conveying direction is taken as the y direction. The position learning subsystem needs to know at least the relative position between the product to be detected and the camera.
Front camera
As shown in fig. 3B, the front camera 312 is disposed above the industrial production line 30, typically directly above it. As an alternative embodiment, the front camera 312 may be positioned according to actual needs; in any arrangement, its imaging range must cover the appropriate area of the industrial production line. The front camera 312 is spaced a distance from the industrial production line 30; its lens faces the line, and its optical axis is perpendicular to the plane containing the axes of the conveying rollers.
The front camera may be a line scan camera (line camera), or an area camera. The scanning camera is used for acquiring coordinate information of an object to be detected.
As an alternative embodiment, the front camera 312 is a line scan camera.
Acquisition of industrial production line position information (glass contour x coordinate value)
The glass to be measured is placed on the industrial production line and moves along with the industrial production line. When the position information of the industrial production line is obtained at any time, the x-coordinate value of the glass to be measured at that time can be obtained on the coordinate plane.
In one embodiment, the images acquired by the front camera are examined, and the x coordinate of the first-captured portion of a given glass sheet to be measured is set to 0.
As an alternative embodiment, a position detection device is arranged along the y direction of the industrial production line. In one implementation, the position detection device is a photoelectric gate comprising a signal emitting device on one side of the line and a signal receiving device opposite it; the line connecting the two is perpendicular to the transport straight line of the industrial production line. The x coordinate of any point on that connecting line is set to 0. When nothing blocks the gate, its receiving device always receives the light signal emitted by its light emitting device; when something blocks it, the receiving device cannot receive the signal.
The photoelectric gate is connected with the processor, and the distance between the photoelectric gate and the front camera is kept constant. In this case, x-coordinate information of the scanning area of the front camera can also be obtained.
Encoder acquisition coordinates
As a displacement detecting device, an encoder includes an encoder disk and an encoder reading unit for converting an angular displacement or a linear displacement into an electric signal.
As shown in fig. 3B and fig. 4, the industrial inspection system based on image acquisition includes an encoder 311, which may be a rotary encoder, including an absolute value encoder and an incremental encoder.
The encoder disk can be connected to a motor through a coupling, or connected to the transmission that directly moves the industrial production line. When the encoder is connected to the motor, it directly records the motor's rotation, and the corresponding motion coordinate of the production line is obtained through the reduction-ratio conversion.
The encoder is connected to the processor and transmits its information to it. Zeroed at the first image acquired by the scanning camera, or at the triggering of the photoelectric gate, the production-line motion coordinate recorded by the encoder at any time gives the x coordinate of the front point of the glass.
Obtaining the position information of the industrial production line is the first, and a key, step of the whole inspection process; an error here would make the outputs of the subsequent steps erroneous, so the accuracy of the information acquired by the encoder must be ensured.
As an alternative to the photogate, the processor calculates the real-time speed of the industrial production line from the data transmitted by the encoder. When the real-time speed drops suddenly, the processor judges that a glass sheet has just been placed on the line. The motion coordinate at the moment of the sudden drop is marked 0, and the x coordinate of the glass front point is then 0.
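As a sketch of this photogate alternative, assuming encoder counts sampled at a fixed period dt, a displacement-per-count scale s_c, and an illustrative drop ratio for detecting the sudden slowdown:

    def detect_glass_and_zero(counts, dt, s_c, drop_ratio=0.5):
        # Real-time speed from successive encoder counts.
        speeds = [(c1 - c0) * s_c / dt for c0, c1 in zip(counts, counts[1:])]
        zero_idx = None
        for i in range(1, len(speeds)):
            if speeds[i] < drop_ratio * speeds[i - 1]:  # sudden drop: glass placed
                zero_idx = i
                break
        if zero_idx is None:
            return None
        origin = counts[zero_idx]
        # x coordinate of the glass front point at each later sample.
        return [(c - origin) * s_c for c in counts[zero_idx:]]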
As an alternative embodiment, the glass edge detection system includes a plurality of encoders mounted at different locations or on different components of the industrial production line. For example, with two encoders, the main encoder 311a and the sub-encoder 311b are connected to a motor and to the production-line transmission mechanism, respectively. If the information obtained by the sub-encoder 311b is consistent with that from the main encoder 311a, or the error is within a certain range, the encoder information is considered accurate. The main/sub encoder pair allows encoder faults to be found in time and improves detection precision.
Coordinate acquisition using brushless motor parameters
A brushless motor is controlled by electronic commutation commands, and its speed can be controlled by pulse-width-modulation signals. Because of this characteristic, the brushless motor controller can obtain the motor's rotation angle by counting the control signals, and from it the motion information of the industrial production line.
When the photoelectric sensor outputs the zero coordinate, the glass enters the detection area. The brushless motor controller records the pulse width and number of the motor's control signals, and from them calculates the motion coordinate of the industrial production line at any time, i.e. the x coordinate of the front point of the glass.
Time interval of front camera shooting
Whether an encoder or a brushless motor is used, the real-time speed of the industrial production line can be obtained. Because the loads placed on the line differ, and the lubrication of its mechanical parts varies, the line does not always move at constant speed. The processor therefore calculates the line's real-time speed, computes the time the line needs to move one fixed interval at that speed, and triggers the front camera at the corresponding moment. With this trigger scheme, the pictures shot by the front camera meet the requirements of post-processing: with a line-scan front camera, the line's travel between successive exposures is constant; with an area-array front camera, each exposure captures a complete image of the glass surface.
The specific shooting interval for the line-scan camera depends on the imaging quality requirements. For large glass sheets, the shooting frequency can be reduced to cut energy consumption and heat generation in the line-scan camera; conversely, it can be increased to improve the fineness of the imaging.
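A sketch of the trigger computation: given the real-time line speed, the time to travel one fixed interval determines the next shot (the function name and the source of the speed value are assumptions):

    def next_trigger_delay(real_time_speed, fixed_interval):
        # Time the production line needs to move one fixed interval at the
        # current speed; the front camera is triggered after this delay.
        if real_time_speed <= 0:
            raise ValueError("line is not moving")
        return fixed_interval / real_time_speed

    # e.g. a 2 mm scan pitch at 250 mm/s triggers the line-scan camera
    # every 8 ms: next_trigger_delay(250.0, 2.0) == 0.008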
Once the front camera has finished shooting, the coordinates of any point on the glass edge at any moment can be obtained by combining the production-line coordinate information from the encoder or from the brushless-motor control parameters.
Image acquisition subsystem 32
The image acquisition subsystem acquires an image, and the processor identifies a clear part in the acquired image by judging the gradient peak value of the image parameter, calculating the intersection position or marking a clear imaging surface by using laser. In this embodiment, the image acquisition subsystem 32 is configured to acquire an image of an edge of the glass to be measured. The image acquisition subsystem 32 may include a plurality of cameras 321.
In one embodiment, the image capturing subsystem 32 is disposed behind the position learning subsystem 31, and the glass to be measured passes through the position learning subsystem 31 and then passes through the image capturing subsystem 32 when being conveyed. By the arrangement, when the image acquisition subsystem 32 acquires an image, the coordinate information of the edge of the glass to be detected is known, so that the subsequent processing operation is facilitated.
Working process of single camera in image acquisition subsystem
In the case of detecting a glass edge flaw, the position and direction in which the glass is placed on the industrial production line are often arbitrary.
In one embodiment, as shown in the figure, the camera position is fixed, the center of its imaging range is level with the industrial production line, and the projection of the camera's clear imaging plane P onto the line, viewed from directly above, at least partially covers the line. The camera position remains unchanged while the production line moves.
The glass to be measured is placed on the industrial production line in an arbitrary orientation, so the glass edge is generally not parallel to the camera's clear imaging plane. In the glass-edge image taken by the camera, the image is sharp at the intersection of the glass edge with the camera's clear imaging plane P and blurred elsewhere.
As an optional implementation, the processor controls the camera's shooting according to the real-time position of the glass acquired by the position learning subsystem: when the front point of the glass is about to enter, or has entered, the camera's shooting area, the processor commands the camera to start shooting.
To ensure that every point of the glass edge intersects the clear imaging plane P of the image acquisition subsystem 32 during the glass's movement, a limiting mechanism may be arranged at the edge of the industrial production line to keep the glass edge within the line's width; alternatively, an alarm device/function may be introduced: when the front camera 312 judges from the shape and position information that the glass exceeds the width of the production line, an alarm is raised.
Shooting multiple pictures and carrying out image fusion
The camera has a depth of field, so the sharp part of each picture it takes covers a certain range.
To obtain a complete image of the glass edge, the camera takes a series of pictures at specific moments, and the sharp part of each image is stitched with the others to form a complete image of the glass edge.
Record the horizontal center point of the sharp imaging region in the i-th frame image as x_i.
Centered on x_i, intercept from the image a region of horizontal width W_x and vertical width W_y, where x_i, W_x, and W_y are all in units of pixels. The sharp range in the image taken by the camera is [x_i - W_x/2, x_i + W_x/2].
The symbols in the following formulas are: image distance u; object distance v; belt moving speed V; distance the object moves perpendicular to the camera's optical axis S_v; distance the image moves perpendicular to the camera's optical axis S_u; abscissa on the sensor plane x (the horizontal pixel position of the corresponding point in the picture); sensor pixel size d; glass thickness T; lens focal length f; sensor tilt angle θ; and reference values x_0 and u_0 (for the horizontal coordinate x_0 on the sensor, the corresponding image distance of the lens is u_0).
In the method that finds the sharp portion of the image through the gradient peak point, the extent of the sharp region in the image must be calculated. The quantities involved are the belt moving speed V, the angle α of the camera's optical axis relative to the belt, and the exposure interval t between two frames.
In this method,
W_x = S_u / cos(θ) / d, W_y = (u*T) / (v*d)
where S_u = u*S_v/v, v = 1/(1/f - 1/u), u = u_0 + Δu, S_v = V*sin(α)*t, and Δu = (x_i - x_0)*sin(θ); the camera's shooting interval is t.
In the algorithm that finds the sharp portion of the image by calculating the intersection position,
W_x = S_u / cos(θ) / d, W_y = (u*T) / (v*d)
where S_u = u*S_v/v, v = 1/(1/f - 1/u), u = u_0 + Δu, and S_v = V*sin(α)*t.
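The window-size formulas above translate directly into code. This sketch follows the patent's symbols, with angles in radians and units as given in the parameter list; it is an illustration of the formulas, not a calibrated implementation:

    import math

    def sharp_window(x_i, x_0, u_0, f, V, alpha, t, theta, T, d):
        # Width W_x and height W_y (pixels) of the sharp region centred at x_i.
        delta_u = (x_i - x_0) * math.sin(theta)   # image-distance offset
        u = u_0 + delta_u                         # image distance
        v = 1.0 / (1.0 / f - 1.0 / u)             # object distance (lens formula)
        s_v = V * math.sin(alpha) * t             # object shift perpendicular to axis
        s_u = u * s_v / v                         # image-side shift
        w_x = s_u / math.cos(theta) / d
        w_y = (u * T) / (v * d)
        return w_x, w_y, (x_i - w_x / 2.0, x_i + w_x / 2.0)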
In this method, the rear camera chooses the proper moments to shoot according to the glass-edge position information provided by the front camera and the encoder readings.
The processor controls the shooting of the image acquisition subsystem according to the known relative position between the product and the camera. The moment at which the sharp position of the captured image falls at x_i is calculated as follows.
Assume the encoder counts at the previous and current camera shots are C_{i-1} + C_pb and C_i + C_pb; the glass position change between the two shots is then (C_i - C_{i-1})*S_c.
Because the camera sensor is tilted by the angle θ, the pixel size along the x axis of the image plane is d*cos(θ), while the pixel size along the y axis is still d.
The distance the object moves perpendicular to the camera's optical axis is S_v = (C_i - C_{i-1})*S_c*sin(α).
Then, from the edge-point position (X_i, Y_i + S_pb) provided by the front camera and the distance D, the camera's object distance is v = (D + X_i)/sin(α), and the image distance u follows from the lens focal-length formula.
Finally, x_i = (u - u_0)/sin(θ) + x_0.
The controller commands the camera to shoot according to this calculation and the information from the position learning subsystem, so that consecutive shots are spaced, in time and distance, to meet the requirements of image stitching.
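A sketch of this position prediction, following the formulas above (D and the angle conventions are as defined there; the lens formula supplies u from v, and angles are again in radians):

    import math

    def sharp_position_xi(X_i, D, alpha, f, u_0, x_0, theta):
        # Predicted sharp pixel position x_i for the edge point X_i
        # reported by the front camera.
        v = (D + X_i) / math.sin(alpha)    # object distance
        u = 1.0 / (1.0 / f - 1.0 / v)      # image distance from the lens formula
        return (u - u_0) / math.sin(theta) + x_0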
Multiple sharp images corresponding to different positions of the glass edge are fused to obtain a complete glass-edge image within a single camera's shooting range.
If the glass currently moves from right to left, the [x - D/2, x + D/2] strip of the current frame is taken and placed at the right side of the output image.
If the glass currently moves from left to right, the [x - D/2, x + D/2] strip of the current frame is taken and placed at the left side of the output image.
These two steps are repeated until a complete sharp image is obtained.
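A sketch of this strip-placement loop, assuming each frame's sharp center x and the strip width D are known (for example from the window calculation above) and that frames arrive in shooting order:

    import numpy as np

    def fuse_strips(frames, centers, D, right_to_left=True):
        # Cut the [x - D/2, x + D/2] strip from each frame and append it on
        # the right (glass moving right-to-left) or the left (left-to-right).
        output = None
        for frame, x in zip(frames, centers):
            strip = frame[:, int(x - D / 2) : int(x + D / 2)]
            if output is None:
                output = strip
            elif right_to_left:
                output = np.hstack([output, strip])  # place at the right
            else:
                output = np.hstack([strip, output])  # place at the left
        return output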
Sensor/lens tilt
When detecting surface flaws, the object to be inspected may be placed on the industrial production line in any orientation. The image acquisition and detection system obtains the image of the object surface from the sharp image at the coincidence between the object surface and the camera's clear imaging plane P. Image acquisition devices include cameras, video cameras, scanners, and the like, and their clear imaging planes P have a certain length. The surface-flaw detection system needs a camera with a large enough depth-of-field range to cover a sufficiently wide area of the industrial production line.
A larger depth of field can be obtained by replacing the lens and/or the image sensor, but a larger depth-of-field range means higher cost.
As an alternative embodiment, the image acquisition subsystem in the industrial inspection system employs a camera with an offset configuration.
As shown in figs. 5A and 5B, the camera with the offset configuration comprises a housing 501, a lens 502, and an image sensor 503. The image sensor is coupled to a mounting shaft 504 that is rotatably mounted in the housing 501. The mounting shaft protrudes from the upper part of the housing 501, and a rotary knob 505 is fixedly connected to the exposed portion of the shaft. As an alternative embodiment, an angle code wheel 506 is arranged around the rotary knob to indicate the rotation angle of the image sensor. The lens 502 defines a main optical axis and the image sensor 503 defines a sensing plane; a line passing through the mounting shaft 504 and perpendicular to the lens's main optical axis forms an angle α with the sensing plane. Turning the rotary knob 505 rotates the mounting shaft 504, which rotates the image sensor 503 with it. The current tilt angle of the image sensor can be read directly from the angle code wheel 506 around the knob.
Alternatively, the lens 502 may also be rotated to change the tilt relationship between the image sensor and the lens.
Taking the image sensor tilt as an example, the focal range of the sensor is expressed by the following formula:
[(u + sin(α)*x) / (u + sin(α)*x - 1), u / (u - 1)]
where u is the image distance, α is the rotation angle of the image sensor, x is the normalized size of the image sensor in the radial direction of the rotation axis (x = X/f, where X is the real size and f the focal length), and sin(α)*x is the sensor's projected distance along the optical axis.
As the formula shows, for given x and u, a larger α makes (u + sin(α)*x)/(u + sin(α)*x - 1) correspond to a larger depth of field. When α ranges from 5° to 65° (equivalently, when the angle between the lens's main optical axis and the sensing plane ranges from 25° to 85°), a good imaging effect is obtained.
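The focal-range formula can be evaluated directly. In this sketch the quantities are normalized by the focal length, as the u/(u - 1) form suggests; that normalization is an interpretation of the text:

    import math

    def focal_range(u, alpha_deg, x):
        # Focal range [far, near] of a sensor tilted by alpha, with image
        # distance u and normalized radial size x (both in units of f).
        s = u + math.sin(math.radians(alpha_deg)) * x
        return s / (s - 1.0), u / (u - 1.0)

    # e.g. comparing focal_range(1.2, 5.0, 0.3) with focal_range(1.2, 65.0, 0.3)
    # illustrates the effect of the 5-65 degree tilt range discussed above.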
According to the imaging principle of the lens, the camera with offset structure has a longer length of the clear imaging plane P and a larger depth of field range compared with a common camera using the same lens and image sensor.
The longer length and depth of field range of the clear imaging plane P enable the imaging range of the camera with offset configuration to cover as much area as possible on the industrial production line. Because the position of the glass to be measured on the industrial production line is uncertain, the camera with the offset structure is adopted, so that the situation that any point on the industrial production line can be coincided with a certain point on the clear imaging plane P of the camera at a certain moment in the motion process as far as possible under the same hardware cost is ensured, and the situation that any position of the edge of the glass can be clearly imaged in the camera with the offset structure at a certain moment is also ensured.
Meanwhile, the larger depth of field yields more in-focus content when shooting the surface of an object with a certain depth. For example, when shooting curved glass, an offset camera can capture a clear macro image of the entire glass surface in one shot at a suitable shooting angle, whereas a conventional macro camera, with its relatively small depth of field, may not be able to capture clear images of different positions of the curved glass at the same time.
Multiple cameras shoot together
Objects often have multiple surfaces, and these surfaces, or different locations on the same surface, may occlude one another. In that case a single camera cannot capture images of all surfaces of the object at the same time.
The image acquisition subsystem 32 is used to acquire a complete image of the glass edge. It includes a plurality of cameras, which may be disposed in the same horizontal plane. Generally, the cameras are distributed around the industrial production line, on both sides of its transport line. The clear imaging planes P of any two cameras do not coincide.
As shown in fig. 6A, as an alternative embodiment, a plurality of identical cameras are uniformly distributed around a certain point (the point is referred to as a central point) on the center line in the width direction of the industrial production line, and an included angle formed by a connecting line between any two adjacent cameras and the central point is consistent, which can also be described as that the plurality of cameras are uniformly distributed on the circumference of a certain circle with the central point as the center. The arrangement is such that each camera is centrally symmetrical about the centre point. The centrosymmetric relationship makes the installation and replacement of the whole image acquisition subsystem 32 easier and more convenient.
As a modification of the above embodiment, the number of the plurality of cameras is an even number. The even number of cameras are uniformly distributed on two sides of a conveying straight line of the industrial production line. In this case, the cameras are not only centrosymmetric but also axisymmetric. The difficulty of assembling and replacing the image acquisition subsystem is further reduced by adopting the arrangement of an even number of cameras.
Shooting with multiple cameras simultaneously makes it possible to obtain images of different edges of the glass at the same time. If too few cameras are used, images of the different glass edges cannot all be acquired. If too many are used, however, their installation on the industrial production line becomes complicated, and cost rises with the number of cameras.
Each camera has a certain shooting angle β when shooting the glass to be measured at different positions, and a camera can work normally only when β > 0°. When a plurality of cameras are used, the shooting angle β of every camera at the time of shooting must satisfy β > 0°. For cameras that are uniformly distributed, the relationship between the number of cameras n and the limit shooting angle βmax is expressed by the following formula:

βmax = 90° − 180°/n

The cameras can work normally only when βmax > 0°. Since n must be an integer, the formula shows that n ≥ 3; that is, at least 3 cameras are needed to meet the shooting requirement, in which case at least two cameras are located on the same side of the transport line.
βmax can also be regarded as the incident angle of the camera: the larger βmax is, the more nearly perpendicular the shooting direction of the camera is to the object surface, and the better the shooting effect.
As can be seen from the table below, the larger n is, the larger βmax becomes and the better the average imaging quality; that is, more cameras give a better imaging effect.

Number of cameras n    βmax     Angle improvement
3                      30°      -
4                      45°      50%
5                      54°      25%
6                      60°      11%
7                      64.3°    7%
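For reference, a short Python sketch (the function name is hypothetical) that evaluates βmax = 90° − 180°/n for the camera counts in the table above:

```python
def beta_max(n):
    """Limit shooting angle, in degrees, for n uniformly distributed cameras."""
    return 90.0 - 180.0 / n

for n in range(3, 8):
    print(n, round(beta_max(n), 1))  # 3 -> 30.0, 4 -> 45.0, ..., 7 -> 64.3
```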
However, the larger the number of cameras, the higher the cost and the more complicated the assembly.
Based on multiple experimental results, and comprehensively considering the absolute imaging quality, the improvement in imaging quality, and the hardware cost of the whole system for different numbers of cameras, the overall merit b of the system is expressed by the following formula:

b = 8n − n²

As can be seen from this equation, b reaches its maximum when n = 4.
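The optimum can be checked numerically; a one-line sketch, taking the merit function b = 8n − n² directly from the text and restricting n to the feasible camera counts (n ≥ 3, see above):

```python
# b = 8n - n^2 over feasible camera counts
best_n = max(range(3, 10), key=lambda n: 8 * n - n * n)
print(best_n)  # 4
```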
When n is 4, the cameras can be arranged on both sides of the industrial production line, with the cameras and the glass to be measured kept in the same horizontal plane. Moreover, because the cameras on the two sides are set at the same distance from the production line, assembly difficulty is reduced.
With 4 cameras, the limit shooting angle improves by 50% compared to 3 cameras. According to the experimental results, the improvement in imaging quality is greatest when going from 3 to 4 cameras in the image acquisition subsystem.
Mounting position of camera
As shown in fig. 6A, taking 3 cameras as an example, fig. 6B-6D show the relationship between the clear imaging planes of the cameras in different image capturing subsystems.
In one embodiment, the main optical axes of the three cameras are parallel to the same plane. Such an arrangement can prevent distortion of the captured picture due to tilting of the camera in the longitudinal direction.
The clear imaging planes P of the respective cameras must satisfy the following condition: during the relative motion between the industrial production line and the cameras, every point on the surface of the target object intersects some clear imaging plane P. Equivalently, take a plane perpendicular to the clear imaging planes P as the projection plane (in this embodiment the transport plane can serve as the projection plane) and approximate the projection of each clear imaging plane P on it as a line segment; then the motion trajectory of any point of the target surface, projected onto the projection plane, must intersect at least one of these projected segments (see the sketch below).
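As an illustration, a minimal sketch of this condition, assuming transport along the x-axis of the projection plane and a hypothetical segment list: a point at transverse coordinate y travels along x, so it meets some projected segment exactly when y falls inside that segment's y-interval, and the condition holds when the union of y-intervals covers the width of the line.

```python
def covers(segments, y_min, y_max, eps=1e-9):
    """segments: list of ((x1, y1), (x2, y2)) projected imaging-plane
    segments on the transport plane. Returns True iff the union of the
    segments' y-intervals covers the transverse range [y_min, y_max]."""
    intervals = sorted((min(y1, y2), max(y1, y2))
                       for (x1, y1), (x2, y2) in segments)
    reach = y_min
    for lo, hi in intervals:
        if lo > reach + eps:   # gap: a trajectory at this y misses every segment
            return False
        reach = max(reach, hi)
    return reach >= y_max - eps

# Three tilted segments spanning a line of width 0..1
segs = [((0, 0.0), (1, 0.4)), ((2, 0.3), (3, 0.7)), ((4, 0.6), (5, 1.0))]
print(covers(segs, 0.0, 1.0))  # True
```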
The target is placed on the industrial production line so that, viewed from above, its edges do not extend beyond the edges of the line. With this arrangement, no matter in what orientation and position the object is placed, no part of it lies outside the plane of the production line; that is, there is no portion of the object surface that can never be imaged (a portion extending beyond the plane of the line could never be imaged).
In addition, whether observed by the human eye or imaged by a device, an object can occlude itself front-to-back. In the scenario of photographing the glass edge, as shown in figs. 6B-6C, when the glass is placed in certain orientations, a complete image of the edge of the glass to be measured cannot be obtained because of this occlusion. In the scenario of fig. 6B, the image acquisition subsystem cannot capture the image of the right edge of the rectangular glass 39; in the scenario of fig. 6C, it cannot capture the image of the left edge.
Multiple tests show that, as in fig. 6D, when the projected segments of the clear imaging planes P enclose a closed region on the projection plane, a complete clear image of the inspected product can be obtained as long as every point of the product's projection on the transport plane passes through the closed region, that is, as long as the product passes completely through the closed region during transport.
A plurality of cameras satisfying the above conditions can obtain a complete image of the glass edge, regardless of the direction and position of the glass on the industrial production line.
Selection of best camera
The positions of the plurality of cameras are fixed, which means the shooting angle of each camera is also fixed. Since the edges of the glass may be occluded and the glass may be placed at any angle on the industrial production line, in general not every camera can completely capture a given edge or position of the glass.
After the glass edge image and the corresponding position information are obtained through the position acquisition subsystem, the glass edge image is processed to obtain the normal vector at any position on it. The direction in which the normal vector points is the best shooting angle for that point.
If the glass has a plurality of edges, the average of the normal vectors along each edge is calculated separately, and the best shooting angle for each edge, and hence the best camera, is determined from that edge's average normal vector.
If the glass edge forms a closed curve, the normal vectors at fixed-spacing positions along the edge can be sorted by the angle they form with a reference direction, and the best camera selected according to the angle of the normal vector at each position.
As an alternative embodiment, the image of the glass edge is a closed graph, and the normal vector of each point on the closed graph is obtained. The angle ranges of the normal vectors corresponding to each camera can be the same or different, but the sum of the angle ranges of the normal vectors corresponding to all the cameras should be not less than 360 degrees, so as to ensure that at least one corresponding camera is arranged at any position on any glass edge image. When the image of the glass edge is collected, the image collected by the camera corresponding to the normal vector at the glass edge is the optimal image.
As an alternative embodiment, the plurality of cameras are uniformly distributed around a certain point (the point is called a center point) on the center line in the width direction of the industrial production line, and the included angle formed by the connecting line between two adjacent cameras and the center point is consistent, which can also be described as that the plurality of cameras are uniformly distributed on the circumference of a certain circle with the center point as the center. The arrangement is such that each camera is centrally symmetrical about the centre point. Where n cameras are involved, each camera selects the corresponding best normal vector in the range of 360/n. By means of the arrangement, the angle ranges of the normal vectors corresponding to the cameras are the same, namely the workload of each camera is the same, and stable operation of the image acquisition subsystem is facilitated. As shown in fig. 3B, as an embodiment, the industrial detection system based on image acquisition adopts a central symmetry arrangement of 4 cameras, which are evenly distributed on two sides of a transport straight line of an industrial production line, and an included angle between central axes of every two cameras is 90 °.
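A minimal sketch of this sector assignment, with hypothetical names, assuming camera i covers the sector of normal-vector angles [i·360/n, (i+1)·360/n):

```python
def best_camera(normal_angle_deg, n_cameras, offset_deg=0.0):
    """Map the normal-vector angle of an edge point to the index of the
    camera whose 360/n sector contains it (cameras assumed uniformly
    distributed; offset_deg aligns sector 0 with camera 0)."""
    sector = 360.0 / n_cameras
    a = (normal_angle_deg - offset_deg) % 360.0
    return int(a // sector)

# With 4 cameras, a point whose outward normal points at 100 degrees
# falls in the 90-180 degree sector, i.e. camera 1.
print(best_camera(100.0, 4))  # 1
```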
By selecting the corresponding camera according to the normal vector of the glass edge, the glass edge image with the best imaging quality is obtained, while the working time of each camera is controlled and its workload reduced.
Glass edge flaw determination
Fig. 7A-7B show the glass edge images taken when there is a flaw in the glass edge under test. The form of the flaws may vary for different types of glass.
As an example, the edge of the glass to be tested should normally be frosted. The main type of glass edge flaw is then "uncovered frosting", where the flaw portion produces specular reflection. Whether the camera captures such a non-frosted flaw as bright or dark depends on the angle of the specular surface and the incident direction of the light source.
If the light source, the flaw and the camera do not form a specular reflection relationship, the non-flaw portion produces diffuse reflection and part of the light is reflected into the camera, while the flaw portion produces specular reflection whose reflected ray is not aligned with the camera, so no source light reaches the camera from the flaw. In this case, as shown in fig. 7A, the flaw appears darker than the non-flaw portion, and the non-frosted flaw appears black.
FIG. 7B shows another case where a flaw is present at the edge of the glass to be measured. Here the light source, the flaw and the camera happen to form a specular reflection relationship, and the source light is reflected entirely into the camera. The non-flaw portion still reflects diffusely, sending only part of the light to the camera. In this case the flaw appears brighter than the non-flaw portion. Therefore, with this type of light source, "uncovered frosting" flaws may appear either "bright" or "dark".
As an alternative solution, two different judgment thresholds are set when judging flaws of the glass edge: a dark threshold and a bright threshold. When the brightness value at a position of the glass edge is smaller than the dark threshold, the position is judged to be a flaw; likewise, when the brightness value at a position is larger than the bright threshold, the position is also judged to be a flaw. As another alternative, the average brightness of the entire glass edge is first calculated, the brightness at each position of the edge is subtracted from this average, and the absolute value of the difference is taken. If the absolute value exceeds a certain threshold, the position is too bright or too dark, i.e. it is a specular reflection position, and a flaw is judged to exist there. "Brightness value" here covers both the luminance value in a color image and the gray value in a black-and-white image.
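As an illustration, a minimal sketch of both judgment schemes (the helper and the threshold values are hypothetical and must be tuned for a real setup):

```python
import numpy as np

def flaw_mask(edge_brightness, dark_thr=None, bright_thr=None, dev_thr=None):
    """edge_brightness: array of brightness (or gray) values sampled along
    the glass edge. Either fixed dark/bright thresholds or a
    deviation-from-mean threshold may be used, as described above."""
    b = np.asarray(edge_brightness, dtype=float)
    if dev_thr is not None:
        return np.abs(b - b.mean()) > dev_thr  # too bright or too dark
    mask = np.zeros(b.shape, dtype=bool)
    if dark_thr is not None:
        mask |= b < dark_thr                   # "dark" flaws (fig. 7A)
    if bright_thr is not None:
        mask |= b > bright_thr                 # "bright" specular flaws (fig. 7B)
    return mask

print(flaw_mask([120, 118, 30, 119, 240], dark_thr=60, bright_thr=200))
# [False False  True False  True]
```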
Light source
When the above judging method is used, if the brightness difference between flaw and non-flaw portions is too small, choosing the threshold becomes difficult and flaw judgment is not accurate enough.
To solve this problem, as an alternative embodiment and referring to fig. 8A, a strip-shaped light source 81 is disposed above the clear imaging plane P of the camera. Viewed from above, each strip light source covers the clear imaging plane P of one or more image capture units in image acquisition subsystem 32. The strip light source may consist of point or area light sources uniformly distributed along it. With this arrangement, the clear part of the captured image is illuminated by source light of the same brightness and angle, which increases the absolute brightness difference between flaw and non-flaw positions, improves their contrast, and makes flaw judgment more accurate. The strip light source can be placed as close to the industrial production line as possible, without interfering with the movement of objects on it, to reduce the loss of brightness.
Flaws known as "burst edges" may also exist at the intersection of the frosted glass edge and the glass surface.
In another alternative embodiment, the light emitted by the light source is parallel light that covers the clear imaging plane P of the camera. The light source can be arranged beside the camera, on the same side of the industrial production line, or light sources can be arranged on both sides of the line together with the cameras. With this arrangement the light reaches the intersection of the glass edge and the glass surface, so "burst edge" flaws can be detected, widening the application range of the online detection system.
As another alternative, referring to fig. 8B, the light source 82 is an annular surface light source shaped like a side surface of a cylinder or a side surface of a rectangular parallelepiped. The light source surrounds the glass to be measured, and the inner wall of the whole light source emits uniform light. Such a light source setting mode can ensure that: the light emitted by the inner wall of the light source, the flaw of the glass edge and a certain camera in the image acquisition subsystem can always form a mirror reflection relation. At this time, no matter what direction the glass to be measured is placed on the industrial production line, the flaw part of the glass edge can reflect the light rays at a certain position on the light source into a certain camera in the image acquisition subsystem through mirror reflection, so that the visual effect of the image of the flaw part of the glass edge can always be highlighted.
As an alternative embodiment, the light source combines an annular surface light source with a cover surface light source. Compared with the annular light source alone, this structure improves the detection of "burst edge" flaws.
When the light source is arranged, the lower edge of its light-emitting part must be flush with or lower than the lower end of the glass edge to guarantee the lighting effect. At the same time, however, such a light source may obstruct the movement of the glass on the industrial production line.
As an alternative embodiment, as shown in fig. 8C, the industrial line 30 has a stepped side profile, narrow at the top and wide at the bottom, with a highest plane used during operation. The ring light source, or capped ring light source, is disposed around this highest plane. The lower end of the ring light source 83 may be flush with or slightly below the level of the highest plane of the industrial line 30, while keeping a distance from the surface of the line 30 that is greater than the maximum thickness of the glass 39 to be measured. When the glass 39 is transported onto the highest plane, the ring light source or capped ring light source surrounds it. Because the distance between the lower end of the annular light source 83 and the line 30 exceeds the maximum glass thickness, the light source need not be moved while the glass travels along the line, and it does not interfere with the movement of the glass. With the lower edge of the light-emitting part flush with or lower than the lower end of the glass edge, the completeness of the incident light angles is ensured.
As an alternative embodiment, the ring light source 84 can move up and down, as shown in fig. 8D. The upper end of the annular light source carries a telescopic rod 841 that moves vertically and may be driven pneumatically, hydraulically or by a motor. When the glass to be measured on the industrial production line 30 moves below the light source 84, the line stops; the telescopic rod 841 then descends until the lower edge of the light-emitting part is flush with the lower end of the glass 39 to be measured. After the glass edge image is acquired, the telescopic rod 841 rises and the line resumes movement. Correspondingly, the industrial production line 30 can be driven by a servo motor, with a fixed area on the line for placing the glass 39; each servo operation advances the line 30 by a fixed distance. With this arrangement, the light source 84 can be raised and lowered at a fixed time interval, simplifying the procedure and reducing the probability of error. A pressure feedback device or a distance measuring device may also be provided on the light source to ensure that it does not press on the glass 39 when lowered.
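For illustration only, a control-cycle sketch of this arrangement; every device handle here is hypothetical, since the text describes the behaviour rather than an API:

```python
def inspect_one_piece(conveyor, lift, camera_rig, pressure_ok):
    """One fixed inspection cycle: advance, lower the ring light, capture,
    raise. All four arguments are hypothetical device handles."""
    conveyor.advance_fixed_distance()  # servo moves the line one fixed step
    lift.lower()                       # telescopic rod brings the light flush
                                       # with the lower end of the glass edge
    if not pressure_ok():              # pressure/distance feedback guard
        lift.raise_up()
        raise RuntimeError("light source would press on the glass")
    camera_rig.capture_edge_images()   # acquire the glass edge images
    lift.raise_up()                    # restore the light; transport resumes
```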
Since the light source itself is opaque, the image acquisition subsystem 32 generally needs to be installed between the light source and the glass to be measured in order to capture an image of the glass. And since the cameras in the image acquisition subsystem 32 are also opaque, a camera tends to block part of the light from the light source. If several cameras are used and two of them lie, together with a glass edge flaw, in a specular reflection relationship, the light one camera blocks is exactly the light that would reach the other, so the image of the flaw will again appear "dark".
In order to solve the above problem, as an alternative solution, a light source is provided on the camera 321 itself. It may be arranged beside the camera lens as a surface light source, so that the angle of the emitted light at least compensates for the portion of the annular light source blocked by the camera. The light source may also be annular and mounted around the camera lens; this arrangement further ensures that the emitted light makes up for the light blocked by the camera.
Defect detection
After a sharp image of the glass edge is obtained, the processor 33 detects flaws from the image.
Flaw detection method with the front camera 312
1. Flaw detection mode without image stitching
In the flaw detection mode without image stitching, flaw detection is performed on the clear image acquired by the image acquisition subsystem at each shooting moment, to judge whether the image of the glass edge position captured at that moment contains a flaw. Because a detection system that includes the front camera 312 can directly obtain the position of the clear image on the glass edge at any time, the spatial position of a flaw is known as soon as the flaw is detected at the clearly imaged position. In this way, both the presence of a flaw on the glass edge and its position are obtained simultaneously.
The specific determination process is shown in fig. 9A.
In step S911, a reference value of brightness or gray scale is determined. The reference value may be the average brightness or gray scale of the edge of the previous piece of glass; as another embodiment, it may be determined beforehand from a flawless piece of glass.
In step S912, an absolute value of a difference between the average value of the brightness or the gray scale of the current picture and the reference value is calculated.
In step S913, it is determined whether the absolute value exceeds a reasonable error range. If it does not, the position of the glass edge is considered flawless. If it does, a flaw is considered present at that position and the process proceeds to step S914.
In step S914, spatial information of the glass edge corresponding to the flaw picture is obtained. This spatial information may be provided by the position acquisition subsystem 31.
In step S915, feedback is performed. The feedback may be an audible and visual alarm, may be sent over a wired connection to the working computer of a quality inspector, or may be sent over a wireless connection to an external mobile terminal such as a smart watch or smartphone.
The method directly judges whether the clear image obtained at any moment contains flaws; it omits the image stitching step and is efficient and fast.
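A per-frame sketch of steps S911-S915 follows, under stated assumptions: frames yields (image, position) pairs, where position comes from the front camera 312 and encoder 311, and notify() is a hypothetical feedback hook (alarm, wired or wireless message):

```python
import numpy as np

def detect_per_frame(frames, reference, tolerance, notify):
    """Judge each clear frame against a brightness/gray reference (S911)."""
    for image, position in frames:
        mean = float(np.mean(image))           # S912: frame average
        if abs(mean - reference) > tolerance:  # S913: outside error range
            notify(position, mean)             # S914 + S915: report flaw site
```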
2. Flaw detection mode after image stitching
In the flaw detection mode without image stitching, each detection is relatively independent, so the detection result is strongly affected by prior steps. For example, the reference value depends on the brightness or gray-scale information of glass measured earlier, which varies with ambient illumination, so flaw detection accuracy changes with natural lighting. Moreover, the position information of the clearly imaged part at any moment depends entirely on the front camera 312 and the encoder 311; if the information from either deviates, the obtained position information deviates as well.
As another glass edge flaw detection method, as shown in fig. 9B, the following steps are performed:
In step S921, the images are stitched to form a clear image of the complete glass edge.
In step S922, the average brightness or gray scale of the entire image obtained in step S921 is calculated. If the image is a color image, the average is a brightness average; if it is a black-and-white image, the average is a gray-scale average.
In step S923, the brightness or gray scale of each pixel/pixel group in the complete image obtained in step S921 is evaluated in a specific order and compared with the average obtained in step S922. The comparison may compute the absolute value of the difference between the pixel value and the average; alternatively, this difference may additionally be divided by the average, which effectively removes the influence of illumination conditions on the judgment of flaw portions. If the comparison result for a pixel exceeds a certain threshold, the pixel position is considered a flaw position.
In step S924, the relative position of the flaw on the glass edge is obtained from the calculation order used in step S923. Optionally, the spatial information acquired by the position acquisition subsystem 31 may be received and combined with this relative position.
In step S925, the position is fed back to the quality inspector, either over a wired connection to the inspector's working computer or over a wireless connection to an external mobile terminal such as a smart watch or smartphone.
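A sketch of steps S921-S925 on a stitched edge image, assuming the frames are 2-D numpy arrays of equal height and that report() is a hypothetical helper supplied by the surrounding system:

```python
import numpy as np

def detect_on_stitched(images, threshold, report, normalize=True):
    """Stitch, compare each pixel with the global average, report flaws."""
    stitched = np.concatenate(images, axis=1)    # S921: simple stitching stand-in
    mean = float(stitched.mean())                # S922: global average
    dev = np.abs(stitched.astype(float) - mean)  # S923: per-pixel deviation
    if normalize:
        dev /= mean                              # dividing by the average removes
                                                 # the influence of illumination
    ys, xs = np.nonzero(dev > threshold)         # flaw pixel coordinates
    for y, x in zip(ys, xs):                     # S924: position from pixel order
        report(x, y)                             # S925: feedback to the inspector
```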
Flaw detection method without front camera 312
1. Flaw detection mode without image stitching
In the flaw detection mode without image stitching, flaw detection is performed on the clear image acquired by the image acquisition subsystem at each shooting moment, to judge whether the image of the glass edge position captured at that moment contains a flaw. Since, without the front camera 312, it is difficult to obtain the position of the clearly imaged part at a given time, this detection method is usually used only to judge whether the edge of the glass to be measured contains a flaw at all.
The method directly judges whether the clear image obtained at any moment contains flaws; it omits the image stitching step and is efficient and fast.
2. Flaw detection mode after image stitching
Performing flaw detection after image stitching can, on the one hand, improve detection precision; on the other hand, the position of a flaw can be obtained from its position relative to other parts of the object under test. With this method, the position of the flaw can be obtained even without position information from a position acquisition system.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.

Claims (10)

1. An image acquisition method, characterized by:
the image acquisition method comprises the following steps:
driving a camera and/or a target object to move relative to each other, so that the parts of the target object to be photographed appear in a clear imaging plane of the camera at different times;
collecting an image of a target object;
processing an image captured by the camera to identify the portion where the clear imaging plane of the camera coincides with the surface of the target object.
2. The image acquisition method according to claim 1, characterized in that:
the camera or/and the target are driven to rotate around a rotation axis, and the rotation axis is parallel to a clear imaging plane of the camera.
3. The image acquisition method according to claim 1, characterized in that:
and driving the camera or/and the target object to move along a straight line.
4. The image acquisition method according to claim 1, characterized in that processing the image taken by the camera comprises:
the peak value of the gradient values of the image parameters of a pixel/group of pixels in the image is calculated.
5. The image acquisition method according to claim 1, characterized in that processing the image taken by the camera comprises:
and denoising the image.
6. The image acquisition method according to claim 1, further comprising:
acquiring position information of the contour of a target object;
acquiring position information of a clear imaging surface of a camera;
and calculating the coincidence position of the surface of the target object and the clear imaging surface of the camera according to the position information of the contour of the target object and the position information of the clear imaging surface of the camera.
7. The image acquisition method according to claim 6, characterized in that:
the location information includes coordinate information.
8. The image acquisition method according to claim 1, further comprising:
emitting laser light, wherein the region formed by the laser light comprises a clear imaging surface of the camera;
and acquiring the position information of the laser spot in the image, and matching the position information of the laser spot with the image.
9. The image acquisition method according to claim 8, wherein acquiring laser spot position information in the image comprises:
filtering the image by adopting a channel corresponding to the laser color in the multi-color camera to obtain a filtered image;
and acquiring the position information of the laser spots in the filtered image.
10. The image acquisition method according to claim 1, further comprising:
acquiring a plurality of images of the target object;
and fusing the parts of the plurality of images, which are overlapped by the clear imaging surfaces of the cameras and the surface of the target object.
CN201810966719.9A 2018-05-25 2018-08-23 Image acquisition method Pending CN110602355A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810517889 2018-05-25
CN2018105178899 2018-05-25
CN2018107638045 2018-07-12
CN201810763804 2018-07-12

Publications (1)

Publication Number Publication Date
CN110602355A true CN110602355A (en) 2019-12-20

Family

ID=65839540

Family Applications (20)

Application Number Title Priority Date Filing Date
CN201810967671.3A Pending CN110596130A (en) 2018-05-25 2018-08-23 Industrial detection device with auxiliary lighting
CN201810967732.6A Pending CN110536048A (en) 2018-05-25 2018-08-23 A kind of camera constituted with biasing
CN201810968066.8A Pending CN110596133A (en) 2018-05-25 2018-08-23 Method suitable for industrial image detection
CN201810968901.8A Pending CN110596135A (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection device based on image acquisition
CN201810967692.5A Active CN110595999B (en) 2018-05-25 2018-08-23 Image acquisition system
CN201810966579.5A Pending CN110530868A (en) 2018-05-25 2018-08-23 A kind of detection method based on location information and image information
CN201821365534.4U Active CN208754389U (en) 2018-05-25 2018-08-23 A kind of camera constituted with biasing
CN201821365450.0U Active CN208672539U (en) 2018-05-25 2018-08-23 A kind of foliated glass edge faults detection device based on Image Acquisition
CN201810966747.0A Pending CN110530885A (en) 2018-05-25 2018-08-23 A kind of Systems for optical inspection suitable for industrial production line
CN201810966719.9A Pending CN110602355A (en) 2018-05-25 2018-08-23 Image acquisition method
CN201810966428.XA Active CN110596126B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201810966469.9A Active CN110596127B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810968204.2A Active CN110530869B (en) 2018-05-25 2018-08-23 Detection system based on position information and image information
CN201810967615.XA Active CN110596129B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810967920.9A Active CN110596131B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201821364204.3U Active CN208860761U (en) 2018-05-25 2018-08-23 A kind of industry detection apparatus with floor light
CN201810966595.4A Active CN110596128B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810968298.3A Active CN110596134B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201810968036.7A Pending CN110596132A (en) 2018-05-25 2018-08-23 System suitable for industrial image detection
CN201810967811.7A Pending CN110530889A (en) 2018-05-25 2018-08-23 A kind of optical detecting method suitable for industrial production line

Family Applications Before (9)

Application Number Title Priority Date Filing Date
CN201810967671.3A Pending CN110596130A (en) 2018-05-25 2018-08-23 Industrial detection device with auxiliary lighting
CN201810967732.6A Pending CN110536048A (en) 2018-05-25 2018-08-23 A kind of camera constituted with biasing
CN201810968066.8A Pending CN110596133A (en) 2018-05-25 2018-08-23 Method suitable for industrial image detection
CN201810968901.8A Pending CN110596135A (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection device based on image acquisition
CN201810967692.5A Active CN110595999B (en) 2018-05-25 2018-08-23 Image acquisition system
CN201810966579.5A Pending CN110530868A (en) 2018-05-25 2018-08-23 A kind of detection method based on location information and image information
CN201821365534.4U Active CN208754389U (en) 2018-05-25 2018-08-23 A kind of camera constituted with biasing
CN201821365450.0U Active CN208672539U (en) 2018-05-25 2018-08-23 A kind of foliated glass edge faults detection device based on Image Acquisition
CN201810966747.0A Pending CN110530885A (en) 2018-05-25 2018-08-23 A kind of Systems for optical inspection suitable for industrial production line

Family Applications After (10)

Application Number Title Priority Date Filing Date
CN201810966428.XA Active CN110596126B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201810966469.9A Active CN110596127B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810968204.2A Active CN110530869B (en) 2018-05-25 2018-08-23 Detection system based on position information and image information
CN201810967615.XA Active CN110596129B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810967920.9A Active CN110596131B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201821364204.3U Active CN208860761U (en) 2018-05-25 2018-08-23 A kind of industry detection apparatus with floor light
CN201810966595.4A Active CN110596128B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection system based on image acquisition
CN201810968298.3A Active CN110596134B (en) 2018-05-25 2018-08-23 Sheet glass edge flaw detection method based on image acquisition
CN201810968036.7A Pending CN110596132A (en) 2018-05-25 2018-08-23 System suitable for industrial image detection
CN201810967811.7A Pending CN110530889A (en) 2018-05-25 2018-08-23 A kind of optical detecting method suitable for industrial production line

Country Status (1)

Country Link
CN (20) CN110596130A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191623A (en) * 2020-01-03 2020-05-22 圣点世纪科技股份有限公司 Finger vein shooting distance determination method
CN112254653A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Program control method for 3D information acquisition
CN112469984A (en) * 2019-12-31 2021-03-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN112595245A (en) * 2021-03-08 2021-04-02 深圳中科飞测科技股份有限公司 Detection method, detection system, and non-volatile computer-readable storage medium
CN117871415A (en) * 2024-03-11 2024-04-12 天津大学四川创新研究院 Exposure type structural flaw detection system and method based on parallel light source

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110596130A (en) * 2018-05-25 2019-12-20 上海翌视信息技术有限公司 Industrial detection device with auxiliary lighting
CN110300248A (en) * 2019-07-12 2019-10-01 浙江大华技术股份有限公司 A kind of imaging system and video camera
CN110455813B (en) * 2019-08-29 2024-02-27 东莞市三姆森光电科技有限公司 Universal system and method for extracting irregular arc edges
CN110567973B (en) * 2019-09-27 2022-07-05 济南大学 Piston detection platform and method based on image acquisition
US10965856B1 (en) * 2019-12-12 2021-03-30 GM Global Technology Operations LLC Camera vision systems
CN110987971B (en) * 2019-12-19 2022-06-24 太原理工大学 Crystal bubble detection device and method based on machine vision
CN113052787A (en) * 2019-12-27 2021-06-29 中核北方核燃料元件有限公司 Automatic identification device and method for riser of ball blank
CN111077168A (en) * 2019-12-30 2020-04-28 彩虹显示器件股份有限公司 Device and method for spot inspection of plate glass flaws
CN113313664A (en) * 2020-02-07 2021-08-27 财团法人石材暨资源产业研究发展中心 Stone image analysis method based on stone processing
CN111198160A (en) * 2020-02-14 2020-05-26 巢湖学院 CCD image detection device and use method thereof
CN111366592B (en) * 2020-04-15 2022-10-25 西北核技术研究院 Automatic fragment detection system based on industrial photogrammetry
CN111707675B (en) * 2020-06-11 2024-05-14 圣山集团有限公司 Cloth surface flaw online monitoring device and monitoring method thereof
CN111649697B (en) * 2020-07-03 2022-02-11 东北大学 Metal strip shape detection method based on stereo vision of linear array camera
US11263755B2 (en) * 2020-07-17 2022-03-01 Nanya Technology Corporation Alert device and alert method thereof
CN111935404B (en) * 2020-08-14 2021-10-15 腾讯科技(深圳)有限公司 Microspur imaging system, method and device
CN111993200B (en) * 2020-08-17 2022-07-08 上海中车瑞伯德智能系统股份有限公司 Welding seam identification and positioning method and device for welding seam polishing
CN114079768B (en) * 2020-08-18 2023-12-05 杭州海康汽车软件有限公司 Image definition testing method and device
CN112098428A (en) * 2020-09-04 2020-12-18 杭州百子尖科技股份有限公司 Intelligent flaw identification system based on machine vision in sheet building material manufacturing
CN112098429A (en) * 2020-09-18 2020-12-18 广东川田卫生用品有限公司 Packaged product defect detection equipment based on machine vision
CN112344879B (en) * 2020-09-29 2022-03-25 联想(北京)有限公司 Method, device and equipment for detecting glue road
CN112163442B (en) * 2020-09-29 2022-05-06 杭州海康威视数字技术股份有限公司 Graphic code recognition system, method and device
CN112268174B (en) * 2020-10-22 2022-07-05 陕西世和安全应急技术有限公司 Image detection and identification device and method for safety production line
CN112348807B (en) * 2020-11-27 2022-11-18 安徽大学 Endoscope highlight point repairing method and system based on contour pixel statistics
CN112631064B (en) * 2020-12-30 2022-04-08 安徽地势坤光电科技有限公司 Method for adjusting installation angle of imaging lens
CN112581547B (en) * 2020-12-30 2022-11-08 安徽地势坤光电科技有限公司 Rapid method for adjusting installation angle of imaging lens
CN112859189B (en) * 2020-12-31 2024-08-02 广东美的白色家电技术创新中心有限公司 Workpiece detection device, workpiece detection method, and computer-readable storage medium
CN113203363A (en) * 2021-04-08 2021-08-03 福建呈祥机械制造有限公司 Bamboo tube measuring method and measuring device based on digital image processing technology
CN113176271B (en) * 2021-04-27 2022-05-03 凯多智能科技(上海)有限公司 Deviation-correcting, flaw and size detection sensor
CN113620614A (en) * 2021-07-27 2021-11-09 深圳市若菲特科技有限公司 Method, device and equipment for removing ink on glass surface and storage medium
CN113670212B (en) * 2021-09-22 2024-09-17 深圳南玻应用技术有限公司 Dimension detection method and dimension detection device
CN114184617A (en) * 2021-12-07 2022-03-15 创新奇智(北京)科技有限公司 Detection device
JPWO2023105849A1 (en) * 2021-12-07 2023-06-15
CN114509021B (en) * 2022-02-18 2024-04-16 深圳市中钞科信金融科技有限公司 Special-shaped plate glass edge imaging method
CN114608459B (en) * 2022-03-08 2024-05-07 江苏泗阳协力轻工机械有限公司 Glass tube detection equipment
CN114697556B (en) * 2022-04-12 2023-06-06 山东瑞邦智能装备股份有限公司 Rotary photographing method for line production line
CN115018829A (en) * 2022-08-03 2022-09-06 创新奇智(成都)科技有限公司 Glass flaw positioning method and device
CN115266681B (en) * 2022-09-27 2022-12-16 南京诺源医疗器械有限公司 High-precision scanning and rapid marking method for medical Raman spectral imaging
CN115598805B (en) * 2022-10-18 2023-04-21 深圳市灿锐科技有限公司 Low-cost large-view-field telecentric lens with variable working distance and detection method thereof
CN115356355B (en) * 2022-10-21 2022-12-27 沃卡姆(山东)真空玻璃科技有限公司 Automatic detection blanking conveying line and detection method for vacuum laminated glass
CN115375686B (en) * 2022-10-25 2023-01-24 山东鲁玻玻璃科技有限公司 Glass edge flaw detection method based on image processing
TWI830553B (en) * 2022-12-26 2024-01-21 荷蘭商荷蘭移動驅動器公司 Method for detecting wear of vehicle windows and related devices
TWI828545B (en) * 2023-02-22 2024-01-01 開必拓數據股份有限公司 Flexible and intuitive system for configuring automated visual inspection system
CN116665164A (en) * 2023-04-28 2023-08-29 深圳云天励飞技术股份有限公司 Manhole cover disease detection method, device and system, electronic equipment and storage medium
CN117092114B (en) * 2023-10-16 2023-12-29 苏州德机自动化科技有限公司 Appearance detection system based on AI
CN117491391B (en) * 2023-12-29 2024-03-15 登景(天津)科技有限公司 Glass substrate light three-dimensional health detection method and equipment based on chip calculation
CN117949462B (en) * 2024-03-26 2024-08-06 广州市易鸿智能装备股份有限公司 Online high-speed high-precision burr detection method, device and storage medium
CN118151278B (en) * 2024-04-16 2024-10-15 上海频准激光科技有限公司 Target grating generation system
CN118067621B (en) * 2024-04-17 2024-06-25 常州旭焱光电科技有限公司 Production equipment and production process of precise ceramic piece
CN118090754B (en) * 2024-04-18 2024-07-12 菲特(天津)检测技术有限公司 Surface image acquisition system
CN118096602B (en) * 2024-04-25 2024-06-21 中国中建设计研究院有限公司 Stone repairing and scanning method and system


Family Cites Families (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04226441A (en) * 1990-06-04 1992-08-17 Fuji Photo Film Co Ltd Camera
JP3339902B2 (en) * 1993-03-18 2002-10-28 オリンパス光学工業株式会社 Image processing system
JP2001094841A (en) * 1999-09-21 2001-04-06 Asahi Optical Co Ltd Digital still camera having camera movements function
JP2001153625A (en) * 1999-11-24 2001-06-08 Enutekku:Kk Appearance inspecting method and device for article
SE518050C2 (en) * 2000-12-22 2002-08-20 Afsenius Sven Aake Camera that combines sharply focused parts from various exposures to a final image
US6531707B1 (en) * 2000-12-29 2003-03-11 Cognex Corporation Machine vision method for the inspection of a material for defects
US9092841B2 (en) * 2004-06-09 2015-07-28 Cognex Technology And Investment Llc Method and apparatus for visual detection and inspection of objects
US7738945B2 (en) * 2002-04-19 2010-06-15 University Of Washington Method and apparatus for pseudo-projection formation for optical tomography
JP2004072076A (en) * 2002-06-10 2004-03-04 Nikon Corp Exposure device, stage unit and method for manufacturing device
US6801719B1 (en) * 2003-03-14 2004-10-05 Eastman Kodak Company Camera using beam splitter with micro-lens image amplification
US20040196454A1 (en) * 2003-04-03 2004-10-07 Takayuki Ishiguro Optical system, detector and method for detecting peripheral surface defect of translucent disk
JP2004309287A (en) * 2003-04-07 2004-11-04 Nippon Sheet Glass Co Ltd Defect detection device and defect detection method
JP4775930B2 (en) * 2004-02-20 2011-09-21 キヤノン株式会社 LENS CONTROL DEVICE, IMAGING DEVICE, AND LENS CONTROL METHOD
JP4390068B2 (en) * 2004-12-28 2009-12-24 ソニー株式会社 Method for correcting distortion of captured image signal and distortion correction apparatus for captured image signal
JP4734552B2 (en) * 2005-03-15 2011-07-27 名古屋市 Method and apparatus for measuring three-dimensional shape of road surface
JP2007057431A (en) * 2005-08-25 2007-03-08 Takashima Giken Kk Method and device for inspecting flaw of resin-made bottle body
DE102005041431B4 (en) * 2005-08-31 2011-04-28 WÖHLER, Christian Digital camera with swiveling image sensor
CN1983140A (en) * 2005-12-13 2007-06-20 邓仕林 Multi-lens light-path optical imager with pen-like light mouse
KR101181629B1 (en) * 2006-01-03 2012-09-10 엘지이노텍 주식회사 Camera module and methods of manufacturing a camera module
US20080088830A1 (en) * 2006-10-16 2008-04-17 Shigeru Serikawa Optical system of detecting peripheral surface defect of glass disk and device of detecting peripheral surface defect thereof
CN101109715B (en) * 2007-08-01 2010-09-22 北京理工大学 Optical method for detecting defect on inner wall of holes
TWI343207B (en) * 2007-09-07 2011-06-01 Lite On Technology Corp Device and method for obtain a clear image
CN101539422B (en) * 2009-04-22 2011-04-13 北京航空航天大学 Monocular vision real time distance measuring method
CN101576508B (en) * 2009-05-27 2011-09-21 华南理工大学 Device and method for automatically detecting chip appearance defects
CN101630061B (en) * 2009-08-17 2012-03-07 公安部物证鉴定中心 Optical confocal three-dimensional data acquisition method of tool traces
CN102033068A (en) * 2009-09-24 2011-04-27 苏州维世迅机器视觉技术有限公司 Product on-line detector
JP2011097377A (en) * 2009-10-29 2011-05-12 Sanyo Electric Co Ltd Imaging device
CN102131044B (en) * 2010-01-20 2014-03-26 鸿富锦精密工业(深圳)有限公司 Camera module
JP2011197283A (en) * 2010-03-18 2011-10-06 Sony Corp Focusing device, focusing method, focusing program, and microscope
EP2568870B1 (en) * 2010-03-30 2018-05-02 3Shape A/S Scanning of cavities with restricted accessibility
CN101840146A (en) * 2010-04-20 2010-09-22 夏佳梁 Method and device for shooting stereo images by automatically correcting parallax error
CN102218406B (en) * 2011-01-04 2013-06-12 华南理工大学 Intelligent detection device of defects of mobile phone outer shell based on machine vision
CN102074044B (en) * 2011-01-27 2012-11-07 深圳泰山在线科技有限公司 System and method for reconstructing surface of object
CN102253050A (en) * 2011-03-14 2011-11-23 广州市盛通建设工程质量检测有限公司 Automatic detection method and device for magnetic tile surface defect based on machine vision
WO2012142967A1 (en) * 2011-04-21 2012-10-26 Ati-China Co., Ltd. Apparatus and method for photographing glass defects in multiple layers
KR101278249B1 (en) * 2011-06-01 2013-06-24 주식회사 나노텍 Apparatus for Detecting a Defect in Edge of Glass Plate and the Method Thereof
CN102289119B (en) * 2011-07-01 2014-02-05 深圳市华星光电技术有限公司 Liquid crystal display and method for repairing broken line
CN103033525B (en) * 2011-09-30 2016-03-02 清华大学 CT system and CT image rebuilding method
CN103093419B (en) * 2011-10-28 2016-03-02 浙江大华技术股份有限公司 A kind of method of detected image sharpness and device
JP5818651B2 (en) * 2011-11-22 2015-11-18 株式会社キーエンス Image processing device
CN103246166B (en) * 2012-02-02 2015-03-25 上海微电子装备有限公司 Silicon wafer prealignment measuring apparatus
CN102788802A (en) * 2012-08-29 2012-11-21 苏州天准精密技术有限公司 Workpiece quality detection method by multiple cameras
CN102818538B (en) * 2012-09-14 2014-09-10 洛阳兰迪玻璃机器股份有限公司 Detection system based on modulated glass thread structure laser image
US20140152771A1 (en) * 2012-12-01 2014-06-05 Og Technologies, Inc. Method and apparatus of profile measurement
EA031929B1 (en) * 2012-12-14 2019-03-29 Бипи Корпорейшн Норд Америка Инк. Apparatus and method for three dimensional surface measurement
TW201435299A (en) * 2013-03-15 2014-09-16 Og Technologies Inc A method and apparatus of profile measurement
KR101449273B1 (en) * 2013-04-23 2014-10-08 주식회사 포스코 Apparatus and method of detecting side defect of strip edge
CN104183010A (en) * 2013-05-22 2014-12-03 上海迪谱工业检测技术有限公司 Multi-view three-dimensional online reconstruction method
CN103350281B (en) * 2013-06-20 2015-07-29 大族激光科技产业集团股份有限公司 Laser marking machine automatic focusing device and automatic focusing method
ES2666499T3 (en) * 2013-07-03 2018-05-04 Kapsch Trafficcom Ab Method for identification of contamination in a lens of a stereoscopic camera
JP6242098B2 (en) * 2013-07-16 2017-12-06 株式会社キーエンス 3D image processing apparatus, 3D image processing method, 3D image processing program, computer-readable recording medium, and recorded apparatus
CN103471512B (en) * 2013-09-06 2016-08-17 中国建材国际工程集团有限公司 A kind of glass plate width detecting system based on machine vision
CN104614878A (en) * 2013-11-04 2015-05-13 北京兆维电子(集团)有限责任公司 Liquid crystal display detection system
CN103743753B (en) * 2014-01-23 2016-03-30 四川大学 A kind of magnetic shoe flexible imaging device
JP2016008930A (en) * 2014-06-26 2016-01-18 澁谷工業株式会社 Container opening portion inspection device and container opening portion inspection method
CN104458764B (en) * 2014-12-14 2017-03-22 中国科学技术大学 Curved uneven surface defect identification method based on large-field-depth stripped image projection
CN104501740B (en) * 2014-12-18 2017-05-10 杭州鼎热科技有限公司 Handheld laser three-dimension scanning method and handheld laser three-dimension scanning equipment based on mark point trajectory tracking
CN104463887A (en) * 2014-12-19 2015-03-25 盐城工学院 Tool wear detection method based on layered focusing image collection and three-dimensional reconstruction
CN204593030U (en) * 2015-01-14 2015-08-26 埃赛力达科技(深圳)有限公司 The illumination module of the even collimation of photocopy photoetching and matrix lamp system
US10044932B2 (en) * 2015-03-13 2018-08-07 Sensormatic Electronics, LLC Wide angle fisheye security camera having offset lens and image sensor
CN106161912B (en) * 2015-03-24 2019-04-16 北京智谷睿拓技术服务有限公司 Focusing method and device, capture apparatus
CN104914111B (en) * 2015-05-18 2018-08-03 北京华检智研软件技术有限责任公司 A kind of steel strip surface defect online intelligent recognition detecting system and its detection method
TWI582557B (en) * 2015-07-15 2017-05-11 由田新技股份有限公司 A linear encoder and optical inspection platform comprising the linear encoder
CN204731167U (en) * 2015-07-16 2015-10-28 南京汇川工业视觉技术开发有限公司 A kind of bottled product outer package label coding defect detecting system
CN105072330B (en) * 2015-07-17 2019-01-11 电子科技大学 A kind of automatic focusing method of line-scan digital camera
CN105044124A (en) * 2015-08-27 2015-11-11 李明英 Glass flaw classification device based on gray mean value analysis
CN105158267A (en) * 2015-09-22 2015-12-16 安徽省科亿信息科技有限公司 Device and method for 360-degree bottle body production line backlight vision inspection
CN105158266A (en) * 2015-09-22 2015-12-16 安徽省科亿信息科技有限公司 Device and method for 360-degree bottle body production line front-light vision inspection
CN105898135A (en) * 2015-11-15 2016-08-24 乐视移动智能信息技术(北京)有限公司 Camera imaging method and camera device
CN105572144B (en) * 2016-03-07 2018-11-20 凌云光技术集团有限责任公司 Glass corner image collecting device and system
DE102016203709B4 (en) * 2016-03-08 2018-04-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Image processing method, image processing means and image processing apparatus for generating images of a part of a three-dimensional space
WO2017162778A1 (en) * 2016-03-22 2017-09-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device for three-dimensionally measuring an object, method and computer program having image-based triggering
CN105784719A (en) * 2016-05-04 2016-07-20 成都贝森伟任科技有限责任公司 Machine vision inspection terminal capable of sorting goods
US9648225B1 (en) * 2016-05-10 2017-05-09 Howard Preston Method, apparatus, system and software for focusing a camera
CN105866129A (en) * 2016-05-16 2016-08-17 天津工业大学 Product surface quality online detection method based on digital projection
CN106023193B (en) * 2016-05-18 2018-06-19 东南大学 A kind of array camera observation procedure detected for body structure surface in turbid media
CN105953747B (en) * 2016-06-07 2019-04-02 杭州电子科技大学 Structured light projection full view 3-D imaging system and method
WO2017222558A1 (en) * 2016-06-24 2017-12-28 Isee, Inc. Laser-enhanced visual simultaneous localization and mapping (slam) for mobile devices
CN106248686A (en) * 2016-07-01 2016-12-21 广东技术师范学院 Glass surface defects based on machine vision detection device and method
CN106093059B (en) * 2016-08-16 2019-11-19 杭州赤霄科技有限公司 Hygiene medical treatment protective articles mask vision detection system
CN106525869B (en) * 2016-11-09 2024-03-22 芜湖东旭光电科技有限公司 Glass edge defect detection method, device and system thereof
CN206534707U (en) * 2016-11-23 2017-10-03 北京锐视康科技发展有限公司 Navigated in a kind of PET fluorescent dual modules state art imaging system
CN106548480B (en) * 2016-12-23 2023-05-26 蚌埠学院 Machine-vision-based rapid agricultural product volume measuring device and measuring method
CN106896115A (en) * 2017-02-21 2017-06-27 上海大学 Varnished glass defect detection device based on a parallel area-array camera acquisition system
CN206862915U (en) * 2017-02-28 2018-01-09 武汉易视维科技有限公司 Visual inspection system for plastic bottle quality
CN106959078B (en) * 2017-02-28 2019-07-30 苏州凡目视觉科技有限公司 Contour measuring method for measuring three-dimensional profiles
CN107133981B (en) * 2017-03-30 2019-04-12 腾讯科技(深圳)有限公司 Image processing method and device
CN206756711U (en) * 2017-04-07 2017-12-15 凌云光技术集团有限责任公司 Photovoltaic glass corner detection device
CN106872477A (en) * 2017-04-11 2017-06-20 安徽国防科技职业学院 Appearance defect detection system for car engine seals
CN107063099B (en) * 2017-04-11 2019-04-05 吉林大学 Vision-system-based online quality monitoring method for the machinery manufacturing industry
CN107169957A (en) * 2017-04-28 2017-09-15 燕山大学 Machine-vision-based online glass flaw detection system and method
CN206891990U (en) * 2017-05-24 2018-01-16 南京邮电大学 Online detection device for product surface defects
CN207036721U (en) * 2017-08-16 2018-02-23 福耀集团(上海)汽车玻璃有限公司 Glass edge detection device and glass processing line
CN107302667B (en) * 2017-08-17 2019-05-07 中国人民解放军国防科技大学 Camera-interchangeable dynamic spectral imaging system and method for applying same to high dynamic imaging
CN107478661A (en) * 2017-09-11 2017-12-15 深圳市中天元光学玻璃有限公司 Online glass screen detection device
CN107830813B (en) * 2017-09-15 2019-10-29 浙江理工大学 Image mosaicking and bending deformation detection method for long-axis parts with laser line marking
CN107632021B (en) * 2017-10-12 2020-02-07 中国矿业大学 Portable combined semi-automatic auxiliary platform for continuous precision photography and method of use
CN107907053A (en) * 2017-12-12 2018-04-13 扬州大学 Micro-displacement measuring system
CN107941703A (en) * 2017-12-20 2018-04-20 上海鸿珊光电子技术有限公司 Device for non-contact polarity determination of MPO optical devices using camera imaging
CN110596130A (en) * 2018-05-25 2019-12-20 上海翌视信息技术有限公司 Industrial detection device with auxiliary lighting

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375648A (en) * 2015-08-31 2017-02-01 北京智谷睿拓技术服务有限公司 Image collection control method and device
CN106534661A (en) * 2015-09-15 2017-03-22 中国科学院沈阳自动化研究所 Automatic focusing algorithm based on accumulation of the strongest edge gradient Laplace operator
CN105571512A (en) * 2015-12-15 2016-05-11 北京康拓红外技术股份有限公司 Vehicle information acquisition method based on integration of depth information and visual image information and device thereof
CN105806249A (en) * 2016-04-15 2016-07-27 南京拓控信息科技股份有限公司 Method for achieving image collection and depth measurement simultaneously through a camera
CN105812669A (en) * 2016-05-13 2016-07-27 大族激光科技产业集团股份有限公司 Automatic imaging and focusing method and system for curved surfaces
CN107087107A (en) * 2017-05-05 2017-08-22 中国科学院计算技术研究所 Image processing apparatus and method based on dual camera

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112469984A (en) * 2019-12-31 2021-03-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN112469984B (en) * 2019-12-31 2024-04-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN111191623A (en) * 2020-01-03 2020-05-22 圣点世纪科技股份有限公司 Finger vein shooting distance determination method
CN111191623B (en) * 2020-01-03 2023-09-19 圣点世纪科技股份有限公司 Method for determining finger vein shooting distance
CN112254653A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Program control method for 3D information acquisition
CN112254653B (en) * 2020-10-15 2022-05-20 天目爱视(北京)科技有限公司 Program control method for 3D information acquisition
CN112595245A (en) * 2021-03-08 2021-04-02 深圳中科飞测科技股份有限公司 Detection method, detection system, and non-volatile computer-readable storage medium
CN117871415A (en) * 2024-03-11 2024-04-12 天津大学四川创新研究院 Exposure type structural flaw detection system and method based on parallel light source

Also Published As

Publication number Publication date
CN110596132A (en) 2019-12-20
CN110530889A (en) 2019-12-03
CN110530868A (en) 2019-12-03
CN110596127B (en) 2022-07-08
CN110595999B (en) 2022-11-11
CN110596126A (en) 2019-12-20
CN208754389U (en) 2019-04-16
CN208672539U (en) 2019-03-29
CN110596129A (en) 2019-12-20
CN110596134A (en) 2019-12-20
CN110530869A (en) 2019-12-03
CN110596127A (en) 2019-12-20
CN110596130A (en) 2019-12-20
CN110596128A (en) 2019-12-20
CN110530869B (en) 2022-08-23
CN110596128B (en) 2022-05-27
CN110530885A (en) 2019-12-03
CN110596131B (en) 2022-06-17
CN110596131A (en) 2019-12-20
CN110596134B (en) 2022-05-31
CN110596133A (en) 2019-12-20
CN208860761U (en) 2019-05-14
CN110596135A (en) 2019-12-20
CN110595999A (en) 2019-12-20
CN110596126B (en) 2022-07-08
CN110536048A (en) 2019-12-03
CN110596129B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN110530869B (en) Detection system based on position information and image information
CA2615117C (en) Apparatus and methods for inspecting a composite structure for inconsistencies
US7599050B2 (en) Surface defect inspecting method and device
JPH11271038A (en) Painting defect inspection device
CN110402386A (en) Cylinder surface inspection device and cylinder surface inspection method
KR20200046789A (en) Method and apparatus for generating 3-dimensional data of moving object
CN209745834U (en) Optical defect detection system with brightness adjustment
KR101198406B1 (en) Pattern inspection device
KR20200062081A (en) Method for inspecting an optical display panel for damage
Delcroix et al. Online defects localization on mirrorlike surfaces
JPH0585003B2 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201100 room 0026, 2f, Ji building, No. 555, Dongchuan Road, Minhang District, Shanghai

Applicant after: SHANGHAI NEXT VISION INFORMATION TECH Co.,Ltd.

Address before: Room 402-02, No. 800 Naxian Road, China (Shanghai) Free Trade Pilot Area, Pudong New Area, Shanghai, 200120

Applicant before: SHANGHAI NEXT VISION INFORMATION TECH Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191220