CN111366072B - Data acquisition method for image deep learning - Google Patents

Data acquisition method for image deep learning

Info

Publication number
CN111366072B
CN111366072B (application number CN202010086353.3A)
Authority
CN
China
Prior art keywords
industrial camera
product
light source
motion
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010086353.3A
Other languages
Chinese (zh)
Other versions
CN111366072A (en)
Inventor
张效栋 (Zhang Xiaodong)
陈亮亮 (Chen Liangliang)
朱琳琳 (Zhu Linlin)
闫宁 (Yan Ning)
李娜娜 (Li Nana)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN202010086353.3A
Publication of CN111366072A
Application granted
Publication of CN111366072B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01Arrangements or apparatus for facilitating the optical investigation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8806Specially adapted optical and illumination features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01Arrangements or apparatus for facilitating the optical investigation
    • G01N2021/0106General arrangement of respective parts
    • G01N2021/0112Apparatus in one mechanical, optical or electronic block
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Landscapes

  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Vascular Medicine (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a data acquisition method for image deep learning, comprising the following steps: a product is placed on the surface of a backlight light source, and a computer then issues multi-axis motion commands; the horizontal motion axes X and Y receive the commands and drive the product in random displacement within the plane, while the lifting motion axis, tilting motion axis and pitching motion axis receive the commands and carry the industrial camera through lifting, tilting and pitching movements in space. When the industrial camera reaches a given spatial position, the computer issues light-source control commands that switch on and off the flat light sources arranged annularly at different positions in the inner cavity of the system, the annular light source carried on the camera, and the backlight light source arranged under the product. Over a period of time the industrial camera tracks the position of the product and shoots it in space under these varied conditions, simulating a wide range of actual shooting situations and yielding a data set.

Description

Data acquisition method for image deep learning
Technical Field
The invention relates to a data acquisition method for deep learning of images.
Background
In the production and application of products, detection, identification and classification of product surfaces is a crucial link. With the development of computer technology, product-surface detection and identification has progressed from manual inspection to automatic detection based on digital image processing. However, traditional digital image processing places high demands on the image acquisition environment: even slight changes in that environment can reduce object-recognition accuracy. In recent years, with the rise of artificial intelligence, deep learning has made image recognition and detection more reliable. The convolutional neural network, one of the representative deep learning algorithms, excels at two-dimensional image processing, and in particular shows good robustness and high computational efficiency when recognizing images subject to displacement, scaling and other forms of distortion. Training a deep network, however, requires a sufficiently large data set. The traditional way of expanding a data set relies mainly on image-processing means, that is, augmenting the original data, which has inherent limitations; this is a key obstacle restricting the application of image deep learning to surface detection, identification and classification of products across industries. At present, product-surface data set acquisition suffers mainly from the following problems:
(1) In terms of the acquisition mode: the traditional way of acquiring a data set is single and idealized, and cannot fully represent the conditions under which images are captured in practical applications.
(2) In terms of the data themselves: the data acquired at the present stage are small in quantity and insufficient in richness.
(3) In terms of the acquisition process: the collection period is long, a large amount of manpower and material resources are needed, and the cost is high.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a data acquisition method for image deep learning that can comprehensively simulate shooting conditions in different actual scenes, so as to overcome the simplification, idealization, small data volume and insufficient richness of traditional data set creation. To achieve this purpose, the invention adopts the following technical scheme:
a data acquisition method for deep learning of images adopts a data acquisition system which comprises an industrial camera arranged in an inner cavity of the system, a displacement mechanism, a multi-directional illumination light source and a computer, and is characterized in that,
the industrial camera is used for acquiring surface images of products under different poses and different illumination conditions to acquire a data set;
the displacement mechanism comprises a horizontal motion shaft system, a lifting motion shaft, a tilting motion shaft and a pitching motion shaft, wherein the horizontal motion shaft system consists of a horizontal motion shaft X and a horizontal motion shaft Y and is used for driving the product to be detected to move in a plane; the lifting motion shaft, the tilting motion shaft and the pitching motion shaft are connected with the industrial camera and carry it through lifting, tilting or pitching motion, so that different spatial angles of the product surface can be shot;
the multi-directional illumination light source comprises flat light sources arranged annularly at different angles in the inner cavity of the system, an annular light source carried on the lens of the industrial camera, and a backlight light source arranged below the product to be detected;
and the computer is used for controlling the action of the displacement mechanism and the multi-directional illumination light source.
The data acquisition comprises the following steps:
(1) The product 4 is placed on the surface of the backlight light source 3, and the computer 1 then issues multi-axis motion commands; the horizontal motion axis X 5 and the horizontal motion axis Y 6 receive the commands and drive the product 4 in random displacement within the plane, while the lifting motion axis 7, the tilting motion axis 8 and the pitching motion axis 9 receive the commands and drive the industrial camera 10 through lifting, tilting and pitching movements in space;
(2) When the industrial camera 10 reaches a given spatial position, the computer 1 issues light-source control commands that switch on and off the flat light sources 2 arranged annularly at different positions in the inner cavity of the system, the annular light source 11 carried on the camera, and the backlight light source 3 under the product 4; the focusing position of the industrial camera 10 is then adjusted to collect an image of the product 4 in this state. Over a period of time the camera tracks the position of the product and shoots it in space under these varied conditions, simulating a wide range of actual shooting situations and yielding a large and richly varied data set.
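As a concrete illustration of steps (1)-(2), the loop below sketches the control flow in Python. The `Stage` and `Lights` classes, their method names, and all numeric ranges are placeholders invented for this sketch; the patent does not specify a software interface, and image capture itself is stubbed out.

```python
import random

class Stage:
    """Stand-in for the five-axis displacement mechanism (X, Y, lift, tilt, pitch)."""
    def __init__(self):
        self.pose = {"x": 0.0, "y": 0.0, "z": 0.0, "tilt": 0.0, "pitch": 0.0}

    def move(self, **axes):
        self.pose.update(axes)

class Lights:
    """Stand-in for the annular flat panels, camera ring light and backlight."""
    def __init__(self, n_panels=8):
        self.state = {f"panel{i}": False for i in range(n_panels)}
        self.state.update(ring=False, backlight=True)

    def randomize(self, rng):
        # Random on/off pattern across all light sources, as in step (2).
        for key in self.state:
            self.state[key] = rng.random() < 0.5

def acquire_dataset(n_shots, seed=0):
    rng = random.Random(seed)
    stage, lights = Stage(), Lights()
    shots = []
    for _ in range(n_shots):
        # Step (1): random in-plane product displacement plus camera
        # lift / tilt / pitch (ranges are illustrative, in mm / degrees).
        stage.move(x=rng.uniform(-50, 50), y=rng.uniform(-50, 50),
                   z=rng.uniform(100, 300),
                   tilt=rng.uniform(-30, 30), pitch=rng.uniform(-30, 30))
        # Step (2): random lighting, then capture (capture is stubbed out here).
        lights.randomize(rng)
        shots.append((dict(stage.pose), dict(lights.state)))
    return shots

dataset = acquire_dataset(100)
```

Seeding the random generator makes a run reproducible, which is convenient when the same pose/lighting schedule must be replayed against several products.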
The method by which the industrial camera tracks the position of the product is as follows:
The industrial camera 10 moves from the initial position N(0, 0, z0) to the point Q(0, 0, z1), while the product 4 moves from the initial position O(0, 0, 0) to the point P(x1, y1, 0). To keep the product 4 within the field of view of the industrial camera 10, the camera must rotate about the Y axis (direction A) and about the X axis (direction B), so the rotation angles in directions A and B must be calculated. The industrial camera 10 first rotates about the Y axis by an angle β, with rotation matrix

R_Y(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]

and then rotates about the X axis by an angle α to track the product 4, with rotation matrix

R_X(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]

According to the two-point distance formula for space vectors and the principle of rigid-body rotation transformation, the angles α and β are obtained as:

α = arctan(y1 / z1)

β = −arctan(x1 / √(y1² + z1²))
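The two camera rotations use the standard rotation matrices about the Y and X axes; since the original equation images are not reproduced in this text, the short NumPy sketch below states those standard forms and verifies that they are proper rotations (orthonormal, determinant +1):

```python
import numpy as np

# Standard rotation matrices about the Y and X axes, as referenced in the
# description above (the patent's own equation images are not shown here).

def R_y(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def R_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

# Proper rotations: orthonormal with determinant +1.
for R in (R_y(0.3), R_x(-0.7)):
    assert np.allclose(R @ R.T, np.eye(3))
    assert np.isclose(np.linalg.det(R), 1.0)
```

The sign convention (which direction counts as a positive angle) follows the usual right-hand rule; the patent's figures would fix the convention actually used by the hardware.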
Owing to the adoption of the above technical scheme, the invention has the following advantages:
(1) Through the displacement mechanism, the invention can shoot different positions and different viewing angles of the product surface.
(2) With the multi-directional illumination light sources, the invention can simulate the uneven illumination encountered in actual shooting.
(3) Through the cooperation of the displacement mechanism and the multi-directional illumination light sources, light sources in multiple spatial directions can be lit at random and different spatial positions can be shot at random, so that a product-surface information data set can be obtained comprehensively.
(4) The invention automates the image acquisition process, acquires images quickly, and can construct a rich and massive data set in a short time.
Drawings
FIG. 1 is a system diagram of the data acquisition system for image deep learning of the present invention.
Fig. 2 is a flow chart of shooting with the data acquisition system for image deep learning of the present invention.
Fig. 3 is a schematic diagram of the industrial camera tracking the position of the product according to the present invention.
The reference numbers in the figures are: 1 computer; 2 flat light source; 3 backlight light source; 4 product; 5 horizontal motion axis X; 6 horizontal motion axis Y; 7 lifting motion axis; 8 tilting motion axis; 9 pitching motion axis; 10 industrial camera; 11 annular light source.
Detailed Description
The invention is described below with reference to the figures and examples.
As shown in fig. 1, the data acquisition system for image deep learning provided by the present invention mainly comprises an industrial camera 10, a displacement mechanism, a multi-directional illumination light source and a computer. The industrial camera 10 acquires surface images of products under different poses and different illumination conditions to build a data set. The displacement mechanism provides five degrees of freedom of movement: two horizontal translations, lifting, tilting and pitching. The horizontal motion axes carry the product for random movement in the plane, while the lifting, tilting and pitching motion axes carry the industrial camera through lifting, tilting and pitching movements, so that different spatial angles of the product surface can be shot. The multi-directional illumination light source mainly comprises the flat light sources 2 arranged annularly at different angles in the system cavity, the annular light source 11 carried on the camera lens, and the backlight light source 3 arranged below the product.
The product image shooting process is shown in fig. 2, and the specific acquisition procedure is as follows. The product 4 is first placed on the surface of the backlight light source 3, and the computer 1 then issues multi-axis motion commands; on receiving them, the horizontal motion axis X 5 and the horizontal motion axis Y 6 drive the product 4 in random displacement within the plane, while the lifting motion axis 7, the tilting motion axis 8 and the pitching motion axis 9 drive the industrial camera 10 through lifting, tilting and pitching movements in space. When the industrial camera 10 reaches a given spatial position, the computer 1 issues light-source control commands; the flat light sources 2 at different orientations annularly arranged in the system cavity, the annular light source 11 carried on the camera, and the backlight light source 3 below the product 4 are then switched on and off at random, after which the focusing position of the industrial camera 10 is adjusted by an automatic focusing algorithm and an image is collected in this state. Shooting at random spatial configurations over a period of time comprehensively simulates a variety of actual shooting conditions and finally yields a large and richly varied data set.
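The automatic focusing algorithm is not specified in the patent. One common contrast-based choice, sketched here as an assumption rather than as the patent's own method, scores each candidate focus position by the variance of the image's Laplacian response and keeps the sharpest:

```python
import numpy as np

# Contrast-based focus metric: variance of the 3x3 Laplacian response.
# Higher score = sharper image. Plain NumPy, no camera hardware involved.

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def focus_score(img):
    """Variance of the 3x3 Laplacian of a 2-D grayscale array."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # direct 3x3 convolution (valid region only)
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def best_focus(images):
    """Index of the sharpest image in a focus sweep."""
    return max(range(len(images)), key=lambda i: focus_score(images[i]))
```

In a focus sweep, the camera would capture one frame per candidate lens position and `best_focus` would select the position whose frame scores highest.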
Fig. 3 is a schematic diagram of the industrial camera tracking the position of the product according to the present invention. The industrial camera 10 moves from the initial position N(0, 0, z0) to the point Q(0, 0, z1), while the product 4 moves from the initial position O(0, 0, 0) to the point P(x1, y1, 0). To keep the product 4 within the field of view of the industrial camera 10, the camera must make rotational motions in directions A (about the Y axis) and B (about the X axis), so the rotation angles in directions A and B must be calculated. Suppose the industrial camera 10 first rotates about the Y axis by an angle β, with rotation matrix

R_Y(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]

and then rotates about the X axis by an angle α so that it just tracks the product 4, with rotation matrix

R_X(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]

According to the two-point distance formula for space vectors and the principle of rigid-body rotation transformation, the specific calculation is:

|QP| = √(x1² + y1² + z1²)

QO″ = R_X(α) · R_Y(β) · QO

Solving the above for α and β gives:

α = arctan(y1 / z1)

β = −arctan(x1 / √(y1² + z1²))
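The tracking geometry can be checked numerically. The sketch below (in Python with NumPy, our choice of language) applies the relation QO″ = R_X(α) · R_Y(β) · QO from the description, using the standard rotation matrices about the X and Y axes and angle formulas derived from that relation; the rotated view vector comes out parallel to QP, i.e. the camera points at the product. The signs of α and β depend on the rotation-direction convention, which the patent's figures would fix.

```python
import numpy as np

# Camera at Q(0, 0, z1), product at P(x1, y1, 0). Rotating the view vector
# QO about Y by beta, then about X by alpha, must leave it pointing at P.

def R_y(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def R_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def tracking_angles(x1, y1, z1):
    # Angle formulas derived from QO'' = R_X * R_Y * QO being parallel to QP;
    # signs follow the rotation conventions of R_x / R_y above.
    alpha = np.arctan2(y1, z1)                # rotation about X (direction B)
    beta = -np.arctan2(x1, np.hypot(y1, z1))  # rotation about Y (direction A)
    return alpha, beta

def rotated_view(x1, y1, z1):
    alpha, beta = tracking_angles(x1, y1, z1)
    QO = np.array([0.0, 0.0, -z1])            # initial view vector, Q toward O
    return R_x(alpha) @ R_y(beta) @ QO        # QO''

x1, y1, z1 = 30.0, 20.0, 200.0
QP = np.array([x1, y1, -z1])
v = rotated_view(x1, y1, z1)
# v is parallel to QP (cross product ~ 0), so the camera sees the product.
```

Because rotations preserve length, |QO″| stays equal to z1 while its direction is steered onto QP; only the direction matters for keeping the product in the field of view.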

Claims (2)

1. a data acquisition method for deep learning of images adopts a data acquisition system which comprises an industrial camera arranged in an inner cavity of the system, a displacement mechanism, a multi-directional illumination light source and a computer, and is characterized in that,
the industrial camera is used for acquiring surface images of products under different poses and different illumination conditions to acquire a data set;
the displacement mechanism comprises a horizontal motion shaft system, a lifting motion shaft, a tilting motion shaft and a pitching motion shaft, wherein the horizontal motion shaft system consists of a horizontal motion shaft X and a horizontal motion shaft Y and is used for driving the product to be detected to move in a plane; the lifting motion shaft, the tilting motion shaft and the pitching motion shaft are connected with the industrial camera and carry it through lifting, tilting or pitching motion, so that different spatial angles of the product surface can be shot;
the multi-directional illumination light source comprises flat light sources arranged annularly at different angles in the inner cavity of the system, an annular light source carried on the lens of the industrial camera, and a backlight light source arranged below the product to be detected;
the computer is used for controlling the action of the displacement mechanism and the multi-directional lighting light source;
the data acquisition comprises the following steps:
(1) placing a product on the surface of a backlight light source, then issuing commands for controlling multi-axis motion through a computer, the two horizontal motion axes receiving the commands and driving the product in random displacement within the plane, and the lifting motion axis, the tilting motion axis and the pitching motion axis receiving the commands and driving the industrial camera through lifting, tilting and pitching movements in space;
(2) when the industrial camera moves to a given position in space, the computer issuing commands for controlling the illumination, switching on and off the flat light sources at different positions annularly arranged in the inner cavity of the system, the annular light source carried on the camera and the backlight light source arranged under the product, then collecting the image of the product in this state by adjusting the focusing position of the industrial camera, the industrial camera tracking the position of the product within a period of time to shoot in space under different conditions, simulating a variety of actual shooting conditions, and obtaining a large and richly varied data set.
2. The method of claim 1, wherein the industrial camera tracks the position of the product as follows: the industrial camera moves from the initial position N(0, 0, z0) to the point Q(0, 0, z1), while the product moves from the initial position O(0, 0, 0) to the point P(x1, y1, 0); to keep the product within the field of view of the industrial camera, the camera must rotate about the Y axis (direction A) and about the X axis (direction B), so the rotation angles in directions A and B must be calculated; the industrial camera first rotates about the Y axis by an angle β, with rotation matrix

R_Y(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]

and then rotates about the X axis by an angle α to track the product, with rotation matrix

R_X(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]

According to the two-point distance formula for space vectors and the principle of rigid-body rotation transformation, the angles α and β are obtained as:

α = arctan(y1 / z1)

β = −arctan(x1 / √(y1² + z1²))
CN202010086353.3A 2020-02-11 2020-02-11 Data acquisition method for image deep learning Active CN111366072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010086353.3A CN111366072B (en) 2020-02-11 2020-02-11 Data acquisition method for image deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010086353.3A CN111366072B (en) 2020-02-11 2020-02-11 Data acquisition method for image deep learning

Publications (2)

Publication Number Publication Date
CN111366072A CN111366072A (en) 2020-07-03
CN111366072B true CN111366072B (en) 2021-05-14

Family

ID=71207956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010086353.3A Active CN111366072B (en) 2020-02-11 2020-02-11 Data acquisition method for image deep learning

Country Status (1)

Country Link
CN (1) CN111366072B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112113098A (en) * 2020-09-09 2020-12-22 中国电力科学研究院有限公司 Multi-angle shooting camera support
CN112488093B (en) * 2020-11-25 2023-03-31 西北工业大学 Part identification data set collection system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104697901A (en) * 2013-12-05 2015-06-10 上海梭伦信息科技有限公司 Intrinsic contact angle tester and testing method
JP2015163843A (en) * 2014-02-28 2015-09-10 株式会社エアロ Rivet inspection device for aircraft
CN205560173U (en) * 2016-04-22 2016-09-07 深圳市奥斯卡科技有限公司 Two-axis controlled liftable balanced gimbal
CN207094131U (en) * 2017-06-23 2018-03-13 浙江机电职业技术学院 Shooting device facilitating multi-angle shooting
CN208239091U (en) * 2018-05-25 2018-12-14 上海复瞻智能科技有限公司 Five-axis optical platform for HUD optical detection
CN109029292A (en) * 2018-08-21 2018-12-18 孙傲 Non-destructive testing device and method for the three-dimensional topography of container inner surfaces
CN208381661U (en) * 2018-03-27 2019-01-15 宁波勤邦新材料科技有限公司 Camera lifting device for defect detection on a PET production line
CN110044926A (en) * 2019-04-22 2019-07-23 天津大学 Lens defect detection device
CN110687127A (en) * 2019-10-31 2020-01-14 浙江首席智能技术有限公司 Leather surface defect detection equipment based on machine vision and deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11215999B2 (en) * 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deep learning algorithm and its applications in optics; Zhou Hongqiang et al.; Infrared and Laser Engineering; 2019-12-31; Vol. 48, No. 12; full text *

Also Published As

Publication number Publication date
CN111366072A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111366072B (en) Data acquisition method for image deep learning
CN110609037B (en) Product defect detection system and method
CN109800864B (en) Robot active learning method based on image input
CN107471218B (en) Binocular vision-based hand-eye coordination method for double-arm robot
WO2019028075A1 (en) Intelligent robots
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
CN113276106B (en) Climbing robot space positioning method and space positioning system
CN105835036B (en) A kind of parallel connected bionic eye device and its control method
Suzuki et al. Visual servoing to catch fish using global/local GA search
CN100393486C (en) Method and apparatus for quick tracing based on object surface color
CN110553650B (en) Mobile robot repositioning method based on small sample learning
CN113963044A (en) RGBD camera-based intelligent loading method and system for cargo box
CN115903541A (en) Visual algorithm simulation data set generation and verification method based on twin scene
CN110089350A (en) A kind of Mushroom Picking Robot system and picking method
CN113580149A (en) Unordered aliasing workpiece grabbing method and system based on key point prediction network
Inoue et al. Transfer learning from synthetic to real images using variational autoencoders for robotic applications
CN114131603B (en) Deep reinforcement learning robot grabbing method based on perception enhancement and scene migration
CN112164112A (en) Method and device for acquiring pose information of mechanical arm
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN111294514A (en) Data set acquisition system for image deep learning
CN107330913B (en) Intelligent robot marionette performance system based on autonomous learning script
Sanchez-Lopez et al. A real-time 3D pose based visual servoing implementation for an autonomous mobile robot manipulator
WO2022148419A1 (en) Quadrupedal robot positioning apparatus and quadrupedal robot formation
Walck et al. Automatic observation for 3d reconstruction of unknown objects using visual servoing
CN114285979A (en) Micro-distance photographing equipment and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant