CN109034748B - Method for building mould disassembly and assembly engineering training system based on AR technology


Info

Publication number
CN109034748B
CN109034748B (application CN201810904293.4A)
Authority
CN
China
Prior art keywords
image
network camera
coordinate system
assembly
disassembly
Prior art date
Legal status
Active
Application number
CN201810904293.4A
Other languages
Chinese (zh)
Other versions
CN109034748A (en)
Inventor
潘旭东
王佳钰
吕建峰
韩强辉
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201810904293.4A
Publication of CN109034748A
Application granted
Publication of CN109034748B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/10: Office automation; Time management
    • G06Q10/103: Workflow collaboration or project management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00: Arrangements for software engineering
    • G06F8/30: Creation or generation of source code
    • G06F8/38: Creation or generation of source code for implementing user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a method for building a mould disassembly and assembly engineering training system based on AR technology, and relates to the field of mechanical product disassembly and assembly training. The invention aims to solve the problem that virtual reality technology cannot completely depict a real scene. AR technology is used to superimpose virtual assembly guidance information onto the actual assembly environment, and a scene combining the virtual and the real is constructed with a network camera and a projector: the projector projects the guidance information while the camera acquires the actual operation image in real time and matches it against a matching template image, so the actual scene is depicted in full. The guidance information is displayed in the user's field of view through the display equipment, which guides the actual operation well and improves the trainee's learning autonomy. The ORB operator is used to identify image feature points quickly, giving low delay and a smooth-running system; the ORB operator runs about 10 times faster than the SURF operator and about 40 times faster than the SIFT operator, which markedly accelerates the program flow. The invention is used in the field of mechanical product disassembly and assembly training.

Description

Method for building mould disassembly and assembly engineering training system based on AR technology
Technical Field
The invention belongs to the field of mechanical product disassembly and assembly training, and particularly relates to a method for building a mould disassembly and assembly engineering training system based on AR technology.
Background
In the traditional teaching of mechanical product disassembly and assembly training, the student faces a real assembly environment. The traditional process therefore has a strong sense of reality, but it does not effectively stimulate the students' interest in learning. Applying virtual reality technology to virtual simulation teaching raises the utilization of the training site, saves experiment cost and promotes the students' interest in learning, with very obvious advantages. For example, using virtual reality technology, universities in Shanghai and California, among others, have completed a Unity3D-based cylindrical gear reducer disassembly and assembly teaching system on Zspace desktop virtual-reality equipment, improving the interactivity and immersion of the disassembly and assembly process. Although virtual reality technology has made some progress in raising students' interest in learning, its guiding effect on actual operation is very limited, and it cannot completely depict the real scene, so its realism is limited.
Disclosure of Invention
The invention aims to solve the problem that the virtual reality technology cannot completely depict a real scene.
The technical scheme adopted by the invention for solving the technical problems is as follows:
Step one: design the guidance information for the mould disassembly and assembly process;
the guidance information includes: the part-model information required in each step of the disassembly and assembly process, an animation demonstration of the process, and a model display after each step is finished;
Step two: build the graphical interface of the mould disassembly and assembly engineering training system using the C++ MFC class library in VS2015, adding Button, Slider and Tab controls according to the design requirements;
Step three: download OpenCV 3.4.1, the HCNet SDK and the ZBar SDK, where OpenCV 3.4.1 provides the computer-vision functions, the HCNet SDK drives the network camera, and the ZBar SDK provides barcode recognition; import them into VS2015 and configure the development environment;
Step four: call the network camera, calibrate it and correct its distortion;
Step five: construct the positional relationship between the network camera and the projector, generate Code39 barcodes with check characters using the developer-tools function of Excel, and recognize the barcodes with ZBar;
Step six: using the still-capture function of the network camera, shoot a matching template image of the parts taken from the part box in each disassembly and assembly step;
Step seven: design the program logic according to the complete disassembly and assembly process and implement the reading of the guidance information and the matching template images; meanwhile, project the guidance information with the projector; following the guidance information, the network camera acquires an actual operation image of each disassembly and assembly step in real time, the feature points of the actual operation image are matched against those of the matching template image, and the successfully matched feature points are screened;
Step eight: record the number of screened feature-point match pairs; if the number of matches between the actual operation image acquired by the network camera and the matching template image reaches 70% of the template image's self-match count, the match is deemed successful, and the program logic designed in step seven continues to execute until the whole disassembly and assembly process is completed;
if the match is not deemed successful, the part positions are adjusted and feature-point matching between the actual operation image and the matching template image continues until it is deemed successful; the program logic designed in step seven then continues until the whole disassembly and assembly process is completed.
The invention has the following beneficial effects: AR technology is used to superimpose virtual assembly guidance information onto the actual assembly environment, and a network camera and a projector are used to construct a scene combining the virtual and the real. The projector projects the guidance information while the camera acquires the actual operation picture in real time and matches it against the matching template image, so the actual scene is depicted in full; the guidance information is displayed in the user's field of view through the display equipment, which guides the actual operation well and also improves the trainee's learning autonomy.
Drawings
FIG. 1 is a flow chart of the method of the invention;
FIG. 2 shows the corner points drawn on the original image after inner-corner extraction;
FIG. 3 shows the relationship between the pixel coordinate system and the image coordinate system;
FIG. 4 shows the relationship between the camera coordinate system and the image coordinate system,
where P is any point in the camera coordinate system and p is its projection onto the image coordinate system;
FIG. 5 is an image after distortion correction;
FIG. 6 shows the feature-point extraction result for the lower template;
FIG. 7 shows the feature-point matching result between a checkerboard image acquired by the network camera and the matching template image;
FIG. 8 shows the feature-point matching result after screening;
FIG. 9 is a schematic view of the die assembly process.
Detailed Description
The technical solution of the invention is further described below with reference to the accompanying drawings, but is not limited thereto; any modification or equivalent replacement that does not depart from the spirit and scope of the technical solution of the invention is covered by its protection scope.
The first embodiment: this embodiment is described with reference to fig. 1. The method for building the mould disassembly and assembly engineering training system based on AR technology comprises the following steps:
Step one: design the guidance information for the mould disassembly and assembly process;
the guidance information includes: the part-model information required in each step of the disassembly and assembly process, an animation demonstration of the process, and a model display after each step is finished;
Step two: build the graphical interface of the mould disassembly and assembly engineering training system using the C++ MFC class library in VS2015, adding Button, Slider and Tab controls according to the design requirements;
Step three: download OpenCV 3.4.1, the HCNet SDK and the ZBar SDK, where OpenCV 3.4.1 provides the computer-vision functions, the HCNet SDK drives the network camera, and the ZBar SDK provides barcode recognition; import them into VS2015 and configure the development environment;
Step four: call the network camera, calibrate it and correct its distortion;
Step five: construct the positional relationship between the network camera and the projector, generate Code39 barcodes with check characters using the developer-tools function of Excel, and recognize the barcodes with ZBar;
Step six: using the still-capture function of the network camera, shoot a matching template image of the parts taken from the part box in each disassembly and assembly step;
Step seven: design the program logic according to the complete disassembly and assembly process and implement the reading of the guidance information and the matching template images; meanwhile, project the guidance information with the projector; following the guidance information, the network camera acquires an actual operation image of each disassembly and assembly step in real time, the feature points of the actual operation image are matched against those of the matching template image, and the successfully matched feature points are screened;
Step eight: record the number of screened feature-point match pairs; if the number of matches between the actual operation image acquired by the network camera and the matching template image reaches 70% of the template image's self-match count, the match is deemed successful, and the program logic designed in step seven continues to execute until the whole disassembly and assembly process is completed;
if the match is not deemed successful, the part positions are adjusted and feature-point matching between the actual operation image and the matching template image continues until it is deemed successful; the program logic designed in step seven then continues until the whole disassembly and assembly process is completed.
In step seven, after the actual operation image of the first disassembly and assembly step has been matched against the matching template image, whether that step has matched successfully is judged from the number of matched feature points. If it has, the program designed in step seven continues with the next disassembly and assembly step; when that step is finished, the current actual operation image is shot and matched against its matching template image.
If the match is unsuccessful, the part positions are adjusted and the actual operation image acquired by the network camera continues to be matched against the matching template image; once the first step has matched successfully, the program designed in step seven resumes and the next step proceeds. In other words, each disassembly and assembly step must pass feature-point matching before the next step can begin, until the whole process is completed.
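For concreteness, the step-eight decision can be sketched in a few lines of OpenCV-style C++; the function name and the variables goodMatches and templateSelfMatches are illustrative assumptions, not identifiers from the patent's own code:

#include <opencv2/opencv.hpp>
#include <vector>

// Minimal sketch of the step-eight criterion: goodMatches holds the screened
// matches between the live operation image and the matching template, and
// templateSelfMatches holds the template matched against itself.
bool stepMatchedSuccessfully(const std::vector<cv::DMatch>& goodMatches,
                             const std::vector<cv::DMatch>& templateSelfMatches)
{
    // Deemed successful when the screened match count reaches 70% of the
    // template's self-match count, as required in step eight.
    return goodMatches.size() >= 0.7 * templateSelfMatches.size();
}

On success the program logic of step seven advances to the next disassembly and assembly step; on failure the operator adjusts the part positions and the matching is retried against the same template.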
To solve the problems that the traditional disassembly and assembly training process and virtual-reality teaching systems face in actual teaching, the invention uses augmented reality to render a scene combining the virtual and the real, and completes the guidance of disassembly and assembly operations within that scene. It thus combines the strong sense of reality of traditional training with the good real-time interactivity of virtual-reality teaching systems, and has good and wide application prospects.
Recognition and matching based on natural features is used to identify parts during disassembly and assembly. Compared with the mature and widely adopted marker-based recognition methods, it requires no additional feature information, is less affected by the assembly environment, and offers high stability and flexibility, significantly reducing the workload of the preparation process.
The second embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the first embodiment. The specific process of designing the guidance information for the mould disassembly and assembly process in step one is as follows:
Step one-one: establish the part models required in the mould disassembly and assembly process: the mould used in the mould disassembly and assembly engineering training system is a multi-station progressive die, which performs punching and bending processes in sequence. The actual multi-station progressive die is disassembled, the exact dimensions of the resulting parts are obtained by measurement, and three-dimensional models of the parts are built using the two- and three-dimensional modelling functions of the part environment of the three-dimensional visual solid-modelling software;
the part model of the training system for the die disassembly and assembly engineering comprises a lower template, a cover plate, a female die fixing plate, a screw (model GB/T70.1M 5 multiplied by 9), a pin (model GB/T199.25 multiplied by 30), a punch, a screw (model GB/T70.1M 6 multiplied by 20), a screw (model GB/T70.1M 5 multiplied by 20), a red spring and a spring (model GB/T70.1M 5 multiplied by 20)
Figure BDA0001760226220000041
Quick ejection piece, plugging, knife edge, bending punch, screw (model GB/T70.1M 5 x 14), screw (model GB/T70.1M 6 x 25), pin (model GB/T199.25 x 26), spring (model GB/T199.25 x 26)
Figure BDA0001760226220000042
) Upper die plate, stripper plate, thimble and screw M8. Each part cartridge contains a part model.
Step two, assembling a part model of a mould disassembly and assembly engineering training system: fixing the relative position between the part models according to the position requirement in the assembly process by applying the constraint function of three-dimensional visual entity simulation software to finish the assembly of the multi-station progressive die;
step three, assembling video rendering output: and (3) guiding the assembled multi-station progressive die into an inventory studio animation production environment of three-dimensional visual entity simulation software, driving constraint to control the movement of the part model in an animation time axis, and controlling the display and the hiding of the part model by adjusting transparency to complete the assembly video rendering process.
The three-dimensional visual solid-modelling software in this embodiment can be Autodesk Inventor Professional (AIP), SolidWorks, CATIA, UG and the like.
The third embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the second embodiment. In step four the network camera is called, calibrated and distortion-corrected; the specific process is as follows:
Step four-one: initialize the SDK of the network camera and set the login parameters to register it; the login parameters include the camera's IP address, user name and password. Start the real-time preview to complete the camera call-up;
A Hikvision (HaiKangWeiShi) H.265 4-megapixel (400W) dome network camera is used, and the camera call-up is completed through secondary development with the Hikvision SDK. According to the development manual, the preview flow is: start and initialize the SDK, register the device with the user credentials, start the preview, receive the real-time data callback, stop the preview, unregister the device, release the SDK, end;
Step four-two: using the camera's pan-tilt (PTZ) control functions, adjust the speed with a slider and adjust the pan-tilt position, focal length, focus and aperture size; set a preset point so that the network camera is at the adjusted position on every later call;
The pan-tilt control flow is: start and initialize the SDK, register the device, control the pan-tilt, stop the preview, unregister the device, release the SDK, end;
Step four-three: place a calibration board (a commercially purchased board gives good results) on the working platform in different poses, shoot 15-20 checkerboard images with the camera's still-capture function, and save them to the hard disk;
The still-capture flow is: start and initialize the SDK, register the device, start the preview, receive the real-time data callback, capture stills during the preview, stop the preview, unregister the device, release the SDK, end;
Step four-four: read the checkerboard images shot by the network camera, coarsely extract the inner corner points of the checkerboard, then refine the coarsely extracted corner points to sub-pixel accuracy, completing inner-corner extraction;
Step four-five: calibrate the network camera and correct its distortion. The specific process is as follows:
Because the calibration board lies on the horizontal working platform and the size of its checkerboard is known, a spatial rectangular coordinate system can be established with one checkerboard vertex as the origin and two checkerboard edges as coordinate axes, which determines the actual three-dimensional coordinates of the planar checkerboard. From the extracted corner points and these three-dimensional coordinates, the camera is calibrated using Zhang Zhengyou's calibration method, the distortion coefficients are solved, and the picture is corrected.
The world coordinate system describes the spatial positions of the network camera and the measured object. In the world coordinate system, a point with coordinates $(x_w, y_w, z_w)$ has homogeneous coordinates $(x_w, y_w, z_w, 1)^T$, where $(\cdot)^T$ denotes transposition.
The camera coordinate system is established with the optical centre of the camera lens as origin $o'$, the optical axis of the lens as the $z$ axis, and lines parallel to the two sides of the image plane as the $x$ and $y$ axes. A world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(x_c, y_c, z_c, 1)^T$ in the camera coordinate system, where $(x_c, y_c, z_c)$ are its coordinates in that system.
The conversion between the world coordinate system and the camera coordinate system is given by equation (1):

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{1}$$

where $R$ is a $3\times3$ rotation matrix and $t$ is a $3\times1$ translation vector.
The pixel coordinate system is established with the upper-left corner of the checkerboard image as origin $o$ and lines parallel to the two sides of the image plane as the $u$ and $v$ axes; it reflects the arrangement of the pixels on the camera's photosensitive element, and its axes are in units of pixels.
A world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(u, v, 1)^T$ in the pixel coordinate system, where $(u, v)$ are its coordinates in the pixel coordinate system.
The image coordinate system takes the centre of the checkerboard image, i.e. the intersection of the camera's optical axis with the image plane, as origin $O$, with the $X$ and $Y$ axes parallel to the $u$ and $v$ axes respectively. (Its axes are typically in millimetres, which makes coordinate transformations convenient.)
A world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(X, Y, 1)^T$ in the image coordinate system, where $(X, Y)$ are its coordinates in the image coordinate system.
The conversion between the pixel coordinate system and the image coordinate system is given by equation (2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{2}$$

where $dX$ and $dY$ are the physical sizes of a pixel along the $X$ and $Y$ axes, and $u_0$ and $v_0$ are the abscissa and ordinate of the origin of the image coordinate system.
The conversion between the camera coordinate system and the image coordinate system is given by equation (3):

$$s\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} \tag{3}$$

where $s$ is a scale factor and $f$ is the effective focal length of the network camera.
The conversion between the world coordinate system and the pixel coordinate system is given by equation (4):

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{4}$$

where

$$M_1 = \begin{bmatrix} f/dX & 0 & u_0 & 0 \\ 0 & f/dY & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

is the intrinsic (internal reference) matrix of the network camera.
the ideal camera model is a pinhole model, but the actual lens does not fit this assumption. In addition, the structure, manufacture, installation, process and other factors of the camera also cause errors, so that the camera usually has various nonlinear distortions, and in order to make the camera calibration result more accurate, the nonlinear distortion of the camera should be taken into account when the camera calibration is performed, so as to correct the ideal projection model.
Distortion of the webcam includes radial nonlinear distortion and tangential nonlinear distortion, wherein: the model for radial nonlinear distortion is:
x2=x(1+k1r2+k2r4+k3r6) (5)
y2=y(1+k1r2+k2r4+k3r6) (6)
wherein: (x, y) is the normal position coordinates of the network camera, (x)2,y2) Is the radial nonlinear distortion position coordinate of the network camera; r is the distance from the normal position coordinates of the network camera to the optical center of the lens, i.e.
Figure BDA0001760226220000074
k1Is a 2 nd order radial distortion coefficient, k2Is a 4 th order radial distortion coefficient, k3Is a 6 th order radial distortion coefficient;
the model for tangential nonlinear distortion is:
x1=x+[2p1y+p2(r2+2x2)] (7)
y1=y+[2p2x+p1(r2+2y2)] (8)
wherein: (x, y) is the normal position coordinates of the network camera, (x)1,y1) Is the tangential nonlinear distortion position coordinate of the network camera; p is a radical of1Is the vertical tangential distortion coefficient, p2Is a horizontal tangential distortion coefficient;
the nonlinear distortion formula of the webcam is:
x0=x(1+k1r2+k2r4+k3r6)+[2p1y+p2(r2+2x2)]
y0=y(1+k1r2+k2r4+k3r6)+[2p2x+p1(r2+2y2)]
the distortion matrix is composed ofk1,k2,k3,p1And p2Five-dimensional vector (k) of1,k2,k3,p1,p2);
Solving the intrinsic matrix, distortion matrix, rotation matrix and translation vector of the network camera completes its calibration and distortion correction.
The main routine of this embodiment (Feature2D.cpp) is reproduced as images in the original publication.
The fourth embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the third embodiment. In step five, the positional relationship between the network camera and the projector is constructed, Code39 barcodes with check characters are generated using the developer-tools function of Excel, and the barcodes are recognized with ZBar; the specific process is as follows:
Step five-one: using OpenCV 3.4.1, generate a black image with the same resolution as the computer display, set a recognition area within it and fill the inside of the area white; project the whole black image onto the working platform with the projector, and record the position of the recognition area within the image;
Step five-two: shoot the working platform after the step five-one projection using the camera's still-capture function, and correct the captured image with the camera's intrinsic matrix and distortion matrix;
Step five-three: display the corrected image with OpenCV 3.4.1, set a mouse callback function, and manually frame the position of the projector's image on the corrected image, completing the construction of the positional relationship between the camera and the projector;
Step five-four: generate the 28 Code39 barcodes with check characters, AR01 through AR28, using the developer-tools function of Excel;
Step five-five: paste the barcodes onto the part boxes, frame the barcode position areas, and recognize the framed areas with ZBar, as sketched below.
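A compact sketch of steps five-one and five-five, assuming OpenCV 3.4.1 and the ZBar C++ API; the screen resolution, rectangles, window name and file names are illustrative, not values from the patent:

#include <opencv2/opencv.hpp>
#include <zbar.h>
#include <iostream>

int main()
{
    // Step five-one: a black full-screen image whose recognition area is white.
    cv::Mat screen = cv::Mat::zeros(1080, 1920, CV_8UC3);
    cv::Rect recognitionArea(760, 340, 400, 400);
    screen(recognitionArea).setTo(cv::Scalar(255, 255, 255));
    cv::namedWindow("projector", cv::WINDOW_NORMAL);
    cv::setWindowProperty("projector", cv::WND_PROP_FULLSCREEN, cv::WINDOW_FULLSCREEN);
    cv::imshow("projector", screen);   // shown full screen on the projector
    cv::waitKey(0);

    // Step five-five: recognize a Code39 barcode inside a framed region of a
    // corrected camera image (loaded from disk here for brevity).
    cv::Mat frame = cv::imread("corrected.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat roi = frame(cv::Rect(100, 100, 300, 120)).clone();

    zbar::ImageScanner scanner;
    scanner.set_config(zbar::ZBAR_CODE39, zbar::ZBAR_CFG_ENABLE, 1);
    zbar::Image zimg(roi.cols, roi.rows, "Y800", roi.data, roi.cols * roi.rows);
    if (scanner.scan(zimg) > 0)
        for (auto s = zimg.symbol_begin(); s != zimg.symbol_end(); ++s)
            std::cout << s->get_data() << std::endl;   // e.g. "AR01"
    return 0;
}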
The fifth embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the fourth embodiment. The specific process of generating the Code39 barcodes is as follows:
Step 1: click the 'Developer' tab and insert 'More Controls';
Step 2: in the 'Microsoft BarCode Control 14.0' dialog that pops up, select the Code39 style;
Step 3: right-click the barcode object, and in the pop-up shortcut menu click the 'Properties' command;
Step 4: enter A1 as the Linked Cell, and enter the characters AR01 in cell A1;
Step 5: click 'Design Mode' under the Developer tab to finish the design; a Code39 barcode with content AR01 is generated;
Step 6: repeat steps 1-5 to generate the barcodes AR01 through AR28, and print them.
The sixth embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the fifth embodiment. In step six, according to the recognition-area position set in step five, the parts used and obtained in each disassembly and assembly step are placed in the recognition area, an image is shot with the camera's still-capture function, and the recognition-area portion of the image is cropped out as that step's matching template image; a minimal sketch follows.
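The cropping step might look like this, reusing the recognition-area rectangle recorded in step five-one; the file paths and rectangle values are illustrative:

#include <opencv2/opencv.hpp>

int main()
{
    // Crop the recognition area out of a captured frame and save it as the
    // matching template image for the current disassembly and assembly step.
    cv::Mat frame = cv::imread("capture_step3.jpg");
    cv::Rect recognitionArea(760, 340, 400, 400);   // recorded in step five-one
    cv::imwrite("template_step3.jpg", frame(recognitionArea).clone());
    return 0;
}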
The seventh embodiment further defines the building method of the AR-technology-based mould disassembly and assembly engineering training system described in the sixth embodiment. In step seven, the program logic is designed according to the complete disassembly and assembly process and the reading of the guidance information and the matching template images is implemented; meanwhile the projector projects the guidance information, and following it the network camera acquires the actual operation image in real time, the image is matched against the matching template image by feature points, and the successfully matched feature points are screened. The specific process is as follows:
Step seven-one: design the program logic according to the complete disassembly and assembly process, and associate the guidance information of each step with its matching template image;
Step seven-two: project the guidance information with the projector, and acquire images of the actual operation process in real time with the network camera;
Step seven-three: crop the recognition-area portion of the image obtained in step seven-two and extract its feature points with the ORB operator;
Step seven-four: match the feature points of the image cropped in step seven-three against those of the matching template image using the FLANN matcher;
Step seven-five: screen the successfully matched feature points using Lowe's ratio test.
After the projector projects the guidance information (i.e., the disassembly and assembly animation designed in step one), the operator performs the actual operation following the animation. During the operation, the network camera acquires the actual operation image in real time, the recognition-area portion is cropped from it, and the cropped image is then matched against the matching template image by feature points.
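Steps seven-three to seven-five can be sketched as follows. Because ORB produces binary descriptors, the FLANN matcher here is built over an LSH index, a detail the patent does not spell out; the 0.7 ratio threshold and the function name are likewise illustrative assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

// ORB extraction, FLANN matching and Lowe-style ratio screening between a
// cropped live image and the matching template image (both greyscale).
std::vector<cv::DMatch> matchToTemplate(const cv::Mat& liveGray,
                                        const cv::Mat& templGray)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> kpLive, kpTempl;
    cv::Mat descLive, descTempl;
    orb->detectAndCompute(liveGray,  cv::Mat(), kpLive,  descLive);
    orb->detectAndCompute(templGray, cv::Mat(), kpTempl, descTempl);

    // LSH index parameters: 12 hash tables, 20-bit keys, multi-probe level 2.
    cv::FlannBasedMatcher matcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descTempl, descLive, knn, 2);   // two nearest neighbours each

    std::vector<cv::DMatch> good;
    for (const auto& pair : knn)                     // ratio screening
        if (pair.size() == 2 && pair[0].distance < 0.7f * pair[1].distance)
            good.push_back(pair[0]);
    return good;
}

The count of matches returned here is what feeds the 70% criterion of step eight.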
Examples
The key program code for the checkerboard inner-corner extraction in step four of the invention is as follows:
// extract the inner corner points of the checkerboard
findChessboardCorners(imageInput,board_size,image_points,CV_CALIB_CB_ADAPTIVE_THRESH|CV_CALIB_CB_NORMALIZE_IMAGE);
// refine the coarsely extracted inner corner points to sub-pixel accuracy
find4QuadCornerSubpix(view_gray,image_points,Size(5,5));
// save the sub-pixel corner points
image_points_seq.push_back(image_points);
Inner-corner extraction is complete; the image with the corner points drawn is shown in fig. 2.
The relationship between the pixel coordinate system and the image coordinate system in step four is shown in fig. 3.
Based on the pinhole imaging principle, the camera coordinate system and the image coordinate system are related by a simple perspective transformation, shown in fig. 4.
The key program code that solves the camera's intrinsic matrix, distortion matrix, rotation matrix and translation vector, completing calibration and distortion correction, is as follows:
// solve the camera intrinsic matrix, distortion matrix, rotation matrices and translation vectors
calibrateCamera(object_points_seq,image_points_seq,image_size,cameraMatrix,distCoeffs,rvecsMat,tvecsMat,CV_CALIB_FIX_K3);
// correct the image
undistort(camera,camera_clone,cameraMatrix,distCoeffs);
The corrected picture is shown in fig. 5.
The key program code for constructing the positional relationship between the network camera and the projector is as follows:
// create a window
cvNamedWindow("Capture image",WINDOW_NORMAL);
// display the corrected picture in the created window and set the mouse callback function
setMouseCallback("Capture image",on_MouseBarHandle,(void*)&scrImage);
The key program code for barcode recognition is as follows:
// initialize the scanner
ImageScanner scanner;
// set the scanning range
Image imageZbar(width,height,"Y800",raw,width*height);
// scan the barcode
scanner.scan(imageZbar);
// save the barcode content
Bar.push_back(barname);
// save the barcode position
BarPosition.push_back(g_rectangle);
The key program code that extracts ORB feature points and matches them against the matching template image is as follows. First the image captured by the camera and the image read from the folder are converted to greyscale:
cvtColor(captureImage,d_srcR,COLOR_BGR2GRAY);
cvtColor(trainImage,d_srcL,COLOR_BGR2GRAY);
// extract the feature points of both images
d_orb->detectAndCompute(d_srcR,Mat(),keyPoints_2,d_descriptorsR);
d_orb->detectAndCompute(d_srcL,Mat(),keyPoints_1,d_descriptorsL);
The lower-template feature-point extraction result is shown in fig. 6.
// match the feature points of the two images
d_matcher->match(d_descriptorsL,d_descriptorsR,matches);
The feature-point matching result is shown in fig. 7.
// screen the successfully matched feature points
if(matches[i].distance<=0.6*max_dist)
{
good_matches.push_back(matches[i]);
GoodMatchNumber++;
}
The feature-point matching result after screening is shown in fig. 8.
The assembly process for the parts of the multi-station progressive die is as follows:
1. The program determines the parts required for the current assembly step; the projector projects a marking light spot at the barcode position of the part box holding those parts, and projects the parts' matching template onto the working platform;
2. the parts are taken out of the part box and placed in the recognition area to await system recognition;
3. after recognition, the assembly video for this step is projected onto the working platform;
4. the assembly operation is performed following the video guidance;
5. the assembled parts are placed in the recognition area to await system recognition;
6. after recognition, operations 1-5 are repeated until the assembly of the parts is complete.
Fig. 9 is a schematic view of the die assembly process.

Claims (6)

1. A method for building a mould disassembly and assembly engineering training system based on AR technology, characterized by comprising the following steps:
Step one: design the guidance information for the mould disassembly and assembly process;
the guidance information includes: the part-model information required in each step of the disassembly and assembly process, an animation demonstration of the process, and a model display after each step is finished;
the specific process of designing the guidance information for the mould disassembly and assembly process is as follows:
Step one-one: establish the part models required in the mould disassembly and assembly process: the mould used in the mould disassembly and assembly engineering training system is a multi-station progressive die, which performs punching and bending processes in sequence; the actual multi-station progressive die is disassembled, the exact dimensions of the resulting parts are obtained by measurement, and three-dimensional models of the parts are built using the two- and three-dimensional modelling functions of the part environment of the three-dimensional visual solid-modelling software;
Step one-two: assemble the part models of the mould disassembly and assembly engineering training system: using the constraint function of the three-dimensional visual solid-modelling software, fix the relative positions between the part models according to the positional requirements of the assembly process, completing the assembly of the multi-station progressive die;
Step one-three: render and output the assembly video: import the assembled multi-station progressive die into the Inventor Studio animation environment of the three-dimensional visual solid-modelling software, drive the constraints to control the movement of the part models along the animation timeline, and control the visibility of the part models by adjusting their transparency, completing the assembly-video rendering;
Step two: build the graphical interface of the mould disassembly and assembly engineering training system using the C++ MFC class library in VS2015, adding Button, Slider and Tab controls according to the design requirements;
Step three: download OpenCV 3.4.1, the HCNet SDK and the ZBar SDK, where OpenCV 3.4.1 provides the computer-vision functions, the HCNet SDK drives the network camera, and the ZBar SDK provides barcode recognition; import them into VS2015 and configure the development environment;
Step four: call the network camera, calibrate it and correct its distortion;
Step five: construct the positional relationship between the network camera and the projector, generate Code39 barcodes with check characters using the developer-tools function of Excel, and recognize the barcodes with ZBar;
Step six: using the still-capture function of the network camera, shoot a matching template image of the parts taken from the part box in each disassembly and assembly step;
Step seven: design the program logic according to the complete disassembly and assembly process and implement the reading of the guidance information and the matching template images; meanwhile, project the guidance information with the projector; following the guidance information, the network camera acquires an actual operation image of each disassembly and assembly step in real time, the feature points of the actual operation image are matched against those of the matching template image, and the successfully matched feature points are screened;
Step eight: record the number of screened feature-point match pairs; if the number of matches between the actual operation image acquired by the network camera and the matching template image reaches 70% of the template image's self-match count, the match is deemed successful, and the program logic designed in step seven continues to execute until the whole disassembly and assembly process is completed;
if the match is not deemed successful, the part positions are adjusted and feature-point matching between the actual operation image and the matching template image continues until it is deemed successful; the program logic designed in step seven then continues until the whole disassembly and assembly process is completed.
2. The building method of the AR-technology-based mould disassembly and assembly engineering training system according to claim 1, wherein in step four the network camera is called and calibration and distortion correction are performed on it; the specific process is as follows:
Step four-one: initialize the SDK of the network camera and set the login parameters to register it, the login parameters including the camera's IP address, user name and password; start the real-time preview to complete the camera call-up;
Step four-two: using the camera's pan-tilt (PTZ) control functions, adjust the speed with a slider and adjust the pan-tilt position, focal length, focus and aperture size; set a preset point so that the network camera is at the adjusted position on every later call;
Step four-three: place the calibration board on the working platform in different poses, shoot 15-20 checkerboard images with the camera's still-capture function, and save them to the hard disk;
Step four-four: read the checkerboard images shot by the network camera, coarsely extract the inner corner points of the checkerboard, then refine the coarsely extracted corner points to sub-pixel accuracy, completing inner-corner extraction;
Step four-five: calibrate the network camera and correct its distortion; the specific process is as follows:
In the world coordinate system, a point with coordinates $(x_w, y_w, z_w)$ has homogeneous coordinates $(x_w, y_w, z_w, 1)^T$;
the camera coordinate system is established with the optical centre of the lens of the network camera as origin $o'$, the optical axis of the lens as the $z$ axis, and lines parallel to the two sides of the image plane as the $x$ and $y$ axes; a world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(x_c, y_c, z_c, 1)^T$ in the camera coordinate system, where $(x_c, y_c, z_c)$ are its coordinates in that system;
the conversion between the world coordinate system and the camera coordinate system is given by equation (1):

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{1}$$

where $R$ is a $3\times3$ rotation matrix and $t$ is a $3\times1$ translation vector;
the pixel coordinate system is established with the upper-left corner of the checkerboard image as origin $o$ and lines parallel to the two sides of the image plane as the $u$ and $v$ axes;
a world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(u, v, 1)^T$ in the pixel coordinate system, where $(u, v)$ are its coordinates in the pixel coordinate system;
the image coordinate system takes the centre of the checkerboard image, i.e. the intersection of the camera's optical axis with the image plane, as origin $O$, with the $X$ and $Y$ axes parallel to the $u$ and $v$ axes respectively;
a world point $(x_w, y_w, z_w)$ has homogeneous coordinates $(X, Y, 1)^T$ in the image coordinate system, where $(X, Y)$ are its coordinates in the image coordinate system;
the conversion between the pixel coordinate system and the image coordinate system is given by equation (2):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{2}$$

where $dX$ and $dY$ are the physical sizes of a pixel along the $X$ and $Y$ axes, and $u_0$ and $v_0$ are the abscissa and ordinate of the origin of the image coordinate system;
the conversion between the camera coordinate system and the image coordinate system is given by equation (3):

$$s\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} \tag{3}$$

where $s$ is a scale factor and $f$ is the effective focal length of the network camera;
the conversion between the world coordinate system and the pixel coordinate system is given by equation (4):

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dX & 0 & u_0 & 0 \\ 0 & f/dY & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{4}$$

distortion of the network camera includes radial nonlinear distortion and tangential nonlinear distortion, wherein the model for radial nonlinear distortion is:

$$x_2 = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \tag{5}$$

$$y_2 = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \tag{6}$$

where $(x, y)$ are the ideal (undistorted) position coordinates, $(x_2, y_2)$ are the radially distorted position coordinates, and $r = \sqrt{x^2 + y^2}$ is the distance from the ideal position to the optical centre of the lens; $k_1$, $k_2$ and $k_3$ are the 2nd-, 4th- and 6th-order radial distortion coefficients;
the model for tangential nonlinear distortion is:

$$x_1 = x + [2p_1 y + p_2(r^2 + 2x^2)] \tag{7}$$

$$y_1 = y + [2p_2 x + p_1(r^2 + 2y^2)] \tag{8}$$

where $(x_1, y_1)$ are the tangentially distorted position coordinates; $p_1$ is the vertical tangential distortion coefficient and $p_2$ is the horizontal tangential distortion coefficient;
the combined nonlinear distortion formula of the network camera is:

$$x_0 = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1 y + p_2(r^2 + 2x^2)]$$

$$y_0 = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_2 x + p_1(r^2 + 2y^2)]$$

the distortion matrix is the five-dimensional vector $(k_1, k_2, k_3, p_1, p_2)$;
and solving the intrinsic matrix, distortion matrix, rotation matrix and translation vector of the network camera completes its calibration and distortion correction.
3. The building method of the AR-technology-based mould disassembly and assembly engineering training system according to claim 2, wherein the specific process of step five is as follows:
Step five-one: using OpenCV 3.4.1, generate a black image with the same resolution as the computer display, set a recognition area within it and fill the inside of the area white; project the whole black image onto the working platform with the projector, and record the position of the recognition area within the image;
Step five-two: shoot the working platform after the step five-one projection using the camera's still-capture function, and correct the captured image with the camera's intrinsic matrix and distortion matrix;
Step five-three: display the corrected image with OpenCV 3.4.1, set a mouse callback function, and manually frame the position of the projector's image on the corrected image, completing the construction of the positional relationship between the camera and the projector;
Step five-four: generate the 28 Code39 barcodes with check characters, AR01 through AR28, using the developer-tools function of Excel;
Step five-five: paste the barcodes onto the part boxes, frame the barcode position areas, and recognize the framed areas with ZBar.
4. The building method of the AR-technology-based mould disassembly and assembly engineering training system according to claim 3, wherein the specific process of generating the Code39 barcodes is as follows:
Step 1: click the 'Developer' tab and insert 'More Controls';
Step 2: in the 'Microsoft BarCode Control 14.0' dialog that pops up, select the Code39 style;
Step 3: right-click the barcode object, and in the pop-up shortcut menu click the 'Properties' command;
Step 4: enter A1 as the Linked Cell, and enter the characters AR01 in cell A1;
Step 5: click 'Design Mode' under the Developer tab to finish the design; a Code39 barcode with content AR01 is generated;
Step 6: repeat steps 1-5 to generate the barcodes AR01 through AR28, and print them.
5. The building method of the AR-technology-based mould disassembly and assembly engineering training system according to claim 4, wherein in step six, according to the recognition-area position set in step five, the parts used and obtained in each disassembly and assembly step are placed in the recognition area, images are shot with the camera's still-capture function, and the recognition-area portions are cropped out as the matching template images for each disassembly and assembly step.
6. The building method of the AR-technology-based mould disassembly and assembly engineering training system according to claim 5, wherein the specific process of step seven is as follows:
Step seven-one: design the program logic according to the complete disassembly and assembly process, and associate the guidance information of each step with its matching template image;
Step seven-two: project the guidance information with the projector, and acquire images of the actual operation process in real time with the network camera;
Step seven-three: crop the recognition-area portion of the image obtained in step seven-two and extract its feature points with the ORB operator;
Step seven-four: match the feature points of the image cropped in step seven-three against those of the matching template image;
Step seven-five: screen the successfully matched feature points using Lowe's ratio test.
CN201810904293.4A 2018-08-09 2018-08-09 Method for building mould disassembly and assembly engineering training system based on AR technology Active CN109034748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810904293.4A CN109034748B (en) 2018-08-09 2018-08-09 Method for building mould disassembly and assembly engineering training system based on AR technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810904293.4A CN109034748B (en) 2018-08-09 2018-08-09 Method for building mould disassembly and assembly engineering training system based on AR technology

Publications (2)

Publication Number Publication Date
CN109034748A CN109034748A (en) 2018-12-18
CN109034748B (en) 2021-08-31

Family

ID=64632559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810904293.4A Active CN109034748B (en) 2018-08-09 2018-08-09 Method for building mould disassembly and assembly engineering training system based on AR technology

Country Status (1)

Country Link
CN (1) CN109034748B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685912A (en) * 2018-12-20 2019-04-26 深圳瑞和建筑装饰股份有限公司 Demenstration method and system based on AR
CN109697918B (en) * 2018-12-29 2021-04-27 深圳市掌网科技股份有限公司 Percussion instrument experience system based on augmented reality
CN109920062A (en) * 2019-02-01 2019-06-21 谷东科技有限公司 A kind of part changeable assembling guidance method and system based on AR glasses
CN111658142A (en) * 2019-03-07 2020-09-15 重庆高新技术产业开发区瑞晟医疗科技有限公司 MR-based focus holographic navigation method and system
CN110264818B (en) * 2019-06-18 2021-08-24 国家电网有限公司 Unit water inlet valve disassembly and assembly training method based on augmented reality
WO2022036634A1 (en) * 2020-08-20 2022-02-24 青岛理工大学 Assembly/disassembly operation-oriented augmented reality guidance and remote collaboration development system
CN113191841A (en) * 2021-04-28 2021-07-30 张鹏 Scientific and technological innovation and culture sharing intelligent platform mode method based on augmented reality technology

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3952717B2 (en) * 2001-09-19 2007-08-01 日産自動車株式会社 Instrument panel disassembly jig
CN105788408A (en) * 2013-03-11 2016-07-20 林肯环球股份有限公司 Importing and analyzing external data using a virtual reality welding system
CN107833503B (en) * 2017-11-10 2019-10-29 广东电网有限责任公司教育培训评价中心 Distribution core job augmented reality simulation training system

Also Published As

Publication number Publication date
CN109034748A (en) 2018-12-18

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant