CN115481489A - System and method for verifying suitability of body-in-white and production line based on augmented reality - Google Patents


Info

Publication number
CN115481489A
CN115481489A (application CN202211151732.1A)
Authority
CN
China
Prior art keywords
virtual
production line
white
model
gripping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211151732.1A
Other languages
Chinese (zh)
Inventor
胡耀光
王鹏
杨晓楠
王敬飞
李承舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority application: CN202211151732.1A
Publication: CN115481489A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/15Vehicle, aircraft or watercraft design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/18Details relating to CAD techniques using virtual or augmented reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2113/00Details relating to the application field
    • G06F2113/28Fuselage, exterior or interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a system and a method for verifying the suitability of a body-in-white and a production line based on augmented reality, belonging to the technical field of body-in-white and production line suitability verification. In the invention, virtual grid points are arranged on the production line site through a human-computer interaction auxiliary verification method, and a virtual grid fitting the production line surface is generated from these points by a graphical algorithm, realizing lightweight instant modeling of the production line; on-site corrected grid point layout information is fused with computer vision perception information, improving the perception precision of the production line environment. Based on the fitted virtual grid boundary points on the production line surface and detection points predefined on the body-in-white model, risk point distance measurement between the body-in-white and the real production line environment is realized by an AR distance measurement method: the spatial distance between real-environment measurement points is obtained by measuring between their mapped risk points in the AR environment. This effectively avoids the problems of existing spatial distance measurement methods, which cannot handle occlusion and cannot reach measurement points in complex environments, and realizes simple and accurate spatial distance measurement.

Description

System and method for verifying suitability of body in white and production line based on augmented reality
Technical Field
The invention belongs to the technical field of suitability verification of a body in white and a production line, and relates to a system and a method for verifying the suitability of the body in white and the production line of an automobile based on a HoloLens device.
Background
With the development of digital manufacturing technology, intelligent manufacturing systems integrating cyber-physical systems have become a general solution for manufacturing upgrades. As one of the cores of modern industrial civilization, the automobile industry urgently needs intelligent transformation of its whole production life cycle. The Body-in-White (BIW) is the welded structural assembly of the automobile and forms its framework; its cost can account for 30-60% of the cost of the whole automobile, and decisions made in the conceptual design stage determine 70% of that cost. It is therefore important to verify the structure of the body-in-white at the design stage. However, most current BIW digital verification focuses on structural performance, and suitability verification between the body-in-white structure and the production lines of subsequent processes (such as gluing and assembly) is lacking.
To remain competitive, most automobile enterprises operate flexible production lines capable of producing multiple body-in-white models of the same platform on a common line, so verifying whether an existing production line can meet the manufacturing requirements of a new body-in-white is an urgent problem. For example, the production engineering of a body-in-white welding process must ensure the accessibility of the welding robot and a safe working space for automatic welding. At present, body-in-white and production line verification in industry is mostly based on physical prototypes or computer-aided design software. The former physically verifies the manufacturability of a body-in-white prototype on a production line, which presents safety hazards because everything is verified with real entities. The latter uses CAD software to simulate body-in-white manufacturing processes, but, limited by software capabilities, there is often a gap between the simulation environment and the actual production line. A method for verifying the suitability of a body-in-white and a production line quickly and accurately is urgently needed.
With the development of Virtual Reality (VR) and digital twin technologies, verification systems based on purely virtual digital plants containing product prototypes have been developed. This approach feeds manufacturability back into product structure design by validating the product design at a virtual manufacturing site. However, a virtual scene of the production line environment must be created; production line environments are usually complex, so this verification method suffers from long modeling cycles and high cost. Furthermore, field verification of the body-in-white often requires consideration of human factors, and in a purely virtual environment the user's senses are limited. Augmented Reality (AR) is recognized as an effective way to solve these problems. AR is a novel human-computer interaction tool that overlays computer-generated information on the real environment. Compared with VR, it realizes interaction between the real and virtual environments, reduces complex scene modeling work, and delivers a more realistic user experience. In recent years, with convenient application development and mature hardware, augmented reality has shown great application potential in the industrial field, such as assembly guidance, maintenance assistance and design verification, and has become one of the major technologies driving smart manufacturing. Therefore, a fast digital verification system for body-in-white and production line suitability based on augmented reality is proposed. The system provides an AR environment based on the production line site, assists the user in tests such as distance measurement and collision detection, and enables quick, low-cost production line suitability digital verification at the body-in-white design stage, thereby optimizing the body-in-white structure design.
Disclosure of Invention
The main purpose of the invention is to provide a system and a method for verifying the suitability of a body-in-white and a production line based on augmented reality. Virtual grid points are arranged on the production line site through a human-computer interaction auxiliary verification method, and a virtual grid fitting the production line surface is generated from these points by a graphical algorithm, realizing lightweight instant modeling of the production line; on-site corrected grid point layout information is fused with computer vision perception information, improving the perception precision of the production line environment. Based on the fitted virtual grid boundary points on the production line surface and detection points predefined on the body-in-white model, risk point ranging between the body-in-white and the real production line environment is realized by an AR distance measurement method: the spatial distance between real-environment measurement points is obtained by ranging between their mapped risk points in the AR environment. This effectively avoids the problems of existing spatial ranging methods, which cannot handle occlusion and cannot reach measurement points in complex environments, and realizes simple and accurate spatial distance measurement. The method avoids the long cycle and safety risks of traditional physical prototype verification, realizes quick, low-cost verification of the suitability of the body-in-white structure and the production line in the AR environment, and provides instant feedback for body-in-white structure design and production line layout optimization.
In order to solve the problems, the technical scheme adopted by the invention is as follows:
The invention discloses a system for verifying the suitability of a body-in-white and a production line based on augmented reality, which comprises a verification environment construction module, a freehand human-computer interaction auxiliary verification module and a detection module.
The verification environment construction module uses the AR equipment to construct a virtual-real fused verification environment in which the virtual body-in-white model moves on the actual production line. This module is the basis of the AR verification system: without accurate virtual-real fusion, the operator can hardly judge the relative position between the body-in-white model and the actual production line correctly, spatial perception of the scene becomes abnormal, and the operator's visual verification is degraded.
In the actual production process, the body-in-white is fixed to the slideway of the production line through a spreader, so the body-in-white and the spreader are relatively fixed. Virtual-real combination of the virtual body-in-white and the real production line is therefore realized by binding the virtual body-in-white model to the real spreader. Considering the computational limitations of the AR device and the need for accurate virtual-real fusion, the verification environment construction module adopts a marker-based tracking technique. A static planar image (a two-dimensional code) is imported into the system as the marker object, and the position of the virtual body-in-white model is defined in the marker coordinate system. The position of the body-in-white relative to the production line is determined by the position of the model relative to the marker and the placement of the marker on the spreader. When the AR device identifies the marker image, it displays the virtual body-in-white model; the operator sees the whole virtual-real fusion scene and visually verifies the trafficability of the body-in-white at the production site on the AR device. The verification environment construction module is the functional basis of the subsequent verification modules (the ranging module, the collision detection module and the verification result visual output module): ranging and collision detection are carried out in the accurate virtual-real fusion environment it establishes.
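The marker-based registration above amounts to composing two rigid transforms: the marker's placement on the spreader and the model's predefined offset in the marker coordinate system. A minimal numeric sketch (the `translation` helper and all pose values are hypothetical illustrations, not the patent's implementation):

```python
import numpy as np

def pose_in_line_frame(T_marker_on_spreader, T_model_in_marker):
    """Compose homogeneous transforms: the body-in-white pose relative to the
    production line is the marker's pose on the spreader composed with the
    model's predefined offset in the marker coordinate system."""
    return T_marker_on_spreader @ T_model_in_marker

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix (illustrative helper)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# hypothetical numbers: marker mounted 1.2 m up the spreader,
# model origin defined 0.5 m below the marker
T_marker = translation(0.0, 1.2, 0.0)
T_offset = translation(0.0, -0.5, 0.0)
T_model = pose_in_line_frame(T_marker, T_offset)
print(T_model[:3, 3])  # model origin sits 0.7 m above the line origin
```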
For a better rendering effect, the AR device is preferably a Microsoft HoloLens 2.
The human-computer interaction auxiliary verification module builds on a freehand human-computer interaction function developed for the AR device. Virtual grid points are arranged on the production line site, and a virtual grid fitting the production line surface is generated from these points by a graphical algorithm, realizing lightweight instant modeling of the production line; on-site corrected grid point layout information is fused with computer vision perception information, improving the perception precision of the production line environment. The human-computer interaction auxiliary verification module comprises a freehand human-computer interaction submodule, a manual grid point arrangement submodule and a virtual grid generation submodule.
The freehand human-computer interaction submodule realizes three-dimensional gesture control of virtual models in the augmented reality environment. Using RGB-D information collected by the AR device, a virtual hand model is superimposed on each real hand. A 'gripping pair' condition is constructed according to the physics of gripping a real object and the characteristics of the augmented reality environment, and a gripping intention recognition algorithm based on the 'gripping pair' recognizes whether the two hands grip or release a virtual model. Contact between the hands and other virtual models is detected by a collision detection algorithm; the gripping intention recognition algorithm then computes whether any of the contact points between the hands and the manipulated model can form a 'gripping pair' (composed of two contact points), judging whether a gripping condition exists between the hands and the virtual model. If at least one 'gripping pair' exists, the grasped virtual model is judged to be in the gripping state. Because there is no need to judge gripping by computing contact forces at individual contact points, gripping intention judgment is more flexible, closer to real three-dimensional gesture manipulation, better suited to complex gesture interaction scenes, and more consistent with the user's intuitive interactive feel. Moreover, if multiple 'gripping pairs' exist, all contact points forming them participate in interaction intention recognition, improving the robustness, flexibility, efficiency and immersion of gesture interaction intention recognition.
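The 'gripping pair' test described above can be sketched as a pairwise check over contact points. A minimal Python illustration (the 30-degree friction angle, the normal-direction convention and the contact data are assumed values; the patent tunes the friction angle per manipulated model):

```python
import numpy as np
from itertools import combinations

ALPHA = np.radians(30.0)  # friction angle (hypothetical value, tuned per model)

def is_grip_pair(p_a, n_a, p_b, n_b, alpha=ALPHA):
    """Two contact points form a stable 'gripping pair' when the line joining
    them deviates from each contact normal by no more than the friction angle."""
    l_ab = p_b - p_a
    l_ab = l_ab / np.linalg.norm(l_ab)
    ang_a = np.arccos(np.clip(np.dot(n_a, l_ab), -1.0, 1.0))
    ang_b = np.arccos(np.clip(np.dot(n_b, -l_ab), -1.0, 1.0))
    return ang_a <= alpha and ang_b <= alpha

def grip_pairs(contacts):
    """contacts: list of (position, contact normal) tuples from collision
    detection. Returns every index pair satisfying the condition; the model
    counts as gripped as soon as at least one pair exists."""
    return [(i, j) for (i, (pa, na)), (j, (pb, nb))
            in combinations(enumerate(contacts), 2)
            if is_grip_pair(pa, na, pb, nb)]

# two opposed fingertips 10 cm apart, normals pointing toward each other
contacts = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
            (np.array([0.1, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))]
print(len(grip_pairs(contacts)) > 0)  # True -> model enters the gripped state
```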
The manual grid point arrangement submodule adjusts grid point positions and corrects grid point layout information based on the 'gripping pair' function of the freehand human-computer interaction submodule. The mesh fitted onto the production line is a spatial convex polygon consisting of triangular patches, the basic units computers use to create planar meshes. A triangular patch is produced from a set of vertex coordinates and a set of triangle vertex indices (the processing order of the vertices), so the first task is to obtain a vertex set that fits the production line surface, i.e., the grid point set. Virtual sphere objects are predefined as grid points; their coordinate systems are uniform and easy to obtain in the AR environment. Using the 'gripping pair' of the freehand human-computer interaction submodule, spheres are generated and dragged instantly onto the edges of the convex bodies on the production line surface. The vertex set required for mesh fitting is obtained by arranging a series of sphere vertices.
The virtual grid generation submodule triangulates the spatial convex polygon using a graphical algorithm, taking the grid vertex set obtained by the manual grid point arrangement submodule and producing a triangle vertex index set. Triangulation decomposes the polygon into several triangles whose vertices are the vertices of the polygon. Since the manually defined vertex sequence is already the desired modeling sequence, the triangle vertex index set is built according to the adjacency principle. The combination of triangles generated by the graphical algorithm forms a virtual mesh fitting the actual production line surface.
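Because the fitted patch is a convex polygon whose vertices are already in adjacency order, the triangle vertex index set can be built as a simple fan. A sketch of one such graphical algorithm (fan decomposition is one standard choice for convex polygons, not necessarily the patent's exact algorithm):

```python
def fan_triangulate(vertex_count):
    """Triangulate a convex polygon whose vertices are given in adjacency
    order: every triangle shares vertex 0 (a triangle fan), which is valid
    exactly because the fitted patch is convex. Returns the triangle vertex
    index set used to build the mesh."""
    return [(0, i, i + 1) for i in range(1, vertex_count - 1)]

# a convex hexagon placed by six grid-point spheres yields four triangles
print(fan_triangulate(6))  # [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
```

An n-vertex convex polygon always yields n - 2 triangles, so the index set size can be checked against the number of placed spheres.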
Based on the AR body-in-white and production line suitability verification environment established by the verification environment construction module and the freehand human-computer interaction auxiliary verification module, the detection module carries out the specific body-in-white and production line suitability verification requirements. The detection module comprises an AR ranging submodule, a collision detection submodule and a verification result visual output submodule.
The AR ranging submodule performs risk point ranging between the body-in-white and the surrounding production line environment, based on the virtual grid boundary points of the production line surface obtained by the human-computer interaction auxiliary verification module and the detection points predefined on the body-in-white model. Both kinds of detection points are uniform sphere objects with coordinate attributes; based on the spatial anchor of the AR device, their positions are converted into a unified coordinate system, and the spatial distance is obtained by vector operation. Risk point ranging between the body-in-white and the real production line environment is thus realized by the AR distance measurement method: the spatial distance between real-environment measurement points is obtained by ranging between their mapped risk points in the AR environment. This effectively avoids the problems of existing spatial ranging methods, which cannot handle occlusion and cannot reach measurement points in complex environments, realizes easy and accurate spatial distance measurement, and ensures a safe operation space.
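The ranging step above reduces to transforming both detection spheres into one anchor-aligned coordinate system and taking a vector norm. A hedged numeric sketch (the transforms and point coordinates are invented for illustration):

```python
import numpy as np

def range_between(T_anchor_a, p_a_local, T_anchor_b, p_b_local):
    """Transform both detection points into the shared spatial-anchor (world)
    coordinate system, then take the vector norm: the spatial distance."""
    p_a_world = (T_anchor_a @ np.append(p_a_local, 1.0))[:3]
    p_b_world = (T_anchor_b @ np.append(p_b_local, 1.0))[:3]
    return np.linalg.norm(p_a_world - p_b_world)

# boundary sphere on the fitted line mesh vs. detection sphere on the BIW model,
# whose local frame is offset 2 m along x (hypothetical poses)
T_line = np.eye(4)
T_biw = np.eye(4)
T_biw[:3, 3] = [2.0, 0.0, 0.0]
d = range_between(T_line, np.array([0.0, 0.5, 0.0]),
                  T_biw, np.array([0.0, 0.5, 0.0]))
print(d)  # 2.0
```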
The collision detection submodule detects collisions between the body-in-white model and the production line mesh, based on the spatial convex polygon mesh model fitted to the production line surface in the freehand human-computer interaction auxiliary verification module. A physics engine is used: a uniform collider is added to both the production line fitting mesh model and the body-in-white model to realize collision detection, an event response is set in combination with a trigger, and the position and depth of each collision are recorded.
For a more accurate verification effect, the Unity3D physics engine is selected.
The verification result visual output submodule receives in real time the detection point distance information output by the AR ranging submodule and the interference position and collision depth information output by the collision detection submodule during verification. The ranging and collision verification information is presented in a spatial visual user interface supporting instant viewing and summary detection report display. In the invention, based on the gesture recognition function of the freehand human-computer interaction submodule and a conveniently defined user interface, the AR device camera displays an interaction menu when it recognizes a palm-stretching gesture; the menu page contains real-time distance information and a collision detection result report.
The invention also discloses a body-in-white and production line suitability verification method based on augmented reality, which is realized based on the body-in-white and production line suitability verification system based on augmented reality. The method for verifying the suitability of the body-in-white and the production line based on the augmented reality comprises the following steps:
the method comprises the following steps: and creating a virtual-real fused body-in-white and production line adaptability verification environment, wearing AR equipment by a user, scanning and identifying a preset two-dimensional code mark, and rendering a body-in-white model on a production line. And determining the position of the model relative to the creation of the two-dimensional code coordinate system, determining the position of the two-dimensional code preset in a production line, and realizing the positioning of the body-in-white model relative to the production line.
Step two: the AR device collects RGB-D information, identifies the key nodes of both hands, and superimposes virtual hand models; the positions and postures of the virtual hand models are determined by the positions and postures of the key nodes, realizing the mapping of the real hands into virtual space.
Step three: construct the 'gripping pair' condition according to the physics of gripping a real object and the characteristics of human-computer interaction in the augmented reality environment. Based on a collision detection algorithm, whether contact occurs between the virtual hand models and the virtual models to be manipulated is computed in real time in every frame. According to the gripping intention recognition algorithm, when the contact points between the hands and the manipulated model can form a 'gripping pair' (composed of two contact points), a gripping condition exists between the hands and the virtual model; if at least one 'gripping pair' exists, the grasped virtual model is judged to be in the gripping state. This gesture recognition method is better suited to complex gesture interaction scenes, more consistent with the user's intuitive interactive feel, and improves the robustness, flexibility, efficiency and immersion of gesture interaction intention recognition.
A 'gripping pair' is formed by two contact points between a virtual hand model meeting the condition and the gripped model. The 'gripping pair' condition is as follows: if the angle between the line connecting the two contact points and the normal of each contact surface does not exceed a fixed angle α, the two contact points form a stable gripping pair g(a, b). The fixed angle α is the friction angle.
The gripping intention recognition algorithm is established on the 'gripping pair' condition; it cyclically judges whether each current contact point forms a gripping pair with another contact point. In one cycle, for any two contact points a and b between the virtual hand and the virtual object, if the angle between their connecting line and the normal of each contact surface does not exceed the fixed angle α, the two contact points form a stable gripping pair g(a, b). This fixed angle α is the friction angle, i.e., the gripping pair g(a, b) should satisfy
$$\angle(n_a,\, l_{ab}) \le \alpha \qquad (1)$$
$$\angle(n_b,\, l_{ab}) \le \alpha \qquad (2)$$
where $n_a$ and $n_b$ are the normal vectors of the contact surfaces of the manipulated virtual model at contact points a and b; $l_{ab}$ is the line connecting contact points a and b; and $\alpha$ is the friction angle, whose value must be set for a particular manipulated model by testing so as to achieve stable, natural gripping of the virtual part.
Step four: construct a grasping center acquisition method from the 'gripping pair' condition constructed in step three, so as to acquire the grasping center. If the virtual model is judged to be in the gripping state by the gripping intention recognition algorithm of step three, the virtual force or moment exerted on the virtual model by the two hands is calculated by the manipulation intention recognition algorithm, based on the displacement and posture transformation of the two-hand grasping center on the manipulated model; this virtual force or moment drives the movement or rotation of the virtual model. With the manipulation intention recognition algorithm and the added grasping center judgment condition, all contact points participate in the manipulation intention recognition process, making manipulation intention recognition more flexible and improving its robustness.
The grasping center is the central point representing the motion of the whole hand: the whole hand is regarded as a single rigid body, and the position, posture and velocity of the grasping center represent the motion parameters of the whole virtual hand.
The grasping center judging method is as follows: determine the positions and number of 'gripping pairs' according to the 'gripping pair' condition constructed in step three. The 'gripping pair' is regarded as one unified rigid body, whose position and posture are represented by the grasping center. If one 'gripping pair' exists, the grasping center is the midpoint of the line connecting the contact points forming the 'gripping pair', and the grasping center position and posture are calculated as follows:
$$P_c = \frac{p_1 + p_2}{2} \qquad (3)$$
$$(w_c,\, r_c,\, l_c) = \left(\angle(l_{12}, \hat{x}),\ \angle(l_{12}, \hat{y}),\ \angle(l_{12}, \hat{z})\right) \qquad (4)$$
where $P_c$ is the grasping center position; $p_1$ and $p_2$ are the positions of the contact points constituting the 'gripping pair'; $w_c$, $r_c$ and $l_c$ are the three Euler angle parameters of the grasping center; $l_{12}$ is the line connecting $p_1$ and $p_2$; and $\hat{x}$, $\hat{y}$ and $\hat{z}$ are the unit vectors along the x, y and z axes of the current coordinate system.
If multiple 'gripping pairs' exist, the pair with the longest contact-point connecting line is determined as the main gripping pair, and the grasping center is constructed from it according to formulas (3) and (4).
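The grasp-center construction can be sketched as follows. The midpoint follows eq. (3); the axis-angle reading of eq. (4) and the helper names are assumptions made for illustration:

```python
import numpy as np

def grasp_center(p1, p2):
    """Midpoint of the 'gripping pair' line (eq. 3), plus the three angles of
    that line against the x, y, z unit vectors (one reading of eq. 4)."""
    pc = (p1 + p2) / 2.0
    l12 = (p2 - p1) / np.linalg.norm(p2 - p1)
    angles = np.arccos(np.clip(l12, -1.0, 1.0))  # component k = cos(angle to axis k)
    return pc, angles

def main_pair(pairs, points):
    """With several gripping pairs, the pair with the longest contact-point
    connecting line is chosen as the main pair for the grasp center."""
    return max(pairs, key=lambda ij: np.linalg.norm(points[ij[1]] - points[ij[0]]))

p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
pc, ang = grasp_center(p1, p2)
pts = [p1, p2, np.array([0.05, 0.0, 0.0])]
print(pc)                                # midpoint [0.1, 0, 0]
print(main_pair([(0, 1), (0, 2)], pts))  # (0, 1): the 0.2 m line beats the 0.05 m one
```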
Step 4.1: judge whether the 'gripping pair' meets the 'gripping pair' cancellation condition. If so, the user is considered to have put down the manipulated virtual model; the subsequent steps are not executed and the position and posture of the virtual model are not updated in the next frame. If not, execute step 4.2.
the "grip pair" cancellation condition is calculated as follows:
$$d^{\,i}_{ab} - d^{\,i-1}_{ab} > k \qquad (5)$$
where $d^{\,i}_{ab}$ is the distance between the two contact points constituting the 'gripping pair' at the current i-th frame, $d^{\,i-1}_{ab}$ is that distance at the (i-1)-th frame, and k is a fixed threshold. That is, when the two contact points constituting the 'gripping pair' separate between two frames by more than the threshold, the grip is regarded as cancelled.
Step 4.2: calculate the virtual force or moment exerted on the virtual model by the two hands according to the manipulation intention recognition algorithm, then continue to step 4.3. The manipulation intention recognition algorithm calculates the virtual force or virtual moment applied by the hands in the current frame to the virtual model based on the pose transformation trend of the grasping center, and from it derives the movement and rotation parameters of the virtual model: movement direction and distance, rotation direction and angle. With the manipulation intention recognition algorithm and the added 'grasping center' condition judgment, all contact points participate in the manipulation intention recognition process, making manipulation intention recognition more flexible and improving its robustness.
The manipulation intention recognition algorithm is constructed based on virtual linear and torsional spring-damper models. Its calculation formulas are as follows:
$$f_{vf} = K_{sl}\left(q^{\,i+1}_{l} - q^{\,i}_{l}\right) - K_{Dl}\, v^{\,i} \qquad (6)$$
$$\tau_{vf} = K_{so}\,\theta\!\left(q^{\,i+1}_{o} \otimes \left(q^{\,i}_{o}\right)^{-1}\right) - K_{Do}\, \omega^{\,i} \qquad (7)$$
Equation (6) computes the virtual force, with $f_{vf}$ the virtual manipulation force; equation (7) computes the virtual moment, with $\tau_{vf}$ the virtual manipulation torque. The pose of the two-hand grasping center at the current i-th frame is $(q^{\,i}_{l}, q^{\,i}_{o})$ and at frame i+1 is $(q^{\,i+1}_{l}, q^{\,i+1}_{o})$, where $q^{\,i}_{l}$ is the three-dimensional position of the hand at the i-th frame and $q^{\,i}_{o}$ is the quaternion describing the hand orientation; $v^{\,i}$ and $\omega^{\,i}$ are the linear and angular velocities of the manipulated virtual model at the i-th frame. $K_{sl}$ ($K_{so}$) and $K_{Dl}$ ($K_{Do}$) are the coefficients of the linear (torsional) spring-damper model. By tuning the $K_{sl}$ ($K_{so}$) and $K_{Dl}$ ($K_{Do}$) coefficients, stable and smooth dynamic motion of the virtual part is achieved, conforming to the user's intuitive interactive feel.
Step 4.3: from the virtual force or moment calculated by the manipulation intention recognition algorithm in step 4.2, calculate the displacement variation and rotation variation of the virtual model in combination with rigid body dynamics. Update the position and posture of the manipulated virtual model in the current frame according to the displacement and rotation variations, and render the virtual model at the new position and posture.
The displacement variation is calculated as follows:

S_i = v_i·Δt + (f_vf/(2m))·Δt² (8)

ΔT_i = T(S_i) (9)

where S_i denotes the displacement of the manipulated virtual model at the current frame i, v_i its velocity at frame i, Δt the time difference between the current frame i and the next frame i+1, f_vf the virtual manipulation force identified by the manipulation intention recognition algorithm, and m the mass of the manipulated virtual model. ΔT_i denotes the displacement matrix of the virtual model, with Z, Y and X the coordinate axes of the augmented reality environment.
The rotation variation is calculated as follows:

θ_i = ω_i·Δt + (τ_vf/(2J))·Δt² (10)

ΔR_i = R_z(θ_iz)·R_y(θ_iy)·R_x(θ_ix) (11)

where θ_i denotes the rotation angle of the manipulated virtual model at the current frame i, ω_i its angular velocity at frame i, τ_vf the virtual manipulation torque identified by the manipulation intention recognition algorithm, Δt the time difference between the current frame i and the next frame i+1, J the moment of inertia of the manipulated virtual model, and ΔR_i the rotation matrix of the virtual model; θ_iz, θ_iy and θ_ix denote the components of the rotation angle θ_i about the z, y and x axes of the augmented reality environment coordinate system.
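A minimal per-frame integration sketch of equations (8)–(11), assuming simple Euler velocity updates and a scalar moment of inertia; the names and update order are illustrative, not the patent's actual implementation:

```python
import numpy as np

def integrate_frame(v_i, w_i, f_vf, tau_vf, m, J, dt):
    """One frame of rigid-body integration per eqs. (8) and (10):
    displacement/rotation from the current velocities plus the
    acceleration induced by the recognized virtual force and torque."""
    s_i = v_i * dt + 0.5 * (f_vf / m) * dt ** 2        # Eq. (8)
    theta_i = w_i * dt + 0.5 * (tau_vf / J) * dt ** 2  # Eq. (10)
    v_next = v_i + (f_vf / m) * dt                     # Euler velocity update
    w_next = w_i + (tau_vf / J) * dt
    return s_i, theta_i, v_next, w_next

def rot_delta(theta):
    """Eq. (11): incremental rotation matrix dR = Rz(tz) . Ry(ty) . Rx(tx)."""
    tx, ty, tz = theta
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```

Each frame, the engine would apply the translation S_i and the rotation ΔR_i to the manipulated model's transform before rendering.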
Step five: and manually arranging grid points, and dragging the virtual sphere to the edge of the convex body on the surface of the production line by using the 'gripping pair' created in the step three through predefining the virtual sphere object as the grid point. The set of vertices required for mesh fitting is achieved by arranging a series of sphere vertices. And (4) manually adjusting the positions of grid points based on the 'gripping pair' constructed in the step three, and correcting the layout information of the grid points to obtain a grid point set capable of accurately fitting the surface of the production line.
Step six: and generating a virtual grid on the surface of the convex body of the fitting production line by using a graphical algorithm, and triangulating the virtual grid on the surface of the convex body of the fitting production line on the basis of a grid vertex set obtained by manually arranging grid points in the fourth step to obtain a triangle vertex index set. The triangularization process is to decompose a polygon into a plurality of triangles, and the polygon is composed of the vertexes. Since the artificially defined vertex sequence is the desired direct modeling sequence, the triangle vertex index set is built according to the adjacency principle. Combinations of triangles are then generated by graphical algorithms to generate virtual meshes that fit the actual production line surface.
The mesh vertex set and the triangle vertex index set are parameterized as follows:

{V_0, V_1, V_2, ..., V_i, ...}, V_i ∈ polygon (12)

{Δ_0, Δ_1, ..., Δ_i, ...}, Δ_i = (V_i^0, V_i^1, V_i^2) (13)

where polygon denotes the set of virtual grid vertices and V_i the ith grid point; Δ_i denotes the ith triangle composing the virtual mesh, and (V_i^0, V_i^1, V_i^2) is the index triple of that triangle's three vertices.
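Because the manually placed vertices are already in modeling order, the adjacency-principle index construction reduces to a fan triangulation of the convex polygon. A minimal sketch (the function name is an assumption):

```python
def fan_triangulate(vertices):
    """Triangle vertex index set (eq. (13)) for a convex polygon whose
    vertices are listed in placement order: each index triple shares
    vertex 0 and spans two adjacent vertices, per the adjacency principle."""
    return [(0, i, i + 1) for i in range(1, len(vertices) - 1)]
```

An n-vertex convex polygon yields n − 2 index triples; five vertices, for example, give (0, 1, 2), (0, 2, 3), (0, 3, 4).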
Step seven: and measuring the distance between a glue gun of the body-in-white and the glue coating position of the body-in-white by defining a detection point. The definition of the detection point is realized by combining user menu selection and gesture definition: predefining detection points on the body-in-white model and selecting the detection points through gestures of a user interface; and the production line field detection point drags the detection point (sphere) to a corresponding position through the 'grabbing pair' created in the fourth step. And the two detection points are positioned under a unified space coordinate system, and the space distance is calculated through vector operation. Based on the gesture recognition function in the fourth step, when the AR equipment camera recognizes the stretching gesture, an interactive menu is displayed, and a menu page contains real-time distance information. The method for measuring the distance between the risk points of the white body and the real production line environment is realized by the AR distance measuring method, the spatial distance between the real environment distance measuring points is obtained by the mapping risk point distance measuring under the AR environment, the problems that the existing spatial distance measuring method cannot process shielding and the measuring points cannot be obtained under the environment complex condition are effectively avoided, the spatial distance measuring method is easy to operate and accurate, and the safe operation space is ensured.
Step eight: and in the white automobile body moving and positioning process, detecting the collision position through a collision detection algorithm, and performing interference verification with a gluing production line. Creating an AR verification environment based on the first step and a production line virtual grid model created in the fifth step, adding collision bodies for the grid model and the body-in-white by using a physical engine, detecting the interference condition with a production line in real time in the moving process of the body, setting an event response by combining a trigger, recording the position and the depth of the collision, and performing result visual feedback on the interference, namely realizing the adaptation verification of the body-in-white and the production line based on augmented reality.
The collision detection algorithm model is represented by the following equation:
A+B={a+b|a∈A,b∈B} (14)
where A and B denote the point sets on convex bodies A and B, and a and b are points in A and B respectively; equation (14) is the Minkowski sum.
A-B={a-b|a∈A,b∈B} (15)
Equation (15) is called the Minkowski difference. When convex bodies A and B overlap or intersect, their difference set {a − b} necessarily contains the origin. For collision detection between the body-in-white and the production line, interference is therefore verified by judging whether the difference set of the body-in-white model point set and the production line virtual grid point set contains the origin; if it does, a collision has occurred.
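The origin-containment criterion of equation (15) is the basis of GJK-style collision detection. For convex bodies it is equivalent to the absence of a separating axis; a small 2D separating-axis sketch (illustrative only, not the patent's algorithm) shows the equivalence:

```python
import numpy as np

def convex_overlap_2d(A, B):
    """Separating-axis test for two 2D convex polygons (vertices in order).
    Equivalent to eq. (15): the Minkowski difference A - B contains the
    origin exactly when no separating axis exists between A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    axes = []
    for P in (A, B):
        n = len(P)
        for i in range(n):
            ex, ey = P[(i + 1) % n] - P[i]
            axes.append(np.array([-ey, ex]))  # normal of each polygon edge
    for ax in axes:
        pa, pb = A @ ax, B @ ax  # project both point sets onto the axis
        if pa.max() < pb.min() or pb.max() < pa.min():
            return False  # separating axis found: difference set misses the origin
    return True  # no separating axis: origin lies in A - B, i.e. collision
```

In the actual system this role is played by the physics engine's colliders on the body-in-white and the fitted production line mesh.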
Through the above verification process, augmented reality enables efficient, low-cost verification of body and production line suitability at the body-in-white design stage, provides immediate feedback for body-in-white structure optimization and production line layout optimization, and supports further adjustment of both to improve the suitability of the body-in-white and the production line.
Advantageous effects:
1. The invention discloses a system and method for verifying the suitability of a body-in-white and a production line based on augmented reality. Through a human-computer interaction assisted verification method, virtual grid points are arranged on the production line site, and a virtual grid fitting the production line surface is generated from those points by a graphical algorithm, realizing lightweight, on-the-spot modeling of the production line; by fusing on-site corrections of the grid point layout with computer visual perception information, the perception accuracy of the production line environment is improved. Based on the fitted virtual grid boundary points on the production line surface and the detection points predefined on the body-in-white model, risk point distance measurement between the body-in-white and the real production line environment is realized by an AR distance measuring method: the spatial distance between real-environment measurement points is obtained by ranging the mapped risk points in the AR environment. This effectively avoids the problems that existing spatial ranging methods cannot handle occlusion and cannot acquire measurement points in complex environments, and realizes simple, accurate spatial distance measurement.
2. The disclosed system and method realize distance measurement, interference verification and related checks between the body-in-white and the production line in an AR environment, avoiding the long cycle and safety risks of traditional physical-prototype verification. Fast, low-cost verification of the suitability of the body-in-white structure and the production line in the AR environment provides immediate feedback for body-in-white structure optimization and production line layout optimization, supporting rapid iteration of the body-in-white design.
3. In the disclosed system and method, a "gripping pair" condition is constructed according to the physical characteristics of real object grasping and the augmented reality environment, and an intention recognition algorithm is built on that condition, realizing natural freehand human-computer interaction. The approach is better suited to complex gesture interaction scenarios, better matches the user's intuitive interactive feel, and improves the robustness, flexibility, efficiency and immersion of gesture interaction intention recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only one embodiment of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is an experimental verification environment of the present invention in a body-in-white glue line.
FIG. 2 is a diagram of vertices defined to fit a production line mesh model using HoloLens2 gesture interaction.
FIG. 3 is a spatial polygon mesh model fitted to the surface of a production line convex body.
FIG. 4 illustrates the definition of range detection points for a virtual body-in-white and a physical production line by a pre-definition and gesture selection method.
FIG. 5 is a visual output of body in white and glue line validation example collision detection.
FIG. 6 is a gesture display user interface including distance information and menu bars.
FIG. 7 is a system block diagram of the augmented reality-based body-in-white and production line suitability verification system.
FIG. 8 is a flowchart of the augmented reality-based body-in-white and production line suitability verification method.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
The present application provides an augmented-reality-based system and method for verifying the suitability of a body-in-white and a production line, applicable to camera-equipped AR terminal devices including mobile phones, tablets, AR glasses and AR helmets. To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described in detail and completely below with reference to the drawings, taking the AR glasses HoloLens 2 as an example.
The system for verifying the suitability of the body-in-white and the production line in the augmented reality environment disclosed by the embodiment is shown in fig. 1 to 7 and comprises a verification environment construction module, a freehand human-computer interaction auxiliary verification module and a detection module.
The verification environment construction module arranges a marker map at a specific position on the production line. The position of the body-in-white relative to the production line is determined by the position of the body-in-white model relative to the marker and by the arrangement of the marker in the production line environment; by recognizing the marker map through the HoloLens glasses, the virtual body-in-white model is moved along the actual production line. The user wears the HoloLens 2, which identifies the marker map and renders and displays a virtual-real fused verification environment in which the passability of the body-in-white on the production site is verified visually.
As shown in fig. 2, in the human-computer interaction-assisted verification module, a user wears HoloLens2 to arrange virtual grid points on a production line site, and generates a virtual grid fitting the surface of the production line based on a graphical algorithm according to the virtual grid points arranged on the site, so as to realize light-weight and instant modeling of the production line. The man-machine interaction auxiliary verification module comprises a free-hand man-machine interaction sub-module, a manual grid point arrangement module and a virtual grid generation module.
And the freehand man-machine interaction submodule is used for realizing a three-dimensional gesture control function of the virtual model in the augmented reality environment. The method comprises the steps of utilizing RGB-D information collected by HoloLens2 glasses, superimposing a virtual hand model on a real hand of a user, constructing a 'gripping pair' condition according to the physical process characteristics of the gripping of a real object and an augmented reality environment, constructing a gripping intention recognition algorithm based on the 'gripping pair', and recognizing whether two hands grip virtual grid points or release the virtual grid points.
The manual grid point arrangement submodule manually adjusts the positions of grid points based on the "gripping pair" function of the freehand human-computer interaction submodule, corrects the grid point layout information, and finally generates a virtual grid fitting the production line surface.
And the virtual grid generation submodule is used for triangularizing the space convex body polygon by utilizing a graphical algorithm according to a grid vertex set obtained in the manual grid point arrangement submodule to obtain a triangular vertex index set. Then, a combination of triangles is generated through a graphical algorithm, and a virtual grid which is suitable for the surface of an actual production line is generated.
The detection module addresses the distance measurement and interference verification requirements of specific body-in-white and production line suitability checks, based on the AR body-in-white and production line suitability verification environment created by the verification environment construction module and the freehand human-computer interaction assisted verification module. The detection module comprises an AR ranging submodule, a collision detection submodule and a verification result visual output submodule.
And the AR ranging submodule is used for ranging at risk points of the white body and the surrounding production line environment based on the virtual grid boundary points of the production line surface obtained by the man-machine interaction auxiliary verification module and detection points on the predefined white body model. Based on the constructed 'grabbing pair' condition, the user grabs the two detection points by hands and places the two detection points at the risk points of the white body and the production line respectively. Based on the space anchor of the HoloLens2 glasses, the positions of the two detection points are converted into a unified coordinate system, and vector operation is carried out to obtain the space distance. The method comprises the steps of realizing risk point ranging of a body-in-white and a real production line environment through an AR distance measuring method, and obtaining the space distance of the ranging points of the real environment by utilizing the mapping risk point ranging in the AR environment so as to ensure a safe operation space.
And the collision detection submodule is used for detecting the collision between the body-in-white model and the production line grid based on a space convex polygonal grid model which is established for fitting the surface of the production line in the bare-handed human-computer interaction auxiliary verification module. And adding a uniform collider for the production line fitting grid model and the white body model by using a Unity3D engine to realize interference verification of the white body and the production line, setting event response by combining a trigger, and recording the position and the depth of collision.
The test result visual output sub-module is used for displaying an interactive menu by defining a convenient user interface, namely, a HoloLens2 glasses camera identifies a stretching gesture to display the interactive menu based on the gesture identification function of the freehand man-machine interaction sub-module, and the menu page contains real-time distance information and a collision detection result report.
As shown in fig. 8, the embodiment further discloses a method for verifying the suitability of a body-in-white and a production line in an augmented reality environment, which specifically comprises the following steps:
Step one: create a virtual-real fused body-in-white and production line suitability verification environment. The user wears the HoloLens 2 glasses and scans and identifies a preset marker, thereby registering the body-in-white model for display on the production line. The position of the model is determined relative to the two-dimensional code coordinate system, and the position of the two-dimensional code preset on the production line is determined, realizing the positioning of the body-in-white model relative to the production line.
Step two: establish a "gripping pair" condition according to the physical characteristics of real object grasping and the human-computer interaction characteristics of the augmented reality environment. Based on a collision detection algorithm, whether contact occurs between the virtual hand model and the other virtual models to be manipulated is calculated in real time for every frame. According to the intention recognition algorithm, it is then calculated whether any of the contact points between the hands and the manipulated model can form a "gripping pair", judging whether a gripping condition exists between the two hands and the virtual model. A "gripping pair" consists of two contact points; if at least one "gripping pair" exists, the gripped virtual model is judged to be in the gripping state. This gesture recognition method is better suited to complex gesture interaction scenarios, better matches the user's intuitive interactive feel, and improves the robustness, flexibility, efficiency and immersion of gesture interaction intention recognition.
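The exact pairing condition is not spelled out in this step; one plausible reading, assumed here purely for illustration, is that two contact points form a "gripping pair" when their contact normals roughly oppose each other, so the hands can clamp the model between them:

```python
import numpy as np

def is_gripping_pair(p1, n1, p2, n2, cone_deg=30.0):
    """Assumed pairing test (not from the patent): each contact normal
    must point toward the other contact point within a cone threshold,
    so the two fingers/hands can clamp the model between them."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    d = d / np.linalg.norm(d)
    cos_t = np.cos(np.radians(cone_deg))
    return bool((np.asarray(n1) @ d) > cos_t and (np.asarray(n2) @ -d) > cos_t)

def is_grasped(contacts):
    """The model is judged to be in the gripping state if at least one
    'gripping pair' exists among the contacts, each contact given as
    (position, contact normal pointing into the model)."""
    for i in range(len(contacts)):
        for j in range(i + 1, len(contacts)):
            (p1, n1), (p2, n2) = contacts[i], contacts[j]
            if is_gripping_pair(p1, n1, p2, n2):
                return True
    return False
```

Under this assumption, contacts on opposite faces of the model form a pair, while two contacts on the same face do not, matching the claim that pair formation rather than raw contact count decides the gripping state.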
Step three: through freehand human-computer interaction assisted verification with the HoloLens 2, the user defines grid points at key positions on the production line surface by hand, creating a grid model that conforms to the production line scene. Detection points (spheres) are preset in the scene, a hand menu bar button is defined to generate them, and each sphere is dragged to the production line edge through gesture interaction, laying out a series of vertex sets fitting the production line surface. Triangular patches are then generated from the polygon vertex set by a graphical algorithm, and the mesh model fitting the production line surface is constructed from the combination of triangular patches.
Step four: AR distance measurement. The distance between the production line and a body-in-white risk point is measured by defining detection points. Detection points are defined through a combination of user menu selection and gesture definition: detection points on the body-in-white model are predefined and selected through user-interface gestures, while the production line detection point (a sphere) is dragged to the corresponding position through gesture interaction. Both detection points lie in a unified spatial coordinate system, and their spatial distance is computed by vector operations. The ranging information can be viewed in real time through the hand menu.
Step five: collision detection. During body-in-white movement and positioning, collision positions are detected by a collision detection algorithm and interference verification against the gluing line is performed. Based on the positioning of step one and the production line grid model created through the freehand human-computer interaction assisted verification of steps two and three, collision bodies are added to the grid model and the body-in-white with the Unity3D physics engine; interference with the production line is detected in real time as the body moves, and visual feedback of the interference results is provided.
Through the verification process, the body-in-white structure design and the production line layout are further adjusted to ensure the adaptability of the body-in-white and the glue coating line.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. The utility model provides a white automobile body and production line suitability verification system based on augmented reality which characterized in that: the system comprises a verification environment construction module, a freehand human-computer interaction auxiliary verification module and a detection module;
the verification environment construction module is used for constructing a virtual-real fused verification environment by utilizing AR equipment to realize the movement of the virtual body-in-white model on the actual production line;
the body-in-white is fixedly connected with a slideway of a production line through a lifting appliance, and the body-in-white and the lifting appliance are relatively fixed; therefore, the virtual-real combination of the virtual body-in-white and the real production line is realized by binding the virtual body-in-white model and the real hanger; considering the computational power limit of the AR equipment and the requirement of accurate virtual-real fusion, the verification environment construction module is realized by adopting a mark-based tracking technology; a static plane image is imported into the system as a mark object, and the position of the virtual body-in-white model is defined according to a mark coordinate system; the position of the body-in-white relative to the production line is determined by the position of the model relative to the markers and the arrangement of the markers on the spreader; the AR equipment identifies the marked graph to display a virtual body-in-white model; an operator sees the whole virtual reality fusion scene, and visually verifies the trafficability of the white car body on the production site on the AR equipment; the verification environment construction module is a functional basis of a subsequent verification module, and the subsequent verification module carries out distance measurement and collision detection on the basis of an accurate virtual-real fusion environment established by the verification environment construction module;
the man-machine interaction auxiliary verification module develops a free-hand man-machine interaction function based on AR equipment, and develops the man-machine interaction auxiliary verification module by combining the free-hand man-machine interaction function, a virtual grid for fitting the surface of a production line is generated based on a graphical algorithm by arranging virtual grid points on the production line site, so that light-weight instant modeling of the production line is realized, site correction grid point layout information is fused on the basis of computer visual perception information, and the perception precision of the production line environment is improved; the man-machine interaction auxiliary verification module comprises a free-hand man-machine interaction sub-module, a manual grid point arrangement module and a virtual grid generation module;
the free-hand man-machine interaction submodule is used for realizing a three-dimensional gesture control function of the virtual model in the augmented reality environment; the method comprises the steps of utilizing RGB-D information collected by AR equipment, superposing a virtual hand model on a real hand, constructing a 'gripping pair' condition according to the characteristics of a gripping physical process of the real object and an augmented reality environment, constructing a gripping intention recognition algorithm based on the 'gripping pair', and recognizing whether two hands grip the virtual model or release the virtual model; detecting two hands and other virtual models to be contacted based on a collision detection algorithm, calculating whether a 'gripping pair' can be formed between the hands and a plurality of contact points of a manipulated model according to a gripping intention recognition algorithm, judging whether a gripping condition exists between the two hands and the virtual models, wherein the 'gripping pair' consists of two contact points, if more than one 'gripping pair' exists, the gripped virtual models are judged to be in a gripping state, judging whether gripping is finished based on contact calculation of the contact points is not needed, so that gripping intention judgment is more flexible, the gripping pair is closer to a real three-dimensional gesture manipulation condition, the gripping pair is more suitable for a complex gesture interaction scene and more accords with visual interaction feeling of a user, and meanwhile, if more pairs of 'gripping pairs' exist, the contact points forming the gripping pair all participate in interaction intention recognition, and the robustness, flexibility, efficiency and immersion feeling of the gesture interaction intention recognition are improved;
the manual grid point arrangement submodule adjusts the positions of grid points and corrects the grid point layout information based on the "gripping pair" function of the freehand human-computer interaction submodule; the mesh generated by fitting on the production line is a spatial convex polygon formed by triangular patches; the triangular patch is the basic unit used by a computer to create a planar mesh; the elements for generating a triangular patch comprise a vertex coordinate set and a triangle vertex index set, the triangle vertex index set being the vertex arrangement to be processed, so a vertex set adapted to the production line surface, namely the grid point set, is obtained first; a virtual sphere object, whose coordinate system is uniform and easy to obtain in the AR environment, is predefined as a grid point, and the sphere is generated and dragged in real time onto the edge of the convex body on the production line surface using the "gripping pair" of the freehand human-computer interaction submodule; the vertex set required for mesh fitting is realized by arranging a series of sphere vertices;
the virtual grid generation submodule is used for triangularizing the space convex polygon by utilizing a graphical algorithm according to a grid vertex set obtained in the manual grid point arrangement submodule to obtain a triangular vertex index set; the triangularization treatment is to decompose a polygon into a plurality of triangles, and the vertexes form the polygon; considering that the vertex sequence defined by the manual work is the required direct modeling sequence, the triangle vertex index set is established according to the adjacency principle; then generating a combination of triangles through a graphical algorithm to generate a virtual grid adapted to the surface of the actual production line;
the detection module detects the suitability verification requirement of the specific body-in-white and the production line based on the AR body-in-white and production line suitability verification environment created by the verification environment construction module and the bare-handed human-computer interaction auxiliary verification module; the detection module comprises an AR ranging submodule, a collision detection submodule and a verification result visual output submodule;
the AR ranging submodule is used for ranging at risk points of the white body and the surrounding production line environment based on the boundary points of the virtual grid on the production line surface obtained by the man-machine interaction auxiliary verification module and detection points on the predefined white body model; the two detection points are all uniform spherical objects with coordinate attributes, the positions of the two detection points are converted into a uniform coordinate system based on a space anchor of the AR equipment, and vector operation is carried out to obtain a space distance; the risk point ranging between the body-in-white and the real production line environment is realized through an AR distance measuring method, the space distance of the ranging point of the real environment is obtained by utilizing the mapping risk point ranging under the AR environment, the problems that the existing space ranging method cannot deal with shielding and the measuring point cannot be obtained under the environment complex condition are effectively avoided, the space distance measurement which is easy to operate and accurate is realized, and the safe operation space is ensured;
the collision detection submodule is used for carrying out collision detection between the body-in-white model and the production line grids on the basis of a space convex polygonal grid model which is established for fitting the production line surface in the bare-handed human-computer interaction auxiliary verification module; a 3D development physical engine is utilized, a uniform collider is added for a production line fitting grid model and a body-in-white model, collision detection is achieved, event response is set by combining a trigger, and the position and the depth of collision are recorded;
the verification result visual output submodule receives in real time, during the verification performed by the ranging submodule and the collision detection submodule, the detection-point distance information output by the ranging submodule and the interference position and collision depth information output by the collision detection module, presents this ranging and collision verification information through a spatial visual user interface, and supports both instant viewing and display of a summary detection report; in the invention, based on the gesture recognition function of the freehand human-computer interaction submodule and a conveniently defined user interface, the AR device camera displays an interactive menu when it recognizes the stretching gesture, and the menu page contains the real-time distance information and the collision detection result report.
2. The augmented-reality-based body-in-white and production line suitability verification system according to claim 1, wherein the AR device is a HoloLens 2.
3. The augmented-reality-based body-in-white and production line suitability verification system according to claim 2, wherein the physics engine is the Unity3D physics engine.
4. An augmented-reality-based body-in-white and production line suitability verification method, implemented on the augmented-reality-based body-in-white and production line suitability verification system according to claim 1, 2 or 3, characterized by comprising the following steps:
step one: create a virtual-real fused body-in-white and production line suitability verification environment; the user wears the AR device, which scans and recognizes a preset two-dimensional code marker and renders the body-in-white model on the production line; the model's position is determined relative to the coordinate system created from the two-dimensional code, and since the code's position in the production line is preset, this localizes the body-in-white model relative to the production line;
step two: use the AR device to collect RGB-D information and recognize the key joints of both hands, superimpose virtual hand models, and determine the virtual hand models' positions and poses from those of the key joints, mapping the real hands into the virtual space;
step three: establish the "grip pair" condition from the physics of grasping real objects and the human-computer interaction characteristics of the augmented reality environment; in every frame, a collision detection algorithm computes in real time whether the virtual hand model contacts any manipulated virtual model; the grasping intention recognition algorithm then judges whether a grasp exists between the hands and the virtual model by testing whether the contact points between the hand and the manipulated model can form grip pairs; a grip pair consists of two contact points, and if at least one grip pair exists, the grasped virtual model is judged to be in the grasped state; this approach suits complex gesture-interaction scenes, matches the user's intuitive sense of interaction, and improves the robustness, flexibility, efficiency and immersion of gesture-intention recognition;
the "grip pair" is formed by two contact points between a qualifying virtual hand model and the grasped model; the "grip pair" condition is as follows: if the angle between the line connecting two contact points and the contact-surface normal at each point does not exceed a fixed angle α, the two contact points form a stable grip pair g(a, b); the fixed angle α is the friction angle;
the grasping intention recognition algorithm is established from the "grip pair" condition: it cyclically judges whether each current contact point can form a grip pair with another contact point; for any two contact points a and b between the virtual hand and the virtual object within one cycle of judgment, the grip pair g(a, b) should satisfy
∠(n_a, l_ab) ≤ α and ∠(n_b, l_ba) ≤ α (1)
where n_a and n_b are the normal vectors at contact points a and b, namely the normals of the fitted cylindrical surface of the virtual model at the contact points; l_ab is the line connecting contact points a and b, and l_ba its reverse; α is the friction angle, whose value must be tuned by testing for the specific manipulated model so that virtual parts are grasped stably and naturally;
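The grip-pair test above can be sketched as follows; the contact positions, inward normals and friction angle are inputs, and all function and variable names are illustrative (the patent does not prescribe an implementation):

```python
import math

def _angle(u, v):
    """Angle in radians between two 3-D vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def is_grip_pair(pa, na, pb, nb, alpha):
    """True if contacts a, b form a stable grip pair g(a, b).

    pa, pb -- contact positions; na, nb -- contact-surface normals.
    The line a->b must lie within the friction angle alpha of na,
    and the reversed line b->a within alpha of nb.
    """
    l_ab = tuple(b - a for a, b in zip(pa, pb))
    l_ba = tuple(-c for c in l_ab)
    return _angle(l_ab, na) <= alpha and _angle(l_ba, nb) <= alpha
```

Two opposing contacts (thumb and finger pinching a cylinder) pass the test; a sideways contact whose normal is nearly perpendicular to the contact line fails it.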
step four: construct a grasp-center acquisition method from the "grip pair" condition built in step three, and obtain the grasp center; if the virtual model is judged to be in the grasped state by the grasping intention recognition algorithm of step three, the manipulation intention recognition algorithm computes the virtual force or torque that the two hands exert on the virtual model from the displacement and pose change of the grasp center relative to the manipulated model, and this virtual force or torque drives the virtual model to translate or rotate; with the manipulation intention recognition algorithm and the added grasp-center judgment condition, all contact points participate in manipulation-intention recognition, making recognition more flexible and more robust;
the grasp center is the point that represents the motion of the whole hand: the whole hand is regarded as a single rigid body, and the position, pose and velocity of the grasp center represent the motion parameters of the entire virtual hand;
the grasp center is determined as follows: judge the positions and number of grip pairs according to the "grip pair" condition constructed in step three; regard each grip pair as a single rigid body whose position and pose are represented by the grasp center; if exactly one grip pair exists, the grasp center is the midpoint of the line connecting the contact points forming the grip pair, and the grasp-center position and pose are calculated as follows:
Pc = (p1 + p2) / 2 (3)
(wc, rc, lc) = (∠(l_12, x̂), ∠(l_12, ŷ), ∠(l_12, ẑ)) (4)
where Pc represents the grasp-center position; p1 and p2 represent the positions of the contact points constituting the "grip pair"; wc, rc and lc respectively represent the three Euler-angle parameters of the grasp center; l_12 is the line connecting the two contact points; and x̂, ŷ and ẑ are the unit vectors of the x, y and z axes of the current coordinate system;
if multiple grip pairs exist, compare the lengths of their contact-point connecting lines, take the pair with the longest line as the main grip pair, and construct the grasp center from it according to formulas (3) and (4);
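A sketch of the grasp-center construction, under the simplification that orientation is expressed as the angles between the contact line and the coordinate axes (names and this simplification are illustrative assumptions):

```python
import math

def grasp_center(p1, p2):
    """Grasp-center position (formula 3) and an orientation estimate
    (formula 4): angles between the contact line and the x, y, z axes."""
    pc = tuple((a + b) / 2 for a, b in zip(p1, p2))
    line = tuple(b - a for a, b in zip(p1, p2))
    norm = math.sqrt(sum(c * c for c in line)) or 1.0
    angles = tuple(math.acos(max(-1.0, min(1.0, c / norm))) for c in line)
    return pc, angles

def main_grip_pair(pairs):
    """Among several grip pairs [(p1, p2), ...], pick the one with the
    longest contact line as the main grip pair (step-four rule)."""
    return max(pairs, key=lambda pr: sum((b - a) ** 2
                                         for a, b in zip(pr[0], pr[1])))
```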
step 4.1: judge whether the grip pair satisfies the "grip pair" cancellation condition; if so, the user is considered to have put down the manipulated virtual model, the subsequent steps are not executed, and the virtual model's position and pose are not updated in the next frame; if not, execute step 4.2;
the "grip pair" cancellation condition is calculated as follows:
d_ab^(i) / d_ab^(i-1) > k (5)
where d_ab^(i) is the distance between the two contact points constituting the "grip pair" in the current i-th frame, d_ab^(i-1) is that distance in the (i-1)-th frame, and k is a fixed value; that is, when the two contact points constituting the "grip pair" move apart between two frames and the degree of separation exceeds this threshold, the grip is considered cancelled;
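The cancellation test of formula (5) is a one-line ratio check; the default threshold k used below is an illustrative value, since the patent leaves k as a fixed constant to be tuned:

```python
def grip_cancelled(d_curr, d_prev, k=1.2):
    """Grip-pair cancellation condition (formula 5).

    The grip is considered released when the distance between the two
    contact points grows between consecutive frames beyond the fixed
    ratio k (k = 1.2 is an assumed illustrative default).
    """
    return d_curr / d_prev > k
```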
step 4.2: calculate the virtual force or torque exerted on the virtual model by the two hands according to the manipulation intention recognition algorithm, then continue to step 4.3; the manipulation intention recognition algorithm computes, from the pose-change trend of the grasp center, the virtual force or virtual torque that the two hands apply to the virtual model in the current frame, and from it the model's translation and rotation parameters, namely the translation direction and distance and the rotation direction and angle; with this algorithm and the added grasp-center condition judgment, all contact points participate in manipulation-intention recognition, making it more flexible and more robust;
the manipulation intention recognition algorithm is constructed from virtual linear and torsional spring-damper models, and is calculated as follows:
f_vf = K_sl (q_(i+1)l − q_il) − K_Dl v_i (6)
τ_vf = K_so Δθ(q_(i+1)o, q_io) − K_Do ω_i (7)
equation (6) calculates the virtual force, where f_vf denotes the virtual manipulation force; equation (7) calculates the virtual torque, where τ_vf denotes the virtual manipulation torque; the pose of the two-hand grasp center at the current i-th frame is denoted (q_il, q_io), and at the (i+1)-th frame (q_(i+1)l, q_(i+1)o), where q_il is the three-dimensional position of the hand at the i-th frame and q_io is the quaternion describing the hand orientation; Δθ(q_(i+1)o, q_io) is the rotation, in axis-angle form, from orientation q_io to q_(i+1)o; v_i and ω_i are the linear and angular velocities of the manipulated virtual model at the i-th frame; K_sl (K_so) and K_Dl (K_Do) are the coefficients of the linear (torsional) spring-damper models; by tuning the K_sl (K_so) and K_Dl (K_Do) coefficients, stable and smooth dynamic motion of the virtual part is achieved, matching the user's intuitive sense of interaction;
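A minimal sketch of the linear and torsional spring-damper models of formulas (6) and (7), with the quaternion orientation error simplified to an axis-angle vector (all names and this simplification are assumptions, not the patent's exact formulation):

```python
def virtual_force(p_next, p_curr, v_model, k_sl, k_dl):
    """Linear spring-damper (formula 6): force pulling the model toward
    the grasp center's new position, damped by the model's velocity."""
    return tuple(k_sl * (pn - pc) - k_dl * v
                 for pn, pc, v in zip(p_next, p_curr, v_model))

def virtual_torque(theta_err, w_model, k_so, k_do):
    """Torsional spring-damper (formula 7): torque from the orientation
    error (given here as an axis-angle vector), damped by the model's
    angular velocity."""
    return tuple(k_so * th - k_do * w
                 for th, w in zip(theta_err, w_model))
```

In practice the four coefficients are tuned per model, as the patent notes, so that the part follows the hands smoothly without oscillating.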
step 4.3: from the virtual force or torque calculated by the manipulation intention recognition algorithm in step 4.2, calculate the displacement variation and rotation variation of the virtual model using rigid-body dynamics; update the manipulated virtual model's position and pose in the current frame according to the displacement and rotation variations, and render the virtual model at the new position and pose;
the displacement variation is calculated as follows:
S_i = v_i Δt + (f_vf / 2m) Δt² (8)
ΔT_i = Trans(S_iz, S_iy, S_ix) (9)
where S_i represents the displacement of the manipulated virtual model at the current i-th frame, v_i the velocity of the manipulated virtual model at the current i-th frame, Δt the time difference between the current i-th frame and the next (i+1)-th frame, f_vf the virtual manipulation force identified by the manipulation intention recognition algorithm, and m the mass of the manipulated virtual model; ΔT_i is the displacement matrix of the virtual model, and Z, Y and X denote the axes of the coordinate system in the augmented reality environment;
the rotation variation is calculated as follows:
θ_i = ω_i Δt + (τ_vf / 2J) Δt² (10)
ΔR_i = R_z(θ_iz) R_y(θ_iy) R_x(θ_ix) (11)
where θ_i represents the rotation angle of the manipulated virtual model at the current i-th frame, ω_i its angular velocity at the i-th frame, τ_vf the virtual manipulation torque identified by the manipulation intention recognition algorithm, Δt the time difference from the current i-th frame to the next (i+1)-th frame, J the moment of inertia of the manipulated virtual model, ΔR_i the rotation matrix of the virtual model, and θ_iz, θ_iy and θ_ix the components of the rotation angle θ_i about the z, y and x axes of the augmented reality environment coordinate system;
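The per-frame pose update of formulas (8) and (10) can be sketched as constant-acceleration integration over one frame (function names are illustrative; the moment of inertia is treated as a scalar for simplicity):

```python
def displacement(v_i, f_vf, m, dt):
    """Displacement over one frame (formula 8):
    s = v*dt + f/(2m)*dt^2, per axis."""
    return tuple(v * dt + f / (2.0 * m) * dt * dt
                 for v, f in zip(v_i, f_vf))

def rotation_angle(w_i, tau_vf, inertia, dt):
    """Rotation angle over one frame (formula 10):
    theta = w*dt + tau/(2J)*dt^2, per axis."""
    return tuple(w * dt + t / (2.0 * inertia) * dt * dt
                 for w, t in zip(w_i, tau_vf))
```

The resulting per-axis angles θ_iz, θ_iy, θ_ix would then be fed into the Z-Y-X rotation composition of formula (11).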
step five: manually lay out grid points; predefine virtual sphere objects as grid points and, using the "grip pair" created in step three, drag the virtual spheres in real time onto the edges of the convex bodies on the production line surface; laying out a series of sphere vertices produces the vertex set required for mesh fitting; still based on the "grip pair" constructed in step three, manually adjust the grid-point positions and correct the grid-point layout to obtain a grid-point set that accurately fits the production line surface;
step six: generate a virtual mesh fitting the convex surfaces of the production line with a graphics algorithm; based on the mesh vertex set obtained by manually laying out grid points in step five, triangulate the virtual mesh fitting the production line's convex surfaces to obtain the triangle vertex index set; triangulation decomposes a polygon into several triangles whose vertices together form the polygon; because the manually defined vertex order is already the required modelling order, the triangle vertex index set is built by the adjacency principle; the graphics algorithm then generates the combination of triangles, producing a virtual mesh adapted to the actual production line surface;
the mesh vertex set and triangle vertex index set are parameterised as follows:
{V_0, V_1, V_2, ..., V_i, ...}, V_i ∈ polygon (12)
{Δ_0, Δ_1, ..., Δ_i, ...}, Δ_i = (V_i^0, V_i^1, V_i^2) (13)
where polygon represents the set of virtual mesh vertices and V_i the i-th grid point; Δ_i denotes the i-th triangle constituting the virtual mesh, and (V_i^0, V_i^1, V_i^2) the index triple of that triangle's three vertices;
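Since the manually placed vertices are already in modelling order around a convex boundary, the triangle vertex index set of formula (13) can be generated by a simple fan over the polygon (one assumed concrete reading of the "adjacency principle"):

```python
def triangulate_fan(n_vertices):
    """Triangle vertex-index set for a convex polygon with vertices
    V_0..V_{n-1} given in order: n-2 triangles fanned from V_0."""
    return [(0, i, i + 1) for i in range(1, n_vertices - 1)]

# a convex pentagon V_0..V_4 yields three index triples
indices = triangulate_fan(5)  # -> [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
```

Each index triple (V_i^0, V_i^1, V_i^2) is exactly the per-triangle entry of formula (13), in a form a mesh API can consume directly.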
step seven: measure the distance between the glue gun and the gluing position on the body-in-white by defining detection points; detection points are defined through a combination of user-menu selection and gestures: detection points on the body-in-white model are predefined and selected by gesture through the user interface, while the production line detection point is dragged to the corresponding position with the "grip pair" created in step three; the two detection points lie in a unified spatial coordinate system, and their spatial distance is calculated by vector operations; based on the gesture recognition function of step four, when the AR device camera recognizes the stretching gesture it displays an interactive menu whose page contains the real-time distance information; this AR distance measurement realizes risk-point ranging between the body-in-white and the real production line environment: the spatial distance between real-environment ranging points is obtained by ranging their mapped risk points in the AR environment, which avoids the occlusion problem and the inaccessibility of measuring points in complex environments that limit existing spatial ranging methods, realizes easy and accurate spatial distance measurement, and guarantees a safe operating space;
step eight: during body-in-white movement and positioning, detect collision positions with a collision detection algorithm and perform interference verification against the gluing production line; based on the AR verification environment created in step one and the production line virtual mesh model created in step six, use the physics engine to add colliders to the mesh model and the body-in-white, detect interference with the production line in real time while the body moves, set event responses with triggers, record the position and depth of each collision, and give visual feedback on interference results, thereby realizing augmented-reality-based body-in-white and production line suitability verification.
5. The augmented-reality-based body-in-white and production line suitability verification method according to claim 4, wherein in step eight:
the collision detection algorithm model is represented by the following equation:
A+B={a+b|a∈A,b∈B} (14)
where A and B denote the point sets on convex bodies A and B, and a and b are points of A and B respectively;
A-B={a-b|a∈A,b∈B} (15)
formula (15) is called the Minkowski difference: when convex bodies A and B overlap or intersect, their difference set {a − b} necessarily contains the origin; for body-in-white and production line collision detection, interference is therefore verified by judging whether the difference set of the body-in-white model point set and the production line virtual mesh point set contains the origin; if it does, a collision has occurred;
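A small 2-D sketch of the Minkowski-difference test of formula (15): for convex polygons the difference set is the convex hull of the pairwise vertex differences, so the collision test reduces to an origin-in-hull check (a brute-force illustration, not the physics engine's actual algorithm):

```python
def _cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _hull(points):
    """Andrew's monotone-chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def collide(A, B):
    """Formula (15): convex sets A and B intersect iff the Minkowski
    difference {a - b} contains the origin.  A and B are vertex lists
    of 2-D convex polygons."""
    diff = [(a[0] - b[0], a[1] - b[1]) for a in A for b in B]
    hull = _hull(diff)
    if len(hull) < 3:
        return (0.0, 0.0) in diff  # degenerate hull
    n = len(hull)
    # origin is inside a CCW hull iff it is left of every edge
    return all(_cross(hull[i], hull[(i + 1) % n], (0.0, 0.0)) >= 0
               for i in range(n))
```

Production engines replace this brute-force hull with iterative support-function methods (e.g. GJK), but the origin-containment criterion is the same.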
through the above verification process, augmented reality technology realizes efficient, low-cost verification of vehicle-body and production line suitability at the body-in-white design stage, provides immediate feedback for body-in-white structure optimization and production line layout optimization, and guides further adjustment of both to improve the suitability of the body-in-white and the production line.
CN202211151732.1A 2022-09-21 2022-09-21 System and method for verifying suitability of body-in-white and production line based on augmented reality Pending CN115481489A (en)


Publication: CN115481489A, 2022-12-16

Family ID: 84423548

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116092214A (en) * 2023-04-11 2023-05-09 海斯坦普汽车组件(北京)有限公司 Synchronous monitoring method and system for production of lightweight body-in-white assembly



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination