CN112905017A - Multi-person collaborative dismounting system based on gesture interaction

Info

Publication number
CN112905017A
CN112905017A (application number CN202110302514.2A)
Authority
CN
China
Prior art keywords
virtual
grabbing
task
gesture
hand
Prior art date
Legal status
Pending
Application number
CN202110302514.2A
Other languages
Chinese (zh)
Inventor
胡兆勇
孙淑栓
余彦峰
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110302514.2A
Publication of CN112905017A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/024 - Multi-user, collaborative environment
    • G06T 2219/20 - Indexing scheme for editing of 3D models
    • G06T 2219/2008 - Assembling, disassembling

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a multi-user collaborative disassembly and assembly system based on gesture interaction, which comprises a server side and a plurality of clients, wherein the server side is provided with a central control server and each client is provided with a graphics workstation, a helmet-mounted display and a somatosensory controller. The central control server is provided with: a data analysis module for reading the imported model file and data file; a collaborative disassembly and assembly process model building module for building a collaborative disassembly and assembly process model from those files and initializing the executable disassembly and assembly tasks; a collaborative disassembly and assembly process model processing module for updating the executable tasks as users complete disassembly and assembly tasks; an interactive process processing module for realizing the virtual assembly scene from user input and controlling the pose of virtual objects by tracking the pose change of the virtual hand between adjacent frames; a collaborative interaction module for synchronizing the scene of each client; and a human-computer interaction module for realizing interaction between the user and the central control server.

Description

Multi-person collaborative dismounting system based on gesture interaction
Technical Field
The invention relates to the technical field of virtual reality, in particular to a multi-person collaborative dismounting system based on gesture interaction.
Background
With the rapid development of virtual reality technology, virtual reality technology has been widely applied in various fields. As one of the mainstream applications of the virtual reality technology, the virtual assembly technology can effectively solve the problem of dependence on fields, resources and personnel in actual training, and is widely applied to industrial training and teaching.
The existing virtual disassembly and assembly training system has the following defects:
Most of them are single-user training systems, whereas in actual training and learning there are situations that require professional guidance as well as situations in which disassembly and assembly must be carried out cooperatively.
Human-computer interaction is simplistic: interaction through a mouse and keyboard or a handheld controller cannot reflect real, natural interactive operation, cannot accurately track and recognize the user's complex disassembly and assembly actions, and cannot meet the requirements of multi-user collaborative disassembly and assembly training on complex equipment.
Such systems lack universality and can only be applied to a specific piece of equipment.
Disclosure of Invention
The invention solves or partially solves the defects in the prior art, and provides a gesture interaction-based multi-user collaborative dismounting system.
A gesture-interaction-based multi-person collaborative disassembly and assembly system comprises a server side and a plurality of clients. The server side has a central control server; each client has a graphics workstation networked to the central control server, a helmet-mounted display connected to the graphics workstation for collecting the user's viewing-angle information, and a somatosensory controller for collecting the user's hand posture information. The central control server is provided with:
the data analysis module is used for reading the imported model file and the imported data file;
the collaborative dismounting process model building module is used for building a collaborative dismounting process model according to the model file and the data file at the beginning of system operation and initializing an executable dismounting task in the collaborative dismounting process model;
the collaborative dismounting process model processing module is used for updating the newly added executable dismounting task according to the completion of the dismounting task by the user;
the interactive process processing module is used for realizing a virtual assembly scene according to the input of a user based on the helmet display and the somatosensory controller and realizing the pose control of a virtual object by tracking the pose change of a virtual hand between adjacent frames;
the collaborative interaction module is used for realizing scene synchronization of each client;
and the human-computer interaction module is used for realizing the interaction between the user and the central control server.
Further, the collaborative disassembly and assembly process model includes:
the disassembly and assembly task sequence is used for controlling the execution sequence of the disassembly and assembly tasks by a directed graph constructed by the task numbers of the disassembly and assembly tasks and a preceding task set;
the executable task set consists of executable tasks and is updated according to the dismounting task sequence when the dismounting task is completed;
the disassembly and assembly task comprises a task number, a step number set used for associating the task with the disassembly and assembly step, a task flag bit used for marking the completion degree of the corresponding disassembly and assembly step, and a prior task set used for indicating that the task set needs to be completed first when the task can be executed;
and the disassembling and assembling step comprises a step number, a step object used for representing parts to be disassembled and assembled, an interactive process used for representing the operations to be disassembled and assembled, disassembling and assembling information used for prompting to assist disassembling and assembling training, a tool type used for representing tools needed by the execution step, and a grabbing gesture used for representing the gestures needed by the execution step.
Further, the number of disassembly and assembly steps contained in a task indicates whether the task needs to be executed cooperatively by multiple persons.
Further, the interaction process includes a single-handed interaction object model, and the single-handed interaction object model includes:
a virtual hand or tool;
parts are required to be disassembled and assembled;
the part pose constraint is used for specifying the constraint condition of the part pose change;
judging the state of the part, wherein the state of the part comprises a grabbing state allowing to implement drive control and a normal state forbidding to implement drive control, detecting whether a grabbing gesture is correct when a virtual hand or a tool collides with the part, if so, entering the grabbing state, and entering a strict grabbing state or a semi-grabbing state according to the constraint condition, wherein the posture change of the part in the strict grabbing state is completely consistent with the posture change of the virtual hand, and the posture change of the part in the semi-grabbing state is controlled by the virtual hand and corresponds to the constraint condition; when the gesture is judged to be the releasing gesture, the state of the control part is changed from the grabbing state to the normal state;
and the driving control is used for realizing the pose change of the part, controlling the part to start tracking the pose change of the virtual hand or the tool after entering a grabbing state, and adjusting the pose according to the constraint condition until the step is finished.
Further, the tracking tracks the previous-frame position and the current-frame position of the virtual hand or the tool; when the virtual hand is tracked, the Euler angle change of the virtual hand in the palm direction is tracked, and when a tool is tracked, the Euler angle change of the tool in the tool rotation direction is tracked.
Further, the pose constraint of the part comprises a position constraint and a posture constraint, wherein the posture constraint is divided into a rotation posture constraint and a revolution posture constraint according to the position of the rotation axis, and the constraints are defined with Ti, Ri and RVij as parameters: Ti indicates that the part is constrained in spatial position and can only move in the forward or reverse direction of a spatial unit vector i; Ri indicates that the part is constrained in rotation posture and can only rotate about an axis through the part center with vector i as the axial direction; and RVij indicates that the part is constrained in revolution posture and can only revolve about an axis located at position vector j with vector i as the axial direction.
Furthermore, a double-hand interaction object model is expanded on the basis of the single-hand interaction object model in the interaction process, the double-hand interaction object model further comprises a virtual hand A, a virtual hand B, a touch point PosA, a touch point PosB and a virtual operating hand,
the virtual manipulator is an invisible virtual manipulator generated when the part is judged to be in a grabbing state, is controlled jointly by the virtual hands A and B, is positioned in the middle between the virtual hands A and B, and has a designated axis always pointing from A to B or from B to A;
the touch points PosA and PosB are positions of the virtual hand A, B when the part is judged to be in the grabbing state, and are bound with the pose of the part after being generated to assist in judging the state of the part;
further, the detection method of the two-hand interaction object model further includes:
when the virtual hands A, B are all collided with the parts, detecting whether the gesture of the virtual hand A, B is correct, continuously detecting whether the grabbing meets the reality when the gesture is correct, if so, generating touch points PosA, PosB and a virtual operating hand, and changing the parts from a normal state to a grabbing state and driving and controlling the parts by the virtual operating hand;
the method for detecting whether the grabbing conforms to the reality comprises the following steps:
and when the constraint type is Ti or RVij or no constraint, respectively carrying out grabbing distance detection and grabbing angle detection, wherein the grabbing distance detection is to judge whether the distance of the virtual hand A, B is greater than the length-width average value of the minimum bounding box of the part, and the grabbing angle detection is to judge whether the grabbing angle alpha is greater than a set threshold angle.
Further, the detection method of the two-hand interaction object model further includes: and when the distance between the virtual hand and the corresponding touch point is larger than d, the state of the part is restored to the normal state.
Further, the somatosensory controller is LeapMotion, and the method further comprises the following gesture feature extraction steps:
s1, extracting three overall features and four local features, wherein the three overall features are the lengths of a rectangular bounding box in three axial directions, the rectangular bounding box is a minimum bounding box containing finger feature points, the axial direction of the bounding box is determined by the palm center direction and the palm direction, and the four local features are the distances between adjacent fingertips of a virtual hand;
s2, performing normalization processing on the extracted features;
s3, acquiring and setting gesture data for the features after the normalization processing to establish a gesture database data file;
and S4, importing the gesture database data file into a system, constructing a collaborative dismounting process model based on the gesture database data file, and performing real-time gesture recognition through a K neighbor classification algorithm in the interaction process.
Beneficial effects:
1. The system has universality and can be transplanted to most disassembly and assembly scenarios: for a different disassembly and assembly case, a corresponding virtual disassembly and assembly training system can be generated quickly as long as the corresponding disassembly and assembly models are prepared and the corresponding disassembly and assembly process files are written.
2. Real, natural interactive operation can be embodied: the interaction scheme adopts gesture interaction and can restore the real interaction process to a certain extent.
3. The system supports multi-person collaborative training.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout the drawings. In the drawings:
FIG. 1 illustrates the architecture of the interactive hardware platform of the present invention;
FIG. 2 illustrates the flow of an implementation of the interactive hardware platform of the present invention;
FIG. 3 illustrates a collaborative disassembly and assembly process model of the present invention;
FIG. 4 illustrates a multi-person interactive workflow of the present invention;
FIG. 5 illustrates the one-handed interaction object model of the present invention;
FIG. 6 illustrates a one-handed interaction flow of the present invention;
FIG. 7 illustrates a two-handed interactive object model of the present invention;
FIG. 8 is a schematic diagram illustrating an interaction gesture of the present invention;
FIG. 9 shows a schematic diagram of feature extraction of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to the present invention;
fig. 11 is a schematic structural diagram of a computer-readable storage medium according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The interactive hardware platform of the present invention is shown in fig. 1. The system comprises a server side and a plurality of clients. The server side is provided with a central control server; each client consists of an HTC VIVE Pro helmet-mounted display, a Leap Motion somatosensory controller and a graphics workstation, and the network environment is a local area network. The HTC VIVE Pro and the Leap Motion are connected to the graphics workstation by cable, the graphics workstation is connected to the central control server through the network, and the Leap Motion is fixed on the helmet by an accessory mount. The HTC VIVE Pro collects the user's viewing-angle information, and the Leap Motion collects the user's hand posture information.
The implementation flow of the interactive hardware platform is shown in fig. 2, and a central control server is provided with a data analysis module, a collaborative disassembly and assembly process model construction module, a collaborative disassembly and assembly process model processing module, an interactive process processing module, a collaborative interaction module and a human-computer interaction module.
The data analysis module is used for reading the imported model file and the imported data file and preparing for building the collaborative dismounting process model.
And the collaborative dismounting process model building module builds a collaborative dismounting process model according to the model file and the data file at the beginning of system operation, and initializes an executable dismounting task in the collaborative dismounting process model.
And the collaborative dismounting process model processing module updates the collaborative dismounting process model according to the completion of the dismounting task, updates the executable task set according to the dismounting task sequence when the dismounting task is completed, initializes the newly added executable task, changes the task object into an interactive state and adds interactive prompt information.
The interactive process processing module is used for realizing a specific virtual assembly scene according to the input of a user based on the helmet display and the somatosensory controller, and realizing the pose control of a virtual object by tracking the pose change of a virtual hand between adjacent frames.
The collaborative interaction module is used for realizing the consistency of the scenes of the clients and comprises the synchronization of the poses of the virtual avatar, the synchronization of the poses of the virtual hand, the synchronization of the gesture recognition state, the synchronization of the poses of the task object and the tool and the synchronization of the interaction information.
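As an illustrative aside (not part of the original disclosure), a minimal sketch of the per-frame state such a collaborative interaction module might broadcast over the local area network is given below; the class and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SyncState:
    """One client's per-frame snapshot, broadcast over the LAN so every client renders the same scene."""
    user_id: int
    avatar_pose: tuple                                   # virtual avatar position + orientation
    hand_poses: dict                                     # left/right virtual-hand poses
    gesture_state: str                                   # currently recognized gesture
    object_poses: dict = field(default_factory=dict)     # task-object and tool poses
    interaction_info: str = ""                           # interaction prompt messages
```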
The human-computer interaction module is used for realizing interaction between a user and the central control server and mainly comprises gesture operation interaction, virtual avatar control, virtual tablet interaction and task guidance.
The invention discloses a multi-person collaborative dismounting method based on gesture interaction, which comprises the following operation flows:
step 1, training scene creation: the trainees select training subjects according to training requirements and create or join training scenes in the local area network.
Step 2, constructing a collaborative disassembly and assembly process model: after the training scene is established, a collaborative disassembly and assembly process model is constructed, and the initialization of an executable disassembly and assembly task is carried out.
Step 3, checking executable tasks: after the trainee makes the interface gesture, a step prompt panel appears near the hand; the trainee can view all disassembly and assembly tasks and the executable tasks on the panel, and can also choose to skip a certain step and jump directly to step 6. When the gesture switches to another gesture, the step prompt panel is hidden.
Step 4, virtual avatar position control: when the operation object is nearby, the trainee can simply walk to approach it; when it is far away, the trainee controls the virtual avatar to approach it by making the forward and backward gestures.
And step 5, executing a dismounting task: the students execute tasks by single person or in coordination according to task requirements, the interactive process processing module realizes translation or rotation of the virtual object by tracking pose transformation of the virtual hand between adjacent frames, and corresponding grabbing gestures need to be kept in the operation process. When the object is placed at a designated position or the operation amount reaches a threshold value, the task is completed.
Step 6, updating the collaborative disassembly and assembly process model: and when the dismounting task is completed, updating the collaborative dismounting process model, and changing the executable task set.
Step 7, repeat steps 3 to 6 until all disassembly and assembly tasks are completed.
In the collaborative disassembly and assembly process model building module, the definition of the collaborative disassembly and assembly process model is a key link. The collaborative disassembly and assembly process model is defined as shown in fig. 3, and is composed of a disassembly and assembly task sequence, an executable task set, a disassembly and assembly task set, and a disassembly and assembly step set, wherein the disassembly and assembly task sequence, the disassembly and assembly task set, and the disassembly and assembly step set are not changed after being generated, and include four basic elements:
(1) disassembling and assembling task sequences: and a directed graph constructed by the task number of each disassembly and assembly task and the prior task set is used for controlling the execution sequence of the disassembly and assembly tasks.
(2) A set of executable tasks: the system consists of executable tasks, and updates according to a disassembly and assembly task sequence when the disassembly and assembly tasks are completed.
(3) Disassembling and assembling tasks: the dismounting task consists of a task number, a step number set, a task zone bit and a prior task set. The task number is the unique identification of the dismounting task. The step number set links tasks and disassembly and assembly steps, one task at least comprises one disassembly and assembly step, and when the step number set comprises a plurality of disassembly and assembly steps, the task needs to be executed by multiple persons in a coordinated mode. The task flag bit is used for marking whether the corresponding disassembly and assembly steps are completely finished. The prior task set represents a task set which needs to be completed first when the task can be executed, and is used for generating a disassembly and assembly task sequence.
(4) Disassembling and assembling: the disassembling and assembling steps comprise step numbers, step objects, an interactive process, disassembling and assembling information, tool types and grabbing gestures. The step number is the unique identifier of the disassembly and assembly step. The step object refers to a part to be disassembled and assembled. The interactive process refers to the disassembly and assembly operation of the parts, namely, the interactive object model mentioned later or four types of special operations of free movement, display, hiding and touch, wherein the free movement operation corresponds to the unconstrained condition in the interactive object model, and the other three types of special operations are mainly used for representing inspection tasks, simplifying repeated actions, representing instantaneous actions and the like. The disassembly and assembly information is prompt information of the disassembly and assembly step and is used for assisting disassembly and assembly training. The tool type refers to the tool that should be used when performing the step. The grabbing gesture refers to a gesture that should be used when grabbing an object.
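As an illustrative aside, the collaborative disassembly and assembly process model described above, with its preceding-task directed graph and executable task set, might be sketched roughly as follows; the class and method names are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DisassemblyTask:
    task_id: int
    step_ids: list                      # associated disassembly/assembly step numbers
    prior_tasks: set                    # tasks that must be completed first
    done_steps: set = field(default_factory=set)   # plays the role of the task flag bit

    def finished(self):
        return self.done_steps == set(self.step_ids)

    def collaborative(self):
        return len(self.step_ids) > 1   # more than one step: multi-person cooperation needed

class ProcessModel:
    """Task sequence (directed graph via prior-task sets) plus the executable task set."""
    def __init__(self, tasks):
        self.tasks = {t.task_id: t for t in tasks}
        self.completed = set()
        self.executable = set()
        self._refresh()

    def _refresh(self):
        # A task becomes executable once every task in its prior set is completed.
        for t in self.tasks.values():
            if t.task_id not in self.completed and t.prior_tasks <= self.completed:
                self.executable.add(t.task_id)

    def complete_step(self, task_id, step_id):
        task = self.tasks[task_id]
        task.done_steps.add(step_id)
        if task.finished():
            self.completed.add(task_id)
            self.executable.discard(task_id)
            self._refresh()             # unlock any newly executable tasks
```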
In the interaction process processing module, the multi-person interaction workflow is as shown in fig. 4, and the specific interaction method is mainly realized according to the interaction object model provided by the invention, the interaction object model can restore the real interaction process to a certain extent, and the interaction object model comprises a one-hand interaction object model and a two-hand interaction object model.
The single-hand interaction object model is shown in fig. 5 and is composed of the virtual hand (or tool), the part, part state judgment, drive control and the part pose constraint. The single-hand interaction flow is shown in fig. 6. The pose constraint of a part is specified manually and may differ between disassembly and assembly steps; it mainly comprises a position constraint and a posture constraint, the posture constraint being subdivided into a rotation posture constraint and a revolution posture constraint according to the position of the rotation axis. Ti indicates that the part is constrained in spatial position and can only move in the positive or negative direction of a spatial unit vector i; Ri indicates that the part is constrained in rotation posture and can only rotate about an axis through the part center with vector i as the axial direction; RVij indicates that the part is constrained in revolution posture and can only revolve about an axis located at position vector j with vector i as the axial direction. Constraints can be combined according to the actual scene; for example, if a part is constrained by Ti, Tj and Rk, its position can only change on the plane formed by vectors i and j, and its posture can only rotate about the direction taking the part center as the axis and vector k as the axial direction.
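For illustration only, a minimal sketch of how a Ti/Tj-style translation constraint could filter a requested displacement is shown below, assuming the constraint axes are mutually orthogonal unit vectors; the function name is hypothetical.

```python
import numpy as np

def constrain_translation(delta, allowed_axes):
    """Keep only the components of a requested displacement that lie along the allowed
    unit vectors (a Ti, Tj, ... constraint set); an empty set means the position is fixed."""
    out = np.zeros(3)
    for axis in allowed_axes:
        axis = axis / np.linalg.norm(axis)
        out += np.dot(delta, axis) * axis
    return out

# Example: a part constrained by Ti, Tj may translate only in the i-j plane
# (a matching Rk constraint would handle its rotation separately).
i, j = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
raw_hand_motion = np.array([0.02, 0.01, 0.05])
print(constrain_translation(raw_hand_motion, [i, j]))   # z-component is removed
```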
The part state judgment is an interactive basis, the part state comprises a grabbing state and a normal state, wherein the grabbing state is divided into strict grabbing and semi-grabbing according to the constraint condition, the part can be driven and controlled only in the grabbing state, the strict grabbing means that the pose change of the part is completely consistent with the pose change of the virtual hand, and the semi-grabbing means that the pose of the part in some directions is controlled by the virtual hand and corresponds to the constraint. The method mainly comprises two parts of collision detection and gesture detection, detects whether a grabbing gesture is correct when a virtual hand (or tool) collides with a part, enters a grabbing state if the grabbing gesture is correct, and enters a strict grabbing or semi-grabbing state according to a constraint condition. When the gesture is a releasing gesture, the part state is changed from a grabbing state to a normal state and is not driven to be controlled any more, and whether gravity falling needs to be simulated or not is judged according to the constraint condition.
The drive control is responsible for realizing the pose change of the part: once the part enters the grabbing state, it starts to track the pose change of the virtual hand (or tool), and the pose is adjusted according to the constraint conditions until the step is finished. Displacement tracking tracks the previous-frame and current-frame positions of the virtual hand; the object of attitude tracking depends on whether the operation is bare-handed: when operating with the bare hand, the Euler angle change of the virtual hand in the palm direction is tracked, and when operating with a tool, the Euler angle change of the tool in the tool rotation direction is tracked.
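A minimal sketch of this frame-to-frame drive control, under the simplifying assumption of a strict grab with no pose constraint, might look as follows; the function and parameter names are illustrative assumptions.

```python
import numpy as np

def drive_grabbed_part(part_pos, part_euler,
                       hand_pos_prev, hand_pos_curr,
                       euler_prev, euler_curr):
    """One frame of drive control in the strict-grabbing, unconstrained case: the part follows
    the hand's displacement between the previous and current frames, and its attitude follows
    the Euler-angle change of the palm direction (bare hand) or of the tool rotation direction
    (tool). A semi-grabbed part would pass both deltas through its pose constraints first."""
    new_pos = np.asarray(part_pos) + (np.asarray(hand_pos_curr) - np.asarray(hand_pos_prev))
    new_euler = np.asarray(part_euler) + (np.asarray(euler_curr) - np.asarray(euler_prev))
    return new_pos, new_euler
```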
The double-hand interaction object model is an extension of the single-hand interaction object model, the single-hand interaction object model is contained in the double-hand interaction object model, and the double-hand interaction object model is shown in fig. 7 and comprises a virtual hand A, a virtual hand B, parts, touch points PosA and PosB, a virtual manipulator, part pose constraint, part state judgment and drive control. The virtual hand, the part state judgment, the drive control and the part pose constraint can form a single-hand interaction model, the interaction process is basically similar to the single-hand interaction process, and the details are not repeated here.
The virtual manipulator is an invisible virtual manipulator which is generated only when the part is judged to be in a grabbing state, the virtual manipulator is controlled by the virtual manipulators A and B together, the position of the virtual manipulator is located in the middle of the virtual manipulators A and B, and a certain designated axis (the designated x axis) always points from A to B (or B points to A).
And the touch points PosA and PosB are positions of the virtual hands A and B when the part is judged to be in the grabbing state, are bound with the pose of the part after being generated, are used for assisting in judging the state of the part, and fail when the part is recovered to be in the normal state.
The part pose constraint of the two-hand interaction object model is the same as the part pose constraint of the one-hand interaction object model.
The part state judgment of the two-hand interaction object model is more complicated than that of the one-hand interaction object model: besides collision detection and gesture detection, it must also be detected whether the two-hand grab conforms to reality. When the constraint type is Ti, RVij or unconstrained, grabbing distance detection and grabbing angle detection are performed respectively. Grabbing distance detection determines whether the distance between the two hands is greater than the average of the length and width of the minimum bounding box of the part, and grabbing angle detection determines whether the grabbing angle α (written Φ_RV below) is greater than a set threshold angle (θ = 150°), where α is obtained from equation (6), P1 and P2 are the position vectors of virtual hands A and B respectively, and j is the position vector of the part center.
Φ_RV is given by:

Φ_RV = sign · getangle(dis(P1 - j, i), dis(P2 - j, i), dis(P2 - P1, i))    (6)

where getangle(a, b, c) denotes the angle opposite the edge of chord length c in a triangle whose chord lengths are a, b and c, i.e. by the law of cosines:

getangle(a, b, c) = arccos((a² + b² - c²) / (2ab))    (7)

dis(a, b) denotes the projected length of vector a on the plane whose normal is the unit vector b:

dis(a, b) = |a - (a·b)·b|    (8)

and sign denotes the direction of rotation, given by the following equation, in which sgn is the sign function:

sign = sgn(((P1 - j) × (P2 - j)) · i)    (9)
When the constraint type is Ri, two-hand grasping determination is not required. The method comprises the steps of detecting whether gestures of virtual hands A and B are correct when the virtual hands A and B are detected to collide with parts, continuously detecting whether grabbing is in accordance with reality when the gestures are correct, generating touch points PosA, PosB and a virtual manipulator if the grabbing is in accordance with reality, changing the parts from a normal state to a grabbing state, and driving and controlling the parts by the virtual manipulator. Because the user is free to grasp the part when the virtual hands grasp the part, and the part is driven by the virtual hands, the positions of the virtual hands A and B are necessarily inconsistent with the positions of the touch points PosA and PosB, an elastic distance d is needed to be set to judge the state of the part, when the distance between the virtual hands and the corresponding touch points is larger than d, the state of the part is restored to a normal state, and in addition, when any one gesture of the virtual hands A and B is a releasing gesture, the state of the part is also restored to the normal state. Like the one-hand interaction object model, when the state of the part is restored to a normal state, whether gravity drop needs to be simulated or not is judged according to the constraint type.
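For illustration, the grabbing distance and grabbing angle checks of equations (6) to (9) might be sketched as below; the function and parameter names, the use of degrees, and the NumPy implementation are assumptions, not taken from the patent.

```python
import numpy as np

def dis(a, b):
    """Projected length of vector a on the plane whose normal is the unit vector b (eq. 8)."""
    b = b / np.linalg.norm(b)
    return np.linalg.norm(a - np.dot(a, b) * b)

def getangle(a, b, c):
    """Angle (degrees) opposite the edge of chord length c, from the law of cosines (eq. 7)."""
    return np.degrees(np.arccos(np.clip((a**2 + b**2 - c**2) / (2 * a * b), -1.0, 1.0)))

def two_hand_grab_valid(p1, p2, part_center, axis_i, bbox_length, bbox_width, theta=150.0):
    """Grabbing distance and grabbing angle checks for the Ti / RVij / unconstrained cases.
    p1, p2: positions of virtual hands A and B; part_center: part center position j;
    axis_i: constraint axis direction i; theta: threshold angle in degrees."""
    # Distance check: the hands must span more than the mean of the bounding-box length and width.
    if np.linalg.norm(p2 - p1) <= (bbox_length + bbox_width) / 2:
        return False
    # Angle check (eqs. 6 and 9): signed angle subtended at the part center, measured about axis i.
    sign = np.sign(np.dot(np.cross(p1 - part_center, p2 - part_center), axis_i))
    alpha = sign * getangle(dis(p1 - part_center, axis_i),
                            dis(p2 - part_center, axis_i),
                            dis(p2 - p1, axis_i))
    return alpha > theta
```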
The driving control of the two-hand interaction object model is basically consistent with that of the one-hand interaction object model, and the part tracks the pose change of the virtual manipulator.
Sometimes, some objects can be operated by one hand or both hands, so that a mark needs to be given to a part to judge whether the object is a one-hand interaction object model or a two-hand interaction object model at the moment, and the mark is assigned by judging the number of hands touching the part.
In the human-computer interaction module, gesture interaction is the main interaction means, and 8 interaction gestures are defined according to the disassembly and assembly requirements, as shown in fig. 8. The grabbing, pinching, releasing and pressing gestures are basic gestures used for operating objects; the forward, backward and interface gestures are auxiliary gestures: the forward and backward gestures are used for remote movement of the virtual avatar, and the interface gesture displays a virtual tablet, on which the user can view all disassembly and assembly tasks, execute tasks, and choose to skip an executable task.
In order to realize gesture recognition based on the Leap Motion, the invention provides a gesture feature extraction method, as shown in fig. 9. Three overall features and four local features are extracted: the three overall features are the lengths of a rectangular bounding box in its three axial directions, where the bounding box is the minimum bounding box containing the 24 finger feature points and its axes are determined by the palm center direction and the palm direction; the four local features are the distances between adjacent fingertips, i.e. the distances between adjacent light-colored dots in the figure. The extracted features are then normalized, each feature being divided by its corresponding maximum over the sample set. Next, 8 types of gesture data are collected for the normalized features with a developed gesture acquisition system to establish a gesture library; finally, the gesture library data file is imported into the virtual disassembly and assembly training system, the collaborative disassembly and assembly process model is constructed on its basis, and real-time gesture recognition is performed with a K-nearest-neighbor classification algorithm during interaction. Combining gesture recognition with collision detection and the interaction object model restores a real disassembly and assembly process.
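As an illustrative sketch, the 7-dimensional feature extraction, per-feature maximum normalization and K-nearest-neighbor classification described above might be implemented roughly as follows; interpreting the palm center direction as the palm normal, the choice of k = 5 and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(finger_points, fingertips, palm_normal, palm_direction):
    """7-D feature vector: 3 overall features (edge lengths of the minimum bounding box of the
    24 finger feature points, in a frame derived from the palm normal and palm direction) and
    4 local features (distances between adjacent fingertips)."""
    z = palm_normal / np.linalg.norm(palm_normal)
    y = palm_direction / np.linalg.norm(palm_direction)
    x = np.cross(y, z)
    frame = np.stack([x, y, z])                          # rows are the hand-frame axes
    local = np.asarray(finger_points) @ frame.T          # feature points expressed in that frame
    box_lengths = local.max(axis=0) - local.min(axis=0)                     # 3 overall features
    tip_dists = [np.linalg.norm(np.asarray(fingertips[k + 1]) - np.asarray(fingertips[k]))
                 for k in range(4)]                                          # 4 local features
    return np.concatenate([box_lengths, tip_dists])

def train_recognizer(samples, labels, k=5):
    """Normalize each feature by its maximum over the sample set, then fit a KNN classifier."""
    X = np.asarray(samples, dtype=float)
    maxima = X.max(axis=0)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X / maxima, labels)
    return clf, maxima

def recognize(clf, maxima, feature_vec):
    return clf.predict((np.asarray(feature_vec) / maxima).reshape(1, -1))[0]
```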
According to different dismounting cases, the corresponding virtual dismounting training system can be rapidly developed as long as the corresponding dismounting model is prepared and the corresponding dismounting process file is written. In addition, in order to embody real natural interactive operation, the interactive scheme adopts gesture interaction, performs gesture recognition based on LeapMotion, and combines collision detection and the proposed interactive object model to simulate real operation, so that single-person operation or cooperative operation can be realized, wherein 8 interactive gestures are defined according to disassembly and assembly requirements, gesture data is collected to establish a gesture library, and a gesture feature extraction method is proposed to combine a K neighbor classification algorithm to recognize the gestures.
It should be noted that:
the method of this embodiment can be implemented by a program step and a device that can be stored in a computer-readable storage medium and executed by a controller.
The gesture interaction can be implemented based on a somatosensory controller such as a data glove device besides a Leapmotion device.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, fig. 10 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device conventionally comprises a processor 31 and a memory 32 arranged to store computer-executable instructions (program code). The memory 32 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 32 has a storage space 33 storing program code 34 for performing any of the method steps in the embodiments. For example, the storage space 33 for the program code may comprise respective program codes 34 for implementing respective steps in the above method. The program code can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer readable storage medium such as described in fig. 11. The computer readable storage medium may have memory segments, memory spaces, etc. arranged similarly to the memory 32 in the electronic device of fig. 10. The program code may be compressed, for example, in a suitable form. In general, the memory unit stores program code 41 for performing the steps of the method according to the invention, i.e. program code readable by a processor such as 31, which when run by an electronic device causes the electronic device to perform the individual steps of the method described above.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (10)

1. A multi-person collaborative disassembly and assembly system based on gesture interaction,
comprises a server end and a plurality of client ends,
the server side having a central control server, and each client having a graphics workstation networked to the central control server, a helmet-mounted display connected to the graphics workstation for collecting the user's viewing-angle information, and a somatosensory controller for collecting the user's hand posture information, characterized in that the central control server is provided with:
the data analysis module is used for reading the imported model file and the imported data file;
the collaborative dismounting process model building module is used for building a collaborative dismounting process model according to the model file and the data file at the beginning of system operation and initializing an executable dismounting task in the collaborative dismounting process model;
the collaborative dismounting process model processing module is used for updating the newly added executable dismounting task according to the completion of the dismounting task by the user;
the interactive process processing module is used for realizing a virtual assembly scene according to the input of a user based on the helmet display and the somatosensory controller and realizing the pose control of a virtual object by tracking the pose change of a virtual hand between adjacent frames;
the collaborative interaction module is used for realizing scene synchronization of each client;
and the human-computer interaction module is used for realizing the interaction between the user and the central control server.
2. The system of claim 1, wherein the collaborative disassembly process model comprises:
the disassembly and assembly task sequence is used for controlling the execution sequence of the disassembly and assembly tasks by a directed graph constructed by the task numbers of the disassembly and assembly tasks and a preceding task set;
the executable task set consists of executable tasks and is updated according to the dismounting task sequence when the dismounting task is completed;
the disassembly and assembly task comprises a task number, a step number set used for associating the task with the disassembly and assembly step, a task flag bit used for marking the completion degree of the corresponding disassembly and assembly step, and a prior task set used for indicating that the task set needs to be completed first when the task can be executed;
and the disassembling and assembling step comprises a step number, a step object used for representing parts to be disassembled and assembled, an interactive process used for representing the operations to be disassembled and assembled, disassembling and assembling information used for prompting to assist disassembling and assembling training, a tool type used for representing tools needed by the execution step, and a grabbing gesture used for representing the gestures needed by the execution step.
3. The system of claim 2, wherein the number of disassembly and assembly steps contained in a task indicates whether the task needs to be executed cooperatively by multiple persons.
4. The system of claim 2, wherein the interactive process comprises a single-handed interactive object model, and the single-handed interactive object model comprises:
a virtual hand or tool;
parts are required to be disassembled and assembled;
the part pose constraint is used for specifying the constraint condition of the part pose change;
judging the state of the part, wherein the state of the part comprises a grabbing state allowing to implement drive control and a normal state forbidding to implement drive control, detecting whether a grabbing gesture is correct when a virtual hand or a tool collides with the part, if so, entering the grabbing state, and entering a strict grabbing state or a semi-grabbing state according to the constraint condition, wherein the posture change of the part in the strict grabbing state is completely consistent with the posture change of the virtual hand, and the posture change of the part in the semi-grabbing state is controlled by the virtual hand and corresponds to the constraint condition; when the gesture is judged to be the releasing gesture, the state of the control part is changed from the grabbing state to the normal state;
and the driving control is used for realizing the pose change of the part, controlling the part to start tracking the pose change of the virtual hand or the tool after entering a grabbing state, and adjusting the pose according to the constraint condition until the step is finished.
5. The multi-user cooperative disassembly and assembly system according to claim 4, wherein the tracking is tracking a previous frame position and a current frame position of the virtual hand or the tool, wherein the Euler angle change of the virtual hand in the palm direction is specifically tracked when the virtual hand is tracked, and the Euler angle change of the tool in the tool rotation direction is specifically tracked when the tool is tracked.
6. The multi-user cooperative disassembly and assembly system according to claim 4, wherein the pose constraint of the part comprises a position constraint and a posture constraint, the posture constraint being divided into a rotation posture constraint and a revolution posture constraint according to the position of the rotation axis, and the constraints being defined with Ti, Ri and RVij as parameters: Ti indicates that the part is constrained in spatial position and can only move in the forward or reverse direction of a spatial unit vector i; Ri indicates that the part is constrained in rotation posture and can only rotate about an axis through the part center with vector i as the axial direction; and RVij indicates that the part is constrained in revolution posture and can only revolve about an axis located at position vector j with vector i as the axial direction.
7. The multi-user cooperative assembly and disassembly system of claim 6, wherein the interaction process is further extended with a two-hand interaction object model based on the one-hand interaction object model, the two-hand interaction object model further comprises a virtual hand A, a virtual hand B, a touch point PosA, a touch point PosB, and a virtual manipulator,
the virtual manipulator is an invisible virtual manipulator generated when the part is judged to be in a grabbing state, is controlled jointly by the virtual hands A and B, is positioned in the middle between the virtual hands A and B, and has a designated axis always pointing from A to B or from B to A;
the touch points PosA and PosB are positions of the virtual hand A, B when the part is determined to be in the grabbing state, and are bound with the pose of the part after being generated to assist in determining the state of the part.
8. The system of claim 7, wherein the detecting manner of the two-hand interaction object model further comprises:
when the virtual hands A, B are all collided with the parts, detecting whether the gesture of the virtual hand A, B is correct, continuously detecting whether the grabbing meets the reality when the gesture is correct, if so, generating touch points PosA, PosB and a virtual operating hand, and changing the parts from a normal state to a grabbing state and driving and controlling the parts by the virtual operating hand;
the method for detecting whether the grabbing conforms to the reality comprises the following steps:
and when the constraint type is Ti or RVij or no constraint, respectively carrying out grabbing distance detection and grabbing angle detection, wherein the grabbing distance detection is to judge whether the distance of the virtual hand A, B is greater than the length-width average value of the minimum bounding box of the part, and the grabbing angle detection is to judge whether the grabbing angle alpha is greater than a set threshold angle.
9. The system of claim 8, wherein the detecting manner of the two-hand interaction object model further comprises: and when the distance between the virtual hand and the corresponding touch point is larger than d, the state of the part is restored to the normal state.
10. The system of claim 1, wherein the somatosensory controller is a LeapMotion, further comprising the following gesture feature extraction steps:
s1, extracting three overall features and four local features, wherein the three overall features are the lengths of a rectangular bounding box in three axial directions, the rectangular bounding box is a minimum bounding box containing finger feature points, the axial direction of the bounding box is determined by the palm center direction and the palm direction, and the four local features are the distances between adjacent fingertips of a virtual hand;
s2, performing normalization processing on the extracted features;
s3, acquiring and setting gesture data for the features after the normalization processing to establish a gesture database data file;
and S4, importing the gesture database data file into a system, constructing a collaborative dismounting process model based on the gesture database data file, and performing real-time gesture recognition through a K neighbor classification algorithm in the interaction process.
CN202110302514.2A 2021-03-22 2021-03-22 Multi-person collaborative dismounting system based on gesture interaction Pending CN112905017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110302514.2A CN112905017A (en) 2021-03-22 2021-03-22 Multi-person collaborative dismounting system based on gesture interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110302514.2A CN112905017A (en) 2021-03-22 2021-03-22 Multi-person collaborative dismounting system based on gesture interaction

Publications (1)

Publication Number Publication Date
CN112905017A true CN112905017A (en) 2021-06-04

Family

ID=76106324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110302514.2A Pending CN112905017A (en) 2021-03-22 2021-03-22 Multi-person collaborative dismounting system based on gesture interaction

Country Status (1)

Country Link
CN (1) CN112905017A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020123812A1 (en) * 1998-12-23 2002-09-05 Washington State University Research Foundation. Virtual assembly design environment (VADE)
CN101441677A (en) * 2008-12-25 2009-05-27 上海交通大学 Natural interactive virtual assembly system based on product full semantic model
CN104504958A (en) * 2014-12-22 2015-04-08 中国民航大学 Airplane digitalized coordination virtual maintenance training device and coordination maintenance method
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space
CN110515455A (en) * 2019-07-25 2019-11-29 山东科技大学 It is a kind of based on the dummy assembly method cooperateed in Leap Motion and local area network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination