WO2019041900A1 - Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment - Google Patents

Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment

Info

Publication number
WO2019041900A1
WO2019041900A1 · PCT/CN2018/088092 · CN2018088092W
Authority
WO
WIPO (PCT)
Prior art keywords
assembly
assembly operation
part model
depth information
augmented reality
Prior art date
Application number
PCT/CN2018/088092
Other languages
English (en)
French (fr)
Inventor
于海
徐敏
彭林
韩海韵
王鹤
王刚
鲍兴川
侯战胜
朱亮
何志敏
张泽浩
Original Assignee
全球能源互联网研究院有限公司
国家电网有限公司
国网江苏省电力公司电力科学研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 全球能源互联网研究院有限公司, 国家电网有限公司, 国网江苏省电力公司电力科学研究院
Publication of WO2019041900A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training

Definitions

  • The present disclosure relates to the field of assembly technology, but is not limited to that field, and in particular relates to a method and apparatus for identifying an assembly operation and simulating assembly in an augmented reality environment.
  • Assembly technology is a very important area of modern manufacturing; the associated assembly process typically accounts for about 40% to 60% of the total person-hours in the design and manufacturing process. Finding an assembly technology that is efficient, reliable, and able to guarantee product quality, while reducing production costs and improving product competitiveness, is of great significance to the entire manufacturing industry.
  • The related assembly method is to machine physical prototype parts of the product and help the user find design flaws and loopholes through the part assembly process, thereby improving the product design. Because it uses a real product prototype, this method provides the user with true visual, auditory, and tactile feedback, and it has been widely used over the past two decades; however, prototyping is a very time-consuming and resource-intensive process.
  • Virtual Assembly (VA)
  • Virtual assembly uses virtual reality technology to generate a virtual three-dimensional assembly environment with the computer.
  • the virtual assembly system provides the user with the operation means in the assembly environment through motion tracking and force feedback technology to simulate the entire assembly process.
  • The operator first imports the part model established in a Computer Aided Design (CAD) system into the virtual assembly system, then wears the positioning system and force feedback device to directly operate the virtual parts in the virtual assembly environment. Through the virtual assembly process, the operator inspects the product's assemblability, gains assembly experience, and evaluates and improves the product design.
  • Virtual assembly does not require machining a product prototype; only the virtual CAD model is operated, and the design can be repeatedly modified. This greatly shortens the development cycle and reduces development cost, making the assembly process faster, more efficient, and more economical.
  • Virtual assembly technology also has certain defects: the operator is in a virtual assembly environment composed entirely of computer graphics, which contains no information from the real environment and only simulates the real working environment through virtual scenes. The realism achievable through techniques such as visual and force feedback is limited. Although computer hardware and software keep getting more powerful, it is often difficult to develop a system that can generate complex real-world scenarios and handle complex assembly operations while meeting real-time requirements.
  • Augmented Reality (AR), due to its characteristic of combining the virtual and the real, can solve the lack of realism in virtual reality scenes. If augmented reality technology is applied to the assembly field, it can provide the operator with a mixed environment that simultaneously contains the surrounding real assembly environment and virtual information, greatly enhancing the user's sense of realism.
  • engineers design and plan product assemblies and assembly sequences by manipulating virtual models in the real assembly shop, and adjust and improve product assembly based on feedback from the shop floor plan.
  • Many researchers continue to study and explore the key technologies and applications of augmented reality in assembly, in order to improve technicians' ability to quickly learn the assembly and maintenance of increasingly complex machinery.
  • A first aspect of an embodiment of the present disclosure provides a method of identifying an assembly operation in an augmented reality environment, comprising: acquiring a multi-frame continuous image of an assembler; extracting depth information of the assembler's human skeleton nodes in each frame of the multi-frame continuous image; and identifying a preset assembly operation according to the depth information.
  • A second aspect of an embodiment of the present disclosure provides an augmented reality based simulated assembly method, comprising: identifying an assembly operation of the assembler using the method of the first aspect, or any of its alternatives, for identifying an assembly operation in an augmented reality environment; and driving a pre-established virtual hand to perform the assembly operation on a selected target part model according to the identified operation, the target part model being a virtual model established from the target device in an augmented reality environment.
  • A third aspect of an embodiment of the present disclosure provides an apparatus for identifying an assembly operation in an augmented reality environment, comprising: an acquisition module configured to acquire a multi-frame continuous image of an assembler; an extraction module configured to extract depth information of the assembler's human skeleton nodes in each frame of the multi-frame continuous image; and a first identification module configured to identify a preset assembly operation according to the depth information.
  • A fourth aspect of an embodiment of the present disclosure provides an augmented reality based simulated assembly apparatus, comprising: a second identification module configured to identify an assembly operation of the assembler using the method of the first aspect, or any of its alternatives, for identifying an assembly operation in an augmented reality environment; and an execution module configured to drive the pre-established virtual hand to perform the assembly operation on the selected target part model according to the identified operation, the target part model being a virtual model built from the target device in an augmented reality environment.
  • An embodiment of the present disclosure further provides a computer storage medium storing a computer program; after the computer program is executed by a processor, the foregoing method for recognizing an assembly operation in an augmented reality environment can be implemented.
  • In the method and device for recognizing an assembly operation and simulating assembly in an augmented reality environment provided by embodiments of the present disclosure, a multi-frame continuous image of the assembler is acquired in real time in the augmented reality environment, image analysis extracts the depth information of the assembler's human skeleton nodes in each frame, and the assembly operation of the assembler is identified from that depth information, thereby applying somatosensory technology to the assembly field in a more humanized way.
  • The method and device for recognizing an assembly operation and simulating assembly in an augmented reality environment provided by embodiments of the present disclosure introduce augmented reality into guided assembly. They can effectively verify whether the product design is reasonable and meets requirements. Compared with the virtual assembly of the related art, the assembler can be in a mixed reality scene that is both virtual and real, and can interact with both the virtual part model and real objects. This realizes non-intrusive interaction: the operator needs no additional dedicated actions or dedicated interactive tools, which enhances the immersion of the virtual assembly, makes it closer to real assembly, and greatly improves the user's direct perception of the surrounding real world and the real-time interactive experience.
  • FIG. 1 is a schematic flowchart of a method for identifying an assembly operation in an augmented reality environment according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flow chart of a Kinect-based somatosensory recognition method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flow chart of a method for scaling a part model according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of an augmented reality based simulated assembly method according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a tree structure of an assembly part model according to an embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of a method for implementing a virtual hand interaction according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of an apparatus for identifying an assembly operation in an augmented reality environment according to an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of an augmented reality based simulated assembly apparatus according to an embodiment of the present disclosure.
  • This embodiment provides a method for identifying an assembly operation in an augmented reality environment, which can be applied to simulate the assembly process in the field of industrial assembly, thereby improving the efficiency and quality of assembly design and planning in the early design stage, as shown in FIG. 1.
  • the method may include the following steps but is not limited to the following steps:
  • S11 Obtain a multi-frame continuous image of the assembler. Optionally, the assembler's gestures can be captured with a depth somatosensory device, which supports real-time motion capture, microphone input, image recognition, and voice recognition, freeing the user from the constraints of traditional input devices. Here the depth somatosensory device can be a Kinect; its workflow is shown in FIG. 2.
  • the continuous multi-frame image here may be: a plurality of images continuously acquired by the camera, for example, the camera performs video acquisition, and the continuous multi-frame image may be a plurality of image frames continuously distributed in the video.
  • S12 Extract the depth information of the assembler's human skeleton nodes in each frame of the multi-frame continuous image. The depth information includes various human-body features of the assembler, at least the skeleton node features. The spatial positions of the skeleton nodes can be collected at a given moment to obtain the relative positions and angles of the nodes, and over a continuous period of time the motion vectors of the skeleton nodes can be obtained.
  • Depth information of the human skeleton nodes is extracted from each of the successive multi-frame images acquired by the Kinect.
  • the depth information may be information acquired by a depth camera that reflects imaging of a human bone in a three-dimensional space.
  • the three-dimensional information data may include: X-axis data, Y-axis data, and Z-axis data, wherein the X-axis, the Y-axis, and the Z-axis are perpendicular to each other.
  • The motion trajectory of the skeleton nodes may be used as the feature of a dynamic gesture: for example, the initial and final states of the skeleton nodes in a dynamic gesture are detected, and the predefined interaction gesture is then calculated and recognized from the spatial position transformation of the human skeleton nodes.
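The trajectory-based dynamic gesture recognition described above can be illustrated with a minimal sketch (not from the patent; function names, thresholds, and coordinates are illustrative). It compares the initial and final states of a tracked hand skeleton node and classifies the displacement as a horizontal swipe:

```python
def motion_vector(start, end):
    """Displacement of a skeleton node between two frames (X, Y, Z)."""
    return tuple(e - s for s, e in zip(start, end))

def classify_swipe(track, threshold=0.3):
    """Classify a hand-node trajectory as a horizontal swipe gesture.

    `track` is a list of (x, y, z) positions across consecutive frames;
    here only the initial and final states decide the gesture.
    """
    dx, dy, dz = motion_vector(track[0], track[-1])
    if abs(dx) < threshold:          # too little horizontal travel
        return None
    return "swipe_left" if dx < 0 else "swipe_right"

# A node moving 0.5 m to the right over the sequence:
track = [(0.0, 1.2, 2.0), (0.2, 1.2, 2.0), (0.5, 1.2, 2.0)]
print(classify_swipe(track))   # swipe_right
```

A full system would also smooth the samples and check intermediate frames, but the initial-state/final-state comparison is the core idea.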
  • S13 Identify the preset assembly operation according to the depth information. Somatosensory technology is thus applied to the assembly field: identifying the assembly operation from depth information needs no auxiliary tools, which reduces the burden on the assembler, and the operation is simpler and more humanized than the collision-detection-based recognition schemes of the related art.
  • Step S13 may include: when the assembly operation is selecting a part model, identifying the preset assembly operation according to the depth information comprises: acquiring the initial state of at least one hand skeleton node of the assembler from the depth information; selecting, from the assembly operation library, a first action set matching the initial state; tracking the motion trajectory of the hand skeleton node and eliminating from the first action set the first actions that do not conform to the trajectory, thereby obtaining a second action set (which excludes first actions inconsistent with the motion trajectory); and determining the selected target part model according to the second action set.
  • A device has many parts, and during assembly, whether adding or deleting a model or translating, rotating, or scaling it, the model must first be selected; selecting the part model is therefore the basis of model interaction in the assembly process.
  • To select a part model, the Kinect tracks and recognizes at least one hand (such as the right hand) skeleton node of the assembler and obtains the initial state of that node from the depth information. All first actions in the assembly operation library that match this initial state form the first action set. The motion trajectory of the right-hand skeleton node is then tracked, and first actions that do not conform to the trajectory are removed as it unfolds; the remaining actions constitute the second action set, from which the selected target part model is determined.
  • A virtual hand can be created to map the motion of the real hand to a two-dimensional virtual hand on the screen; model selection is then achieved by following the free movement of the real hand.
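The two-stage filtering above (match on initial state, then eliminate by trajectory) can be sketched as follows. This is an illustrative reduction, not the patent's implementation; the action library, pose labels, and field names are invented for the example:

```python
def match_initial(actions, initial_state):
    """First action set: all library actions whose start pose matches."""
    return [a for a in actions if a["start"] == initial_state]

def refine(candidates, observed_track):
    """Second action set: drop candidates whose template trajectory
    disagrees with the hand-node track observed so far."""
    n = len(observed_track)
    return [a for a in candidates if a["track"][:n] == observed_track]

library = [
    {"name": "select_A", "start": "open", "track": ["right", "right"]},
    {"name": "select_B", "start": "open", "track": ["right", "up"]},
    {"name": "grab",     "start": "fist", "track": ["forward"]},
]
first_set = match_initial(library, "open")         # two candidates remain
second_set = refine(first_set, ["right", "up"])    # only select_B survives
print([a["name"] for a in second_set])             # ['select_B']
```

As more frames arrive, `refine` is called again with the longer observed track, so the candidate set only ever shrinks until one action, and hence one target part model, is determined.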
  • Step S13 may include: when the assembly operation is adding a part model, identifying the preset assembly operation according to the depth information comprises: acquiring the motion trajectory of at least one arm skeleton node of the assembler from the depth information; when the trajectory is the arm lifting, loading the part model library for the operator to select a part model; when the trajectory is the arm pushing forward, confirming the addition of the selected target part model; and when the trajectory is the arm lowering, hiding the part model library.
  • Since the model library only needs to be loaded when browsing or adding components, and in order to provide a relatively large operating space for the entire assembly operation, the model library is hidden by default.
  • The left-arm lift gesture can represent loading the part model library, from which the assembler selects a part model, and the left-arm lower gesture can represent hiding the library. The right hand controls the virtual hand to select the required component in the model library; finally, the right-hand push gesture confirms the selected target part model, completing the model-adding process.
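The add-part interaction above is naturally a small state machine. The sketch below is a hypothetical condensation of it (class and gesture names are invented; the patent does not prescribe this structure): the library is hidden by default, shown on a left-arm lift, hidden on a left-arm lower, and a right-hand push confirms the currently chosen part:

```python
class ModelLibraryUI:
    """Minimal state machine for the add-part-model gestures."""

    def __init__(self):
        self.visible = False      # library hidden by default
        self.chosen = None        # part highlighted by the virtual hand
        self.added = []           # parts confirmed into the scene

    def on_gesture(self, gesture, part=None):
        if gesture == "left_arm_lift":
            self.visible = True               # load / show the library
        elif gesture == "left_arm_lower":
            self.visible = False              # hide the library
        elif gesture == "right_hand_select" and self.visible:
            self.chosen = part                # virtual hand picks a part
        elif gesture == "right_hand_push" and self.chosen is not None:
            self.added.append(self.chosen)    # confirm the addition
            self.chosen = None

ui = ModelLibraryUI()
ui.on_gesture("left_arm_lift")
ui.on_gesture("right_hand_select", part="crankshaft")
ui.on_gesture("right_hand_push")
ui.on_gesture("left_arm_lower")
print(ui.added, ui.visible)   # ['crankshaft'] False
```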
  • Step S13 may include: when the assembly operation is moving the part model, identifying the preset assembly operation according to the depth information comprises: after the target part model is selected, acquiring the motion trajectory of at least one hand skeleton node of the assembler from the depth information; when the hand keeps hovering for a preset duration, confirming that the spatial-coordinate movement permission of the target part model is obtained; and making the spatial coordinates of the target part model follow the hand's motion trajectory so that the model moves to the specified location.
  • The move operation can be performed with one hand: the target part model to be translated is first selected by the virtual hand, and the right-hand forward gesture confirms the selection.
  • Because the translation operation needs to acquire the spatial-coordinate permission of the model, the model translates as the virtual hand moves. When the right-hand skeleton node remains in the hovering state until the preset duration is reached, the spatial-coordinate movement permission of the target part model is confirmed; the model's coordinates are then acquired and follow the right-hand trajectory until the model is translated to the specified position, ultimately realizing the panning operation on the model.
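Detecting the "hover for a preset duration" condition that grants movement permission can be sketched as below. This is an illustrative check (frame rate, hold time, and tolerance are assumed values, not from the patent): the hand counts as hovering when recent samples stay within a small box around the first of them:

```python
def is_hovering(track, fps=30, hold_seconds=1.0, tolerance=0.02):
    """True when the hand node stayed within `tolerance` metres of its
    reference sample for at least `hold_seconds` worth of frames."""
    needed = int(fps * hold_seconds)
    if len(track) < needed:
        return False                      # not enough history yet
    recent = track[-needed:]
    x0, y0, z0 = recent[0]
    return all(abs(x - x0) <= tolerance and
               abs(y - y0) <= tolerance and
               abs(z - z0) <= tolerance
               for x, y, z in recent)

still = [(0.50, 1.20, 2.00)] * 30          # 1 s of a steady hand at 30 fps
moving = still[:15] + [(0.60, 1.20, 2.00)] * 15
print(is_hovering(still), is_hovering(moving))   # True False
```

Once `is_hovering` fires, the application would latch the movement permission and start copying the hand displacement onto the model's coordinates each frame.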
  • Step S13 may include: when the assembly operation is scaling the part model, identifying the preset assembly operation according to the depth information comprises: after the target part model is selected, acquiring the motion trajectories of the assembler's two hand skeleton nodes from the depth information; when the two hands move apart, enlarging the target part model to a first preset size, the first preset size being no greater than the model's maximum magnified size; and when the two hands move together, shrinking the target part model to a second preset size, the second preset size being no smaller than the model's minimum reduced size.
  • A zoom operation is required to enlarge or shrink an individual part model.
  • Each part model has a corresponding scaling range. As shown in FIG. 3, S (Scale) represents the current size of the selected model (the target part model), Smax (Scale max) represents the model's maximum magnified size, and Smin (Scale min) represents its minimum reduced size. If the current size would become larger than the maximum (S > Smax) or smaller than the minimum (S < Smin), the zoom gesture is invalid and the model is neither enlarged nor reduced.
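The S/Smin/Smax rule above amounts to a guarded scale update, which might be sketched as follows (a minimal illustration; the factor values and function name are invented, not from the patent):

```python
def apply_zoom(s, factor, s_min, s_max):
    """Return the new scale, or the old one when the gesture would push
    the model outside its [s_min, s_max] range (gesture invalid)."""
    s_new = s * factor
    if s_new > s_max or s_new < s_min:
        return s                # invalid zoom gesture: size unchanged
    return s_new

s = 1.0
s = apply_zoom(s, 1.5, s_min=0.5, s_max=2.0)   # hands apart: 1.5, in range
s = apply_zoom(s, 2.0, s_min=0.5, s_max=2.0)   # would be 3.0 > Smax: invalid
print(s)   # 1.5
```

Rejecting the whole gesture (rather than clamping to the limit) matches the behaviour described for FIG. 3, where an out-of-range zoom simply has no effect.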
  • Step S13 may include: when the assembly operation is rotating the part model, identifying the preset assembly operation according to the depth information comprises: after the target part model is selected, acquiring the motion trajectory of at least one hand skeleton node of the assembler from the depth information; when the trajectory is a hand rotation, making the target part model follow the hand's rotation trajectory so that the model rotates to the specified orientation.
  • It is often necessary to rotate the model. For example, when the cylinder head is installed after the crankshaft and piston-connecting-rod assembly, the cylinder block must be turned over; the cylinder block is the part that needs to be rotated most in the entire assembly.
  • The assembly bracket is designed to be rotatable so that the assembler can view and interact with the entire assembly through 360 degrees.
  • When the trajectory is a right-hand rotation, the target part model follows the rotation trajectory of the right hand; that is, the rotation of the selected model is realized by the right-hand rotation gesture.
  • Step S13 may include: when the assembly operation is deleting the part model, identifying the preset assembly operation according to the depth information comprises: after the target part model is selected, acquiring the motion trajectory of at least one arm skeleton node of the assembler from the depth information; when the trajectory is the arm swinging about its upper arm as the central axis, determining to delete the target part model.
  • If an incorrect or redundant part model is added, it needs to be removed.
  • Step S13 may include: when the assembly operation is proceeding to the next assembly process, identifying the preset assembly operation according to the depth information comprises: acquiring the motion trajectory of a single arm skeleton node of the assembler from the depth information; when the trajectory is the single arm swinging toward the first preset direction, judging whether all assembly processes have been completed; if not all processes are completed, confirming that the next assembly process is performed, otherwise confirming that the operation is invalid.
  • the assembly process may be divided into a multi-step assembly process.
  • The right-arm-toward-the-left gesture (first preset direction) may indicate proceeding to the next assembly process. The trajectory of the right-arm skeleton node is tracked; if the right arm is determined to have swung to the left, it is further determined whether all assembly processes have been completed. If not, the next assembly process is performed; otherwise the operation is invalid, which avoids accidental triggering.
  • Step S13 may include: when the assembly operation is returning to the previous assembly process, identifying the preset assembly operation according to the depth information comprises: acquiring the motion trajectory of a single arm skeleton node of the assembler from the depth information; when the trajectory is the single arm swinging toward the second preset direction, judging whether the current process is the first assembly process; if it is not, confirming the return to the previous assembly process, otherwise confirming that the operation is invalid.
  • The general assembly process may be divided into multiple assembly steps; in a specific implementation such as an automobile engine assembly, the left-arm-toward-the-right gesture (second preset direction) returns to the previous step.
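The next/previous process navigation with its validity checks can be sketched as a guarded step index. This is an illustrative model (class and step names are invented): the right-arm-left swipe advances, the left-arm-right swipe goes back, and out-of-range requests are treated as invalid operations:

```python
class AssemblySequence:
    """Assembly-process navigation guarded against overrunning either end."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0            # start at the first assembly process

    def next_step(self):
        if self.index + 1 >= len(self.steps):
            return None           # all processes done: operation invalid
        self.index += 1
        return self.steps[self.index]

    def previous_step(self):
        if self.index == 0:
            return None           # already the first process: invalid
        self.index -= 1
        return self.steps[self.index]

seq = AssemblySequence(["crankshaft", "pistons", "cylinder head"])
print(seq.next_step())       # pistons
print(seq.previous_step())   # crankshaft
print(seq.previous_step())   # None — first process, operation invalid
```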
  • The method for recognizing an assembly operation in an augmented reality environment provided by this embodiment acquires a multi-frame continuous image of the assembler in real time in the augmented reality environment, performs image analysis to extract the depth information of the assembler's human skeleton nodes in each frame, and identifies the assembly operation from that depth information. Somatosensory technology is thus applied to the assembly field, and the operation is simpler and more humanized than the collision-detection-based recognition schemes of the related art.
  • This embodiment provides a simulated assembly method based on augmented reality, which can be applied to simulate the assembly process in the field of industrial assembly, thereby improving the efficiency and quality of assembly design and planning in the early design stage, as shown in FIG. 4, including the following steps:
  • S41 Identify an assembly operation of the assembler using the method for identifying an assembly operation in an augmented reality environment described above.
  • S42 Drive the pre-established virtual hand to perform the assembly operation on the selected target part model according to the identified operation, the target part model being a virtual model established from the target device in the augmented reality environment.
  • the preparation work is first performed, including:
  • Step 1 Three-dimensional geometric modeling. Using modeling software, build the component models in the augmented reality environment through basic feature modeling and complex surface modeling, and process the points, lines, faces, and assembly relationships of the models.
  • Step 2 Assembly hierarchy of the 3D model. Before assembly, the hierarchical relationships of the assembly must be planned. While considering the assembly process, the 3D model of each component of the device is divided into assembly levels according to the specific conditions of the selected rendering engine, so as to visualize the motion simulation.
  • The device geometry model has a tree structure. As shown in FIG. 5, the root node is the total assembly, the leaf nodes are parts, and the intermediate non-leaf nodes represent sub-assemblies; upper and lower nodes are in parent-child relationships, while sibling nodes are relatively independent. In the hierarchical model, every child node moves independently relative to its parent node and also moves with the motion of its parent.
  • The assembly hierarchy model uses the tree structure to represent the assembly relationships among the total assembly, the sub-assemblies, and the parts; it reasonably and vividly expresses the parent-child relationships among them and indicates the assembly order: the assembly of a lower-level component precedes the assembly of its upper-level component.
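The parent-child motion rule of the assembly tree (children move with the parent, yet can still move locally) can be sketched as composed offsets. This is a simplified illustration (translation only, invented names; a rendering engine would use full transforms):

```python
class AssemblyNode:
    """Assembly tree: root = total assembly, leaves = parts, inner
    nodes = sub-assemblies. A node's world position is its local
    offset composed with every ancestor's, so moving a parent moves
    its whole sub-tree while children keep their local freedom."""

    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = list(offset)    # local offset relative to parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_position(self, target, base=(0.0, 0.0, 0.0)):
        here = tuple(b + o for b, o in zip(base, self.offset))
        if self is target:
            return here
        for c in self.children:
            found = c.world_position(target, here)
            if found is not None:
                return found
        return None

engine = AssemblyNode("engine")
block = engine.add(AssemblyNode("cylinder block", (0.0, 0.5, 0.0)))
piston = block.add(AssemblyNode("piston", (0.25, 0.25, 0.0)))

engine.offset[0] += 1.0     # translate the total assembly
print(engine.world_position(piston))   # (1.25, 0.75, 0.0)
```

Moving the root shifted the piston too, while the piston's own local offset stayed untouched, which is exactly the independence-plus-inheritance described for the hierarchy.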
  • Step 3 Division of the assembly process.
  • Because the equipment has a complex structure and a wide range of components, the entire assembly process is simplified into multiple assembly processes.
  • Step 4 Model Library Virtual Panel.
  • the model library virtual panel is the UI menu.
  • The Unity 3D engine integrates a WYSIWYG UI solution, which was extended for this system to realize the desired UI system.
  • The assembly process is completed by driving the pre-established virtual hand to perform the assembly operation on the selected target part model according to the identified operation.
  • The augmented reality based simulated assembly method provided by this embodiment introduces augmented reality into guided assembly. It can effectively check whether the product design is reasonable and meets requirements; compared with the virtual assembly schemes of the related art, the assembler can interact with the virtual part model and real objects at the same time, greatly improving the user's direct perception of the surrounding real world and the real-time interactive experience.
  • This embodiment provides an apparatus for identifying an assembly operation in an augmented reality environment, as shown in FIG. 7, comprising:
  • the obtaining module 71 is configured to acquire a multi-frame continuous image of the assembler; for details, refer to the detailed description of step S11 in the foregoing Embodiment 1.
  • the extraction module 72 is configured to extract depth information of a human skeleton node of an assembly person in each frame image of the multi-frame continuous image; for details, refer to the detailed description of step S12 in the foregoing embodiment.
  • the first identification module 73 is configured to identify a preset assembly operation according to the depth information. For details, refer to the detailed description of step S13 in the foregoing embodiment.
  • The apparatus for recognizing an assembly operation in an augmented reality environment provided by this embodiment acquires a multi-frame continuous image of the assembler in real time in the augmented reality environment, performs image analysis to extract the depth information of the assembler's human skeleton nodes in each frame, and identifies the assembly operation from that depth information. Somatosensory technology is thus applied to the assembly field, and the operation is simpler and more humanized than the collision-detection-based recognition schemes of the related art.
  • This embodiment provides a simulated assembly apparatus based on augmented reality, as shown in FIG. 8, comprising:
  • a second identification module 81, configured to identify the assembly operation of the assembler using the method for identifying an assembly operation in an augmented reality environment of Embodiment 1; for details, refer to the detailed description of step S41 in the foregoing embodiment;
  • an execution module 82, configured to drive, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on the selected target part model, the target part model being a virtual model established in the augmented reality environment according to the target device; for details, refer to the detailed description of step S42 in the foregoing embodiment.
  • The augmented reality-based simulated assembly apparatus provided by this embodiment introduces augmented reality into the assembly operation. It can not only effectively check whether the product design is reasonable and meets requirements but also, compared with the virtual assembly schemes in the related art, allow the assembler to interact with virtual part model objects and real objects at the same time, which greatly improves the user's direct perception of the surrounding real world and the real-time interactive experience.
  • An embodiment of the invention provides an electronic device, which may include a virtual reality device or an augmented reality device. The electronic device may include:
  • an image collector, for example, any of various types of cameras, configured for image acquisition;
  • a memory configured to store information; the stored information may include computer-executable instructions, which may include source code or object code executable by the processor, such as an application, a software development tool, or an operating-system plug-in;
  • a processor, connected to the image collector and the memory respectively, configured to implement, by executing the computer-executable instructions, the method for identifying an assembly operation in an augmented reality environment provided by one or more of the foregoing technical solutions, or the augmented reality-based simulated assembly method provided by one or more of the foregoing technical solutions.
  • The processor may include a central processing unit, a microprocessor, a digital signal processor, a programmable array, or an application processor; the processor may be connected to the image collector and the memory respectively via various buses, for example, an integrated circuit bus.
  • In summary, this embodiment provides an electronic device that can implement the aforementioned method for identifying an assembly operation in an augmented reality environment, or the augmented reality-based simulated assembly method, for example, one or more of the methods illustrated in FIGS. 1 to 4 and FIG. 6.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for identifying an assembly operation in an augmented reality environment, a simulated assembly method and apparatus based on augmented reality, and a computer storage medium. The method for identifying an assembly operation includes: acquiring multiple frames of continuous images of an assembler (S11); extracting depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images (S12); and identifying a preset assembly operation according to the depth information (S13).

Description

Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment
Cross-reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 201710785919.X filed on September 4, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to, but is not limited to, the field of assembly technology, and in particular to a method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment.
Background
Assembly is a very important area of modern manufacturing; the assembly process generally accounts for about 40%-60% of the total person-hours in the design and manufacture of a product. Finding assembly technologies that are efficient and reliable, guarantee product quality, reduce production cost and improve product competitiveness is of great significance to the whole manufacturing industry. A related assembly approach is to machine physical prototype parts of the product and use the part-assembly process to help the user find deficiencies and flaws in the design and then improve it. Because a product prototype is actually manufactured, this approach gives the user realistic visual, auditory and tactile feedback, and it has been used very widely over the past two decades or more. However, prototyping is extremely time- and resource-consuming: if a design defect is found after the prototype has been machined, it cannot be corrected directly on the prototype; the product must be redesigned and remanufactured, and the whole process repeated until a satisfactory design is obtained, leading to a long development cycle and high cost.
With the development of computer-aided technology and virtual reality, virtual prototyping has matured and come into use, known as Virtual Assembly (VA). Virtual assembly uses virtual reality technology to generate a fully virtual three-dimensional assembly environment on a computer; through motion tracking, force feedback and similar techniques, the virtual assembly system gives the user means of operating within that environment and simulates the entire assembly process. For example, an operator first imports part models built in a Computer Aided Design (CAD) system into the virtual assembly system, then wears a positioning system and force-feedback devices and directly manipulates virtual parts to assemble them in the virtual environment, verifying the assemblability of the product through the virtual assembly process, gaining assembly experience, and evaluating and improving the product design.
Virtual assembly therefore requires no physical prototype: only virtual CAD models are manipulated, and the design can be modified repeatedly, which greatly shortens the development cycle and lowers development cost, making the assembly process faster, more efficient and more economical. Virtual assembly nevertheless has a drawback: the operator is placed in a virtual assembly environment composed entirely of computer graphics, which contains no information from the real environment and merely simulates the real working environment with a virtual scene, so the sense of realism produced by visual, force-feedback and similar techniques is limited. Although computer hardware and software keep getting more powerful, it is still often very difficult to build a system that generates sufficiently realistic scenes, handles complex assembly operations, and meets real-time requirements.
Augmented Reality (AR), with its blending of the virtual and the real, can remedy exactly this lack of scene realism in virtual reality. Applying augmented reality to the assembly field gives the operator a mixed environment that contains both the surrounding real assembly environment and virtual information, greatly enhancing the user's sense of reality. In an AR environment, engineers design and plan product assembly and its assembly sequence by manipulating virtual models in the real assembly workshop, and adjust and refine the product assembly according to the feedback from workshop design planning. Many researchers continue to explore the key technologies and applications of augmented reality in assembly, in order to improve technicians' ability to learn quickly the assembly and maintenance of increasingly complex new mechanical equipment.
Summary
The present disclosure addresses the related art described above.
A first aspect of the embodiments of the present disclosure provides a method for identifying an assembly operation in an augmented reality environment, including: acquiring multiple frames of continuous images of an assembler; extracting depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images; and identifying a preset assembly operation according to the depth information.
A second aspect of the embodiments of the present disclosure provides a simulated assembly method based on augmented reality, including: identifying an assembly operation of an assembler using the method for identifying an assembly operation in an augmented reality environment according to the first aspect of the embodiments of the present disclosure or any optional solution thereof; and driving, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on a selected target part model, the target part model being a virtual model established in the augmented reality environment according to a target device.
A third aspect of the embodiments of the present disclosure provides an apparatus for identifying an assembly operation in an augmented reality environment, including: an obtaining module configured to acquire multiple frames of continuous images of an assembler; an extraction module configured to extract depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images; and a first identification module configured to identify a preset assembly operation according to the depth information.
A fourth aspect of the embodiments of the present disclosure provides a simulated assembly apparatus based on augmented reality, including: a second identification module configured to identify an assembly operation of an assembler using the method for identifying an assembly operation in an augmented reality environment according to the first aspect of the embodiments of the present disclosure or any optional solution thereof; and an execution module configured to drive, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on a selected target part model, the target part model being a virtual model established in the augmented reality environment according to a target device.
An embodiment of the present disclosure further provides a computer storage medium storing a computer program; when executed by a processor, the computer program implements the aforementioned method for identifying an assembly operation in an augmented reality environment.
1. With the method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment provided by the embodiments of the present disclosure, multiple frames of continuous images of the assembler are acquired in real time in the augmented reality environment and analyzed to extract the depth information of the assembler's human skeleton nodes in each frame, and the assembly operation of the assembler is identified from that depth information, thereby applying somatosensory technology to the assembly field and making the operation more user-friendly.
2. The method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment provided by the embodiments of the present disclosure introduce augmented reality into the assembly operation. They can not only effectively check whether the product design is reasonable and meets requirements but also, compared with the virtual assembly schemes in the related art, place the assembler in a mixed-reality scene that blends the virtual and the real, allow interactive operation on virtual part model objects and real objects at the same time, and achieve imperceptible interaction: the operator needs no extra dedicated actions or dedicated interaction tools. This increases the immersion of virtual assembly, brings virtual assembly closer to real assembly, and greatly improves the user's direct perception of the surrounding real world and the real-time interactive experience.
Brief Description of the Drawings
To explain the technical solutions in the specific embodiments of the present disclosure or in the related art more clearly, the drawings needed in the description of the specific embodiments or the related art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for identifying an assembly operation in an augmented reality environment according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a Kinect-based somatosensory recognition method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a method for scaling a part model according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a simulated assembly method based on augmented reality according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the tree structure of assembly part models according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart of a method for implementing virtual-hand interaction according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an apparatus for identifying an assembly operation in an augmented reality environment according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a simulated assembly apparatus based on augmented reality according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the present disclosure will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In the description of the present disclosure, it should be noted that the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present disclosure described below may be combined with one another as long as they do not conflict.
This embodiment provides a method for identifying an assembly operation in an augmented reality environment, applicable to simulating the assembly process in the field of industrial assembly so as to improve the efficiency and quality of assembly design and planning in the early design stage. As shown in FIG. 1, the method may include, but is not limited to, the following steps:
S11: acquiring multiple frames of continuous images of the assembler. Optionally, a depth somatosensory device may be used to capture the assembler's gestures. A depth somatosensory device can provide real-time motion capture, microphone input, image recognition, speech recognition and similar functions, freeing the user from the constraints of traditional input devices; here the depth somatosensory device may be a Kinect, whose workflow is shown in FIG. 2. To identify an assembly operation, multiple continuous frames must be processed: gesture features are extracted from the detected trajectory of hand motion and then classified and recognized. The multiple continuous frames here may be multiple images continuously captured by a camera; for example, when the camera captures video, the continuous frames may be consecutively distributed image frames in the video.
S12: extracting depth information of the assembler's human skeleton nodes from each frame of the multiple continuous frames. The depth information contains various human-body features of the assembler, at least including skeleton-node features. At a given moment, the spatial positions of the skeleton nodes can be sampled to obtain the relative positions and angles between them; over a continuous period of time, the motion vectors of the skeleton nodes can be obtained. This three-dimensional information can be used to recognize human postures and gestures. Optionally, the depth information of the human skeleton nodes is extracted from each frame of the continuous frames captured by the Kinect. In some embodiments, the depth information may be information captured by a depth camera that reflects the imaging of the human skeleton in three-dimensional space. The three-dimensional data may include X-axis, Y-axis and Z-axis data, where the X, Y and Z axes are mutually perpendicular.
S13: identifying a preset assembly operation according to the depth information. Optionally, the motion trajectories of skeleton nodes may serve as the features of a dynamic gesture; for example, by detecting the initial and final states of the skeleton nodes of a dynamic gesture, the predefined interactive gesture is finally computed and recognized from the spatial position changes of the human skeleton nodes. With somatosensory technology applied to the assembly field, identifying assembly operations from depth information requires no auxiliary tools and lightens the assembler's burden; compared with the collision-detection-based recognition schemes in the related art, the operation is simpler and more user-friendly.
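As a rough illustration of step S13, the sketch below classifies a simple dynamic gesture from the start and end positions of a single tracked skeleton node, as the text describes ("detection of the initial state and final state"). The function name, coordinate convention and threshold are illustrative assumptions, not part of the disclosed method.

```python
def classify_swing(trajectory, threshold=0.3):
    """trajectory: list of (x, y, z) positions of one skeleton node over time.

    Returns 'left', 'right', 'up', 'down', or None when the displacement
    between the initial and final states is too small to be deliberate.
    """
    if len(trajectory) < 2:
        return None
    x0, y0, _ = trajectory[0]     # initial state
    x1, y1, _ = trajectory[-1]    # final state
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < threshold and abs(dy) < threshold:
        return None  # displacement below threshold: not a gesture
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'up' if dy > 0 else 'down'
```

A real implementation would of course use the full trajectory rather than only its endpoints, but the dominant-axis decision shown here is the core of mapping skeleton-node displacement to a predefined gesture.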
Optionally, in some embodiments, step S13 may include the following when the assembly operation is selecting a part model. Identifying the preset assembly operation according to the depth information includes: obtaining, from the depth information, the initial state of at least one hand skeleton node of the assembler; selecting, from a pre-established assembly operation library, a first action set matching the initial state; tracking the motion trajectory of the hand skeleton node and removing, from the first action set, the first actions that do not match the trajectory, to obtain a second action set (the second action set here consists of the remaining first actions that do match the trajectory); and determining the selected target part model according to the second action set. Optionally, a device has many parts, and during assembly a model must be selected before it can be added, deleted, translated, rotated or scaled, so selecting a part model is the basis of model interaction in the assembly process. The selection interaction can track and recognize at least one hand (for example, the right hand) skeleton node of the assembler via the Kinect: the initial state of the right-hand skeleton node is obtained from the depth information, all the first actions in the assembly operation library that match this initial state are selected as the first action set, the trajectory of the right-hand skeleton node is then tracked, and during this process the first actions that do not match the trajectory are removed from the first action set; the remaining actions form the second action set, from which the selected target part model is determined. During assembly simulation, a virtual hand can be established that maps the motion of the real hand onto a two-dimensional virtual hand on the screen, which follows the real hand freely to select models.
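The selection logic described above, which keeps only the library actions consistent with the trajectory observed so far, can be sketched as a prefix filter. Representing each library action as a sequence of discrete poses is a simplifying assumption for illustration only.

```python
def filter_actions(candidates, observed):
    """candidates: dict mapping action name -> expected pose sequence.
    observed: the pose sequence tracked so far.

    Keep the actions whose expected sequence starts with the observed
    prefix; the result plays the role of the 'second action set'.
    """
    n = len(observed)
    return {name: seq for name, seq in candidates.items()
            if seq[:n] == observed}
```

For example, with a hypothetical library `{'select': ['open', 'point', 'push'], 'delete': ['open', 'wave']}`, observing `['open']` leaves both actions as candidates, while observing `['open', 'point']` narrows the set to `select` alone.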
Optionally, in some embodiments, step S13 may include the following when the assembly operation is adding a part model. Identifying the preset assembly operation according to the depth information includes: obtaining, from the depth information, the motion trajectory of at least one arm skeleton node of the assembler; when the trajectory of the arm skeleton node shows the arm being raised, loading the part model library for the operator to choose a part model; when the trajectory shows the arm pushing forward, confirming the addition of the selected target part model; and when the trajectory shows the arm being lowered, hiding the part model library. Optionally, since the model library only needs to be loaded when browsing or adding parts, it is hidden by default to leave a spacious working area for the whole assembly operation. When a model needs to be added, this can be done conveniently from the trajectory of at least one arm skeleton node of the assembler: for example, raising the left arm can stand for loading the part model library, from which the assembler selects a part model, and lowering the left arm can stand for hiding the library. The right hand then controls the virtual hand to pick the required part in the library. Finally, a forward push of the right hand confirms the selected target part model, completing the model-adding process.
Optionally, in some embodiments, step S13 may include the following when the assembly operation is moving a part model. Identifying the preset assembly operation according to the depth information includes: after the target part model is selected, obtaining, from the depth information, the motion trajectory of at least one hand skeleton node of the assembler; when the trajectory shows the hand hovering in place for a preset duration, confirming that permission to move the spatial coordinates of the target part model has been obtained; and making the spatial coordinates of the target part model follow the hand's trajectory so that the model moves to the specified position. Optionally, in an augmented reality scene, translating a model first requires selecting the target part model to be moved, then grabbing it and translating it to the appropriate position. For example, with one-handed right-hand operation, the virtual hand first selects the target part model to be translated, and a forward push of the right hand confirms the selection. Because translation requires permission over the model's spatial coordinates so that the model can follow the virtual hand, when the trajectory of the right-hand skeleton node shows the hand hovering for the preset duration, the permission to move the target part model's spatial coordinates is confirmed; the model's coordinates at that moment are obtained and made to follow the trajectory of the right-hand skeleton node until the model is translated to the specified position, finally realizing the translation operation.
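The hover-to-grab confirmation described above can be sketched as a dwell test over the most recent hand samples: the move permission is granted only once the hand has stayed within a small radius for long enough. The frame count and radius values are illustrative assumptions.

```python
def hover_locked(samples, dwell_frames=30, radius=0.02):
    """samples: chronological list of (x, y, z) hand-node positions.

    Return True (grant the spatial-coordinate move permission) when the
    last `dwell_frames` samples all lie within `radius` of their mean,
    i.e. the hand has hovered in place for the preset duration.
    """
    if len(samples) < dwell_frames:
        return False
    recent = samples[-dwell_frames:]
    cx = sum(p[0] for p in recent) / dwell_frames
    cy = sum(p[1] for p in recent) / dwell_frames
    cz = sum(p[2] for p in recent) / dwell_frames
    return all((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
               <= radius ** 2 for p in recent)
```

At 30 frames per second, `dwell_frames=30` corresponds to a one-second hover before the model starts following the hand.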
Optionally, in some embodiments, step S13 may include the following when the assembly operation is scaling a part model. Identifying the preset assembly operation according to the depth information includes: after the target part model is selected, obtaining, from the depth information, the motion trajectories of the skeleton nodes of both of the assembler's hands; when the trajectories show the two hands opening apart, enlarging the target part model to a first preset size, the first preset size being less than or equal to the model's maximum enlarged size; and when the trajectories show the two hands closing together, shrinking the model to a second preset size, the second preset size being greater than or equal to the model's minimum reduced size. Optionally, because of the limited assembly workspace and viewing angle, scaling operations are needed during engine assembly, for example, to enlarge or shrink individual part models. The right hand first selects the target part model to be scaled, a forward push of the right hand confirms the selection, and the two hands are then placed in front of the chest; the scaling operation is recognized from the trajectories of the two hands' skeleton nodes: opening the hands once enlarges the model to the first preset size, and bringing them together once shrinks it to the second preset size. It should be noted that, because of real-space constraints, every part model has a corresponding scaling range, as shown in FIG. 3, where S (Scale) denotes the current size of the selected model (the target part model), Smax (Scale max) denotes the model's maximum enlarged size, and Smin (Scale min) denotes the model's minimum reduced size. If the current size would exceed the maximum enlarged size (S > Smax) or fall below the minimum reduced size (S < Smin), the scaling gesture is invalid and the model is neither enlarged nor shrunk.
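The bounds check described above, where a scaling gesture is treated as invalid whenever the result would leave the range [Smin, Smax], can be sketched as follows; the function name and the multiplicative gesture factor are illustrative assumptions.

```python
def apply_scale(current, factor, s_min, s_max):
    """Apply one scaling gesture to the selected model.

    current: current size S of the target part model.
    factor:  scale multiplier of the gesture (>1 opening, <1 closing).
    Returns the new size, or `current` unchanged when the proposed size
    would exceed s_max or fall below s_min (gesture invalid).
    """
    proposed = current * factor
    if proposed > s_max or proposed < s_min:
        return current  # out-of-range gesture: no enlarge/shrink performed
    return proposed
```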
Optionally, in some embodiments, step S13 may include the following when the assembly operation is rotating a part model. Identifying the preset assembly operation according to the depth information includes: after the target part model is selected, obtaining, from the depth information, the motion trajectory of at least one hand skeleton node of the assembler; and when the trajectory shows the hand rotating in place, making the target part model follow the hand's rotation trajectory so that the model rotates to the specified orientation. Optionally, models often need to be rotated during assembly; for example, when the cylinder head is to be fitted after the crankshaft and the piston-connecting-rod mechanism have been assembled, the cylinder block must be turned over, and the block is the part that needs rotating most often in the whole assembly. Moreover, to let the assembler observe and interact with the whole assembly through 360 degrees, the assembly bracket is designed to be rotatable. When many part models may need rotating, a single rotation gesture cannot simply stand for rotating a particular model; therefore, when rotating a model, the right hand first selects the model to be rotated and a forward push of the right hand confirms the selection. Rotation can then be realized from the trajectory of the assembler's right-hand skeleton node: when the trajectory shows the right hand rotating in place, the target part model follows the right hand's rotation trajectory, i.e. the right-hand rotation gesture rotates the selected model.
Optionally, in some embodiments, step S13 may include the following when the assembly operation is deleting a part model. Identifying the preset assembly operation according to the depth information includes: after the target part model is selected, obtaining, from the depth information, the motion trajectory of at least one arm skeleton node of the assembler; and when the trajectory shows the arm waving left and right about its upper arm as the central axis, determining that the target part model is to be deleted. Optionally, when a wrong or redundant part model has been added, it needs to be deleted. To match people's habits, the gesture can borrow from the motion of erasing text with an eraser: for example, waving the right arm can stand for the delete operation. The target part model is first selected; the trajectory of the right-arm skeleton node is then tracked, and if it shows the right arm waving left and right about its upper arm as the central axis, deletion of the target part model is confirmed.
Optionally, in some embodiments, step S13 may include the following when the assembly operation is proceeding to the next assembly step. Identifying the preset assembly operation according to the depth information includes: obtaining, from the depth information, the motion trajectory of a single arm skeleton node of the assembler; when the trajectory shows the single arm swinging toward a first preset direction, judging whether all assembly steps have been completed; and when not all assembly steps have been completed, confirming execution of the next assembly step, otherwise confirming that this operation is invalid. Optionally, the assembly process can be divided into multiple assembly steps. For a concrete implementation of an automobile-engine assembly sequence, for example, swinging the right arm to the left (the first preset direction) can stand for proceeding to the next assembly step: the trajectory of the right-arm skeleton node is tracked, and if the right arm is judged to swing to its left, it must also be determined whether all assembly steps have already been completed; if not, the next step is executed, otherwise the operation is invalid. This avoids erroneous system operations.
Optionally, in some embodiments, step S13 may include the following when the assembly operation is returning to the previous assembly step. Identifying the preset assembly operation according to the depth information includes: obtaining, from the depth information, the motion trajectory of a single arm skeleton node of the assembler; when the trajectory shows the single arm swinging toward a second preset direction, judging whether the current step is the first assembly step; and if the current step is not the first assembly step, confirming the return to the previous step, otherwise confirming that this operation is invalid. Optionally, the assembly process can generally be divided into multiple assembly steps. For a concrete implementation of an automobile-engine assembly sequence, for example, swinging the left arm to the right (the second preset direction) can stand for returning to the previous step: the trajectory of the left-arm skeleton node is tracked, and if it shows the left arm swinging to its right, it must also be determined whether the current assembly step is the first; if not, the return to the previous step is confirmed, otherwise the operation is invalid. That is, if no assembly step has yet been performed, the left-arm right-swing gesture cannot effectively execute the return, avoiding losses caused by erroneous system operations.
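The guards on the next-step and previous-step gestures described in the last two paragraphs (next is invalid once all steps are finished; previous is invalid while still on the first step) can be sketched as a small step counter. The class and method names are illustrative assumptions.

```python
class AssemblySequence:
    """Track progress through a fixed number of assembly steps."""

    def __init__(self, n_steps):
        self.n_steps = n_steps
        self.completed = 0  # number of steps already performed

    def next_step(self):
        """Right-arm left-swing gesture: advance, or reject if done."""
        if self.completed >= self.n_steps:
            return False  # all steps finished: gesture invalid
        self.completed += 1
        return True

    def prev_step(self):
        """Left-arm right-swing gesture: go back, or reject at start."""
        if self.completed == 0:
            return False  # no step performed yet: gesture invalid
        self.completed -= 1
        return True
```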
With the method for identifying an assembly operation in an augmented reality environment provided by this embodiment, multiple frames of continuous images of the assembler are acquired in real time in the augmented reality environment and analyzed to extract the depth information of the assembler's human skeleton nodes in each frame, and the assembly operation of the assembler is identified from that depth information, thereby applying somatosensory technology to the assembly field; compared with the collision-detection-based recognition schemes in the related art, the operation is simpler and more user-friendly.
This embodiment provides a simulated assembly method based on augmented reality, applicable to simulating the assembly process in the field of industrial assembly so as to improve the efficiency and quality of assembly design and planning in the early design stage. As shown in FIG. 4, the method includes the following steps:
S41: identifying an assembly operation of the assembler using the method for identifying an assembly operation in an augmented reality environment of the foregoing embodiment; for details, refer to the detailed description in the foregoing embodiment.
S42: driving, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on the selected target part model, the target part model being a virtual model established in the augmented reality environment according to the target device. Optionally, before the assembly simulation in a practical application, preparation work is first carried out, including:
Step 1: using three-dimensional modeling software, in the augmented reality environment, the geometric models of the parts are built through basic feature modeling and complex surface modeling, and the assembly constraint relationships between model points, lines and surfaces are handled: choosing the origin of each part's coordinate system sensibly, simplifying model details appropriately, naming the part model files uniformly, and saving model files under standardized storage paths. Because there are so many parts, even though many of them have already been simplified during geometric modeling, the combined size of all the model files is still very large, which strongly affects later model rendering, occupies a great deal of computer resources (memory, GPU, etc.) and degrades system performance. In addition, the geometric models of parts and assemblies created by Solidworks are stored in the .sldprt and .SLDASM formats respectively, while they must ultimately be imported into the three-dimensional game engine Unity3D for rendering, and Unity3D supports only the .fbx and .X geometric model formats. Therefore, before the models are imported into the rendering engine, the geometric models need to be optimized and format-converted.
Step 2: the assembly hierarchy of the three-dimensional models. Before assembling, the hierarchical relationships of the assembly must be planned. While considering the assembly process, the three-dimensional models of the device's parts are divided into assembly levels according to the specific conditions of visual motion simulation in the chosen rendering engine. In Unity3D, for example, the geometric model of the device takes a tree structure, as shown in FIG. 5: the root node is the overall assembly, the leaf nodes are parts, and the intermediate non-leaf nodes represent sub-assemblies; upper and lower nodes are in a parent-child relationship, and sibling nodes are relatively independent of one another. In the whole hierarchical model, every child node moves independently relative to its parent while also moving along with the parent's motion. During assembly, the hierarchy unfolds downward with the main device as the parent node. The assembly hierarchy model uses the tree structure to represent the assembly relationships among the overall assembly, the sub-assemblies and the individual parts; it expresses their parent-child relationships reasonably and vividly and indicates the assembly order, namely that lower-level parts are assembled before upper-level parts.
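The parent-child motion of the assembly tree described above can be sketched with a minimal node type in which a child's world position is its local offset composed with its parent's position, so moving a parent moves the whole subtree. Rotation is omitted for brevity, and the names are illustrative, not Unity3D API.

```python
class Node:
    """Minimal assembly-tree node: children move with their parent."""

    def __init__(self, name, local=(0.0, 0.0, 0.0)):
        self.name = name
        self.local = local       # position relative to the parent node
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world(self, parent_world=(0.0, 0.0, 0.0)):
        """Return {name: world position} for this node and its subtree."""
        w = tuple(p + l for p, l in zip(parent_world, self.local))
        positions = {self.name: w}
        for child in self.children:
            positions.update(child.world(w))  # child inherits parent motion
        return positions
```

Translating the root (changing its `local`) automatically translates every descendant, which is exactly the "children move with the parent while remaining independently movable" behaviour of the hierarchy.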
Step 3: division of the assembly process. The device has a complex structure and numerous parts, so the whole assembly process is simplified into multiple assembly steps.
Step 4: the model library virtual panel. The model library virtual panel is a UI menu; the Unity three-dimensional engine integrates a WYSIWYG UI solution and keeps extending it technically to ensure that a fairly ideal UI system can ultimately be achieved. It should be noted that the subsequent state-saving work is performed only after exiting the application environment, i.e. after killing (closing) the application process, unmounting the disk and unloading the modules; afterwards only a minimal Linux system needs to be backed up, which simplifies state saving for the device-driver and kernel layers and makes the saving of the configuration options in Step 1 entirely application-layer oriented, facilitating a multi-state selection strategy.
A virtual hand is then established. When assembling in an augmented reality environment without the aid of input devices such as data gloves or a mouse and keyboard, and the real hand manipulates the virtual model through gestures, the virtual hand serves as the medium that maps the motion of the real hand, realizing interaction with the virtual model; the implementation flow is shown in FIG. 6. In this way, the pre-established virtual hand is driven according to the assembly operation to perform the assembly operation on the selected target part model, completing the assembly step.
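The mapping of the tracked real hand onto a two-dimensional virtual hand on screen can be sketched as a normalization against a calibrated working box in front of the sensor. The bounds, screen size and function name are illustrative assumptions; a real system would calibrate them per user.

```python
def hand_to_screen(hand_xyz, bounds, screen=(1920, 1080)):
    """Map a tracked hand position (camera space) to virtual-hand pixels.

    bounds: ((xmin, xmax), (ymin, ymax)) of the calibrated working box.
    The result is clamped so the virtual hand never leaves the screen;
    the vertical axis is flipped because screen y grows downward.
    """
    (xmin, xmax), (ymin, ymax) = bounds
    u = (hand_xyz[0] - xmin) / (xmax - xmin)
    v = (hand_xyz[1] - ymin) / (ymax - ymin)
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (screen[0] - 1)), int((1.0 - v) * (screen[1] - 1))
```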
With the simulated assembly method based on augmented reality provided by this embodiment, augmented reality is introduced into the assembly operation. It can not only effectively check whether the product design is reasonable and meets requirements but also, compared with the virtual assembly schemes in the related art, place the assembler in a mixed-reality scene that blends the virtual and the real, allow interactive operation on virtual part model objects and real objects at the same time, and greatly improve the user's direct perception of the surrounding real world and the real-time interactive experience.
This embodiment provides an apparatus for identifying an assembly operation in an augmented reality environment, as shown in FIG. 7, comprising:
an obtaining module 71 configured to acquire multiple frames of continuous images of an assembler; for details, refer to the detailed description of step S11 in the foregoing Embodiment 1;
an extraction module 72 configured to extract depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images; for details, refer to the detailed description of step S12 in the foregoing embodiment; and
a first identification module 73 configured to identify a preset assembly operation according to the depth information; for details, refer to the detailed description of step S13 in the foregoing embodiment.
With the apparatus for identifying an assembly operation in an augmented reality environment provided by this embodiment, multiple frames of continuous images of the assembler are acquired in real time in the augmented reality environment and analyzed to extract the depth information of the assembler's human skeleton nodes in each frame, and the assembly operation of the assembler is identified from that depth information, thereby applying somatosensory technology to the assembly field; compared with the collision-detection-based recognition schemes in the related art, the operation is simpler and more user-friendly.
This embodiment provides a simulated assembly apparatus based on augmented reality, as shown in FIG. 8, comprising:
a second identification module 81 configured to identify an assembly operation of the assembler using the method for identifying an assembly operation in an augmented reality environment of Embodiment 1; for details, refer to the detailed description of step S41 in the foregoing embodiment; and
an execution module 82 configured to drive, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on the selected target part model, the target part model being a virtual model established in the augmented reality environment according to the target device; for details, refer to the detailed description of step S42 in the foregoing embodiment.
With the simulated assembly apparatus based on augmented reality provided by this embodiment, augmented reality is introduced into the assembly operation; it can not only effectively check whether the product design is reasonable and meets requirements but also, compared with the virtual assembly schemes in the related art, place the assembler in a mixed-reality scene that blends the virtual and the real, allow interactive operation on virtual part model objects and real objects at the same time, and greatly improve the user's direct perception of the surrounding real world and the real-time interactive experience.
An embodiment of the present invention provides an electronic device, which may include a virtual reality device or an augmented reality device. The electronic device may include:
an image collector, for example, any of various types of cameras, configured for image acquisition;
a memory configured to store information; the stored information may include computer-executable instructions, which may include source code or object code executable by the processor, such as an application, a software development tool, or an operating-system plug-in; and
a processor, connected to the image collector and the memory respectively, configured to implement, by executing the computer-executable instructions, the method for identifying an assembly operation in an augmented reality environment provided by one or more of the foregoing technical solutions, or the simulated assembly method based on augmented reality provided by one or more of the foregoing technical solutions.
The processor may include a central processing unit, a microprocessor, a digital signal processor, a programmable array, or an application processor; the processor may be connected to the image collector and the memory respectively via various buses, for example, an integrated circuit bus.
In summary, this embodiment provides an electronic device that can implement the aforementioned method for identifying an assembly operation in an augmented reality environment, or the simulated assembly method based on augmented reality, for example, one or more of the methods illustrated in FIGS. 1 to 4 and FIG. 6.
Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. A person of ordinary skill in the art may make changes or variations in other different forms on the basis of the above description. It is neither necessary nor possible to exhaust all implementations here, and the obvious changes or variations derived therefrom still fall within the protection scope of the present disclosure.

Claims (13)

  1. A method for identifying an assembly operation in an augmented reality environment, comprising:
    acquiring multiple frames of continuous images of an assembler;
    extracting depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images; and
    identifying a preset assembly operation according to the depth information.
  2. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is selecting a part model, and identifying the preset assembly operation according to the depth information comprises:
    obtaining, from the depth information, an initial state of at least one hand skeleton node of the assembler;
    selecting, from a pre-established assembly operation library, a first action set matching the initial state;
    tracking a motion trajectory of the hand skeleton node and removing, from the first action set, first actions that do not match the motion trajectory, to obtain a second action set; and
    determining a selected target part model according to the second action set.
  3. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is adding a part model, and identifying the preset assembly operation according to the depth information comprises:
    obtaining, from the depth information, a motion trajectory of at least one arm skeleton node of the assembler;
    when the motion trajectory of the arm skeleton node shows the arm being raised, loading a part model library for the operator to choose a part model;
    when the motion trajectory of the arm skeleton node shows the arm pushing forward, confirming addition of the selected target part model; and
    when the motion trajectory of the arm skeleton node shows the arm being lowered, hiding the part model library.
  4. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is moving a part model, and identifying the preset assembly operation according to the depth information comprises:
    after a target part model is selected, obtaining, from the depth information, a motion trajectory of at least one hand skeleton node of the assembler;
    when the motion trajectory of the hand skeleton node shows the hand hovering in place for a preset duration, confirming that permission to move the spatial coordinates of the target part model is obtained; and
    making the spatial coordinates of the target part model follow the motion trajectory of the hand, so that the target part model moves to a specified position.
  5. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is scaling a part model, and identifying the preset assembly operation according to the depth information comprises:
    after a target part model is selected, obtaining, from the depth information, motion trajectories of skeleton nodes of both hands of the assembler;
    when the motion trajectories of the two hands' skeleton nodes show the two hands opening apart, enlarging the target part model to a first preset size, the first preset size being less than or equal to a maximum enlarged size of the target part model; and
    when the motion trajectories of the two hands' skeleton nodes show the two hands closing together, shrinking the target part model to a second preset size, the second preset size being greater than or equal to a minimum reduced size of the target part model.
  6. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is rotating a part model, and identifying the preset assembly operation according to the depth information comprises:
    after a target part model is selected, obtaining, from the depth information, a motion trajectory of at least one hand skeleton node of the assembler; and
    when the motion trajectory of the hand skeleton node shows the hand rotating in place, making the target part model follow the rotation trajectory of the hand, so that the target part model rotates to a specified orientation.
  7. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is deleting a part model, and identifying the preset assembly operation according to the depth information comprises:
    after a target part model is selected, obtaining, from the depth information, a motion trajectory of at least one arm skeleton node of the assembler; and
    when the motion trajectory of the arm skeleton node shows the arm waving left and right about its upper arm as the central axis, determining that the target part model is to be deleted.
  8. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is proceeding to the next assembly step, and identifying the preset assembly operation according to the depth information comprises:
    obtaining, from the depth information, a motion trajectory of a single arm skeleton node of the assembler;
    when the motion trajectory of the single arm skeleton node shows the single arm swinging toward a first preset direction, judging whether all assembly steps have been completed; and
    when not all assembly steps have been completed, confirming execution of the next assembly step, otherwise confirming that the operation is invalid.
  9. The method for identifying an assembly operation in an augmented reality environment according to claim 1, wherein the assembly operation is returning to the previous assembly step, and identifying the preset assembly operation according to the depth information comprises:
    obtaining, from the depth information, a motion trajectory of a single arm skeleton node of the assembler;
    when the motion trajectory of the single arm skeleton node shows the single arm swinging toward a second preset direction, judging whether the current step is the first assembly step; and
    if the current step is not the first assembly step, confirming the return to the previous assembly step, otherwise confirming that the operation is invalid.
  10. A simulated assembly method based on augmented reality, comprising:
    identifying an assembly operation of an assembler using the method for identifying an assembly operation in an augmented reality environment according to any one of claims 1 to 9; and
    driving, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on a selected target part model, the target part model being a virtual model established in an augmented reality environment according to a target device.
  11. An apparatus for identifying an assembly operation in an augmented reality environment, comprising:
    an obtaining module configured to acquire multiple frames of continuous images of an assembler;
    an extraction module configured to extract depth information of human skeleton nodes of the assembler from each frame of the multiple frames of continuous images; and
    a first identification module configured to identify a preset assembly operation according to the depth information.
  12. A simulated assembly apparatus based on augmented reality, comprising:
    a second identification module configured to identify an assembly operation of an assembler using the method for identifying an assembly operation in an augmented reality environment according to any one of claims 1 to 9; and
    an execution module configured to drive, according to the assembly operation, a pre-established virtual hand to perform the assembly operation on a selected target part model, the target part model being a virtual model established in an augmented reality environment according to a target device.
  13. A computer storage medium storing a computer program, wherein, when executed by a processor, the computer program implements the method according to any one of claims 1 to 10.
PCT/CN2018/088092 2017-09-04 2018-05-23 Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment WO2019041900A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710785919.X 2017-09-04
CN201710785919.XA CN107678537A (zh) 2017-09-04 2017-09-04 Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment

Publications (1)

Publication Number Publication Date
WO2019041900A1 true WO2019041900A1 (zh) 2019-03-07

Family

ID=61135592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088092 WO2019041900A1 (zh) 2017-09-04 2018-05-23 Method and apparatus for identifying assembly operations and simulating assembly in an augmented reality environment

Country Status (2)

Country Link
CN (1) CN107678537A (zh)
WO (1) WO2019041900A1 (zh)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107678537A (zh) * 2017-09-04 2018-02-09 全球能源互联网研究院有限公司 增强现实环境中识别装配操作、模拟装配的方法和装置
CN108509086B (zh) * 2018-02-11 2021-06-08 合肥市科技馆 一种基于多媒体互动的自助餐互动系统
CN109102533A (zh) * 2018-06-19 2018-12-28 黑龙江拓盟科技有限公司 一种基于混合现实的特征点定位方法
CN109521868B (zh) * 2018-09-18 2021-11-19 华南理工大学 一种基于增强现实与移动交互的虚拟装配方法
CN110210366B (zh) * 2019-07-05 2021-04-27 青岛理工大学 装配拧紧过程样本采集系统、深度学习网络及监测系统
CN110516715B (zh) * 2019-08-05 2022-02-11 杭州依图医疗技术有限公司 一种手骨分类方法及装置
CN112752025B (zh) * 2020-12-29 2022-08-05 珠海金山网络游戏科技有限公司 虚拟场景的镜头切换方法及装置
CN114155610B (zh) * 2021-12-09 2023-01-24 中国矿业大学 基于上半身姿态估计的面板装配关键动作识别方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853464A (zh) * 2014-04-01 2014-06-11 郑州捷安高科股份有限公司 一种基于Kinect的铁路手信号识别方法
CN103941866A (zh) * 2014-04-08 2014-07-23 河海大学常州校区 一种基于Kinect深度图像的三维手势识别方法
CN105107200A (zh) * 2015-08-14 2015-12-02 济南中景电子科技有限公司 基于实时深度体感交互与增强现实技术的变脸系统及方法
US20160257000A1 (en) * 2015-03-04 2016-09-08 The Johns Hopkins University Robot control, training and collaboration in an immersive virtual reality environment
CN106022213A (zh) * 2016-05-04 2016-10-12 北方工业大学 一种基于三维骨骼信息的人体动作识别方法
CN107080940A (zh) * 2017-03-07 2017-08-22 中国农业大学 基于深度相机Kinect的体感交互转换方法及装置
CN107678537A (zh) * 2017-09-04 2018-02-09 全球能源互联网研究院有限公司 增强现实环境中识别装配操作、模拟装配的方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104808788B (zh) * 2015-03-18 2017-09-01 北京工业大学 一种非接触式手势操控用户界面的方法
CN106340217B (zh) * 2016-10-31 2019-05-03 华中科技大学 基于增强现实技术的制造装备智能系统及其实现方法
CN106980385B (zh) * 2017-04-07 2018-07-10 吉林大学 一种虚拟装配装置、系统及方法


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334421A (zh) * 2019-06-24 2019-10-15 武汉开目信息技术股份有限公司 零件设计模型可加工性批量分析方法及装置
CN113593314B (zh) * 2020-04-30 2023-10-20 青岛海尔空调器有限总公司 设备虚拟拆装培训系统及其培训方法
CN113593314A (zh) * 2020-04-30 2021-11-02 青岛海尔空调器有限总公司 设备虚拟拆装培训系统及其培训方法
CN111833436A (zh) * 2020-06-29 2020-10-27 华中科技大学 一种基于Unity 3D的自适应装配指导方法及系统
CN111968244A (zh) * 2020-06-30 2020-11-20 国网河北省电力有限公司培训中心 电力设备虚拟构建方法、装置、系统、终端及存储介质
CN111968244B (zh) * 2020-06-30 2024-05-10 国网河北省电力有限公司培训中心 电力设备虚拟构建方法、装置、系统、终端及存储介质
WO2022083238A1 (zh) * 2020-10-22 2022-04-28 北京字节跳动网络技术有限公司 构建虚拟装配体的方法、装置和计算机可读存储介质
CN112381933A (zh) * 2020-12-03 2021-02-19 北京航星机器制造有限公司 一种基于三维设计软件的安检机换型快速设计方法及装置
CN112381933B (zh) * 2020-12-03 2024-04-05 北京航星机器制造有限公司 一种基于三维设计软件的安检机换型快速设计方法及装置
CN112685837A (zh) * 2021-01-06 2021-04-20 安徽农业大学 一种基于装配语义及目标识别的植保无人机的建模方法
CN113610985A (zh) * 2021-06-22 2021-11-05 富泰华工业(深圳)有限公司 虚实交互的方法、电子设备及存储介质
CN113610985B (zh) * 2021-06-22 2024-05-17 富泰华工业(深圳)有限公司 虚实交互的方法、电子设备及存储介质
CN116301390B (zh) * 2023-05-24 2023-09-15 中科航迈数控软件(深圳)有限公司 机床装配指导方法、装置、ar眼镜及存储介质
CN116301390A (zh) * 2023-05-24 2023-06-23 中科航迈数控软件(深圳)有限公司 机床装配指导方法、装置、ar眼镜及存储介质

Also Published As

Publication number Publication date
CN107678537A (zh) 2018-02-09

Similar Documents

Publication Publication Date Title
WO2019041900A1 (zh) 增强现实环境中识别装配操作、模拟装配的方法和装置
Martinez-Gonzalez et al. Unrealrox: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation
EP3882861A2 (en) Method and apparatus for synthesizing figure of virtual object, electronic device, and storage medium
WO2020228644A1 (zh) 基于ar场景的手势交互方法及装置、存储介质、通信终端
Mossel et al. 3DTouch and HOMER-S: intuitive manipulation techniques for one-handed handheld augmented reality
Gutierrez et al. IMA-VR: A multimodal virtual training system for skills transfer in Industrial Maintenance and Assembly tasks
JP2014501413A (ja) ジェスチャ認識のためのユーザ・インタフェース、装置および方法
Fiorentino et al. Design review of CAD assemblies using bimanual natural interface
CN111862333A (zh) 基于增强现实的内容处理方法、装置、终端设备及存储介质
US10553009B2 (en) Automatically generating quadruped locomotion controllers
CN109035415B (zh) 虚拟模型的处理方法、装置、设备和计算机可读存储介质
CN110573992B (zh) 使用增强现实和虚拟现实编辑增强现实体验
JP2014235634A (ja) 手指動作検出装置、手指動作検出方法、手指動作検出プログラム、及び仮想物体処理システム
WO2018156087A1 (en) Finite-element analysis augmented reality system and method
WO2015126392A1 (en) Emulating a user performing spatial gestures
Tao et al. Manufacturing assembly simulations in virtual and augmented reality
Wang et al. Real-virtual interaction in AR assembly simulation based on component contact handling strategy
CN110544315B (zh) 虚拟对象的控制方法及相关设备
US7088377B2 (en) System and method for designing, synthesizing and analyzing computer generated mechanisms
Hughes et al. From raw 3D-Sketches to exact CAD product models–Concept for an assistant-system
KR101211178B1 (ko) 증강 현실 컨텐츠 재생 시스템 및 방법
CN114327063A (zh) 目标虚拟对象的交互方法、装置、电子设备及存储介质
Osorio-Gómez et al. An augmented reality tool to validate the assembly sequence of a discrete product
US20180329503A1 (en) Sensor system for collecting gestural data in two-dimensional animation
WO2020067204A1 (ja) 学習用データ作成方法、機械学習モデルの生成方法、学習用データ作成装置及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18849642

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18849642

Country of ref document: EP

Kind code of ref document: A1