CN115359213A - Multi-person online collaborative equipment virtual design implementation method and system

Info

Publication number
CN115359213A
CN115359213A (application CN202211008637.6A)
Authority
CN
China
Prior art keywords
virtual
equipment
model
character
online collaborative
Prior art date
2022-08-22
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202211008637.6A
Other languages
Chinese (zh)
Inventor
黄辉 (Huang Hui)
Current Assignee
Shenzhen Bangkang Industrial Robot Technology Co ltd
Original Assignee
Shenzhen Bangkang Industrial Robot Technology Co ltd
Priority date
2022-08-22
Filing date
2022-08-22
Publication date
2022-11-18
Application filed by Shenzhen Bangkang Industrial Robot Technology Co ltd
Priority to CN202211008637.6A
Publication of CN115359213A
Status: Pending (current)

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
                        • G06T17/205 Re-meshing
                • G06T19/00 Manipulating 3D models or images for computer graphics
                • G06T7/00 Image analysis
                    • G06T7/40 Analysis of texture
                    • G06T7/70 Determining position or orientation of objects or cameras
                        • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10024 Color image
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30004 Biomedical image processing
                            • G06T2207/30008 Bone
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 Arrangements for image or video recognition or understanding
                    • G06V10/20 Image preprocessing
                        • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a multi-person online collaborative equipment virtual design implementation method, which comprises the following steps: S1, acquiring the RGB, depth and skeleton data streams of a person and generating a 3D model of the person; S2, marking and tracking a 3D model of the real environment using a development kit; S3, building the model and map of the virtual equipment scene from the 3D data; S4, improving the ORB-SLAM2 algorithm by fusing adaptive information entropy with sharpening adjustment, removing misjudged human joint-skeleton poses with an average filtering method, and generating person interaction information through multi-modal fusion; and S5, uploading the fused point cloud data to the cloud over a network transmission protocol, performing feature point extraction and matching, three-dimensional object registration and tracking, and human pose solving in the cloud service module, and synchronously transmitting the data to the local users' VR devices. Because the VR- and AR-aided design system is embedded seamlessly into the work activity, the invention can assist multiple users in completing an operation task collaboratively online.

Description

Multi-person online collaborative equipment virtual design implementation method and system
Technical Field
The invention relates to the technical fields of virtual reality, augmented reality and human-computer interaction, and in particular to a method and system for implementing multi-person online collaborative equipment virtual design.
Background
Virtual Reality (VR) is a form of computer media that presents the user with a digitized environment mimicking the real one; the computer provides the user with a completely virtual world. In this virtual digital world, people can manipulate virtual objects and gain an immersive, on-the-scene experience. Building on VR, Augmented Reality (AR) technology provides an environment that combines the real and the virtual: in addition to the virtual information generated by the computer, the real environment remains fully perceptible in the user's field of view. AR thus supplies the information needed to extend the virtual scene and compensates for VR's shortcomings in fully virtual immersion and related aspects.
In conventional aided design, however, the information provided is scattered and disconnected, with weak hierarchy and logic. As engineering equipment grows more complex, workers must repeatedly consult related documents during maintenance and design, which wastes time and labor and invites mistakes.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is to provide a multi-person online collaborative equipment virtual design implementation method and system that embed a VR- and AR-aided design system seamlessly into the work activity and can assist multiple users in completing an operation task collaboratively online.
In order to solve the technical problems, the invention adopts the following technical scheme.
A multi-person online collaborative equipment virtual design implementation method comprises the following steps: S1, acquiring the RGB, depth and skeleton data streams of a person and generating a 3D model of the person; S2, marking and tracking a 3D model of the real environment using a development kit; S3, building the model and map of the virtual equipment scene from the 3D data; S4, improving the ORB-SLAM2 algorithm by fusing adaptive information entropy with sharpening adjustment, removing misjudged human joint-skeleton poses with an average filtering method, and generating person interaction information through multi-modal fusion; and S5, uploading the fused point cloud data to the cloud over a network transmission protocol, performing feature point extraction and matching, three-dimensional object registration and tracking, and human pose solving in the cloud service module, synchronously transmitting the data to the local users' VR devices, generating the map by projection from the coordinates obtained in step S3, performing fusion processing with the multi-modal interaction information from step S4, and outputting the processing result to the VR devices, thereby realizing multi-user online collaborative work.
Preferably, in step S1, RGB, depth and skeleton data streams of the person are acquired by using a 3D camera.
Preferably, the RGB and depth data are used to obtain a three-dimensional point cloud representation of the whole scene; the skeleton data are used to segment and extract the person from the complete 3D scene point cloud by Voronoi segmentation; the extracted person point cloud is triangulated with a dense-mesh strategy and its texture information extracted to obtain a mesh representation; and finally the 3D model is output.
Preferably, in the step S2, the development kit is a Vuforia development kit.
Preferably, the Vuforia development kit is based on image recognition: a template-matching algorithm from computer graphics is selected to recognize a specific image. The chosen "identification map" is analyzed and stored in advance, and from this information the coordinates of the virtual object in the camera view are computed. The "identification map" is searched for and recognized in the current image and overlaid on the display; once the "identification map" that actually matches the template has been recognized, it is overlaid at the predetermined three-dimensional coordinates to complete calibration, the position of the virtual model in world coordinates is determined, and after coordinate conversion the virtual world coordinates are obtained.
Preferably, in step S3, a 3DMAX tool is used to render the model, and a Unity3D tool is used to build the virtual scene and the interaction logic.
A multi-person online collaborative equipment virtual design implementation system is used for implementing the method.
The invention discloses a multi-person online collaborative equipment virtual design implementation method. In its implementation, multiple pieces of virtual equipment are connected to form an equipment network, enabling them to operate cooperatively. The method covers both equipment virtual design and multi-user collaboration technology: the equipment virtual design uses visualization technology to virtualize the actual hardware in software on a software platform, building a virtual equipment operation training system, while the multi-user online collaboration mode connects the multiple pieces of virtual equipment. The virtual design simulates the real operating procedure through the software platform and can reflect the various conditions of the equipment in actual operation; it overcomes the limitations of traditional equipment in operator training, data processing and similar respects, virtualizes the actual hardware, effectively reduces system development cost, and enables communication with external applications.
Drawings
FIG. 1 is a flow chart of the multi-person online collaborative equipment virtual design implementation method according to the present invention.
Detailed Description
The invention is described in more detail below with reference to the figures and examples.
The invention discloses a method for implementing multi-person online collaborative equipment virtual design which, referring to FIG. 1, comprises the following steps:
S1, acquiring the RGB, depth and skeleton data streams of a person and generating a 3D model of the person;
S2, marking and tracking a 3D model of the real environment using a development kit;
S3, building the model and map of the virtual equipment scene from the 3D data;
S4, improving the ORB-SLAM2 algorithm by fusing adaptive information entropy with sharpening adjustment, removing misjudged human joint-skeleton poses with an average filtering method (a filtering sketch follows these steps), and generating person interaction information through multi-modal fusion;
and S5, uploading the fused point cloud data to the cloud over a network transmission protocol, performing feature point extraction and matching, three-dimensional object registration and tracking, and human pose solving in the cloud service module, synchronously transmitting the data to the local users' VR devices, generating the map by projection from the coordinates obtained in step S3, performing fusion processing with the multi-modal interaction information from step S4, and outputting the processing result to the VR devices, thereby realizing multi-user online collaborative work.
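A minimal sketch of the average filtering in step S4, assuming a 25-joint skeleton and a five-frame sliding window (both illustrative choices, not specified by the patent): each frame's tracked joint positions are averaged with the recent frames, so that an isolated misjudged skeleton pose is pulled back toward the recent average.

    import numpy as np
    from collections import deque

    class JointMeanFilter:
        """Sliding-window average over the most recent frames of joint positions."""

        def __init__(self, num_joints=25, window=5):
            self.num_joints = num_joints
            self.history = deque(maxlen=window)   # each entry: (num_joints, 3) array

        def update(self, joints):
            """joints: (num_joints, 3) xyz positions for the current frame.
            Returns the smoothed positions; an isolated misjudged frame is
            pulled back toward the average of the recent frames."""
            self.history.append(np.asarray(joints, dtype=float))
            return np.mean(np.stack(self.history, axis=0), axis=0)

    # Usage with stand-in data in place of a real per-frame skeleton stream:
    filt = JointMeanFilter()
    frame = np.random.rand(25, 3)
    smoothed = filt.update(frame)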
In the implementation of this method, multiple pieces of virtual equipment are connected to form an equipment network, enabling them to operate cooperatively. The method covers both equipment virtual design and multi-user collaboration technology: the equipment virtual design uses visualization technology to virtualize the actual hardware in software on a software platform, building a virtual equipment operation training system, while the multi-user online collaboration mode connects the multiple pieces of virtual equipment. The virtual design simulates the real operating procedure through the software platform and can reflect the various conditions of the equipment in actual operation; it overcomes the limitations of traditional equipment in operator training, data processing and similar respects, virtualizes the actual hardware, effectively reduces system development cost, and enables communication with external applications.
In a preferred embodiment, in step S1, the RGB, depth and skeleton data streams of the person are acquired by a 3D camera. The RGB and depth data are used to obtain a three-dimensional point cloud representation of the whole scene; the skeleton data are used to segment and extract the person from the complete 3D scene point cloud by Voronoi segmentation; the extracted person point cloud is triangulated with a dense-mesh strategy and its texture information extracted to obtain a mesh representation; and finally the 3D model is output.
In practice, the 3D camera may be a Microsoft Kinect v2 device, from which the RGB data, depth data and skeleton data streams are obtained to generate the 3D model of the person. The RGB and depth data yield the three-dimensional point cloud representation of the whole scene; the skeleton data are used to segment and extract the person from the complete scene point cloud with Voronoi-based segmentation; the extracted, cleaned person point cloud is triangulated with a dense-mesh strategy and its texture information extracted to obtain a mesh representation; and finally the 3D model is output.
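A minimal sketch of this Voronoi-style extraction, assuming NumPy/SciPy and a coarse set of background anchor points (an illustrative choice not detailed above): every scene point is assigned to its nearest site, and only points whose nearest site is a skeleton joint are kept as the person.

    import numpy as np
    from scipy.spatial import cKDTree

    def extract_person(scene_points, joints, background_anchors):
        """scene_points: (N, 3) point cloud of the whole scene.
        joints: (J, 3) skeleton joint positions from the 3D camera.
        background_anchors: (B, 3) coarse samples of the rest of the scene.
        Keeps the points whose nearest site is a skeleton joint, i.e. the
        points falling in the Voronoi cells owned by the person."""
        sites = np.vstack([joints, background_anchors])
        tree = cKDTree(sites)                    # nearest-site lookup = Voronoi cell test
        _, nearest = tree.query(scene_points)
        person_mask = nearest < len(joints)      # cell owned by a joint, not by background
        return scene_points[person_mask]

    # Usage with stand-in data:
    scene = np.random.rand(10000, 3) * 4.0
    joints = 1.5 + np.random.rand(25, 3)
    anchors = np.random.rand(50, 3) * 4.0
    person_cloud = extract_person(scene, joints, anchors)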
Preferably, in step S2, the development kit is the Vuforia development kit. The Vuforia development kit is based on image recognition: a template-matching algorithm from computer graphics is selected to recognize a specific image. The chosen "identification map" is analyzed and stored in advance, and from this information the coordinates of the virtual object in the camera view are computed. The "identification map" is searched for and recognized in the current image and overlaid on the display; once the "identification map" that actually matches the template has been recognized, it is overlaid at the predetermined three-dimensional coordinates to complete calibration, the position of the virtual model in world coordinates is determined, and after coordinate conversion the virtual world coordinates are obtained.
In this embodiment, the Vuforia development kit is provided as a software development kit (SDK).
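As a minimal sketch of the template-matching idea itself (plain OpenCV, not the Vuforia API; the file names and score threshold are assumptions), the pre-stored "identification map" can be located in the current camera frame as follows:

    import cv2

    def find_marker(frame_gray, marker_gray, threshold=0.8):
        """Return the (x, y) top-left corner of the marker in the frame,
        or None if the normalized correlation score is below the threshold."""
        scores = cv2.matchTemplate(frame_gray, marker_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        return max_loc if max_val >= threshold else None

    # Hypothetical file names; in the real system the frame comes from the camera
    # and the "identification map" from the pre-stored template database.
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
    marker = cv2.imread("identification_map.png", cv2.IMREAD_GRAYSCALE)
    if frame is not None and marker is not None:
        location = find_marker(frame, marker)
        # `location` can then seed the pose used to place the virtual model.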
In step S3 of this embodiment, the 3DMAX tool is used to render the model, and the Unity3D tool is used to build the virtual scene and its interaction logic. Rendering with the 3DMAX tool improves the fineness of the model.
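For the feature point extraction and matching that step S5 places in the cloud service module, a minimal sketch using plain OpenCV ORB features with brute-force Hamming matching is given below; the parameter values are assumptions, and the adaptive-information-entropy and sharpening refinement described in step S4 is not reproduced here.

    import cv2

    def match_orb_features(img1_gray, img2_gray, max_matches=100):
        """Extract ORB keypoints in both grayscale images and return the best matches."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1_gray, None)
        kp2, des2 = orb.detectAndCompute(img2_gray, None)
        if des1 is None or des2 is None:
            return kp1, kp2, []                  # no features found in one of the images
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        # The matched point pairs can then feed three-dimensional registration,
        # tracking and pose solving on the cloud side.
        return kp1, kp2, matches[:max_matches]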
On this basis, the invention also discloses a multi-person online collaborative equipment virtual design implementation system, which is used to implement the above multi-person online collaborative equipment virtual design implementation method.
Compared with the prior art, the multi-person online collaborative equipment virtual design implementation method and system of the invention embed the VR- and AR-aided design system seamlessly into the work activity, can assist multiple users in completing an operation task collaboratively online, and better meet application requirements.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents or improvements made within the technical scope of the present invention should be included in the scope of the present invention.

Claims (7)

1. A multi-person online collaborative equipment virtual design implementation method, characterized by comprising the following steps:
S1, acquiring the RGB, depth and skeleton data streams of a person and generating a 3D model of the person;
S2, marking and tracking a 3D model of the real environment using a development kit;
S3, building the model and map of the virtual equipment scene from the 3D data;
S4, improving the ORB-SLAM2 algorithm by fusing adaptive information entropy with sharpening adjustment, removing misjudged human joint-skeleton poses with an average filtering method, and generating person interaction information through multi-modal fusion;
and S5, uploading the fused point cloud data to the cloud over a network transmission protocol, performing feature point extraction and matching, three-dimensional object registration and tracking, and human pose solving in the cloud service module, synchronously transmitting the data to the local users' VR devices, generating the map by projection from the coordinates obtained in step S3, performing fusion processing with the multi-modal interaction information from step S4, and outputting the processing result to the VR devices, thereby realizing multi-user online collaborative work.
2. The multi-person online collaborative equipment virtual design implementation method according to claim 1, wherein in step S1, the RGB, depth and skeleton data streams of the person are acquired by a 3D camera.
3. The multi-person online collaborative equipment virtual design implementation method according to claim 1, wherein the RGB and depth data are used to obtain a three-dimensional point cloud representation of the whole scene; the skeleton data are used to segment and extract the person from the complete 3D scene point cloud by Voronoi segmentation; the extracted person point cloud is triangulated with a dense-mesh strategy and its texture information extracted to obtain a mesh representation; and finally the 3D model is output.
4. The multi-person online collaborative equipment virtual design implementation method according to claim 1, wherein in step S2, the development kit is a Vuforia development kit.
5. The multi-person online collaborative equipment virtual design implementation method according to claim 4, wherein the Vuforia development kit is based on image recognition: a template-matching algorithm from computer graphics is selected to recognize a specific image; the chosen "identification map" is analyzed and stored in advance, and from this information the coordinates of the virtual object in the camera view are computed; the "identification map" is searched for and recognized in the current image and overlaid on the display; and once the "identification map" that actually matches the template has been recognized, the virtual object is overlaid at the predetermined three-dimensional coordinates to complete calibration, the position of the virtual model in world coordinates is determined, and after coordinate conversion the virtual world coordinates are obtained.
6. The multi-person online collaborative equipment virtual design implementation method according to claim 1, wherein in step S3, the model is rendered using a 3DMAX tool, and the virtual scene and interaction logic are built using a Unity3D tool.
7. A multi-person online collaborative equipment virtual design implementation system, characterized in that the system is configured to implement the method of any one of claims 1-6.
CN202211008637.6A (priority date 2022-08-22, filing date 2022-08-22): Multi-person online collaborative equipment virtual design implementation method and system. Status: Pending. Published as CN115359213A (en).

Priority Applications (1)

Application CN202211008637.6A (published as CN115359213A (en)), priority date 2022-08-22, filing date 2022-08-22: Multi-person online collaborative equipment virtual design implementation method and system

Applications Claiming Priority (1)

Application CN202211008637.6A (published as CN115359213A (en)), priority date 2022-08-22, filing date 2022-08-22: Multi-person online collaborative equipment virtual design implementation method and system

Publications (1)

CN115359213A, published 2022-11-18

Family

ID=84002322

Family Applications (1)

CN202211008637.6A (CN115359213A (en), Pending), priority date 2022-08-22, filing date 2022-08-22: Multi-person online collaborative equipment virtual design implementation method and system

Country Status (1)

Country Link
CN (1) CN115359213A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
CN116415338A * (priority date 2023-04-24, published 2023-07-11, assignee 连云港智源电力设计有限公司 (Lianyungang Zhiyuan Electric Power Design Co., Ltd.)): Virtual transformer substation modeling system and method based on three-dimensional model state visualization

Similar Documents

Publication Publication Date Title
CN108776773B (en) Three-dimensional gesture recognition method and interaction system based on depth image
Kasahara et al. Second surface: multi-user spatial collaboration system based on augmented reality
CN108509026B (en) Remote maintenance support system and method based on enhanced interaction mode
EP2568355A2 (en) Combined stereo camera and stereo display interaction
CN107484428B (en) Method for displaying objects
CN110163942B (en) Image data processing method and device
KR101711736B1 (en) Feature extraction method for motion recognition in image and motion recognition method using skeleton information
CN111294665B (en) Video generation method and device, electronic equipment and readable storage medium
CN112837406B (en) Three-dimensional reconstruction method, device and system
CN106815555B (en) Augmented reality method and system for distributed scene target recognition
CN111260084A (en) Remote system and method based on augmented reality collaborative assembly maintenance
CN109460150A (en) A kind of virtual reality human-computer interaction system and method
CN106898049A (en) A kind of spatial match method and system for mixed reality equipment
CN104656893A (en) Remote interaction control system and method for physical information space
KR20170064026A (en) The way of a smart education services for 3D astronomical educational services, using virtual reality, augmented reality-based immersive interface
CN111897422B (en) Real object interaction method and system for real-time fusion of virtual and real objects
CN106774870A (en) A kind of augmented reality exchange method and system
CN112613123A (en) AR three-dimensional registration method and device for aircraft pipeline
CN115359213A (en) Multi-person online collaborative equipment virtual design implementation method and system
Rani et al. Hand gesture control of virtual object in augmented reality
CN114169546A (en) MR remote cooperative assembly system and method based on deep learning
CN116935008A (en) Display interaction method and device based on mixed reality
CN104484034A (en) Gesture motion element transition frame positioning method based on gesture recognition
JP2000194876A (en) Virtual space sharing device
CN111383343B (en) Home decoration design-oriented augmented reality image rendering coloring method based on generation countermeasure network technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination