CN110634188A - Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses

Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses

Info

Publication number
CN110634188A
CN110634188A (application CN201810648691.4A)
Authority
CN
China
Prior art keywords
virtual
model
mixed reality
data
logic gate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810648691.4A
Other languages
Chinese (zh)
Inventor
杜晶
张弦
马云
贾惟宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visual Interactive (Beijing) Technology Co Ltd
Original Assignee
Visual Interactive (Beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Visual Interactive (Beijing) Technology Co Ltd
Priority to CN201810648691.4A
Publication of CN110634188A
Legal status: Pending

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02C SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00 Non-optical adjuncts; Attachment thereof
    • G02C11/10 Electronic devices other than hearing aids
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Otolaryngology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method for realizing interaction with a virtual 3D model, and MR mixed reality intelligent glasses. An image logic gate circuit array in a mixed reality coprocessor acquires depth information of the position at which the virtual 3D model is to be interacted with, from data acquired by an image sensing module; a position and posture logic gate circuit array outputs the relative position relation between the model and the glasses wearer according to data acquired by a position and posture sensing module. The coprocessor then realizes instant interaction between the wearer and the model according to these data and an interaction instruction. By adding a mixed reality coprocessor (with on-chip storage) to share the task load of the prior-art processor, the scheme realizes instant interaction with the virtual 3D model and solves the prior-art problems caused by insufficient processor capacity: the virtual 3D model displayed on the mixed reality intelligent glasses is superimposed at an inaccurate position in the virtual space, data outputs are asynchronous, virtual-real fusion is incoherent, and the wearer cannot interact with the virtual 3D model instantly.

Description

Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses
This application claims priority from the patent application having application number 201810598599.1.
Technical Field
The invention relates to the field of mixed reality data calculation, in particular to a method for realizing interaction with a virtual 3D model and MR mixed reality intelligent glasses.
Background
Currently, interaction with virtual 3D models is one of the important components of the AR/MR (Augmented Reality/Mixed Reality) domain. In the prior art, interacting with a virtual 3D model involves recognition calculations (graphic image recognition) and position and posture calculations (covering at least the position and posture of the mixed reality device and the position and posture of the virtual 3D model).
The position and posture data are obtained by collecting the current position and spatial posture data of the mixed reality intelligent glasses through an Inertial Measurement Unit (IMU). The processor (e.g. a CPU) processes these data (e.g. Euler angles, acceleration, geomagnetic data), determines the actual movement distance and angle from the results, finally superimposes the image and/or virtual 3D model(s) to be displayed at the accurate coordinates, and refracts/projects them onto the optical display medium of the mixed reality intelligent glasses.
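For illustration, the following is a minimal sketch (not taken from the patent; all names are illustrative) of the kind of computation such a processor performs on IMU samples: the gyroscope angular rate is integrated into an orientation angle, and the drift of that integration is damped with the accelerometer's gravity reading, a simple complementary filter.

    #include <cmath>

    // Illustrative IMU sample: angular rate from the gyroscope plus the
    // acceleration components measured along the three axes.
    struct ImuSample {
        double gyro_pitch_rate;            // rad/s
        double accel_x, accel_y, accel_z;  // m/s^2
    };

    // Complementary filter: trust the gyro short-term, anchor to the
    // accelerometer's gravity direction long-term to suppress drift.
    double update_pitch(double pitch, const ImuSample& s, double dt) {
        double gyro_pitch = pitch + s.gyro_pitch_rate * dt;  // integration step
        double accel_pitch = std::atan2(
            -s.accel_x, std::sqrt(s.accel_y * s.accel_y + s.accel_z * s.accel_z));
        const double alpha = 0.98;  // weighting between the two estimates
        return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch;
    }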
As is known, in a mixed reality application scenario in the active application state, the position of the mixed reality intelligent glasses and the image they acquire change constantly, and the position of the virtual 3D model relative to the glasses changes as well, so the relative position relation between the glasses and the virtual model varies dynamically. While the relative position changes, the refraction/projection of the virtual 3D model also changes, so the data volume grows geometrically. This data volume is mainly reflected in two aspects: the environmental information data collected by the various input devices, and the processing of such data.
In earlier AR/MR scenarios, the only purpose was to display a 3D virtual model or to refract/project a 2D graphic image onto the AR/MR device. With the development of the technology, however, an existing scene usually displays several virtual 3D models simultaneously or lets them interact with one another; each virtual 3D model has corresponding position and posture data generated in real time, and at the same time the environmental image and surroundings in which the virtual 3D models sit also change continuously, mainly in the ambient light, in whether the display plane on which the current virtual 3D model is superimposed is flat, and so on.
In the prior art, AR/MR data is usually processed on a single-processor architecture in which one CPU is responsible for all operations, including position and posture operations and graphic image operations. As noted above, all AR/MR-related calculations in the prior art are performed in this processor, but a CPU works in a serial processing mode and handles its tasks one by one; when the growth rate of the data tasks exceeds the processing rate, tasks inevitably accumulate and the processing rate drops further. In an AR/MR application scene, several visually refracted/projected virtual 3D models must be tracked, so during a data processing task the CPU has to process the position and posture data and the graphic image data of the virtual 3D models continuously and at the same time. Because the CPU processes them one by one (the position and posture data only after the graphic image data), the serial processing mode prevents the two results from being output simultaneously; stutter in the visual refraction/projection of the virtual 3D models cannot be avoided, and when the display cannot stay coherent, the purpose of interacting with the virtual 3D models cannot be achieved.
While the processor (CPU) handles these data (graphic image recognition and position and posture), it also has to process other tasks, so a large backlog of pending tasks builds up. The main processor (CPU) then cannot process the tasks in time, the output data cannot be fused in time, and the data outputs cannot be synchronized, so the virtual 3D model cannot be tracked (tracking generates the largest share of the position and posture data). For example: when the position and posture result and the recognition result are out of sync, the virtual 3D model displayed on the mixed reality intelligent glasses is superimposed at an inaccurate position in the virtual space, imaging is discontinuous and stutters, the device may even crash, and no interaction with the virtual 3D model is possible.
Disclosure of Invention
The invention provides a method for realizing interaction with a virtual 3D model, and MR mixed reality intelligent glasses. A mixed reality coprocessor processes the position and posture data set and outputs the relative position relation between the virtual 3D model and the mixed reality intelligent glasses; it also processes the image data set to acquire depth information of the position at which the virtual 3D model is to be interacted with. By adding the mixed reality coprocessor to share the task load of the prior-art processor, the position of the virtual 3D model can be tracked continuously. This solves the prior-art problems that the virtual 3D model displayed on the mixed reality intelligent glasses is superimposed at an inaccurate position in the virtual space and that data outputs are asynchronous owing to insufficient processing capacity, which make virtual-real fusion incoherent, may even crash the device, and make interaction with the virtual 3D model impossible.
In order to achieve the above object, the technical solution of the present invention provides a method for implementing interaction with a virtual 3D model, including: the position and posture sensing module collects a position and posture data set, and the image sensing module collects an image data set. The mixed reality coprocessor performs data processing on the image data set to acquire depth information of the to-be-interacted position of the virtual 3D model. The mixed reality coprocessor performs data processing on the position and posture data set and outputs the relative position relation between the virtual 3D model and the mixed reality intelligent glasses. The mixed reality coprocessor realizes the interaction between a wearer wearing the mixed reality intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and an interaction instruction. The interaction instruction and its corresponding instruction content are prestored in a memory in the mixed reality intelligent glasses or in the cloud.
Preferably, the position and posture sensing module collecting the position and posture data set includes: the inertial measurement unit continuously collects angular velocity change data and acceleration change data of the mixed reality intelligent glasses in three-dimensional space, and the geomagnetic sensor senses at least the magnetic field data and the longitude and latitude change data of the mixed reality intelligent glasses in the current use environment.
Preferably, the mixed reality coprocessor processing the position and posture data set and outputting the relative position relation between the virtual 3D model and the mixed reality smart glasses includes: the position and posture logic gate circuit array in the mixed reality coprocessor checks the angular velocity change data, acceleration change data, magnetic field data and longitude and latitude change data against its internal position and posture IP cores, processes the data continuously, acquires the relative position data between the virtual 3D model and the mixed reality intelligent glasses, integrates the relative position data, and outputs the relative position relation. The position and posture logic gate circuit array is obtained by compressing a position and posture algorithm file, resetting the data interface type, and, after verification, packaging the compressed algorithm file and the set data interface type onto a programmable logic gate circuit array; the solidified hardware circuit is a position and posture IP core, and the array contains at least two position and posture IP cores obtained in this way.
Preferably, the image sensing module acquiring the image data set includes: the single/double/multi-view depth camera module collects a depth frame image, and the optical time-of-flight sensor continuously collects the distance between the to-be-interacted position of the virtual 3D model and the mixed reality glasses.
Preferably, the mixed reality coprocessor processing the image data set to obtain the depth information of the to-be-interacted position of the virtual 3D model includes: the image recognition logic gate circuit array in the mixed reality coprocessor determines from the depth frame image whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane, and determines the virtual size of the virtual 3D model from the distance combined with a preset proportional relation, the virtual size being the size at which the virtual 3D model is virtually superimposed in real space. The image recognition logic gate circuit array is obtained by compressing an image recognition algorithm file and judging whether the compression result is correct against a golden reference model; if correct, the compression result is downloaded to a programmable logic gate circuit array to generate a hardware circuit, producing an image recognition IP core. The image recognition logic gate circuit array contains at least two image recognition IP cores obtained in this way.
Preferably, the image recognition logic gate circuit array in the mixed reality coprocessor determining from the depth frame image whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane includes: an image recognition IP core in the image recognition logic gate circuit array extracts pixel feature points from the depth frame image, and judges from the shape formed by the pixel feature points whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane. If the pixel feature points form a complete contour with no other pixel feature points inside it, the surface is a plane; if there are other pixel feature points inside the contour, the surface is a non-plane.
Preferably, the mixed reality coprocessor realizing the interaction between the wearer wearing the intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and the interaction instruction includes: the mixed reality coprocessor calls the corresponding instruction content according to the interaction instruction; the virtual 3D model changes in shape/form/position at the to-be-interacted position according to the instruction content; the virtual 3D model is virtually superimposed at the to-be-interacted position according to the judgment of whether the surface at that position is a plane or a non-plane; and the changed virtual 3D model is refracted/projected onto a digital optical display component of the mixed reality intelligent glasses. The mixed reality coprocessor also tracks the virtual 3D model according to the relative position relation. If the surface is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; if it is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point in the surface according to the interaction instruction.
In order to achieve the above object, the technical solution of the present invention further provides MR mixed reality smart glasses for interacting with a virtual 3D model, including: an image sensing module for acquiring an image data set, the image sensing module at least including a single/double/multi-view depth camera module for collecting a depth frame image, and an optical time-of-flight sensor for continuously acquiring the distance between the to-be-interacted position of the virtual 3D model and the mixed reality glasses; a position and posture sensing module for acquiring a position and posture data set, the position and posture sensing module at least including an inertial measurement unit for continuously acquiring angular velocity change data and acceleration change data of the mixed reality intelligent glasses in three-dimensional space, and a geomagnetic sensor for sensing at least the magnetic field data and the longitude and latitude change data of the mixed reality intelligent glasses in the current use environment; and a mixed reality coprocessor for processing the position and posture data set and outputting the relative position relation between the virtual 3D model and the mixed reality intelligent glasses. The mixed reality coprocessor is also used for processing the image data set, acquiring the depth information of the to-be-interacted position of the virtual 3D model, and realizing the interaction between a wearer wearing the mixed reality intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and an interaction instruction. The interaction instruction and its corresponding instruction content are prestored in a memory of the mixed reality intelligent glasses or in the cloud.
Preferably, the mixed reality coprocessor at least includes an image recognition logic gate circuit array and a position and posture logic gate circuit array. The image recognition logic gate circuit array is used to determine from the depth frame image whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane, and specifically includes: a pixel feature point extraction unit, in which an image recognition IP core in the image recognition logic gate circuit array extracts pixel feature points from the depth frame image acquired by the single/double/multi-view depth camera module; and a surface type obtaining unit, which determines from the shape formed by the extracted pixel feature points whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane, the surface being a plane if the pixel feature points form a complete contour with no other pixel feature points inside it, and a non-plane if there are other pixel feature points inside the contour. The position and posture logic gate circuit array checks the angular velocity change data, acceleration change data, magnetic field data and longitude and latitude change data against its internal position and posture IP cores, processes the data continuously, acquires the relative position data between the virtual 3D model and the mixed reality intelligent glasses, integrates the relative position data, and outputs the relative position relation. The mixed reality coprocessor is further used to call the corresponding instruction content according to an interaction instruction, instruct the virtual 3D model to change in shape/position at the to-be-interacted position according to the instruction content, virtually superimpose the virtual 3D model at the to-be-interacted position according to the surface-type judgment of whether that surface is a plane or a non-plane, and refract/project the changed virtual 3D model onto a digital optical display component of the mixed reality intelligent glasses. If the surface obtained by the surface type obtaining unit is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; if it is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point in the surface according to the interaction instruction. The mixed reality coprocessor also determines the virtual size of the virtual 3D model from the distance and a preset proportional relation; the virtual size is the size at which the virtual 3D model is virtually superimposed in real space.
The image recognition logic gate circuit array is obtained by compressing an image recognition algorithm file and judging whether the compression result is correct against a golden reference model; if correct, the compression result is downloaded to a programmable logic gate circuit array to generate a hardware circuit, producing an image recognition IP core. The image recognition logic gate circuit array contains at least two image recognition IP cores obtained in this way. The position and posture logic gate circuit array is obtained by compressing a position and posture algorithm file, resetting the data interface type, and, after verification, packaging the compressed algorithm file and the reset data interface type onto a programmable logic gate circuit array; the solidified hardware circuit is a position and posture IP core, and the array contains at least two position and posture IP cores obtained in this way. The mixed reality coprocessor also tracks the virtual 3D model according to the relative position relation.
Preferably, the MR mixed reality smart glasses further include: a wireless communication assembly, used for the mixed reality intelligent glasses to transmit data, via Bluetooth and wireless networks in a private/public network environment, simultaneously or non-simultaneously, with a private/public cloud, at least one mobile/non-mobile intelligent terminal and at least one wearable intelligent device; a control assembly, used to receive control instructions and send them to the integrated operation assembly, the control instruction types at least including touch instructions, key instructions, remote control instructions and voice control instructions; an integrated operation assembly, used to receive the control instructions sent by the control assembly and give corresponding feedback, and also to cooperate with the mixed reality coprocessor in data processing; and a power supply assembly, comprising at least one set of polymer batteries, used for power management of the mixed reality smart glasses, at least including a power management function, a battery level display function, and a fast charging function realized by a fast charging control circuit, a boost control circuit and a low-voltage regulator control circuit in the power supply assembly; when the battery level falls below a threshold, the integrated operation assembly switches the glasses to an energy-saving mode.
The invention provides a method for realizing interaction with a virtual 3D model, and MR mixed reality intelligent glasses. The mixed reality coprocessor in the glasses contains a logic gate circuit array that processes the position and posture data set, acquiring the relevant data from the position and posture sensing module and outputting the changing relative position relation between the virtual 3D model and the mixed reality intelligent glasses; the mixed reality coprocessor also contains a logic gate circuit array that processes the image data set, obtaining the depth information of the to-be-interacted position of the virtual 3D model from the image data set acquired by the image sensing module. Using the relative position relation, the depth information and an interaction instruction, the logic gate circuit arrays in the mixed reality coprocessor realize real-time interaction between the wearer of the mixed reality intelligent glasses and the virtual 3D model.
The beneficial effect of this technical solution is that, by adding the mixed reality coprocessor to share the task load of the prior-art processor, the position of the virtual 3D model can be tracked continuously, solving the prior-art problems that insufficient processor capacity makes the virtual 3D model displayed on the mixed reality intelligent glasses superimpose at an inaccurate position in the virtual space and makes the data outputs asynchronous, so that virtual-real fusion is incoherent, the device may crash, and the wearer cannot interact with the virtual 3D model in real time.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for interacting with a virtual 3D model according to an embodiment of the present invention;
FIG. 2 is a flowchart of an embodiment of the present invention;
FIG. 3 is a detailed flowchart of step 203 in FIG. 2;
FIG. 4 is a schematic structural diagram of the MR mixed reality smart glasses for interacting with a virtual 3D model provided by the present invention;
FIG. 5 is a schematic structural diagram of the position and posture sensing module 41 in FIG. 4;
FIG. 6 is a schematic structural diagram of the image sensing module 42 in FIG. 4;
FIG. 7 is a schematic diagram of the mixed reality coprocessor 43 in FIG. 4;
FIG. 8 is a schematic diagram of the image recognition logic gate circuit array 72 in FIG. 7;
FIG. 9 is a schematic view of a move interaction instruction scenario according to an embodiment of the present invention;
FIG. 10 is a schematic view of a touch interaction instruction scenario according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the connections and transmission among the components in the mixed reality smart glasses provided by the present invention;
FIG. 12 is a schematic diagram of a mixed reality smart glasses product provided by the present invention;
FIG. 13 is a measurement schematic of the inertial measurement unit in the position and posture sensing module of the present invention;
FIGS. 14A and 14B are schematic structural diagrams of a PCBA main control board in the technical solution of the present invention;
FIGS. 15A and 15B are schematic structural diagrams of a PCBA data acquisition board in the technical solution of the present invention;
FIGS. 16A and 16B are schematic structural diagrams of a PCBA function board in the mixed reality smart glasses of the present invention;
FIG. 17 shows the PCBA touch pad of the touch component of the mixed reality smart glasses in the present invention;
FIG. 18 shows the physical assembly of the PCBAs and the battery assembly of FIGS. 14A-17;
FIG. 19 is a schematic diagram of the electrical connections of the inertial measurement unit in the position and posture sensing module of the present invention;
FIG. 20 is a schematic diagram of the electrical connections of the geomagnetic sensor in the position and posture sensing module of the present invention;
FIG. 21 is a schematic diagram of the electrical connections of the optical time-of-flight sensor in the image sensing module of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the scope of the present invention.
Fig. 1 is a flowchart of a method for implementing interaction with a virtual 3D model according to an embodiment of the present invention, as shown in fig. 1, including:
step 101, an image sensing module acquires an image data set, and a position and posture sensing module acquires a position and posture data set.
For the image sensing module:
the single/double/multi-view depth camera module collects a depth frame image. And continuously acquiring the distance between the virtual 3D model to-be-interacted position and the mixed reality glasses by using the optical flight sensor.
For the position and posture sensing module:
the inertial measurement unit continuously collects angular velocity change data and acceleration change data in the three-dimensional space of the mixed reality intelligent glasses;
the geomagnetic sensor is used for at least sensing magnetic field data and longitude and latitude change data of the mixed reality intelligent glasses in the current use environment.
Step 102, the mixed reality coprocessor performs data processing on the image data set to acquire depth information of the to-be-interacted position of the virtual 3D model.
The image recognition logic gate circuit array in the mixed reality coprocessor determines from the depth frame image whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane.
Specifically: an image recognition IP core inside the image recognition logic gate circuit array extracts pixel feature points from the depth frame image and judges, from the shape the pixel feature points form, whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane. If the pixel feature points form a complete contour with no other pixel feature points inside it, the surface at the to-be-interacted position is a plane; if there are other pixel feature points inside the contour, it is a non-plane.
The mixed reality coprocessor also determines the virtual size of the virtual 3D model from the distance and the preset proportional relation; the virtual size is the size at which the virtual 3D model is virtually superimposed in real space.
The image recognition logic gate circuit array is obtained by compressing the image recognition algorithm file and judging whether the compression result is correct against the golden reference model; if correct, the compression result is downloaded to a programmable logic gate circuit array to generate a hardware circuit, producing an image recognition IP core. The image recognition logic gate circuit array contains at least two image recognition IP cores obtained in this way.
Step 103, the mixed reality coprocessor performs data processing on the position and posture data set and outputs the relative position relation between the virtual 3D model and the mixed reality intelligent glasses.
The position and posture logic gate circuit array in the mixed reality coprocessor checks the angular velocity change data, acceleration change data, magnetic field data and longitude and latitude change data against its internal position and posture IP cores, processes the data continuously, acquires the relative position data between the virtual 3D model and the mixed reality intelligent glasses, integrates the relative position data and outputs the relative position relation.
The position and posture logic gate circuit array is obtained by compressing a position and posture algorithm file, resetting the data interface type, and, after verification, packaging the compressed algorithm file and the reset data interface type onto a programmable logic gate circuit array; the solidified hardware circuit is a position and posture IP core, and the array contains at least two position and posture IP cores obtained in this way.
Step 104, the mixed reality coprocessor realizes the interaction between the wearer of the mixed reality intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and the interaction instruction.
The mixed reality coprocessor calls the corresponding instruction content according to the interaction instruction.
The virtual 3D model changes in shape/form/position at the to-be-interacted position according to the instruction content; the virtual 3D model is virtually superimposed at the to-be-interacted position according to the judgment of whether the surface at that position is a plane or a non-plane; and the changed virtual 3D model is refracted/projected onto a digital optical display component of the mixed reality intelligent glasses.
Specifically, the mixed reality coprocessor displays the virtual 3D model and tracks it according to the relative position relation.
If the surface is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; if the surface is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point in the surface according to the interaction instruction.
The invention provides a method for realizing interaction with a virtual 3D model. A logic gate circuit array in the mixed reality coprocessor in the glasses processes the position and posture data set collected by the position and posture sensing module and outputs the changing relative position relation between the virtual 3D model and the mixed reality intelligent glasses; the coprocessor also contains a logic gate circuit array that processes the image data set acquired by the image sensing module and obtains the depth information of the to-be-interacted position of the virtual 3D model. The two logic gate circuit arrays work simultaneously and output the position relation and the depth information at the same time, and real-time interaction between the wearer of the mixed reality intelligent glasses and the virtual 3D model is realized by the logic gate circuit arrays in the mixed reality coprocessor according to the relative position relation, the depth information and the interaction instruction.
The beneficial effect is that, by adding the mixed reality coprocessor to share the task load of the prior-art processor, image data and position and posture data can be output synchronously and real-time interaction with the virtual 3D model is achieved, solving the problems that insufficient processor capacity makes the virtual 3D model displayed on the mixed reality intelligent glasses superimpose at an inaccurate position in the virtual space, makes the data outputs asynchronous, makes virtual-real fusion incoherent, and prevents the wearer from interacting with the virtual 3D model in real time.
The technical solution of the present invention is described below with a specific embodiment. In this embodiment, the specific position and posture algorithm implemented in the position and posture logic gate circuit array is illustrated with undistortPoints as an example; this does not limit the specific algorithm types in the position and posture logic gate circuit array.
Fig. 2 is a flow chart of an embodiment of the present invention, as shown in fig. 2:
step 201, the inertial measurement unit collects angular velocity change and acceleration change data.
The inertial measurement unit is an IMU (Inertial Measurement Unit). It comprises three acceleration sensors and three angular velocity sensors (gyroscopes), used respectively to acquire the acceleration components and the angle information of the mixed reality intelligent glasses.
Step 202, the geomagnetic sensor senses the current environmental magnetic field and the change of longitude and latitude.
With its acceleration sensors and gyroscopes, the IMU can describe essentially the full motion state of the device. Over long periods of movement, however, accumulated drift appears and the motion posture can no longer be described accurately; for example, the displayed picture tilts.
The electronic compass in the geomagnetic sensor measures the earth's magnetic field and applies correction compensation through its absolute-pointing capability, which effectively removes the accumulated drift and corrects the motion direction, posture angle, motion strength, motion speed and the like of the wearer.
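As an illustration of this correction (a sketch under common sign conventions, not the patent's own algorithm), the magnetometer reading can be tilt-compensated with the current pitch and roll and converted into an absolute heading that re-anchors the gyro-integrated yaw:

    #include <cmath>

    // Tilt-compensated compass heading: rotate the measured magnetic field
    // vector (mx, my, mz) back into the horizontal plane using the current
    // pitch and roll, then take the angle to magnetic north. Axis and sign
    // conventions vary between sensors; this follows one common choice.
    double absolute_heading(double mx, double my, double mz,
                            double pitch, double roll) {
        double xh = mx * std::cos(pitch) + mz * std::sin(pitch);
        double yh = mx * std::sin(roll) * std::sin(pitch)
                  + my * std::cos(roll)
                  - mz * std::sin(roll) * std::cos(pitch);
        return std::atan2(-yh, xh);  // heading in radians
    }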
Step 203, the position and posture IP core processes the data to generate the relative position relation and the trajectories.
The position posture IP core is arranged in the position posture logic gate circuit array.
The above data includes: the distance, the angular speed change data, the acceleration change data, the magnetic field data and the longitude and latitude change data.
The relative positional relationship specifically refers to a relative positional relationship between the virtual 3D model and the mixed reality glasses.
In an application scene, the components in steps 201 and 202 acquire the corresponding data continuously, and finally, in step 203, the dynamic position relation between the virtual 3D model and the mixed reality intelligent glasses and their respective motion trajectories are obtained.
Specifically, the position and posture logic gate circuit array is obtained by compressing the position and posture algorithm file, resetting the data interface type, and, after verification, packaging the rewritten algorithm file and data interface type into a programmable logic gate circuit array (such as an FPGA chip), where they are solidified into a hardware circuit; this hardware circuit is a position and posture IP core. Such a position and posture IP core is highly capable of processing data concurrently over multiple channels.
The way a logic gate circuit corresponding to one position and posture IP core in the position and posture logic gate circuit array mentioned in step 203 is obtained is now described, taking the undistortPoints algorithm among the position and posture algorithms as an example, as shown in FIG. 3:
Step 301, rewrite the solvePnP function into a solvePnP_MC function.
Using the convertTo function, Points3D is converted into the Mat form Points3DM, which is then rewritten after entering solvePnP. The class member variables become the interfaces of the individual subfunctions; since HLS does not recognize them, global variables are not used. The function members of the decomposed class become the subfunctions.
Step 302, define interface MAT type.
The type is HLS, Mat < PNum, PNum _ COL, HLS _32FC2> Points2D _ s;
hls::Mat<PNum,PNum_COL,HLS_32FC3>Points3D_s。
Step 303, extend the R and T interfaces into input/output interfaces.
Step 304, input initial values to complete the operation framework.
When the assigned initial values are input to the main function, the other calculations are masked.
Specifically, cv::Mat is replaced by a window, and the loop

    loop_fill: for (int buf_row = 0; buf_row < W_HEIGHT; buf_row++)

is used to fill the window with initial values.
Step 305, complete the undistortPoints logic gate circuit.
Specifically, the replaced functions, the R/T input/output interface and the interface data types are mapped onto an FPGA chip, generating a logic gate circuit array with the undistortPoints algorithm function; the above process is the reconstruction of the undistortPoints algorithm.
The reconstructed position and posture algorithm is packaged into a programmable logic gate circuit array to obtain the position and posture IP core corresponding to that algorithm. Several position and posture IP cores are arranged in the position and posture logic gate circuit array, each provided with its own corresponding position and posture algorithm.
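For illustration, a top-level function in the spirit of steps 301-305 might look as follows. This is a hedged sketch of a Vivado HLS interface, not the patent's actual source: PNum, PNum_COL and W_HEIGHT are placeholder constants, and the arithmetic inside the loop is elided.

    #include "hls_video.h"  // Vivado HLS video library: hls::Mat, HLS_32FC2/3

    const int PNum = 64, PNum_COL = 1, W_HEIGHT = 64;  // placeholder sizes

    // Rewritten solvePnP (step 301) with hls::Mat interfaces (step 302) and
    // R/T extended into input/output interfaces (step 303).
    void solvePnP_MC(hls::Mat<PNum, PNum_COL, HLS_32FC2>& Points2D_s,
                     hls::Mat<PNum, PNum_COL, HLS_32FC3>& Points3D_s,
                     float R[9], float T[3]) {
    #pragma HLS INTERFACE ap_fifo port=R
    #pragma HLS INTERFACE ap_fifo port=T
        // Window-based initial fill replacing cv::Mat (step 304); the actual
        // undistortion/PnP arithmetic would sit inside this loop.
    loop_fill:
        for (int buf_row = 0; buf_row < W_HEIGHT; buf_row++) {
    #pragma HLS PIPELINE II=1
            // ... per-row point undistortion and accumulation ...
        }
    }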
Similarly, the image recognition IP core involved in the technical solution of the present invention is obtained in the same way as the position and posture IP core.
In this way, the position and posture algorithms and the image recognition algorithms are compressed into position and posture IP cores and image recognition IP cores respectively and packaged into the FPGA chip. The FPGA chip has on-chip storage (Block RAM); its on-chip storage area is seamlessly connected to the data operation circuits on the same chip, so the data to be processed flows inside the chip.
Furthermore, the FPGA chip contains several logic gate circuit units, each with a million-gate logic circuit array; during data processing, the data to be processed can be split into streams and processed in parallel, which greatly increases the processing speed.
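A small assumed example of how this looks in HLS source (not from the patent): a line buffer is bound to on-chip Block RAM next to the datapath, and the pixel loop is pipelined and partitioned so several data elements move through the logic gates per clock.

    // Hypothetical HLS fragment: on-chip BRAM plus a pipelined datapath.
    void process_row(const unsigned char in[1024], unsigned char out[1024]) {
        unsigned char line_buf[1024];
    #pragma HLS RESOURCE variable=line_buf core=RAM_2P_BRAM  // on-chip storage
    #pragma HLS ARRAY_PARTITION variable=line_buf cyclic factor=4
        for (int i = 0; i < 1024; i++) {
    #pragma HLS PIPELINE II=1  // one element per clock once the pipeline fills
            line_buf[i] = in[i];                   // data stays on-chip
            out[i] = line_buf[i] > 128 ? 255 : 0;  // trivial stand-in operation
        }
    }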
Step 204, the depth camera module collects a depth frame image of the to-be-interacted position.
Specifically, depending on device requirements, the depth camera module may be a monocular depth camera, a binocular depth camera module, or a depth camera array composed of several depth cameras.
Step 205, the optical time-of-flight sensor collects the distance between the virtual 3D model and the mixed reality glasses.
The sensor continuously emits light pulses toward the target and receives the light returned from the object; by detecting the flight (round-trip) time of the light pulse, it acquires the distance between the mixed reality intelligent glasses and the virtual object. Specifically, the distance between the glasses and the to-be-interacted position of the virtual 3D model is calculated from the flight time of the light within the virtual coordinates corresponding to the glasses and the virtual 3D model.
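The underlying relation is simple; a minimal sketch (with c the speed of light, a constant not stated in the patent text):

    // One-way distance from a measured round-trip time of a light pulse.
    double tof_distance_m(double round_trip_seconds) {
        const double c = 299792458.0;         // speed of light in m/s
        return c * round_trip_seconds / 2.0;  // halve: light goes out and back
    }
    // Example: a round trip of 10 ns gives roughly 1.5 m.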
Step 206, determine whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane.
An image recognition IP core in the image recognition logic gate circuit array determines from the pixel feature points in the depth frame image whether the surface at the to-be-interacted position of the virtual 3D model is a plane or a non-plane. Specifically, the image recognition IP core takes each pixel point of the current depth frame image as a center pixel in turn, computes the weights of its adjacent pixels and compares them with the corresponding adjacent pixel values: if the value of an adjacent pixel is smaller than the value of the center pixel, that position is marked 0; otherwise it is marked 1 and recorded as a pixel feature point. After all pixel points in the depth frame image have been processed, a set of pixel feature points is obtained; the pixel feature points form a corresponding contour shape, from which the surface type (plane or non-plane) is obtained.
If the feature points in the set are continuous and can form a continuous graphic shape with no other feature points inside the shape, the surface at the to-be-interacted position is a plane.
If the feature points in the set are discrete within one shape, the surface at the to-be-interacted position is a non-plane.
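A hedged sketch of the neighbour-comparison step described above, in the style of an LBP operator (the exact ELBP variant and feature-point criterion used in the IP core are not spelled out in the text; here a pixel is kept as a feature point when its neighbourhood code indicates a depth discontinuity):

    #include <vector>

    struct PixelPoint { int x, y; };

    // For every interior pixel, compare the 8 neighbours with the centre:
    // neighbour >= centre contributes a 1 bit, otherwise a 0 bit. A pixel
    // whose code is neither all zeros nor all ones lies on a depth edge and
    // is recorded as a feature point.
    std::vector<PixelPoint> extract_feature_points(
            const std::vector<std::vector<int>>& depth, int w, int h) {
        std::vector<PixelPoint> pts;
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int code = 0, center = depth[y][x];
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (dx != 0 || dy != 0)
                            code = (code << 1) | (depth[y + dy][x + dx] >= center);
                if (code != 0 && code != 0xFF)
                    pts.push_back({x, y});  // mark as pixel feature point
            }
        }
        return pts;
    }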
The image recognition logic gate circuit array is obtained by compressing an image recognition algorithm file and judging whether the compression result is correct against a golden reference model; if correct, the compression result is downloaded to a programmable logic gate circuit array to generate a hardware circuit, producing an image recognition IP core. The image recognition logic gate circuit array contains at least two image recognition IP cores obtained in this way.
The golden reference model is a pixel image obtained by processing a reference image with the original image recognition algorithm; in the present invention it serves as the standard for verifying whether the rewriting result is correct.
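Assuming the golden reference is produced by running the original OpenCV algorithm in software, the verification step can be pictured as a bit-exact image comparison like the following sketch (matches_golden is a hypothetical helper, not code from the patent):

    #include <opencv2/opencv.hpp>

    // Compare the hardware (co-simulation) output against the golden
    // reference image; any pixel mismatch means the rewritten algorithm is
    // not yet correct and must not be committed to the FPGA.
    bool matches_golden(const cv::Mat& hw_out, const cv::Mat& golden) {
        if (hw_out.size() != golden.size() || hw_out.type() != golden.type())
            return false;
        cv::Mat diff;
        cv::absdiff(hw_out, golden, diff);
        return cv::countNonZero(diff.reshape(1)) == 0;  // bit-exact match
    }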
Step 207, acquire the interaction instruction content.
The interaction instructions and their corresponding instruction content are prestored in a memory of the mixed reality intelligent glasses or in the cloud, to be called and executed during interaction. The stored interaction instructions include, but are not limited to: a single-finger touch is a touch instruction, an upturned palm is a display instruction, spreading/closing two fingers is an enlarge/shrink instruction, and a single-finger slide is a rotate instruction. The instruction content corresponding to these interaction instructions is given only as an example; in practical applications, the interaction instructions and their corresponding content can be defined freely according to the needs of the wearer/software.
The image recognition logic gate circuit array extracts the instruction feature contour from the depth frame image acquired by the single/double-eye depth camera according to the interaction instructions stored in the register, and recognizes the action in the depth frame image that corresponds to an interaction instruction.
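For illustration only, the prestored pairs of interaction instruction and instruction content can be pictured as a lookup table like the one below (gesture and content names are invented here; in the glasses, the pairs live in on-device memory or in the cloud and are freely redefinable):

    #include <map>
    #include <string>

    enum class Gesture { SingleFingerTouch, PalmUp, TwoFingerSpread,
                         TwoFingerPinch, SingleFingerSlide };

    // Interaction instruction -> instruction content, looked up when the
    // image recognition array matches a gesture contour in the depth frame.
    const std::map<Gesture, std::string> kInstructionContent = {
        {Gesture::SingleFingerTouch, "touch/select the virtual 3D model"},
        {Gesture::PalmUp,            "display the virtual 3D model"},
        {Gesture::TwoFingerSpread,   "enlarge the virtual 3D model"},
        {Gesture::TwoFingerPinch,    "shrink the virtual 3D model"},
        {Gesture::SingleFingerSlide, "rotate the virtual 3D model"},
    };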
Step 208, determine the virtual size of the virtual 3D model according to the distance.
The size at which the virtual 3D model is virtually superimposed in real space is determined from the distance acquired by the optical time-of-flight sensor in step 205, combined with a preset proportional relation.
For example, when the optical time-of-flight sensor finds the distance between the to-be-interacted position and the mixed reality smart glasses to be L (a natural number, in units of length) and the virtual size of the virtual 3D model is a × b × c (a, b, c natural numbers, in units of length), then if the distance L shrinks by 10%, the virtual size of the virtual 3D model shrinks in equal proportion to 0.9a × 0.9b × 0.9c.
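The proportional rule in the example reduces to a one-line scaling (scaled_virtual_size is a hypothetical helper written for illustration):

    struct Size3D { double a, b, c; };

    // Scale the virtual overlay size in equal proportion with the measured
    // distance: k = 0.9 when the distance shrinks by 10%, as in the example.
    Size3D scaled_virtual_size(Size3D base, double base_distance, double distance) {
        double k = distance / base_distance;
        return {base.a * k, base.b * k, base.c * k};
    }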
Step 209, refract/project the interaction result onto the digital optical display assembly.
The interaction result at least comprises: the virtual 3D model changed according to the instruction content, with its virtual size adjusted according to the preset proportional relation.
The digital optical display module includes, but is not limited to, refractive/reflective digital optical micro-display devices: LCD (Liquid Crystal Display), LED (Light Emitting Diode), OLED (Organic Light Emitting Diode), DMD (Digital Micromirror Device), DLP (Digital Light Processing) and LCOS (Liquid Crystal on Silicon). The digital optical display medium devices include, but are not limited to, lenses bonded with optical waveguide gratings, semi-transparent/fully transparent display optical prism assemblies, free-form-surface optical semi-transparent/fully transparent display assemblies, waveguide-optics semi-transparent/fully transparent display lens assemblies, and the like.
Table 1 below compares, under the same conditions, the speed of the image recognition logic gate circuit array in the mixed reality coprocessor of the present invention with that of a prior-art 8-core ARM Cortex-A7 CPU when processing one frame at resolutions of 600P, 720P and 1080P. As the table shows, the image logic gate circuit array is 199.2 times as fast as the 8-core ARM Cortex-A7 CPU on 600P images, 327.8 times as fast on 720P images, and 282.8 times as fast on 1080P images. (Note: the data are averages of 1000 measurements.) The algorithm used in the comparison is the ELBP algorithm, which is frequently called in gesture recognition.
    Resolution (1 frame)    Image recognition logic gate circuit array vs. 8-core ARM Cortex-A7 CPU
    600P                    199.2 times as fast
    720P                    327.8 times as fast
    1080P                   282.8 times as fast

TABLE 1
As the processing speeds in Table 1 show, the mixed reality coprocessor has a great speed advantage over the conventional CPU. The table compares the processing of a single frame; in practical application scenes, the images are continuous multi-frame streams, and obviously with M frames (M ≥ 2) the image data volume grows M-fold. When processing continuous frames, therefore, the speed advantage multiplies, solving the problem of insufficient data processing capacity.
The invention provides a method for realizing interaction with a virtual 3D model. A logic gate circuit array in the mixed reality coprocessor in the glasses processes the position and posture data set collected by the position and posture sensing module and outputs the changing relative position relation between the virtual 3D model and the mixed reality intelligent glasses; the coprocessor also contains a logic gate circuit array that processes the image data set acquired by the image sensing module to obtain the depth information of the to-be-interacted position of the virtual 3D model. The two logic gate circuit arrays work simultaneously and output the position relation and the depth information at the same time. According to the relative position relation, the depth information and the interaction instruction, the logic gate circuit arrays in the mixed reality coprocessor realize real-time interaction between the wearer of the mixed reality intelligent glasses and the virtual 3D model.
The benefit of this technical scheme is that image data and position and posture data can be output synchronously, realizing instant interaction with the virtual 3D model and solving the problems that, owing to insufficient processor capacity, the virtual 3D model displayed on the mixed reality intelligent glasses is superimposed at an inaccurate position in the virtual space, data output is asynchronous, virtual-real fusion is incoherent, and the wearer cannot interact with the virtual 3D model instantly.
Fig. 4 is a schematic structural diagram of MR mixed reality smart glasses for implementing interaction with a virtual 3D model provided by the present invention:
the image sensing module 41 is configured to acquire an image data set.
The position and posture sensing module 42 is used for acquiring a position and posture data set.
The mixed reality coprocessor 43 is used to process the image data set acquired by the image sensing module 41, acquire the depth information of the to-be-interacted position of the virtual 3D model, and realize the interaction between the wearer of the mixed reality intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and the interaction instruction. The interaction instruction and its corresponding instruction content are prestored in a memory in the mixed reality intelligent glasses or in the cloud.
The mixed reality coprocessor 43 is also used to process the position and posture data set collected by the position and posture sensing module 42 and output the relative position relation between the virtual 3D model and the mixed reality intelligent glasses.
The wireless communication assembly 44 is used for the mixed reality intelligent glasses to transmit data, via Bluetooth and wireless networks in a private/public network environment, simultaneously or non-simultaneously, with a private/public cloud, at least one mobile/non-mobile intelligent terminal and at least one wearable intelligent device. It is also used to open corresponding websites/applications and to transmit data materials such as videos, texts and images according to the needs of the wearer of the mixed reality intelligent glasses.
In particular, the private/public network environments include at least local area networks, metropolitan area networks, wide area networks, and the internet.
Intelligent terminals include, but are not limited to: intelligent home terminals (intelligent lamps, intelligent door locks, intelligent purifiers and the like), intelligent transportation equipment, tablet computers, laptops, smart phones, intelligent electronic dictionaries and the like. Wearable smart devices include, but are not limited to: smart band, smart headset (wired/wireless), smart watch, etc.
The intelligent terminals and wearable intelligent devices can be connected one-to-one, via Bluetooth or a wireless network, with the mixed reality intelligent glasses provided by the technical solution of the present invention; several intelligent devices can also be interconnected into an Internet of Things, in which one or more of the intelligent devices can be controlled through the mixed reality intelligent glasses and the intelligent devices and the glasses can be controlled to interact with one another.
The control assembly 45 is used to receive control instructions and send them to the integrated operation assembly 46; the control instruction types at least include: touch instructions, key instructions, remote control instructions and voice control instructions. The control instructions realize command interaction between the wearer and the mixed reality intelligent glasses.
The touch instruction is realized through a touch pad on the surface of the mixed reality intelligent glasses, and the touch instruction is controlled through touch modes including but not limited to infrared induction non-contact touch, capacitance touch, resistance touch and the like. The button instruction is realized through the physics button that is located mixed reality intelligent glasses surface. The remote control instruction is sent through the intelligent remote control equipment who is connected with mixed reality intelligence glasses, and the connected mode of intelligence remote control equipment and mixed reality glasses includes but not limited to bluetooth, infrared ray, zigBee, wiFi wireless network connection. Intelligent remote control devices include, but are not limited to: the intelligent remote controller comprises an intelligent remote controller, an intelligent home terminal, an intelligent mobile phone, an intelligent tablet computer and the like. The voice control command is obtained by a voice data acquisition component, including but not limited to a digital silicon microphone unit, an analog microphone unit, a silicon microphone unit, a digital matrix microphone unit and the like. The sound reproducing apparatus includes, but is not limited to, a speaker unit/speaker array, a bone conduction speaker array unit, and the like.
The integrated computing component 46 is configured to receive the control instruction sent by the control component 45 and execute the corresponding instruction content, for example: power on, power off, UI system menu switching, volume/brightness adjustment, display view angle switching, and the like.
Its functions further include at least: opening/closing applications in the mixed reality smart glasses, opening/closing applications in the mobile/non-mobile intelligent terminals connected to the glasses through the wireless communication component 44, shooting moving/static pixel files, and performing optimization/function settings for the other components in the mixed reality smart glasses.
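As an illustration of how such instruction content might be dispatched, the sketch below maps instruction names to handlers. The instruction names mirror the examples above, while the state layout and handler bodies are invented purely for illustration:

```python
# Hypothetical device state; the patent does not define this layout.
state = {"power": False, "volume": 5, "brightness": 5, "view_angle": 0, "menu": 0}

HANDLERS = {
    "power_on":        lambda s: s.update(power=True),
    "power_off":       lambda s: s.update(power=False),
    "menu_next":       lambda s: s.update(menu=(s["menu"] + 1) % 4),
    "volume_up":       lambda s: s.update(volume=min(s["volume"] + 1, 10)),
    "brightness_down": lambda s: s.update(brightness=max(s["brightness"] - 1, 0)),
    "switch_view":     lambda s: s.update(view_angle=(s["view_angle"] + 1) % 3),
}

def execute(instruction: str, s: dict) -> None:
    # A touch/key/remote/voice instruction arrives as a name; run its content.
    try:
        HANDLERS[instruction](s)
    except KeyError:
        raise ValueError(f"unknown control instruction: {instruction}") from None

execute("power_on", state)   # state["power"] is now True
```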
The power supply component 47, comprising at least one group of polymer batteries, performs power management for the mixed reality smart glasses, including at least: a power management function, a battery level display function, and a fast charging function; when the battery level falls below a threshold, the integrated computing component switches the glasses into an energy-saving mode. The power supply means in the power supply component may be wound polymer batteries or laminated polymer batteries.
Power management further comprises: issuing low-battery notifications, judging whether the device is allowed to sleep, judging during the running of an MR mixed reality application whether to increase the power output to optimize the output effect, distributing power reasonably to the integrated computing component, the mixed reality coprocessor, and other components according to the current battery level, and adaptively adjusting the power allocation of each component in the mixed reality smart glasses.
The power supply component 47 contains a fast-charge control circuit, a boost control circuit, and a low-voltage regulator control circuit, through which the fast charging function of the mixed reality smart glasses is realized.
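A toy policy such as the following can illustrate the power-management behaviors named above. The threshold value and the allocation ratios are assumptions; the text only names the behaviors, not their parameters:

```python
LOW_BATTERY_THRESHOLD = 0.20   # assumed value; the patent only names "a threshold"

def power_policy(level: float, mr_app_running: bool) -> dict:
    # Mirrors the listed behaviors: low-battery notification, energy-saving
    # switch, sleep permission, and raising output while an MR app runs.
    policy = {
        "notify_low_battery": level < LOW_BATTERY_THRESHOLD,
        "energy_saving_mode": level < LOW_BATTERY_THRESHOLD,
        "allow_sleep": not mr_app_running,
        "coprocessor_share": 0.5,
        "computing_share": 0.5,
    }
    if mr_app_running and level >= LOW_BATTERY_THRESHOLD:
        # Boost power to the coprocessor to optimize the MR output effect.
        policy["coprocessor_share"], policy["computing_share"] = 0.6, 0.4
    return policy
```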
And the digital optical display component 48 is used for displaying the interaction result.
The digital optical display assembly 48 includes a left eye digital optical display assembly 48A and a right eye digital optical display assembly 48B.
The digital optical display component 48 consists of a digital optical micro display device and a digital optical display medium. The refractive/transmissive digital optical micro display devices include, but are not limited to: LCD (Liquid Crystal Display), LED (Light Emitting Diode), OLED (Organic Light Emitting Diode), DMD (Digital Micromirror Device), DLP (Digital Light Processing), and LCOS (Liquid Crystal on Silicon). The digital optical display media include, but are not limited to: lenses bonded with an optical waveguide grating, semi-transparent/fully transparent display optical prism assemblies, free-form-surface optical lens semi-transparent/fully transparent display assemblies, and waveguide-optics semi-transparent/fully transparent display lens assemblies.
As shown in fig. 5, the image sensing module 41 includes:
the single/double/multi-view depth camera module 51, used for acquiring depth frame images; and
the optical time-of-flight sensor 52, used for continuously acquiring the distance between the position where the virtual 3D model is to be interacted with and the mixed reality glasses.
As shown in fig. 6, the position and orientation sensing module 42 includes:
the inertial measurement unit 61, used for continuously collecting angular velocity change data and acceleration change data of the mixed reality smart glasses in three-dimensional space; and
the geomagnetic sensor 62, used for sensing at least the magnetic field data and longitude/latitude change data of the mixed reality smart glasses in the current use environment.
As shown in fig. 7, the mixed reality coprocessor 43 includes:
an image recognition logic gate circuit array 71 and a position and posture logic gate circuit array 72.
The image recognition logic gate circuit array 71 is used for determining, from the depth frame image, whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane.
The image recognition logic gate circuit array 71 is obtained by compressing an image recognition algorithm file, judging whether the compression result is correct against a golden reference model, and, if correct, downloading the compression result to a programmable logic gate array to generate a hardware circuit, i.e., an image recognition IP core. At least two image recognition IP cores obtained in this way are arranged in the image recognition logic gate circuit array.
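The golden-reference check can be pictured as follows. This is a minimal sketch, not the patent's actual toolchain: `synthesize` stands in for the vendor synthesis/download flow, and the comparison tolerance is an assumed parameter.

```python
import numpy as np

def matches_golden(compressed, golden, test_inputs, tol=1e-3) -> bool:
    # Run both models on the same inputs and compare outputs element-wise;
    # the tolerance is an invented parameter, not taken from the patent.
    return all(
        np.max(np.abs(np.asarray(compressed(x)) - np.asarray(golden(x)))) <= tol
        for x in test_inputs
    )

def build_image_recognition_ip_core(compressed, golden, test_inputs, synthesize):
    # Only a compression result that agrees with the golden reference model
    # is downloaded to the programmable logic gate array to become an IP core.
    if not matches_golden(compressed, golden, test_inputs):
        raise RuntimeError("compressed model disagrees with the golden reference")
    return synthesize(compressed)
```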
The position and posture logic gate circuit array 72 is used for continuously processing the angular velocity change data, acceleration change data, magnetic field data, and longitude/latitude change data with its internal position and posture algorithm, obtaining relative position data between the virtual 3D model and the mixed reality smart glasses, integrating the relative position data, and outputting the relative position relation.
The position and posture logic gate circuit array 72 is obtained by compressing a position and posture algorithm file and re-specifying the data interface type; after verification, the compressed algorithm file and the re-specified interface type are packaged onto a programmable logic gate array, and the solidified hardware circuit is a position and posture IP core. At least two position and posture IP cores obtained in this way are arranged in the position and posture logic gate circuit array 72.
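The patent does not disclose the position and posture algorithm itself. A standard stand-in for this kind of gyroscope/magnetometer fusion is a complementary filter, sketched here for the yaw axis only, with an assumed blend factor:

```python
import math

def fuse_yaw(prev_yaw: float, gyro_z: float, dt: float,
             mag_x: float, mag_y: float, alpha: float = 0.98) -> float:
    # Short-term: integrate the gyroscope's angular velocity (low drift over dt).
    gyro_yaw = prev_yaw + gyro_z * dt
    # Long-term: absolute heading from the geomagnetic field corrects drift.
    mag_yaw = math.atan2(mag_y, mag_x)
    # Blend the two; `alpha` is an assumed weighting, not from the patent.
    return alpha * gyro_yaw + (1.0 - alpha) * mag_yaw
```

Running one such filter per axis, frame after frame, yields the continuously integrated relative position data the array is said to output.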
As shown in fig. 8, the image recognition logic gate circuit array 71 specifically includes:
the image pixel feature point extraction unit 81, with which the image recognition logic gate circuit array 71 extracts pixel feature points from the depth frame image acquired by the single/double/multi-view depth camera module 51, according to its internal image recognition algorithm; and
the surface type acquisition unit 82, configured to determine, from the shape formed by the pixel feature points extracted by the pixel feature point extraction unit 81, whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane. Specifically: if the pixel feature points form a complete contour and the contour contains no other pixel feature points, the surface is a plane; if other pixel feature points lie inside the contour, the surface is a non-plane.
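The stated plane/non-plane rule can be expressed directly in code. The sketch below assumes contour extraction has already happened upstream and uses a standard ray-casting point-in-polygon test; it is an illustration of the rule, not the IP core's actual implementation:

```python
def point_in_polygon(pt, poly):
    # Standard ray-casting test: count crossings of a ray toward +x.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def classify_surface(contour, feature_points):
    # Rule from the text: a complete contour with no further feature points
    # inside it means the surface is a plane; any interior feature point
    # makes it a non-plane.
    interior = [p for p in feature_points
                if p not in contour and point_in_polygon(p, contour)]
    return "plane" if not interior else "non-plane"

# e.g. a square contour with one feature point inside is a non-plane:
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(classify_surface(square, square + [(5, 5)]))   # -> non-plane
```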
The mixed reality coprocessor 43 is further configured to call the corresponding instruction content according to the interaction instruction, instruct the virtual 3D model to change shape/form/position at the position to be interacted with according to that content, virtually superimpose the virtual 3D model on the position to be interacted with according to whether the surface at that position is a plane or a non-plane, and refract/project the changed virtual 3D model onto the digital optical display component of the mixed reality smart glasses.
The mixed reality coprocessor also determines the virtual size of the virtual 3D model from the measured distance combined with a preset proportional relation. The virtual size is the virtual overlay size of the virtual 3D model in real space.
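A minimal sketch of such a preset proportional relation, assuming a linear mapping (the patent does not give the exact relation or its constants):

```python
def virtual_overlay_size(distance_mm: float, base_size: float = 0.3,
                         base_distance_mm: float = 1000.0) -> float:
    # Assumed linear relation: scale the model's overlay size in real space
    # with the time-of-flight distance to the target surface, so the model
    # keeps a consistent apparent scale. Both constants are illustrative.
    return base_size * (distance_mm / base_distance_mm)
```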
The mixed reality coprocessor 43 also tracks the virtual 3D model according to the relative position relation.
If the surface type acquisition unit 82 finds that the surface at the position where the virtual 3D model is to be interacted with is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; if the surface is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point on that surface according to the interaction instruction.
The communication protocols among the image sensing module, the position and posture sensing module, the mixed reality coprocessor, the wireless communication component, the control component, the integrated computing component, the power supply component, and the digital optical display component of the present invention include, but are not limited to: I2C, MIPI CSI, MIPI DSI, USB, SPI, SDIO, UART, and PCM.
The technical solution of the present invention is now described in a practical application scenario. In this example, the interaction instruction is illustrated with a gesture instruction, which does not limit how interaction instructions may be obtained; a binocular depth camera module serves as the specific image acquisition device, without limiting the number of depth camera modules; and the surface at the position where the virtual 3D model is to be interacted with is illustrated as a plane, without limiting the surface type.
FIG. 9 illustrates a scenario of a move interaction instruction.
At position A, the virtual 3D model (a dinosaur) is placed on the palm of the wearer of the mixed reality smart glasses, with the palm facing up; the palm gesture is captured by the binocular depth camera module in the glasses. Following the arrow, the wearer keeps the palm facing up and moves it continuously to position B, and the virtual 3D dinosaur moves to position B along with the palm.
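One plausible per-frame update for this palm-following behavior is exponential smoothing toward the tracked palm centre. The smoothing factor is an assumption; the text only states that the model follows the palm from A to B:

```python
def follow_palm(model_pos, palm_pos, smoothing: float = 0.5):
    # Move the virtual model a fraction of the way toward the tracked palm
    # centre each frame, so the dinosaur trails the hand without jitter.
    return tuple(m + smoothing * (p - m) for m, p in zip(model_pos, palm_pos))

# Over successive frames the model converges on the palm position:
pos = (0.0, 0.0, 0.0)
for palm in [(0.1, 0.0, 0.3), (0.2, 0.0, 0.3), (0.3, 0.0, 0.3)]:
    pos = follow_palm(pos, palm)
```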
FIG. 10 illustrates a scenario of a click interaction instruction.
In form C, a finger of the wearer of the mixed reality glasses virtually touches the displayed image of the virtual 3D dinosaur in mid-air; after the instruction takes effect, the changed virtual 3D dinosaur appears beside the original, namely form D.
In FIG. 9, a displacement change occurs in the process from A to B; in FIG. 10, the process from C to D involves only a change of form of the virtual 3D dinosaur, with no displacement change. States C and D are placed in the same figure only to show the before-and-after comparison of the form change more intuitively.
As the descriptions of FIG. 9 and FIG. 10 and the foregoing technical solution show, with this scheme the wearer can perceive with the naked eye, within the field-of-view angle, the virtual 3D model output by the MR mixed reality coprocessor, and interact with it continuously.
The transmission protocols between the components of the mixed reality smart glasses provided by the technical scheme of the present invention are shown in fig. 11.
Fig. 12 is a product schematic of the mixed reality smart glasses provided by the present invention, showing the positions of the above components in the product (glasses diagram).
Fig. 13 is a measurement schematic of the inertial measurement unit in the position and posture sensing module according to the present invention. The positive directions of the X, Y, and Z axes of the inertial measurement unit are indicated by arrows in the figure.
Fig. 14A and 14B are schematic structural diagrams of a PCBA master control board according to the technical solution of the present invention.
PCBA (Printed Circuit Board Assembly).
As shown in the figures, the mixed reality coprocessor with its communication unit and memory, and the integrated computing component with its communication unit and memory, are attached to the PCB by manufacturing processes such as surface mounting, yielding the PCBA master control board. The integrated computing component communication unit and the mixed reality coprocessor communication unit respectively handle data transmission between the integrated computing component, the mixed reality coprocessor, and the other components/units/memories.
Fig. 14A is side A of the PCBA master control board, and fig. 14B is side B.
Fig. 15A and 15B are schematic structural diagrams of a PCBA data acquisition board in the technical solution of the present invention.
As shown in the figures, the driver assembly of the monocular/binocular depth camera module, the inertial measurement unit, the optical time-of-flight sensor, and the geomagnetic sensor are attached to the PCB by manufacturing processes such as surface mounting to form the PCBA data acquisition board.
Fig. 15A is side A of the PCBA data acquisition board, fig. 15B is side B, and the flat cable interface in fig. 15B is used for connecting to the PCBA master control board shown in fig. 14A.
Fig. 16A and 16B are schematic structural diagrams of a PCBA function board in the mixed reality smart glasses according to the solution of the present invention.
The wireless communication component, the power management chip of the power supply component, and the driver assembly of the sensing module are attached to the PCB printed circuit board by manufacturing processes such as surface mounting, forming the PCBA function board. Other connection components are also included on the PCBA function board, as shown in the figures.
Fig. 16A is side A of the PCBA function board, fig. 16B is side B, and the flat cable interface in fig. 16B is used for connecting to the PCBA master control board shown in fig. 14.
Fig. 17 shows the PCBA touch pad of the touch component of the mixed reality smart glasses according to the present invention.
As shown in the figure, the PCBA touch pad receives/senses touch actions, and the touch pad driver component converts the touch actions received/sensed by the touch pad into touch instructions and transmits them.
Figure 18 shows the physical assembly of the PCBA boards shown in figures 14-17 and of the battery assembly.
The PCBA master control board shown in figs. 14A and 14B is located on the inside of the right temple of the mixed reality smart glasses (near the wearer's head), and the PCBA touch pad shown in fig. 17 is located on the outside of the right temple, where it is easy for the wearer to touch.
The PCBA data acquisition board shown in figs. 15A and 15B is located in the upper half of the frame of the mixed reality smart glasses.
The PCBA function board shown in figs. 16A and 16B is located on the outside of the left temple of the mixed reality smart glasses, and the power supply component is located on the inside of the left temple, at the position shown.
The battery module in fig. 18 comprises at least one group of polymer batteries (solid polymer electrolyte lithium-ion batteries, lithium-ion batteries with a polymer cathode material, etc.), a short-circuit protection circuit, a fast-charge control circuit, a boost control circuit, and a low-voltage regulator control circuit, and controls the charging/discharging of the battery according to instructions from the power management chip.
The present invention provides MR mixed reality smart glasses for interacting with a virtual 3D model. The position and posture sensing module collects a position and posture data set, and the image sensing module collects an image data set. The mixed reality coprocessor processes both data sets, outputs the relative position relation between the virtual 3D model and the mixed reality smart glasses, and obtains depth information of the position where the virtual 3D model is to be interacted with. According to the relative position relation, the depth information, and the interaction instruction, the mixed reality coprocessor realizes interaction between the wearer of the mixed reality smart glasses and the virtual 3D model.
With this technical scheme, the added mixed reality coprocessor shares the task load of the processor of the prior art, enabling synchronous output of image data and position/posture data and real-time interaction with the virtual 3D model. This solves the problems, caused by insufficient processor capability, of inaccurate superposition of the displayed virtual 3D model in virtual space, asynchronous data output, incoherent virtual-real fusion, and the wearer being unable to interact with the virtual 3D model in real time.
The technical scheme of the present invention applies to, but is not limited to, AR/MR human-computer interaction operations and related applications.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of enabling interaction with a virtual 3D model, the method comprising:
the position and attitude sensing module acquires a position and attitude data set, and the image sensing module acquires an image data set;
the mixed reality coprocessor performs data processing on the image data set to acquire depth information of the position where the virtual 3D model is to be interacted with;
the mixed reality coprocessor performs data processing on the position and posture data set and outputs a relative position relation between the virtual 3D model and the mixed reality smart glasses;
the mixed reality coprocessor realizes interaction between a wearer wearing the mixed reality intelligent glasses and the virtual 3D model according to the relative position relation, the depth information and the interaction instruction;
and the interaction instruction and the instruction content corresponding to the interaction instruction are prestored in a memory or a cloud end of the mixed reality intelligent glasses.
2. The method of claim 1, wherein the position and posture sensing module collecting a position and posture data set comprises:
the inertial measurement unit continuously collects angular velocity change data and acceleration change data in the three-dimensional space of the mixed reality intelligent glasses;
the geomagnetic sensor is used for at least sensing magnetic field data and longitude and latitude change data of the mixed reality intelligent glasses in the current use environment.
3. The method of claim 2, wherein the mixed reality coprocessor performing data processing on the position and posture data set and outputting a relative position relation between the virtual 3D model and the mixed reality smart glasses comprises:
the position and posture logic gate circuit array in the mixed reality coprocessor checks and continuously processes the angular velocity change data, the acceleration change data, the magnetic field data, and the longitude/latitude change data according to the position and posture IP core therein, obtains the relative position data between the virtual 3D model and the mixed reality smart glasses, integrates the relative position data, and outputs the relative position relation;
the position and posture logic gate circuit array is obtained by compressing a position and posture algorithm file and re-specifying the data interface type; after verification, the compressed algorithm file and the re-specified interface type are packaged onto a programmable logic gate array, the solidified hardware circuit being a position and posture IP core, and at least two position and posture IP cores obtained in this way are arranged in the position and posture logic gate circuit array.
4. The method of claim 1, wherein the image sensing module acquiring an image data set comprises:
the single/double/multi-view depth camera module collects a depth frame image;
and the optical time-of-flight sensor continuously acquires the distance between the position where the virtual 3D model is to be interacted with and the mixed reality glasses.
5. The method of claim 4, wherein the mixed reality coprocessor performing data processing on the image data set to obtain depth information of the position where the virtual 3D model is to be interacted with comprises:
an image recognition logic gate circuit array in the mixed reality coprocessor determines, from the depth frame image, whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane;
the mixed reality coprocessor also determines the virtual size of the virtual 3D model according to the distance and a preset proportional relation;
wherein the virtual size is the virtual overlay size of the virtual 3D model in real space;
the image recognition logic gate circuit array is obtained by compressing an image recognition algorithm file, judging whether the compression result is correct according to a golden reference model, and, if correct, downloading the compression result to a programmable logic gate array to generate a hardware circuit, i.e., an image recognition IP core;
at least two image recognition IP cores obtained in this way are arranged in the image recognition logic gate circuit array.
6. The method of claim 5, wherein the image recognition logic gate circuit array in the mixed reality coprocessor determining from the depth frame image whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane comprises:
an image recognition IP core in the image recognition logic gate circuit array extracts pixel feature points from the depth frame image;
whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane is judged according to the shape formed by the pixel feature points;
and if the pixel feature points form a complete contour and no other pixel feature points exist in the contour, the surface is a plane; if other pixel feature points exist in the contour, the surface is a non-plane.
7. The method of any one of claims 1 to 6, wherein the mixed reality coprocessor realizing interaction between a wearer wearing the mixed reality smart glasses and the virtual 3D model according to the relative position relation, the depth information, and an interaction instruction comprises:
the mixed reality coprocessor calls the corresponding instruction content according to the interaction instruction;
the virtual 3D model changes in shape/form/position at the position to be interacted with according to the instruction content, the virtual 3D model is virtually superimposed on the position to be interacted with according to the judgment result of whether the surface at that position is a plane or a non-plane, and the changed virtual 3D model is refracted/projected onto a digital optical display component of the mixed reality smart glasses;
the mixed reality coprocessor also tracks the virtual 3D model according to the relative position relation;
if the surface is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; and if the surface is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point on that surface according to the interaction instruction.
8. MR mixed reality smart glasses for realizing interaction with a virtual 3D model, characterized in that the MR mixed reality smart glasses comprise:
an image sensing module, used for acquiring an image data set, wherein the image sensing module at least includes: a single/double/multi-view depth camera module, used for collecting depth frame images, and an optical time-of-flight sensor, used for continuously acquiring the distance between the position where the virtual 3D model is to be interacted with and the mixed reality glasses;
a position and posture sensing module, used for acquiring a position and posture data set, wherein the position and posture sensing module at least includes: an inertial measurement unit, used for continuously collecting angular velocity change data and acceleration change data of the mixed reality smart glasses in three-dimensional space, and a geomagnetic sensor, used for sensing at least the magnetic field data and longitude/latitude change data of the mixed reality smart glasses in the current use environment;
a mixed reality coprocessor, used for performing data processing on the image data set, acquiring depth information of the position where the virtual 3D model is to be interacted with, and realizing interaction between a wearer wearing the mixed reality smart glasses and the virtual 3D model according to the relative position relation, the depth information, and the interaction instruction;
the mixed reality coprocessor is further used for performing data processing on the position and posture data set and outputting the relative position relation between the virtual 3D model and the mixed reality smart glasses;
wherein the interaction instruction and the instruction content corresponding to the interaction instruction are pre-stored in a memory of the mixed reality smart glasses or in the cloud.
9. The MR mixed reality smart glasses according to claim 8, wherein the mixed reality coprocessor at least comprises:
an image recognition logic gate circuit array and a position and posture logic gate circuit array;
the image recognition logic gate circuit array is used for determining, from the depth frame image, whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane, and specifically includes:
a pixel feature point extraction unit, in which an image recognition IP core of the image recognition logic gate circuit array extracts pixel feature points from the depth frame image collected by the single/double/multi-view depth camera module; and
a surface type acquisition unit, used for determining whether the surface at the position where the virtual 3D model is to be interacted with is a plane or a non-plane from the shape formed by the pixel feature points extracted by the pixel feature point extraction unit, wherein the surface is a plane if the pixel feature points form a complete contour with no other pixel feature points inside the contour, and is a non-plane if other pixel feature points lie inside the contour;
the position and posture logic gate circuit array is used for checking and continuously processing the angular velocity change data, the acceleration change data, the magnetic field data, and the longitude/latitude change data according to the position and posture IP core therein, acquiring the relative position data between the virtual 3D model and the mixed reality smart glasses, integrating the relative position data, and outputting the relative position relation;
the mixed reality coprocessor is further used for calling the corresponding instruction content according to the interaction instruction, instructing the virtual 3D model to change shape/form/position at the position to be interacted with according to that content, virtually superimposing the virtual 3D model on the position to be interacted with according to the judgment result of the surface type acquisition unit as to whether the surface at that position is a plane or a non-plane, and refracting/projecting the changed virtual 3D model onto the digital optical display component of the mixed reality smart glasses;
if the surface type acquisition unit finds that the surface at the position where the virtual 3D model is to be interacted with is a plane, the virtual 3D model is virtually superimposed on any pixel point in the plane according to the interaction instruction; if the surface is a non-plane, the virtual 3D model is virtually superimposed on any pixel feature point on that surface according to the interaction instruction;
the mixed reality coprocessor also determines the virtual size of the virtual 3D model from the distance combined with a preset proportional relation, wherein the virtual size is the virtual overlay size of the virtual 3D model in real space;
the image recognition logic gate circuit array is obtained by compressing an image recognition algorithm file, judging whether the compression result is correct according to a golden reference model, and, if correct, downloading the compression result to a programmable logic gate array to generate a hardware circuit, i.e., an image recognition IP core; at least two image recognition IP cores obtained in this way are arranged in the image recognition logic gate circuit array;
the position and posture logic gate circuit array is obtained by compressing a position and posture algorithm file and re-specifying the data interface type; after verification, the compressed algorithm file and the re-specified interface type are packaged onto a programmable logic gate array, the solidified hardware circuit being a position and posture IP core, and at least two position and posture IP cores obtained in this way are arranged in the position and posture logic gate circuit array;
and the mixed reality coprocessor also tracks the virtual 3D model according to the relative position relation.
10. The MR mixed reality smart glasses according to claim 8 or 9, further comprising:
a wireless communication component, used for enabling the mixed reality smart glasses to transmit data, simultaneously or non-simultaneously, via Bluetooth or a wireless network in a private/public network environment, with a private/public cloud, at least one mobile/non-mobile intelligent terminal, and at least one wearable smart device;
a control component, used for receiving a control instruction and sending it to the integrated computing component, wherein the types of the control instruction at least include: a touch instruction, a key instruction, a remote control instruction, and a voice control instruction;
an integrated computing component, used for receiving the control instruction sent by the control component and giving corresponding feedback, and further used for data processing in cooperation with the mixed reality coprocessor; and
a power supply component, comprising at least one group of polymer batteries, used for power management of the mixed reality smart glasses, including at least: a power management function, a battery level display function, and a fast charging function, the glasses being switched into an energy-saving mode by the integrated computing component when the battery level falls below a threshold;
wherein a fast-charge control circuit, a boost control circuit, and a low-voltage regulator control circuit in the power supply component are used for realizing the fast charging function.
CN201810648691.4A 2018-06-22 2018-06-22 Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses Pending CN110634188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810648691.4A CN110634188A (en) 2018-06-22 2018-06-22 Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810648691.4A CN110634188A (en) 2018-06-22 2018-06-22 Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses

Publications (1)

Publication Number Publication Date
CN110634188A true CN110634188A (en) 2019-12-31

Family

ID=68966372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810648691.4A Pending CN110634188A (en) 2018-06-22 2018-06-22 Method for realizing interaction with virtual 3D model and MR mixed reality intelligent glasses

Country Status (1)

Country Link
CN (1) CN110634188A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983982A (en) * 2018-05-30 2018-12-11 太若科技(北京)有限公司 AR aobvious equipment and terminal device combined system
CN111462663A (en) * 2020-06-19 2020-07-28 南京新研协同定位导航研究院有限公司 Tour guide mode based on MR glasses
CN111897435A (en) * 2020-08-06 2020-11-06 陈涛 Man-machine identification method, identification system, MR intelligent glasses and application
CN111897435B (en) * 2020-08-06 2022-08-02 陈涛 Man-machine identification method, identification system, MR intelligent glasses and application
CN114489348A (en) * 2022-04-07 2022-05-13 南昌虚拟现实研究院股份有限公司 Eyeball tracking data processing module, eyeball tracking system and method
CN116520997A (en) * 2023-07-05 2023-08-01 中国兵器装备集团自动化研究所有限公司 Mixed reality enhanced display and interaction system
CN116520997B (en) * 2023-07-05 2023-09-26 中国兵器装备集团自动化研究所有限公司 Mixed reality enhanced display and interaction system


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191231