CN112233172A - Video penetration type mixed reality method, system, readable storage medium and electronic equipment - Google Patents

Video penetration type mixed reality method, system, readable storage medium and electronic equipment

Info

Publication number
CN112233172A
CN112233172A (application CN202011064045.7A)
Authority
CN
China
Prior art keywords
real
virtual
coordinate system
world coordinate
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011064045.7A
Other languages
Chinese (zh)
Inventor
冀勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zero Environment Technology Co ltd
Original Assignee
Beijing Zero Environment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zero Environment Technology Co ltd filed Critical Beijing Zero Environment Technology Co ltd
Priority to CN202011064045.7A priority Critical patent/CN112233172A/en
Publication of CN112233172A publication Critical patent/CN112233172A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a video penetration type mixed reality method, system, readable storage medium and electronic equipment, applied to mixed reality glasses. The method comprises the following steps: acquiring real camera coordinates, acquiring corresponding virtual camera coordinates, and acquiring mapping information between the real camera coordinates and the virtual camera coordinates; acquiring a virtual world coordinate system, and acquiring a real world coordinate system according to the mapping information; for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point. The invention first obtains the virtual camera coordinates and the virtual world coordinate system from the real camera coordinates, and then derives the real world coordinate system from the virtual world coordinate system, so that objects in the virtual coordinate system can easily be displayed in the real world coordinate system.

Description

Video penetration type mixed reality method, system, readable storage medium and electronic equipment
Technical Field
The invention relates to the technical field of mixed reality, in particular to a video penetration type mixed reality method, a video penetration type mixed reality system, a readable storage medium and electronic equipment.
Background
Mixed reality (MR) is a combination of virtual reality (VR) and augmented reality (AR).
Through an MR glasses terminal, people can be immersed in a computer-generated three-dimensional visual environment in which the real world and virtual objects are superimposed. Existing MR glasses can generate highly realistic virtual objects when paired with an additional camera that has a positioning function.
However, existing MR glasses can realize the mixed reality function only with such an additional camera, which makes the implementation cumbersome.
Disclosure of Invention
An object of the present invention is to provide a video penetration type mixed reality method, system, readable storage medium and electronic device that are easier to implement.
A video penetration type mixed reality method is applied to mixed reality glasses and comprises the following steps:
acquiring real camera coordinates, acquiring corresponding virtual camera coordinates, and acquiring mapping information between the real camera coordinates and the virtual camera coordinates;
acquiring a virtual world coordinate system, and acquiring a real world coordinate system according to the mapping information;
for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point.
The invention has the beneficial effects that: the virtual camera coordinate and the virtual world coordinate system are obtained according to the real camera coordinate, and then the real world coordinate system is obtained by means of the virtual world coordinate system, so that objects in the virtual coordinate system can be easily displayed in the real world coordinate system.
In addition, the video penetration mixed reality method provided by the invention can also have the following additional technical characteristics:
further, the real environment world coordinate system and the virtual environment world coordinate system are both three-dimensional coordinates.
Further, the mapping information includes a position and an orientation of the camera.
Further, the step of obtaining a real world coordinate system according to the mapping information includes:
acquiring a virtual origin of the virtual world coordinate system, and acquiring a real origin according to the mapping information;
acquiring the length of a connecting line between the virtual camera coordinate and the virtual origin and included angles between the connecting line and a virtual x axis, a virtual y axis and a virtual z axis of the virtual world coordinate system so as to acquire a real x axis, a real y axis and a real z axis;
and taking the real origin as a starting point and the real x axis, the real y axis and the real z axis as coordinate axes to obtain the real world coordinate system.
Further, the object in the virtual world coordinate system is a three-dimensional model, and the step of projecting the object into the field of view comprises:
acquiring the orientation of the camera and the distance from the virtual origin to the virtual camera, so as to acquire, in real time, the image of the object as shot from the virtual camera coordinates;
projecting the image into the field of view.
Further, the step of projecting the image into the field of view further comprises:
acquiring current time, and acquiring shadow length and azimuth according to the current time and the height of the object;
establishing a shadow of the object with the shadow length and the azimuth.
Further, the step of projecting the image into the field of view further comprises:
decomposing the image into a plurality of sub image blocks;
acquiring the distance between the virtual camera coordinate and the object, and judging whether the distance is greater than a preset value;
if yes, when any obstacle is located between the real camera coordinates and the virtual object, acquiring an obstacle image, overlapping the obstacle image in the field of view, and hiding the sub-image blocks in the image, which are overlapped with the obstacle image.
Another objective of the present invention is to provide a video penetration type mixed reality system applied to mixed reality glasses, including:
the mapping acquisition module is used for acquiring real camera coordinates, acquiring corresponding virtual camera coordinates and acquiring mapping information between the real camera coordinates and the virtual camera coordinates;
the coordinate system acquisition module is used for acquiring a virtual world coordinate system and acquiring a real world coordinate system according to the mapping information;
the projection module is used for, for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point.
The invention also proposes a readable storage medium on which computer instructions are stored, which instructions, when executed by a processor, implement the method as described above.
The invention also proposes an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-mentioned method when executing the program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a video penetration mixed reality method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a video-through blending implementation of a first embodiment of the present invention;
fig. 3 is a block diagram of a video penetration mixed reality system according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. Several embodiments of the invention are presented in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Referring to fig. 1, a first embodiment of the invention provides a video penetration type mixed reality method applied to mixed reality glasses, including the following steps.
S1, acquiring real camera coordinates, acquiring corresponding virtual camera coordinates, and acquiring mapping information between the real camera coordinates and the virtual camera coordinates.
In this embodiment, the camera is arranged on the glasses themselves, and no additional camera needs to be provided.
Specifically, the mapping information includes a position and an orientation of the camera.
And S2, acquiring a virtual world coordinate system, and acquiring a real world coordinate system according to the mapping information.
In order to reflect the position of objects more accurately, both the real world coordinate system and the virtual world coordinate system are three-dimensional coordinate systems.
And S3, for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, and projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses so that the object in the visual field is located at the real coordinate point.
Specifically, the step of obtaining the real world coordinate system according to the mapping information includes:
S31, acquiring a virtual origin of the virtual world coordinate system, and acquiring a real origin according to the mapping information;
s32, acquiring the length of a connecting line between the virtual camera coordinate and the virtual origin and included angles between the connecting line and a virtual x axis, a virtual y axis and a virtual z axis of the virtual world coordinate system so as to acquire a real x axis, a real y axis and a real z axis;
and S33, taking the real origin as a starting point, and taking the real x axis, the real y axis and the real z axis as coordinate axes to obtain the real world coordinate system.
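Steps S31 to S33 can be sketched as follows. This is a minimal illustration under the assumption that the mapping information amounts to a rigid transform (a rotation `R_map` and translation `t_map`) from virtual to real coordinates; the function name and representation are hypothetical, not the patent's actual implementation.

```python
import numpy as np

def real_world_frame(virtual_origin, virtual_axes, R_map, t_map):
    """Derive a real-world frame from the virtual frame and the camera mapping.

    virtual_axes: 3x3 matrix whose columns are the virtual x, y, z unit axes.
    R_map, t_map: assumed rigid mapping (rotation + translation) recovered
    from the real/virtual camera correspondence.
    """
    real_origin = R_map @ virtual_origin + t_map   # S31: real origin from virtual origin
    real_axes = R_map @ virtual_axes               # S32: real x, y, z axes from virtual axes
    return real_origin, real_axes                  # S33: origin + axes form the real frame
```

Because the mapping is rigid, the length of the connecting line between the virtual camera and the virtual origin, and its angles to the axes, are preserved in the real frame, which is what makes S32 well defined.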
Preferably, the object in the virtual world coordinate system is a three-dimensional model, and the step of projecting the object into the field of view includes:
s34, acquiring the orientation of the camera and the distance between the virtual camera and the virtual origin, so as to acquire an image of the object shot from the virtual camera coordinate in real time;
s35, projecting the image into the visual field.
It can be understood that the orientation of the camera corresponds to the viewing angle, and the position and size of the corresponding object in the visual field can be obtained from the viewing angle and the connecting-line angles.
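As a rough illustration of how a viewing position yields the on-screen position and size of an object, the following pinhole-style sketch may help. All names (`project_point`, the focal length `f`, the world-to-camera rotation `R_cam`) are illustrative assumptions, not the patent's actual formulas.

```python
import numpy as np

def project_point(p_world, cam_pos, R_cam, f):
    """Where a world point lands on the camera's image plane.

    R_cam: rotation from world to camera coordinates (rows = camera axes);
    f: focal length in pixels.
    """
    p_cam = R_cam @ (np.asarray(p_world) - np.asarray(cam_pos))
    x, y, z = p_cam
    if z <= 0:
        return None                    # behind the camera: not visible
    return (f * x / z, f * y / z)      # perspective divide

def apparent_size(real_size, distance, f):
    # on-screen size shrinks linearly with distance from the camera
    return f * real_size / distance
```

The same relations explain why the embodiment needs both the camera orientation and the camera-to-origin distance: together they fix where and how large the object appears in the visual field.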
Still further, the step of projecting the image into the field of view further comprises:
s351, obtaining the current time, and obtaining the shadow length and the azimuth angle according to the current time and the height of the object;
s352, establishing the shadow of the object according to the shadow length and the azimuth angle.
It should be noted that this function can be activated when the user is outdoors.
In this embodiment, when the current time is between 6:00 and 19:00, the shadow length is obtained according to the solar altitude; at other times no shadow is displayed. This makes virtual objects appear more realistic in sunlight.
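The shadow-length rule of steps S351/S352 can be sketched with a toy solar-altitude model. The sinusoidal altitude curve and the 6:00–19:00 daylight window are assumptions for illustration (real solar altitude depends on date and latitude); only the relation length = height / tan(altitude) and the "no shadow outside daylight hours" rule come from the embodiment.

```python
import math

def shadow_length(object_height, hour):
    """Shadow length from the time of day (toy model).

    Assumes the sun rises at 6:00 and sets at 19:00, with altitude varying
    sinusoidally between those times; returns None when no shadow is drawn.
    """
    if not (6 <= hour <= 19):
        return None                                   # no shadow outside 6:00-19:00
    # solar altitude: 0 at sunrise/sunset, peaking mid-day (illustrative)
    altitude = math.pi / 2 * math.sin(math.pi * (hour - 6) / 13)
    if altitude <= 0:
        return None
    return object_height / math.tan(altitude)         # length = h / tan(altitude)
```

The azimuth of step S351 would be derived analogously from the sun's position at the given time; it orients the shadow while the length above scales it.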
Still further, the step of projecting the image into the field of view further comprises:
s353, decomposing the image into a plurality of sub image blocks;
s354, acquiring the distance between the virtual camera coordinate and the object, and judging whether the distance is greater than a preset value;
and S355, if so, when any obstacle is positioned between the real camera coordinate and the virtual object, acquiring an obstacle image, overlapping the obstacle image in the field of view, and hiding the sub-image block overlapped with the obstacle image in the image.
In the present embodiment, the preset value is 1 m.
It should be noted that, since virtual objects are projected in the visual field of the glasses, conventional methods display the whole virtual object even when an obstacle blocks part of the view. For example, if the user stretches a hand out in front of the glasses to try to block part of a virtual object, the image of the virtual object still covers the hand in the visual field, which feels unnatural.
In this embodiment, the image is divided into a plurality of sub image blocks, the image of the obstacle in front is obtained through the camera, and some of the sub image blocks are hidden according to the obstacle image, so that the virtual object can be partially occluded and looks more real.
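The block-wise hiding of steps S353–S355 can be sketched as below. The RGBA layout, the boolean obstacle mask, and the block size are assumptions for illustration; the embodiment does not specify how the obstacle region is represented.

```python
import numpy as np

def hide_occluded_blocks(image, obstacle_mask, block=8):
    """Split the rendered virtual-object image into sub-blocks and hide
    those that overlap the obstacle (S353-S355 sketch).

    image: HxWx4 RGBA array of the rendered virtual object;
    obstacle_mask: HxW boolean array, True where the camera sees an
    obstacle in front of the virtual object.
    """
    out = image.copy()
    h, w = obstacle_mask.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            # hide any block that overlaps the obstacle at all
            if obstacle_mask[y:y+block, x:x+block].any():
                out[y:y+block, x:x+block, 3] = 0   # make block fully transparent
    return out
```

Hiding whole blocks rather than individual pixels keeps the per-frame cost low, at the price of a coarser occlusion boundary, which matches the distance threshold in S354: fine occlusion only matters for nearby obstacles.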
The invention has the advantages that the virtual camera coordinate and the virtual world coordinate system are obtained according to the real camera coordinate, and then the real world coordinate system is obtained by the virtual world coordinate system, so that objects in the virtual coordinate system can be easily displayed in the real world coordinate system.
Referring to fig. 3, a second embodiment of the present invention provides a video penetration type mixed reality system applied to mixed reality glasses, including:
the mapping acquisition module is used for acquiring real camera coordinates, acquiring corresponding virtual camera coordinates and acquiring mapping information between the real camera coordinates and the virtual camera coordinates;
the coordinate system acquisition module is used for acquiring a virtual world coordinate system and acquiring a real world coordinate system according to the mapping information;
the projection module is used for respectively obtaining a virtual coordinate point of the virtual world coordinate system and a real coordinate point of the real world coordinate for any object in the virtual world coordinate system, projecting the object in the virtual world coordinate system to the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point.
In the embodiment, the virtual camera coordinate and the virtual world coordinate system are obtained according to the real camera coordinate, and then the real world coordinate system is obtained by means of the virtual world coordinate system, so that objects in the virtual coordinate system can be easily displayed in the real world coordinate system.
A third embodiment of the invention provides a readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method described above.
A fourth embodiment of the present invention proposes an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above method when executing the program.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A video penetration type mixed reality method is applied to mixed reality glasses and is characterized by comprising the following steps:
acquiring real camera coordinates, acquiring corresponding virtual camera coordinates, and acquiring mapping information between the real camera coordinates and the virtual camera coordinates;
acquiring a virtual world coordinate system, and acquiring a real world coordinate system according to the mapping information;
for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point.
2. The video penetration mixed reality method of claim 1, wherein the real world coordinate system and the virtual world coordinate system are both three-dimensional coordinate systems.
3. The video penetration mixed reality method of claim 1, wherein the mapping information comprises a position and an orientation of the camera.
4. The video penetration mixed reality method of claim 1, wherein the step of obtaining a real-world coordinate system according to the mapping information comprises:
acquiring a virtual origin of a virtual world coordinate system, and acquiring a real origin according to the mapping relation;
acquiring the length of a connecting line between the virtual camera coordinate and the virtual origin and included angles between the connecting line and a virtual x axis, a virtual y axis and a virtual z axis of the virtual world coordinate system so as to acquire a real x axis, a real y axis and a real z axis;
and taking the real origin as a starting point and the real x axis, the real y axis and the real z axis as coordinate axes to obtain the real world coordinate system.
5. The video penetration mixed reality method of claim 1, wherein the object in the virtual world coordinate system is a three-dimensional model, and the step of projecting the object into the field of view comprises:
acquiring the orientation of the camera and the distance from the virtual origin to the virtual camera, so as to acquire the image of the object shot from the virtual camera coordinate in real time;
projecting the image into the field of view.
6. The video penetration mixed reality method of claim 5, wherein the step of projecting the image into the field of view further comprises:
acquiring current time, and acquiring shadow length and azimuth according to the current time and the height of the object;
establishing a shadow of the object with the shadow length and the azimuth.
7. The video penetration mixed reality method of claim 5, wherein the step of projecting the image into the field of view further comprises:
decomposing the image into a plurality of sub image blocks;
acquiring the distance between the virtual camera coordinate and the object, and judging whether the distance is greater than a preset value;
if yes, when any obstacle is located between the real camera coordinates and the virtual object, acquiring an obstacle image, overlapping the obstacle image in the field of view, and hiding the sub-image blocks in the image, which are overlapped with the obstacle image.
8. A video penetration type mixed reality system is applied to mixed reality glasses and is characterized by comprising:
the mapping acquisition module is used for acquiring real camera coordinates, acquiring corresponding virtual camera coordinates and acquiring mapping information between the real camera coordinates and the virtual camera coordinates;
the coordinate system acquisition module is used for acquiring a virtual world coordinate system and acquiring a real world coordinate system according to the mapping information;
the projection module is used for, for any object in the virtual world coordinate system, respectively obtaining a virtual coordinate point in the virtual world coordinate system and a real coordinate point in the real world coordinate system, projecting the object in the virtual world coordinate system into the visual field of the mixed reality glasses, and enabling the object in the visual field to be located at the real coordinate point.
9. A readable storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, implement the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the program.
CN202011064045.7A 2020-09-30 2020-09-30 Video penetration type mixed reality method, system, readable storage medium and electronic equipment Pending CN112233172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011064045.7A CN112233172A (en) 2020-09-30 2020-09-30 Video penetration type mixed reality method, system, readable storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN112233172A true CN112233172A (en) 2021-01-15

Family

ID=74119982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011064045.7A Pending CN112233172A (en) 2020-09-30 2020-09-30 Video penetration type mixed reality method, system, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112233172A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320364A (en) * 2023-05-25 2023-06-23 四川中绳矩阵技术发展有限公司 Virtual reality shooting method and display method based on multi-layer display


Similar Documents

Publication Publication Date Title
CN107564089B (en) Three-dimensional image processing method, device, storage medium and computer equipment
CN107018336B (en) The method and apparatus of method and apparatus and the video processing of image procossing
TWI397317B (en) Method for providing output image in either cylindrical mode or perspective mode
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
CN108830894A (en) Remote guide method, apparatus, terminal and storage medium based on augmented reality
CN109598796A (en) Real scene is subjected to the method and apparatus that 3D merges display with dummy object
CN110971678A (en) Immersive visual campus system based on 5G network
CN111047506A (en) Environmental map generation and hole filling
CN108553895A (en) User interface element and the associated method and apparatus of three-dimensional space model
CN113706373A (en) Model reconstruction method and related device, electronic equipment and storage medium
CN109448117A (en) Image rendering method, device and electronic equipment
CN110599432A (en) Image processing system and image processing method
CN112233172A (en) Video penetration type mixed reality method, system, readable storage medium and electronic equipment
KR20190061783A (en) Method and program for generating virtual reality contents
US10909752B2 (en) All-around spherical light field rendering method
CN114401362A (en) Image display method and device and electronic equipment
CN111932446B (en) Method and device for constructing three-dimensional panoramic map
CN116112761B (en) Method and device for generating virtual image video, electronic equipment and storage medium
CN109816765B (en) Method, device, equipment and medium for determining textures of dynamic scene in real time
CN109427094B (en) Method and system for acquiring mixed reality scene
CN108986228B (en) Method and device for displaying interface in virtual reality
CN109949396A (en) A kind of rendering method, device, equipment and medium
CN112529769B (en) Method and system for adapting two-dimensional image to screen, computer equipment and storage medium
JP5649842B2 (en) Information providing apparatus, information providing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination