CN111277808B - Virtual reality enhancement equipment and method - Google Patents


Info

Publication number
CN111277808B
CN111277808B (application CN202010211396.XA)
Authority
CN
China
Prior art keywords
image
projection
processing system
area
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010211396.XA
Other languages
Chinese (zh)
Other versions
CN111277808A (en)
Inventor
韩玉争
刘晓东
杨永栋
罗春明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Optofidelity High Tech Zhuhai Ltd
Original Assignee
Optofidelity High Tech Zhuhai Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optofidelity High Tech Zhuhai Ltd filed Critical Optofidelity High Tech Zhuhai Ltd
Priority to CN202010211396.XA priority Critical patent/CN111277808B/en
Publication of CN111277808A publication Critical patent/CN111277808A/en
Application granted granted Critical
Publication of CN111277808B publication Critical patent/CN111277808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N — Pictorial communication, e.g. television (H: Electricity; H04: Electric communication technique)
    • H04N9/31 — Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141 — Constructional details thereof
    • H04N9/3179 — Video signal processing therefor
    • H04N9/3185 — Geometric adjustment, e.g. keystone or convergence
    • H04N23/50 — Cameras or camera modules comprising electronic image sensors; constructional details

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides virtual reality enhancement equipment and a virtual reality enhancement method with strong immersion, a wide viewing angle and a simple structure. The equipment comprises a display imaging device (1), a camera (2), a vision processing system (3), a projection device (4) and a power supply, wherein the display imaging device (1) comprises a visible area (11) and a calibration area (12), each of which switches independently between a transparent state and a non-transparent state. The method comprises the following steps: in the transparent state, the camera collects a real image; after processing by the vision processing system, the real image is projected onto the display imaging device in its non-transparent state to form a projected image; the projected image is compared with the real image and corrected to keep the two consistent; the vision processing system continuously calibrates and stores images in the calibration area and projects the stored images into the visible area for display when needed, with the calibration area continuously switching states throughout. The invention is used in the field of virtual reality intelligent equipment.

Description

Virtual reality enhancement equipment and method
Technical Field
The invention relates to the field of virtual reality intelligent equipment, in particular to virtual reality enhancement equipment and a virtual reality enhancement method.
Background
Augmented Reality (AR) is a technology that computes the position and angle of the camera image in real time and adds corresponding imagery, seamlessly integrating real-world information with virtual-world information; its goal is to overlay a virtual world on the real environment shown on a screen and allow interaction between the two. The technique was first proposed in 1990. As the computing power of portable electronic products has improved, augmented reality applications have become increasingly widespread.
The existing implementation methods mainly comprise display-based augmentation and optical see-through augmentation. As shown in fig. 1, display-based augmentation uses a camera 104 to capture an object 103 in the real environment in real time, combines it with a virtual image (such as the "fruit" image) in a vision processing system, and outputs the combined result onto the device display screen 105, so that the user 106 ultimately sees a combined image 101 on the screen. This implementation is relatively simple. However, immersion is poor: the displayed scene differs noticeably from the real one, the virtual and real content cannot blend into a whole, and the user perceives an obvious separation between the virtual and the real.
As shown in fig. 2, in optical see-through augmentation the camera 204 still acquires the object 203 in the real environment in real time, but the acquired image is used only to recognize the scene and determine the object's orientation; it is not combined with a virtual object in the vision processing system. Based on what the camera 204 recognizes, the corresponding virtual object (e.g., the "fruit" image shown) is projected by the projection device 205 onto a semi-transparent optical element 206, which reflects the virtual image into the eyes of the user 207 while transmitting the actual object 203 in front of it, so that the user 207 sees a composite superposition of virtual and real objects, such as 201. Because the user sees the actual object directly, this method has no latency and appears vivid. However, the limited area of the semi-transparent optics 206 limits the range over which the real scene can enter the eye at any moment, i.e., the field of view is narrow. To observe the surroundings, the user must turn the head to obtain a wider view. Accurate superposition of virtual objects remains technically difficult: when the head rotates rapidly, existing technology still struggles to reflect the virtual image into the eyes synchronously and register it accurately with the real object transmitted into the eyes, so existing equipment cannot achieve a wide viewing angle. In addition, prior-art devices tend to be bulky and difficult to miniaturize.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing virtual reality enhancement equipment with strong immersion, a wide viewing angle and a simple structure, together with a method of enhancing virtual reality using the equipment.
The technical scheme adopted by the virtual reality augmentation equipment is as follows: the virtual reality augmentation equipment comprises a display imaging device, a camera, a vision processing system, a projection device and a power supply, wherein the power supply supplies power to the whole equipment,
the display imaging device comprises a visual area and a calibration area, the calibration area is positioned at the periphery of the visual area, the visual area and the calibration area are respectively in electric signal connection with the visual processing system, under the control of the visual processing system, the visual area and the calibration area are respectively switched between a transparent state and a non-transparent state, the camera acquires real images of a real environment through the visual area and/or the calibration area in the transparent state, and the visual area and/or the calibration area in the non-transparent state receives projection images from the projection device and reflects the projection images into a lens of the camera;
the camera is used for acquiring a real image of a real environment through the display imaging device and a projection image reflected by the projection device after being projected to the display imaging device and uploading the projection image to the visual processing system;
the projection device is used for projecting the image output by the vision processing system onto the display imaging device;
the visual processing system is used for receiving the real images and the projection images uploaded by the camera, calibrating the projection images to be consistent with the real images, fusing virtual images generated by the system to the calibrated projection images, storing and outputting the fused projection images to the projection device, and finally projecting the fused projection images to the display imaging device.
According to this scheme, the equipment comprises only the display imaging device, the camera, the vision processing system, the projection device and the power supply, so its structure is simple. The display imaging device provides a visible area and a calibration area, each connected to the vision processing system by electric signals, so that each can be independently controlled to a transparent or non-transparent state. In the visible area, the camera collects the real image of the corresponding part of the real environment through the area in its transparent state and uploads it to the vision processing system; the vision processing system projects the received real image, via the projection device, onto the visible area in its non-transparent state, in which the area acts as a reflector; the camera collects the reflected projection image and uploads it; the vision processing system compares the previously received real image with the current projection image, computes the difference between the two, corrects the projection image to keep it consistent with the real image, superposes it with the virtual image generated by the system to obtain a composite image, and outputs the composite image to the projection device for display on the visible area. The composite image output by the vision processing system therefore matches the real image as closely as possible, enhancing the immersion of the equipment.

In addition, the calibration area at the periphery of the visible area lies outside the region seen by the human eye. In normal operation it is in a calibration state: the camera collects the real image of the corresponding part of the real environment through the calibration area in its transparent state and uploads it to the vision processing system; the vision processing system projects the received real image onto the calibration area in its non-transparent state, in which the area acts as a reflector; the camera collects the reflected projection image and uploads it; the vision processing system compares the two images, corrects the projection image to keep it consistent with the real image, superposes it with the generated virtual image to obtain a composite image, and stores the result. When the head moves and the pose of the equipment changes, the vision processing system detects the rotation angle and direction of the equipment and immediately projects the stored composite image for that direction onto the visible area for display. The viewing angle then approaches that of the camera, eliminating the narrow-viewing-angle problem.
Further, the visible area and the calibration area are seamlessly joined, and when the visible area is in the non-transparent state it receives the projected image from the projection device and simultaneously reflects the projected image into the eyes of the observer. The seamless joint, with a blurred transition between the two areas, avoids any sense of a discontinuous boundary when virtual and real images are composited and when the areas switch between the transparent and non-transparent states, ensuring good immersion. The visible area in the non-transparent state always reflects the received projection image into the observer's eyes, so the image entering the eyes is always the augmented one, realizing the enhancement function of the equipment.
Still further, the display imaging device is made of a liquid crystal film material. A liquid crystal film can be switched between transparency and opacity simply by applying or removing a voltage, with a pronounced effect, which greatly simplifies the structure of the equipment, reduces cost, and improves operational stability and reliability.
Still further, the frequency at which the viewable area and the calibration area switch between transparent and non-transparent states is greater than 30 Hz. Therefore, the switching frequency of the two states is set to be larger than 30Hz, so that the discontinuous feeling caused by the virtual image, the real image and the state switching is avoided, and the immersion feeling of the equipment is ensured.
Still further, a gyroscope, an accelerometer, a magnetometer, a microphone and a loudspeaker are connected to the vision processing system. The gyroscope measures the rotation direction as the pose of the equipment changes, the accelerometer detects the acceleration of the pose change, and the magnetometer detects the magnetic heading of the equipment, ensuring accurate pose acquisition so that images can continuously enter the vision processing system through the calibration area without blind spots, ultimately forming and storing the projection images that realize the wide-viewing-angle effect. The microphone and loudspeaker provide voice input and output for the user, ensuring the audio experience of the equipment.
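The patent does not specify how the gyroscope and magnetometer readings are combined. As a generic illustration of how such sensors are typically fused for heading tracking (a sketch under assumptions, not the patent's method), a basic complementary filter integrates the gyroscope's fast but drifting rate signal and slowly corrects it with the magnetometer's absolute heading; the function name and the 0.98 blend factor are assumptions.

```python
def update_heading(heading, gyro_rate_dps, dt, mag_heading, alpha=0.98):
    """One complementary-filter step: integrate the gyro, blend toward the magnetometer.

    gyro_rate_dps: angular rate in degrees/second (responsive, but drifts)
    mag_heading:   absolute heading from the magnetometer (drift-free, noisy)
    alpha:         weight given to the integrated gyro estimate
    """
    integrated = heading + gyro_rate_dps * dt
    return alpha * integrated + (1 - alpha) * mag_heading

# Pure integration for one step (alpha=1.0 disables the magnetometer correction):
print(update_heading(0.0, 90.0, 0.1, 0.0, alpha=1.0))  # 9.0

# Drift correction: a stationary device whose heading estimate is wrong
# converges toward the magnetometer reading over repeated steps.
h = 0.0
for _ in range(500):
    h = update_heading(h, 0.0, 0.01, 30.0)
print(abs(h - 30.0) < 0.01)  # True
```

The same structure extends to full 3-DoF orientation with the accelerometer supplying the gravity reference for pitch and roll.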
Further, the projection device is a projector. Therefore, the smooth output of the projection image is ensured, and the projector is a mature market product, so that the overall cost of the equipment is relatively reduced.
Further, the visual processing system is a computer processing system built in the virtual reality augmentation device; or the visual processing system is an external computer connected through a signal line. Therefore, the smoothness and the reliability of the operation of the whole equipment can be ensured by a computer processing system or an external computer.
In addition, the method for enhancing the virtual reality by using the virtual reality enhancing equipment comprises the following steps:
a. the power supply is powered on, and the visible area and the calibration area are both in a transparent state;
b. in the visible area, the camera collects a real image in the real world corresponding to the visible area in a transparent state through the visible area and uploads the real image to the visual processing system for storage;
c. the vision processing system outputs the received real image to the projection device, which projects it onto the visible area; the visible area then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera; the camera uploads the received projected image to the vision processing system, which compares it with the stored real image, computes the difference between the two images, and corrects the projected image to keep it consistent with the real image, thereby obtaining a projection calibration coefficient; the projected image is calibrated with this coefficient to obtain a calibrated projected image, onto which the virtual image generated by the vision processing system is superposed to form a composite image; the composite image is output to the projection device and projected onto the visible area for display, the visible area remaining in the non-transparent state;
d. meanwhile, in the calibration area, the camera acquires a real image in the real world corresponding to the area through the calibration area in a transparent state and uploads the real image to the visual processing system for storage;
e. the vision processing system outputs the received real image to the projection device, which projects it onto the calibration area; the calibration area then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera; the camera uploads the received projected image to the vision processing system, which compares it with the stored real image, computes the difference between the two images, and corrects the projected image to keep it consistent with the real image, thereby obtaining a projection calibration coefficient; the projected image is calibrated with this coefficient to obtain a calibrated projected image, onto which the virtual image generated by the vision processing system is superposed to form a composite image, which is then stored;
f. the calibration area is continuously switched between a transparent state and a non-transparent state, in the transparent state, the camera collects a real image in the real world corresponding to the calibration area and uploads the real image to the visual processing system for storage, in the non-transparent state, the calibration area serving as a reflection surface reflects a projection image projected by the projection device to the camera, and the calibration state of the step e is continuously entered;
g. when the pose of the whole equipment is changed, the vision processing system detects the rotation angle and the direction of the equipment and directly projects the stored synthetic image on the pose to the visual area for displaying, and at the moment, the visual area is in a non-transparent state;
h. until the device is taken out of service.
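The alternation in steps a–h can be sketched as a simple control loop. Everything below (the class and method names, the list-based stand-ins for images, the identity "calibration") is a hypothetical illustration of the control flow only, not an implementation of the patented method.

```python
class AugmentationLoop:
    """Toy model of steps a-h: alternate transparent capture and
    non-transparent calibrate/store, then serve stored composites
    directly when the pose changes (step g)."""

    def __init__(self):
        self.stored = {}         # pose -> composite image (built in step e)
        self.transparent = True  # calibration-area state (steps a, f)
        self.last_real = None

    def tick(self, pose, camera_frame, virtual_overlay):
        if self.transparent:
            # Steps b/d: the camera sees the real world through the area.
            self.last_real = camera_frame
        elif self.last_real is not None:
            # Steps c/e: calibrate (identity here) and fuse the virtual image.
            self.stored[pose] = self.last_real + virtual_overlay
        # Step f: the area keeps switching between the two states.
        self.transparent = not self.transparent

    def on_pose_change(self, pose):
        # Step g: project the stored composite for the new pose directly.
        return self.stored.get(pose)

loop = AugmentationLoop()
loop.tick("north", ["tree"], ["fruit"])   # transparent: capture real image
loop.tick("north", ["tree"], ["fruit"])   # non-transparent: store composite
print(loop.on_pose_change("north"))       # ['tree', 'fruit']
```

In the actual equipment the "tick" rate corresponds to the state-switching frequency above 30 Hz, and the stored composites are indexed by the pose reported by the gyroscope, accelerometer and magnetometer.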
According to this scheme, the visible area on the display imaging device is first in the transparent state, and the camera collects a real image of the surrounding real environment through it and uploads the image to the vision processing system for storage. The vision processing system outputs the received real image to the projection device, which projects it onto the visible area; the visible area then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera; the camera uploads the projected image to the vision processing system, which compares it with the stored real image, computes the difference between the two, and corrects the projected image to keep it consistent with the real image, obtaining a projection calibration coefficient; the projected image is calibrated with this coefficient, the virtual image generated by the vision processing system is superposed onto the calibrated projected image to form a composite image, and the composite image is output to the projection device and projected onto the visible area for display. The result is a virtual reality enhancement effect consistent with the real environment with the virtual image added, improving the immersion of the equipment.

The calibration area is handled similarly: in the transparent state the camera collects real images of the environment outside the region visible to the human eye and uploads them to the vision processing system for storage; the vision processing system projects the received real image onto the calibration area, which switches to the non-transparent state and reflects the projected image into the camera lens; the system compares the projected image with the stored real image, corrects it to keep the two consistent, obtains a projection calibration coefficient, calibrates the projected image, superposes the generated virtual image to form a composite image, and stores the result. The calibration area switches continuously between the transparent and non-transparent states: in the transparent state the camera collects and uploads the corresponding real image, and in the non-transparent state the area acts as a reflecting surface for the image projected by the projection device. When the head rotates and carries the equipment with it, or the pose of the equipment otherwise changes, the vision processing system can immediately project the stored composite image into the visible area for display, producing a seamless visual transition; the viewing angle then approaches that of the camera, eliminating the narrow-viewing-angle problem. Meanwhile the calibration area continues to collect real images and to perform image calibration through the camera and the vision processing system, preparing for subsequent pose changes and maintaining the immersion of the equipment and the continuity of the projected image.
Further, in the step c and the step e, the vision processing system calculates a difference between the two images, corrects the projection image to make the projection image consistent with the real image, and obtains a projection calibration coefficient by the specific steps of:
1) the vision processing system first performs HSI conversion on the two pictures, the projection image and the real image, to obtain the HSI parameters of both; the parameters obtained from the collected real image are set as the given parameters, and the parameters obtained from the projection image collected by the camera serve as the feedback parameters;
2) the camera continuously acquires projection image information, and the HSI parameters are automatically calibrated by a predictive PI algorithm in the vision processing system until the acquired data are consistent with the given parameters. The PI parameters of the predictive PI algorithm depend on the chosen projection device, display imaging device and camera; they are set at the factory and are not changed in actual use, while the HSI parameters of the image are what is calibrated.
According to the scheme, the images are processed by adopting a mature algorithm, so that the reliability and stability of equipment operation are ensured, and the maintenance cost and the maintenance difficulty are relatively reduced.
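The patent names an HSI comparison driven by a PI loop but gives no formulas. The following sketch illustrates the idea under assumptions: the I component of HSI is taken as the mean of the RGB channels, and a plain (non-predictive) PI controller drives the fed-back projection intensity toward the given real-image intensity. The gains kp/ki and the loop structure are illustrative choices, not the patent's parameters.

```python
def intensity(rgb):
    """I component of HSI: the mean of the three colour channels."""
    r, g, b = rgb
    return (r + g + b) / 3.0

def pi_calibrate(given_i, feedback_i, steps=50, kp=0.4, ki=0.1):
    """Drive the fed-back intensity toward the given one with a PI loop."""
    output = feedback_i
    integral = 0.0
    for _ in range(steps):
        error = given_i - output      # given (real image) minus feedback
        integral += error             # integral term removes steady-state error
        output += kp * error + ki * integral
    return output

given = intensity((120, 150, 180))    # given parameter: from the real image
feedback = intensity((90, 110, 130))  # feedback parameter: from the projection
print(round(pi_calibrate(given, feedback), 2))  # 150.0 (matches the given intensity)
```

A full implementation would run such loops for H, S and I per region, with the predictive element anticipating the device's known projector/camera response; this sketch shows only the given-versus-feedback convergence behaviour.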
Drawings
FIG. 1 is a simplified schematic diagram of a prior art display-based enhancement technique;
FIG. 2 is a simplified schematic diagram of a prior art optical transmission based enhancement technique;
FIG. 3 is a simplified schematic of the present invention;
FIG. 4 is a simplified schematic of the viewing area and calibration area;
FIG. 5 is a signal flow diagram of an image for HSI calibration;
in the figure, the marker 5 is a real image, the marker 6 is an observer, and the marker 7 is a projected image.
Detailed Description
As shown in fig. 3 and 4, the virtual reality augmenting apparatus of the present invention includes a display imaging device 1, a camera 2, a vision processing system 3, a projection device 4, and a power supply, wherein the power supply supplies power to the entire apparatus. The display imaging device 1 comprises a visual area 11 and a calibration area 12, the calibration area 12 is located at the periphery of the visual area 11, the visual area 11 and the calibration area 12 are respectively connected with the vision processing system 3 through electric signals, under the control of the vision processing system 3, the visual area 11 and the calibration area 12 are respectively switched between a transparent state and a non-transparent state, and in the embodiment, the frequency of switching between the transparent state and the non-transparent state of the visual area 11 and the calibration area 12 is more than 30 Hz. Therefore, the discontinuous feeling is avoided, and the immersion feeling of the image is guaranteed. The camera 2 acquires a real image of a real environment through the visible region 11 and/or the calibration region 12 in a transparent state, and the visible region 11 and/or the calibration region 12 in a non-transparent state receives a projection image from the projection device 4 and reflects the projection image into a lens of the camera 2. The camera 2 is configured to capture a real image of a real environment through the display imaging device 1 and a projection image reflected by the projection device 4 after being projected onto the display imaging device 1, and upload the captured projection image to the vision processing system 3. The projection device 4 is used for projecting the image output by the vision processing system 3 onto the display imaging device 1. 
The vision processing system 3 is configured to receive the real image and the projection image uploaded by the camera 2, calibrate the projection image to be consistent with the real image, fuse a virtual image generated by the system to the calibrated projection image, store and output the fused projection image to the projection device 4, and finally project the fused projection image to the display imaging device 1.
Specifically, the viewing area 11 and the calibration area 12 are seamlessly joined, and when the viewing area 11 is in the non-transparent state it receives the projected image from the projection device 4 and reflects it into the eyes of the viewer. The apparatus provides an observation position from which the observer 6 directly sees the image displayed in the visible region. The display imaging device 1 is made of a liquid crystal film material. Liquid crystal film is a relatively new material finding use in a growing number of fields and industries; it switches quickly and reliably between the transparent and non-transparent states under power-on/power-off control. The vision processing system 3 is also connected to a gyroscope, an accelerometer, a magnetometer, a microphone and a loudspeaker. The projection device 4 is a projector. The vision processing system 3 is either a computer processing system built into the virtual reality augmentation equipment or an external computer connected through a signal line; the choice depends on the specific application environment. For example, when the equipment is used indoors with ample space, images can be processed by an external computer connected through a signal line; if the equipment is used in a field environment, the built-in computer processing system is used.
The method for enhancing the virtual reality by using the equipment comprises the following steps:
a. when the power supply is powered on, the visible area 11 and the calibration area 12 are both in a transparent state.
b. In the visible area 11, the camera 2 acquires a real image in the real world corresponding to the visible area 11 in a transparent state, and uploads the real image to the vision processing system 3 for storage.
c. The vision processing system 3 outputs the received real image to the projection device 4, which projects it onto the visible area 11; the visible area 11 then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera 2; the camera 2 uploads the received projected image to the vision processing system 3, which compares it with the stored real image, computes the difference between the two images, and corrects the projected image to keep it consistent with the real image, obtaining a projection calibration coefficient; the projected image is calibrated with this coefficient to obtain a calibrated projected image, onto which the virtual image generated by the vision processing system 3 is superposed to form a composite image; the composite image is output to the projection device 4 and projected onto the visible area 11 for display, the visible area 11 remaining in the non-transparent state.
d. Meanwhile, in the calibration area 12, the camera 2 acquires, through that area while it is in the transparent state, a real image of the real world corresponding to the calibration area 12, and uploads it to the vision processing system 3 for storage.
e. The vision processing system 3 outputs the received real image to the projection device 4, which projects it onto the calibration area 12. At this moment the calibration area 12 switches to the non-transparent state and reflects the resulting projected image into the lens of the camera 2. The camera 2 uploads the received projected image to the vision processing system 3, which compares it with the stored real image to determine the difference between the two. From this difference the vision processing system 3 computes a correction that makes the projected image consistent with the real image, yielding a projection calibration coefficient. The projection image is calibrated with this coefficient to obtain a calibrated projection image, the virtual image generated by the vision processing system 3 is superimposed on the calibrated projection image to form a composite image, and the composite image is stored.
f. The calibration area 12 switches continuously between the transparent and non-transparent states. In the transparent state, the camera 2 acquires the real image of the real world corresponding to the calibration area 12 and uploads it to the vision processing system 3 for storage; in the non-transparent state, the calibration area 12 acts as a reflecting surface that reflects the image projected by the projection device 4 back to the camera 2, and the calibration of step e continues.
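The continuous switching of step f can be sketched as a simple two-phase loop. The 60 Hz figure and the callback names are assumptions; claim 4 only requires a switching frequency above 30 Hz.

```python
# Illustrative sketch of step f: the calibration area alternates between a
# transparent phase (capture the real image) and a non-transparent phase
# (reflect the projection and continue calibrating). Names are assumed.
import itertools
import time

SWITCH_HZ = 60.0                   # assumed rate; the claims require > 30 Hz
PHASE_S = 1.0 / (2.0 * SWITCH_HZ)  # each state is held for half a period

def run_calibration_cycle(cycles, capture, calibrate_step):
    states = itertools.islice(itertools.cycle(["transparent", "opaque"]),
                              2 * cycles)
    for state in states:
        if state == "transparent":
            capture()          # camera grabs the real image through the film
        else:
            calibrate_step()   # film reflects the projection; compare/correct
        time.sleep(PHASE_S)    # hold the liquid crystal film in this state
```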
g. When the pose of the whole device changes, the vision processing system 3 detects the rotation angle and direction of the device and directly projects the composite image stored for that pose onto the visible area 11 for display, the visible area 11 being in the non-transparent state.
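Step g implies that composite images are stored keyed by device pose, so a rotation can be answered from storage without re-running the full calibration. A minimal sketch under assumed conventions; `PoseCache` and the 5-degree angular bin are inventions for illustration, not from the patent.

```python
# Hypothetical pose-keyed store for step g: composites saved during
# calibration are looked up by quantized orientation when the device turns.
class PoseCache:
    def __init__(self, bin_deg=5.0):
        self.bin_deg = bin_deg  # angular resolution of stored poses (assumed)
        self.store = {}         # (yaw_bin, pitch_bin) -> composite image

    def _key(self, yaw_deg, pitch_deg):
        return (round(yaw_deg / self.bin_deg), round(pitch_deg / self.bin_deg))

    def save(self, yaw_deg, pitch_deg, composite_img):
        self.store[self._key(yaw_deg, pitch_deg)] = composite_img

    def lookup(self, yaw_deg, pitch_deg):
        """Return the stored composite for this pose, or None if uncached."""
        return self.store.get(self._key(yaw_deg, pitch_deg))
```

On a pose change reported by the gyroscope and magnetometer, a cache hit means the stored composite can be projected immediately, avoiding the latency of a fresh capture-and-calibrate pass; a miss would fall back to the full path of steps d and e.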
h. The process continues until the device is taken out of service.
As shown in fig. 5, in steps c and e the vision processing system 3 computes the difference between the two images and corrects the projected image to match the real image, obtaining the projection calibration coefficient, as follows:
1) In the vision processing system 3, HSI conversion is first performed on the two images, the projection image and the real image, yielding the HSI parameters (hue, saturation and intensity) of each. The parameters obtained from the acquired real image are set as the given parameters, and the parameters obtained from the projected image captured by the camera are the feedback parameters.
2) The camera 2 continuously acquires the projected image, and the HSI parameters are automatically calibrated by a predictive PI algorithm inside the vision processing system 3 until the acquired data are consistent with the given parameters. The PI gains of the predictive PI algorithm depend on the selected projection device 4, display imaging device 1 and camera 2; they are factory-set and do not change in actual use. It is the HSI parameters of the image that are being calibrated.
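The HSI conversion of step 1) is not spelled out in the patent; the standard textbook RGB-to-HSI formulas give a reasonable sketch (normalized RGB in [0, 1], hue in degrees).

```python
# Generic RGB -> HSI conversion (hue, saturation, intensity), as commonly
# defined in image-processing texts; the patent's exact variant is unstated.
import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0                          # intensity: channel average
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                    # hue undefined for gray
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h                          # lower half of the hue circle
    return h, s, i
```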
Once the calibration parameters are confirmed, calibrating the projection image with the projection calibration coefficient is effectively the reverse process: the HSI parameter values of the projection image are adjusted using the obtained HSI parameters so that they match the HSI parameters of the real image, yielding the calibrated projection image. Finally, the virtual image generated by the vision processing system 3 is superimposed on the calibrated projection image to form a composite image, which is stored in preparation for subsequent image projection.
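A sketch of the closed loop in step 2), in which a PI controller drives each captured HSI parameter toward its given value. The gains, the scalar plant response, and the fixed iteration count are stand-ins for the factory-set predictive PI algorithm, which the patent does not detail.

```python
# Hedged sketch of the predictive PI calibration loop: the captured HSI
# parameter (feedback) is driven toward the given parameter (setpoint).
# kp/ki stand in for the factory-set gains tied to the chosen projector,
# liquid crystal film and camera; the plant response here is idealized.
class PIController:
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, setpoint, measured, dt=1.0):
        error = setpoint - measured
        self.integral += error * dt                # accumulate for the I term
        return self.kp * error + self.ki * self.integral

pi = PIController(kp=0.5, ki=0.2)
measured, given = 0.4, 0.8   # e.g. captured vs. given intensity parameter
for _ in range(50):          # one iteration per captured frame
    measured += pi.step(given, measured)  # apply correction to the projection
# after enough frames the acquired value settles at the given parameter
```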
Compared with the prior art, the display imaging device is structured with a visible area and a calibration area joined seamlessly, each switching between a transparent and a non-transparent state, so that the device can either pass the real image through or form a reflecting film for projection. The camera acquires both the real image and the projected image, and the vision processing system compares the two to derive calibration parameters that keep the projected image consistent with the real image, preserving the sense of immersion. Because images of the environment outside the visible area are calibrated via the calibration area and the resulting composite images are stored, when the viewing angle of the device rotates to a new region of the real environment, the vision processing system can quickly project the stored composite image of that region onto the visible area. This avoids latency, maintains immersion, effectively widens the field of view, and greatly improves the observer's experience.

Claims (10)

1. A virtual reality augmenting apparatus, characterized in that: it comprises a display imaging device (1), a camera (2), a vision processing system (3), a projection device (4) and a power supply, wherein the power supply supplies power to the whole equipment,
the display imaging device (1) comprises a visible area (11) and a calibration area (12), the calibration area (12) being located at the periphery of the visible area (11); the visible area (11) and the calibration area (12) are each connected to the vision processing system (3) by electric signals and, under the control of the vision processing system (3), each switches between a transparent state and a non-transparent state; the camera (2) acquires real images of the real environment through the visible area (11) and/or the calibration area (12) in the transparent state, and the visible area (11) and/or the calibration area (12) in the non-transparent state receives the projected image from the projection device (4) and reflects it into the lens of the camera (2);
the camera (2) is used for acquiring a real image of the real environment through the display imaging device (1) and for acquiring the projected image reflected by the display imaging device (1) after the projection device (4) projects onto it, and for uploading the projected image to the vision processing system (3);
the projection device (4) is used for projecting the image output by the vision processing system (3) onto the display imaging device (1);
the vision processing system (3) is used for receiving the real images and the projected images uploaded by the camera (2), calibrating the projected images to be consistent with the real images, fusing the virtual images generated by the system onto the calibrated projected images, storing the fused projected images and outputting them to the projection device (4), which finally projects them onto the display imaging device (1).
2. The virtual reality augmentation apparatus of claim 1, wherein: the visible area (11) and the calibration area (12) are seamlessly joined, and when the visible area (11) is in the non-transparent state, it receives the projected image from the projection device (4) and reflects it into the eyes of the observer.
3. The virtual reality augmentation apparatus of claim 1, wherein: the display imaging device (1) is made of a liquid crystal film material.
4. The virtual reality augmentation apparatus of claim 1, wherein: the frequency at which the visible area (11) and the calibration area (12) switch between a transparent state and a non-transparent state is greater than 30 Hz.
5. The virtual reality augmentation apparatus of claim 1, wherein: the vision processing system (3) is also connected with a gyroscope, an accelerometer, a magnetometer, a microphone and a loudspeaker.
6. The virtual reality augmentation apparatus of claim 1, wherein: the projection device (4) is a projector.
7. The virtual reality augmentation apparatus of claim 1, wherein: the vision processing system (3) is a computer processing system built into the virtual reality augmentation equipment.
8. The virtual reality augmentation apparatus of claim 1, wherein: the vision processing system (3) is an external computer connected through a signal line.
9. A method of virtual reality augmentation using the virtual reality augmentation apparatus of claim 1, comprising:
a. the power supply is switched on, and the visible area (11) and the calibration area (12) are both in a transparent state;
b. in the visible area (11), the camera (2) acquires, through that area while it is in the transparent state, a real image of the real world corresponding to the visible area (11) and uploads it to the vision processing system (3) for storage;
c. the vision processing system (3) outputs the received real image to the projection device (4), which projects it onto the visible area (11); the visible area (11) then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera (2); the camera (2) uploads the received projected image to the vision processing system (3), which compares it with the stored real image to determine the difference between the two; from this difference the vision processing system (3) computes a correction that makes the projected image consistent with the real image, yielding a projection calibration coefficient; the projection image is calibrated with this coefficient to obtain a calibrated projection image, a virtual image generated by the vision processing system (3) is superimposed on the calibrated projection image to form a composite image, and the composite image is output to the projection device (4) and projected onto the visible area (11) for display, the visible area (11) remaining in the non-transparent state;
d. meanwhile, in the calibration area (12), the camera (2) acquires, through that area while it is in the transparent state, a real image of the real world corresponding to the calibration area (12) and uploads it to the vision processing system (3) for storage;
e. the vision processing system (3) outputs the received real image to the projection device (4), which projects it onto the calibration area (12); the calibration area (12) then switches to the non-transparent state and reflects the resulting projected image into the lens of the camera (2); the camera (2) uploads the received projected image to the vision processing system (3), which compares it with the stored real image to determine the difference between the two; from this difference the vision processing system (3) computes a correction that makes the projected image consistent with the real image, yielding a projection calibration coefficient; the projection image is calibrated with this coefficient to obtain a calibrated projection image, a virtual image generated by the vision processing system (3) is superimposed on the calibrated projection image to form a composite image, and the composite image is stored;
f. the calibration area (12) switches continuously between the transparent and non-transparent states; in the transparent state the camera (2) acquires the real image of the real world corresponding to the calibration area (12) and uploads it to the vision processing system (3) for storage, and in the non-transparent state the calibration area (12) acts as a reflecting surface that reflects the image projected by the projection device (4) back to the camera (2), continuing the calibration of step e;
g. when the pose of the whole equipment changes, the vision processing system (3) detects the rotation angle and direction of the equipment and directly projects the composite image stored for that pose onto the visible area (11) for display, the visible area (11) being in the non-transparent state;
h. the process continues until the device is taken out of service.
10. The method according to claim 9, wherein in steps c and e the vision processing system (3) computes the difference between the two images and corrects the projected image to make it consistent with the real image, obtaining the projection calibration coefficient by the following steps:
1) in the vision processing system (3), HSI conversion is first performed on the two images, the projection image and the real image, to obtain the HSI parameters of each; the parameters obtained from the acquired real image are set as the given parameters, and the parameters obtained from the projected image captured by the camera are the feedback parameters;
2) the camera (2) continuously acquires the projected image information, and the HSI parameters are automatically calibrated by a predictive PI algorithm inside the vision processing system (3) until the acquired data are consistent with the given parameters, wherein the PI parameters of the predictive PI algorithm are related to the selected projection device (4), display imaging device (1) and camera (2), are factory-set, and do not change in actual use; it is the HSI parameters of the image that are calibrated.
CN202010211396.XA 2020-03-24 2020-03-24 Virtual reality enhancement equipment and method Active CN111277808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010211396.XA CN111277808B (en) 2020-03-24 2020-03-24 Virtual reality enhancement equipment and method


Publications (2)

Publication Number Publication Date
CN111277808A CN111277808A (en) 2020-06-12
CN111277808B true CN111277808B (en) 2021-02-05

Family

ID=71000793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010211396.XA Active CN111277808B (en) 2020-03-24 2020-03-24 Virtual reality enhancement equipment and method

Country Status (1)

Country Link
CN (1) CN111277808B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773683A (en) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 Character display method and device based on mobile terminal
CN113318424B (en) * 2020-12-23 2023-07-21 广州富港生活智能科技有限公司 Novel game device and control method
CN113318426B (en) * 2020-12-23 2023-05-26 广州富港生活智能科技有限公司 Novel game system
CN113318425B (en) * 2020-12-23 2023-07-21 广州富港生活智能科技有限公司 Novel game device and control method
CN114624005A (en) * 2022-01-21 2022-06-14 欧拓飞科技(珠海)有限公司 AR and VR high-precision testing equipment and detection method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103499880A (en) * 2013-10-23 2014-01-08 卫荣杰 Head-mounted see through display
CN107003523A (en) * 2014-10-24 2017-08-01 埃马金公司 Immersion based on micro-display wears view device
CN108027517A (en) * 2016-07-18 2018-05-11 法国圣戈班玻璃厂 For showing the head-up display system of image information for observer

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2941653C (en) * 2014-03-05 2021-08-24 Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3d augmented reality display


Also Published As

Publication number Publication date
CN111277808A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111277808B (en) Virtual reality enhancement equipment and method
US11030975B2 (en) Information processing apparatus and information processing method
CN107439011B (en) Retinal location in later period re-projection
US20110234475A1 (en) Head-mounted display device
WO2017173735A1 (en) Video see-through-based smart eyeglasses system and see-through method thereof
US6389153B1 (en) Distance information generator and display device using generated distance information
US8982245B2 (en) Method and system for sequential viewing of two video streams
US20080106489A1 (en) Systems and methods for a head-mounted display
WO2014171142A1 (en) Image processing method and image processing device
US11119567B2 (en) Method and apparatus for providing immersive reality content
WO2016159164A1 (en) Image display system and image display method
JP2010153983A (en) Projection type video image display apparatus, and method therein
CN113035010A (en) Virtual and real scene combined visual system and flight simulation device
CN108616752A (en) Support the helmet and control method of augmented reality interaction
US11366315B2 (en) Image processing apparatus, method for controlling the same, non-transitory computer-readable storage medium, and system
Li et al. Mixed reality tunneling effects for stereoscopic untethered video-see-through head-mounted displays
Itoh et al. OST Rift: Temporally consistent augmented reality with a consumer optical see-through head-mounted display
US11749141B2 (en) Information processing apparatus, information processing method, and recording medium
US20220067878A1 (en) Method and device for presenting ar information based on video communication technology
Luo et al. Development of a three-dimensional multimode visual immersive system with applications in telepresence
CN108028038A (en) Display device
Syawaludin et al. Hybrid camera system for telepresence with foveated imaging
CN111736692A (en) Display method, display device, storage medium and head-mounted device
Elvins Augmented reality: “The future's so bright, I gotta wear (see-through) shades”
CN113589523B (en) MR glasses with high accuracy motion tracking locate function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant