CN113660477A - VR glasses and image presentation method thereof - Google Patents

Info

Publication number
CN113660477A
CN113660477A (application CN202110940842.5A)
Authority
CN
China
Prior art keywords
glasses
image
focus point
information
pupil center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110940842.5A
Other languages
Chinese (zh)
Inventor
吕良方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110940842.5A priority Critical patent/CN113660477A/en
Publication of CN113660477A publication Critical patent/CN113660477A/en
Pending legal-status Critical Current

Classifications

    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • G02B27/017 Head-up displays, head mounted
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • H04N13/106 Processing image signals
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G02B2027/0178 Eyeglass type

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses VR glasses and an image presentation method thereof. The image presentation method comprises the following steps: acquiring operation information of a user operating the VR glasses; generating a corresponding virtual operation presentation requirement according to the operation information; feeding back the virtual operation presentation requirement to a server to acquire the virtual operation presentation information corresponding to that requirement, wherein the server comprises a communication system and/or a GPS system and/or a processor; and presenting the virtual operation presentation information as an image, wherein the virtual operation presentation information at least comprises virtual picture information corresponding to the real scene captured in front of the current VR glasses lens. Thus, when the user wears the VR glasses, the displayed picture is combined with the real scene, so the VR glasses are not limited to displaying virtual game pictures during gaming: they can also virtualize and present the real scene, which broadens the uses of the VR glasses and makes them more intelligent and versatile.

Description

VR glasses and image presentation method thereof
Technical Field
The invention relates to the technical field of VR glasses, in particular to VR glasses and an image presenting method thereof.
Background
VR glasses, also known as a VR head-mounted display or virtual reality headset, seal the wearer's vision and hearing off from the outside world and guide the user into the sensation of being in a virtual environment. The display principle is that the left and right screens show images for the left and right eyes respectively; after the eyes acquire this differing information, the brain produces a stereoscopic impression.
In the prior art, the picture in VR glasses is presented directly, and the virtual picture is not combined with the real scene, so the VR glasses are used only for games, which limits their usage scenarios and reduces their utility.
Disclosure of Invention
The embodiment of the invention provides VR glasses and an image presenting method thereof, and aims to solve the technical problem that a virtual picture and a real picture in VR glasses are not combined in the prior art.
In order to solve the above problem, an embodiment of the present invention provides an image presentation method of VR glasses, the method comprising:
acquiring operation information of a user for operating VR glasses;
generating a corresponding virtual operation presentation requirement according to the operation information;
feeding back a virtual operation presentation requirement to a server to acquire virtual operation presentation information corresponding to the virtual operation presentation requirement, wherein the server comprises a communication system and/or a GPS system and/or a processor;
presenting the virtual operation presentation information in an image mode, wherein the virtual operation presentation information at least comprises virtual picture information corresponding to a real picture acquired in front of a current VR glasses lens.
Preferably, the step of feeding back the virtual operation presentation requirement to the server to obtain the virtual operation presentation information corresponding to the virtual operation presentation requirement includes:
feeding back the virtual operation presentation requirement to the server;
acquiring a communication signal and/or a GPS positioning signal and/or a processing data signal fed back by a server;
and matching the communication signal and/or the GPS positioning signal and/or the processing data signal with the virtual operation presentation requirement, and generating virtual operation presentation information corresponding to the virtual operation presentation requirement.
Preferably, the step of acquiring operation information of the user operating the VR glasses includes:
determining a pupil center point, and determining, according to the pupil center point, the original focus point projected onto the VR glasses when the user looks straight ahead;
performing color transmission by taking the original focus point as a center and presenting a display image;
acquiring the offset information of the pupil center point;
calculating, according to the offset information of the pupil center point, a second focus point at which the pupil's line of sight strikes the VR glasses;
moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image.
Preferably, the step of calculating, according to the offset information of the pupil center point, the second focus point at which the pupil's line of sight strikes the VR glasses includes:
acquiring the distance between the original focus point and the pupil center point;
acquiring a spatial track of pupil center point offset according to the offset information of the pupil center point;
calculating the moving track of the original focus point from the distance and the spatial track of the pupil center point offset;
and moving the original focus point according to the moving track of the original focus point and obtaining a second focus point.
Preferably, after the step of moving the image rendered with color transmission centered on the original focus point to color transmission centered on the second focus point and rendering the image, the image presentation method further comprises:
Switching operation modes and determining a dynamic main body;
recognizing the motion posture of the dynamic body;
and adjusting the spatial orientation of the displayed image according to the motion posture, or operating the function keys in the displayed image according to the motion posture.
Preferably, after the step of operating the function keys in the displayed image according to the motion posture, the image presentation method further includes:
determining an operated function key;
acquiring a function operation instruction corresponding to a function key;
and executing the functional operation instruction and presenting a display image corresponding to the functional operation instruction.
Preferably, the step of moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image comprises:
acquiring a shortest path track from an original focus point to a second focus point;
moving the original focus point to a second focus point according to the shortest path track;
judging whether the actual moving track from the original focus point to the second focus point deviates from the shortest path track;
if it deviates, adjusting the angular velocity and acceleration during the movement until the actual moving track coincides with the shortest path track;
if not, continuing to execute the step of moving the original focus point to the second focus point according to the shortest path track.
Preferably, the step of determining the pupil center point comprises:
obtaining eye visual information;
filtering and separating the eye muscle data;
determining the current eyeball state according to the eye muscle data;
comparing the current eyeball state with a pre-stored eyeball state database, and acquiring current eyeball information corresponding to the current eyeball state;
and determining the pupil center point according to the current eyeball information.
In order to solve the above problem, an embodiment of the present invention further provides VR glasses for performing the image presentation method described above, the VR glasses comprising:
the visual acquisition module is used for acquiring visual images and converting the visual images into visual image data for transmission;
the processor, which is used for processing the visual image data collected and transmitted by the visual acquisition module, determining a pupil center point, and outputting image data to be displayed according to the pupil center point;
and the transparent display screen is a lens and is used for displaying the image data output by the processor.
Preferably, a power module is arranged in the frame and supplies power to the processor and the transparent display screen, with the output end of the power module connected to the processor and the transparent display screen respectively.
Preferably, a gyroscope is further installed in the frame of the VR glasses. The gyroscope is connected in series between the processor and the transparent display screen and is used for detecting and collecting shake data of the transparent display screen and transmitting it to the processor for processing; the processor generates a corresponding anti-shake operation according to the shake data and converts it into anti-shake data for output.
In this embodiment, operation information of a user operating the VR glasses is acquired, a corresponding virtual operation presentation requirement is generated according to the operation information, and the requirement is fed back to a server to acquire the corresponding virtual operation presentation information, where the server comprises a communication system and/or a GPS system and/or a processor, and the virtual operation presentation information is presented as an image. Thus, when the user wears the VR glasses, the displayed picture is combined with the real scene, so the VR glasses are not limited to displaying virtual game pictures during gaming: they can also virtualize and present the real scene, which broadens the uses of the VR glasses and makes them more intelligent and versatile.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to illustrate the technical solutions of the embodiments more clearly, the drawings needed in the description of the embodiments are briefly introduced below; obviously, those skilled in the art can obtain other drawings based on these drawings without inventive effort.
FIG. 1 is a schematic block diagram of embodiment 1 of the present invention;
FIG. 2 is a schematic structural view of embodiment 1 of the present invention;
FIG. 3 is a first flowchart of embodiment 2 of the present invention;
Reference numerals: VR glasses 1; transparent display screen 11; processor 12; power module 13; gyroscope 14.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another.
Example one
As shown in fig. 1-2, the present application proposes VR glasses 1, comprising:
the visual acquisition module is used for acquiring visual images and converting the visual images into visual image data for transmission;
the processor 12, which is used for processing the visual image data collected and transmitted by the visual acquisition module, determining a pupil center point, and outputting image data to be displayed according to the pupil center point;
and the transparent display screen 11 is a lens and is used for displaying the image data output by the processor 12.
In order to increase the flexibility of image display, in this embodiment the processor 12 processes the visual image data acquired by the visual acquisition module and determines the pupil center point; the presented image, centered on the pupil center point, is rendered according to that point's position, with the transparent display screen 11 serving as the display carrier.
Specifically, the glasses include a frame and transparent lenses. The frame can be a frame dedicated to the VR glasses 1 or an ordinary myopia-glasses frame, and the transparent lenses are mounted in it. The transparent display screen 11 is embedded in each transparent lens and acts as the lens's intermediate layer, so the image is presented by the transparent display screen 11. Side lamp beads surround the position of the frame where the transparent lenses are embedded and assist the transparent display screen 11 in displaying images. The visual acquisition module is arranged at the nose pads of the frame and collects visual images of the eyes. The power module 13, specifically a lithium battery, is arranged in the temples of the frame and is electrically connected with the visual acquisition module, the transparent display screen 11, the processor 12, and the side lamp beads, supplying power to all of them; grooves are also formed in the temples to house the processor 12. The temples further carry an operating switch, which can be an inductive switch, a button switch, or a voice-controlled switch, connected to the processor 12, so that when the operating switch is turned on, the processor 12 receives and processes its operation information, which includes power-on, orientation adjustment, focusing, screenshot, voice call, video call, and GPS navigation.
The visual acquisition module includes a miniature camera or an infrared sensor, which captures a visual image of the user's eye; the pupil center point is identified from this image. Naturally, the movement trend of the pupil center can also be predicted from the visual image: the distribution of the eye muscles is identified, the tension of those muscles is monitored, and the eye movement about to be performed is judged from the degree of muscle tension, so the pupil center position can be pre-positioned and continuously tracked and verified. For example, when the eyeball rotates downward, the inferior rectus and inferior oblique muscles tense first, while the superior rectus and superior oblique muscles relax.
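To make the muscle-tension heuristic concrete, a minimal Python sketch follows. The muscle-to-direction table follows the downward-rotation example above; the normalized tension values and the winner-take-all rule are illustrative assumptions, not part of the disclosure.

    # Hedged sketch: predict the next eye movement from eye-muscle tension.
    # The mapping follows the example in the text (downward rotation tenses
    # the inferior rectus/oblique first); values and rule are assumptions.
    def predict_eye_movement(tension):
        """tension: dict mapping muscle name -> tension level in [0, 1]."""
        direction_of = {
            "inferior_rectus": "down", "inferior_oblique": "down",
            "superior_rectus": "up", "superior_oblique": "up",
        }
        driver = max(tension, key=tension.get)  # most-tensed muscle wins
        return direction_of.get(driver, "unknown")

    # Eye about to rotate downward: inferior muscles tense, superior relax.
    print(predict_eye_movement({
        "inferior_rectus": 0.9, "inferior_oblique": 0.7,
        "superior_rectus": 0.1, "superior_oblique": 0.2,
    }))  # -> "down"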
The processor 12 is mainly configured to process the visual image data acquired by the visual acquisition module, including identifying the pupil center position and the pupil center position after eye movement from the visual image data; it is further configured to adjust the display position and focus of the received display image data. The processor 12 is essentially a CPU cooperating with a network interface, a user interface, a memory, a communication bus, and a control unit, where the communication bus enables connection and communication among these components. The user interface may include a connection to the transparent display screen 11, an input unit such as a keyboard, and a remote controller, and may optionally further include standard wired and wireless interfaces; the network interface may optionally include standard wired and wireless interfaces; and the memory is a non-volatile memory, such as disk storage, which may optionally be a storage device separate from the processor 12.
The transparent display screen 11 falls mainly into two types, self-luminous and externally lit, and the following kinds are mainly used at present. LED transparent screen: an LED displays by adjusting light through a matrix of lamp beads, and a transparent LED screen widens the spacing between beads, leaving the gaps transparent, so the image is seen clearly when viewed from a distance. Photonic-chip transparent screen: a multilayer nanoscale structure modulates the transparent medium, so any transparent medium can be turned into a high-definition display; these are further divided into transparent flexible screens and transparent glass screens. The transparent flexible screen is flexibly attached to the surface of any transparent medium, preserving the medium's transparency; installation is simple and the original glass need not be removed. The transparent glass screen is a rigid display integrating a photonic-crystal chip; materials such as glass and acrylic can be chosen according to customer requirements, and it offers a highly integrated, high-definition, bright display that blends the virtual and the real. 45-degree reflecting mirror: one or more dielectric or metal film layers are coated on an optical element or substrate to change the propagation direction of the light wave; the structure is easy to use and convenient to focus. Scattering film/holographic film: scattering particles are sprayed on a transparent material, on a principle close to that of ground glass; the display effect is achieved at the cost of transparency, and the higher the haze of the material, the clearer the display.
In addition, an anti-shake function is added to the VR glasses 1. A gyroscope 14 is further installed in the frame of the VR glasses 1 and is connected in series between the processor 12 and the transparent display screen 11; it detects and collects shake data of the transparent display screen 11 and transmits it to the processor 12 for processing, and the processor 12 generates a corresponding anti-shake operation according to the shake data and converts it into anti-shake data for output. The anti-shake processing comprises two parts. Optical anti-shake mainly detects tiny movements through the gyroscope 14 and transmits the signal to the processor 12; the processor 12 calculates the displacement to be compensated and applies it through a compensation lens group, overcoming the blur caused by vibration when the presented image moves with the pupil center point. Electronic anti-shake analyzes and processes the image to be displayed on the transparent display screen 11 in software and compensates the blurred part using edge information. In this embodiment, optical and electronic anti-shake are applied simultaneously.
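The optical anti-shake path lends itself to a small numeric sketch: integrate the gyroscope's angular rate over one sample, convert the rotation into an image shift, and compensate in the opposite direction. The gyroscope interface, sampling interval, and focal-length figure below are assumptions; only the detect-compute-compensate flow comes from the description above.

    import numpy as np

    def compensate_shake(gyro_rate_dps, dt, focal_length_px):
        """Estimate the pixel displacement that counteracts a small rotation.

        gyro_rate_dps:   angular velocity around (x, y) in degrees/second,
                         as assumed to be reported by the gyroscope.
        dt:              sampling interval in seconds (assumption).
        focal_length_px: effective focal length of the optics in pixels.
        """
        # Integrate the angular rate over one sample to get the rotation.
        angle_rad = np.deg2rad(np.asarray(gyro_rate_dps)) * dt
        # Small rotation maps to an image shift of ~ f * tan(angle).
        shift_px = focal_length_px * np.tan(angle_rad)
        # Compensation moves the rendered image in the opposite direction.
        return -shift_px

    # Example: 2 deg/s drift sampled at 100 Hz on 800 px-equivalent optics.
    print(compensate_shake([2.0, 0.0], 0.01, 800.0))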
In this embodiment, a visual image is acquired by the visual acquisition module and converted into visual image data for transmission; the processor 12 processes the transmitted visual image data, determines the pupil center point, and outputs the image data to be displayed according to it; and the transparent display screen 11 displays the image data output by the processor 12. Thus, when the user wears the VR glasses 1, the displayed picture is combined with the real scene, so the VR glasses 1 are not limited to displaying virtual game pictures during gaming: they can also virtualize and present the real scene, which broadens the uses of the VR glasses 1 and makes them more intelligent and versatile.
Example two
As shown in fig. 3, according to a first embodiment, the present application further provides an image presenting method for VR glasses, where the image presenting method for VR glasses includes:
step S10, acquiring operation information of a user operating VR glasses;
when the user uses the VR glasses, the VR glasses are turned on and transmitted to the processor according to operation information corresponding to the operated button switch or inductive switch or touch switch, the button switch or inductive switch or touch switch is turned on, orientation adjustment, focusing, screenshot, voice communication, video communication, GPS navigation and the current picture obtained by the VR glasses camera, the button switch or inductive switch or touch switch is respectively provided with the functions of turning on, orientation adjustment, focusing, screenshot, voice communication, video communication, GPS navigation and shooting, in particular to a turning-on switch, an orientation adjustment switch, a focusing switch, a screenshot switch, a voice communication switch, a video communication switch, a GPS navigation switch and a camera switch, the turning-on switch turns on the VR glasses, and the orientation adjustment switch adjusts the three-dimensional orientation or two-dimensional orientation of the picture displayed in the VR glasses, the focusing switch adjusts the focus position of a display picture in VR glasses, the screen capture switch captures the display picture in the VR glasses, the voice communication of the VR glasses is started by the voice communication switch, a receiver and a microphone are arranged in the VR glasses, the video communication of the VR glasses is started by the video communication switch, the video communication switch and the voice communication switch can be started simultaneously, the voice communication and/or the video communication can be obtained after the VR glasses are connected with a communication base station in a communication system, after the VR glasses are started, data in the voice communication and data in the video communication are integrated and displayed in the VR glasses at the same frequency by the processor, the GPS navigation switch is used for starting the VR glasses to be connected with a GPS system, the GPS positioning of the current position is obtained by the VR glasses, and a picture in front of lenses of the current VR glasses is obtained by the camera switch.
Step S20, generating a corresponding virtual operation presentation requirement according to the operation information. The operation information generated by operating the button, inductive, or touch switch includes a corresponding virtual operation presentation requirement; specifically, this requirement comprises at least one of power-on, orientation adjustment, focusing, screenshot, voice call, video call, GPS navigation, and shooting.
Step S30, feeding back the virtual operation presentation requirement to a server to obtain virtual operation presentation information corresponding to the virtual operation presentation requirement, wherein the server comprises a communication system and/or a GPS system and/or a processor;
specifically, the virtual operation presentation requirement is fed back to the server, the communication signal and/or the GPS positioning signal and/or the processing data signal fed back by the server are acquired, the communication signal and/or the GPS positioning signal and/or the processing data signal are matched with the virtual operation presentation requirement, and virtual operation presentation information corresponding to the virtual operation presentation requirement is generated. The virtual operation presentation information includes: the method comprises the following steps of orientation adjustment operation process and operation results, focusing operation process and operation results, screenshot operation process and operation results, audio content of voice communication, picture content of video communication, position and display picture in GPS navigation, and picture shooting in front of current VR glasses.
In step S40, the virtual operation presentation information is presented in the form of an image: it is displayed on the transparent display screen and comprises at least one of orientation icon information, focus icon information, voice-call icon information, video-call icon and video picture information, GPS navigation icon and picture information, picture information of the current position acquired by the VR glasses, and virtual picture information corresponding to the real scene captured in front of the current VR glasses lenses.
Specifically, the step of acquiring the operation information of the user operating the VR glasses may further include: determining a pupil center point, and determining, according to the pupil center point, the original focus point projected onto the VR glasses when the user looks straight ahead;
in the embodiment, the visual image is mainly acquired through the visual acquisition module and converted into visual image data to be transmitted to the processor, and the processor analyzes and processes the visual image data to determine the pupil center point. The vision acquisition module comprises a micro camera or an infrared sensor, specifically acquires a vision image of the eyes of a user through the micro camera or the infrared sensor, identifies and determines the center point of a pupil from the vision image of the eyes of the user, naturally, the movement trend of the pupil center point of the eyes of the user can be predicted according to the vision image of the eyes of the user, the muscle distribution of the eyes of the user is distinguished from the vision image of the eyes of the user, the muscle tension of the eyes of the user is monitored, the eye movement to be performed by the eyes of the user is judged according to the muscle tension degree of the eyes of the user, and then the position of the pupil center point is predicted in advance and continuously tracked and verified.
The processor receives the acquired eye visual information, filters it to remove edge or blurred images, and separates out the eye muscle data. From the degree of tension of the eye muscles it determines the current eye movement and pre-judges the current eyeball state and the corresponding pupil center position. The method for pre-judging the pupil center position is as follows: the current eyeball state is compared with a pre-stored eyeball state database, the current eyeball information corresponding to that state is acquired, and the pupil center point is determined from the current eyeball information. The database stores, for each eye movement, the corresponding eyeball state and pupil center position, so once the comparison is made, the retrieved eyeball information already contains the corresponding pupil center point. Of course, the processor can also search the eye visual information directly and determine the pupil center point from the pupil's distinctive appearance.
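A minimal sketch of the database comparison follows, assuming the eyeball state is encoded as a feature vector derived from eye-muscle tension and matched to the nearest pre-stored entry; the feature encoding, database contents, and nearest-neighbour rule are assumptions.

    import numpy as np

    # Hedged sketch: nearest-neighbour lookup of the pupil centre in a
    # pre-stored eyeball-state database; features and values are invented.
    STATE_DB = {
        # (muscle-tension feature vector) -> pupil centre (x, y) on lens
        (0.1, 0.0, 0.0, 0.0): (0.0, 0.0),   # looking straight ahead
        (0.0, 0.8, 0.0, 0.2): (0.0, -3.5),  # looking down
        (0.0, 0.0, 0.7, 0.1): (4.0, 0.0),   # looking right
    }

    def pupil_center_from_state(state):
        """state: feature vector describing the current eyeball state."""
        keys = np.array(list(STATE_DB.keys()))
        dists = np.linalg.norm(keys - np.asarray(state), axis=1)
        return list(STATE_DB.values())[int(np.argmin(dists))]

    print(pupil_center_from_state([0.05, 0.75, 0.0, 0.15]))  # -> (0.0, -3.5)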
Color transmission is performed with the original focus point as the center and a display image is presented. Specifically, the image is diffused outward from the original focus point; alternatively, the colors can be organized around the original focus point as the center and their light emitted into the eyeball, so that the eye forms an image.
In this embodiment, the image is mainly displayed through a transparent display screen. Transparent display screens fall mainly into two types, self-luminous and externally lit, and the following kinds are mainly used at present. LED transparent screen: an LED displays by adjusting light through a matrix of lamp beads, and a transparent LED screen widens the spacing between beads, leaving the gaps transparent, so the image is seen clearly when viewed from a distance. Photonic-chip transparent screen: a multilayer nanoscale structure modulates the transparent medium, so any transparent medium can be turned into a high-definition display; these are further divided into transparent flexible screens and transparent glass screens. The transparent flexible screen is flexibly attached to the surface of any transparent medium, preserving the medium's transparency; installation is simple and the original glass need not be removed. The transparent glass screen is a rigid display integrating a photonic-crystal chip; materials such as glass and acrylic can be chosen according to customer requirements, and it offers a highly integrated, high-definition, bright display that blends the virtual and the real. 45-degree reflecting mirror: one or more dielectric or metal film layers are coated on an optical element or substrate to change the propagation direction of the light wave; the structure is easy to use and convenient to focus. Scattering film/holographic film: scattering particles are sprayed on a transparent material, on a principle close to that of ground glass; the display effect is achieved at the cost of transparency, and the higher the haze of the material, the clearer the display.
The offset information of the pupil center point is acquired. Specifically, the visual acquisition module captures the change of the eye and generates eye-change visual image data, which is transmitted to the processor. From this data the processor analyzes the eye movements (including glancing up, glancing down, squinting up, squinting down, eyeball rotation, blinking, and the like) and the corresponding offset track of the pupil center, analyzes the shifted visual image information, and determines the moved pupil center point.
A second focus point at which the pupil's line of sight strikes the VR glasses is calculated according to the offset information of the pupil center point. Specifically, the distance between the original focus point and the pupil center point is obtained, the spatial track of the pupil center offset is obtained from the offset information, and the offset angle and offset direction of the pupil center are calculated from that track. The specific position of the second focus point is then computed from the distance, the offset angle, and the offset direction; a moving track from the original focus point to the second focus point is generated from the positions of the two points; and the center point of the transmitted picture is moved from the original focus point to the second focus point along that track.
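One simple geometric reading of this calculation: the line of sight pivots about the eyeball center, so a pupil-center offset projects onto the lens scaled by the ratio of the lens distance to the pupil-plane distance. The sketch below assumes this pivot model and illustrative distances; the disclosure specifies only that the second focus point is computed from the distance, offset angle, and offset direction.

    import numpy as np

    def second_focus_point(original_focus, pupil_offset,
                           lens_distance, pupil_distance):
        """Project a pupil-centre offset onto the lens plane.

        original_focus: (x, y) of the original focus point on the lens.
        pupil_offset:   (dx, dy) displacement of the pupil centre.
        lens_distance:  eyeball rotation centre to lens (assumption).
        pupil_distance: eyeball rotation centre to pupil plane (assumption).
        """
        # The sight line pivots about the eyeball centre, so a pupil shift
        # scales by the ratio of the two distances at the lens plane.
        scale = lens_distance / pupil_distance
        return np.asarray(original_focus) + scale * np.asarray(pupil_offset)

    # Example: pupil moves 1 mm right; lens 25 mm away, pupil plane 12 mm.
    print(second_focus_point([0.0, 0.0], [1.0, 0.0], 25.0, 12.0))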
Moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image.
After the specific position of the second focus point is obtained, the shortest path track from the original focus point to the second focus point is established, and the center point of the presented image is moved from the original focus point to the second focus point along it, so that the image formerly presented around the original focus point becomes presented around the second focus point. In other words, the movement of the pupil center point is synchronized to the movement of the center point of the presented image. It should be noted that the center point of the displayed image is the focus point of the presented image and is the clearest part of the picture.
In order to check whether deviation occurs while the center point of the displayed image is moving, it is judged whether the actual moving track from the original focus point to the second focus point deviates from the shortest path track. If it deviates, the angular velocity and acceleration of the movement are adjusted until the actual moving track coincides with the shortest path track; if not, the step of moving the original focus point to the second focus point along the shortest path track continues.
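The shortest-path movement with deviation correction can be sketched as a loop that steps along the straight segment and projects any drifted position back onto it; the simulated drift, step size, and projection rule are assumptions standing in for the angular-velocity and acceleration adjustment described above.

    import numpy as np

    # Hedged sketch: move the image centre along the shortest (straight)
    # path, snapping back onto the path whenever the position drifts.
    def move_focus(original, target, step=0.5, tol=1e-6,
                   rng=np.random.default_rng(0)):
        original = np.asarray(original, dtype=float)
        target = np.asarray(target, dtype=float)
        direction = target - original
        length = np.linalg.norm(direction)
        unit = direction / length
        pos = original.copy()
        path = [pos.copy()]
        while np.linalg.norm(target - pos) > tol:
            # Simulated actuation drift stands in for real error.
            pos = pos + step * unit + rng.normal(scale=0.02, size=2)
            # Deviation check: project back onto the straight segment.
            t = np.clip(np.dot(pos - original, unit), 0.0, length)
            pos = original + t * unit
            path.append(pos.copy())
        return path

    track = move_focus([0.0, 0.0], [4.0, 3.0])
    print(len(track), track[-1])  # ends exactly at the second focus point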
The operation mode is switched and a dynamic subject is determined. In this embodiment, the VR glasses are used together with position sensors worn on the four limbs. The position sensors are installed in a wearable device, which collects limb movement data and transmits it wirelessly to the processor for processing. For example, when a hand simulates a leftward or rightward swipe, the position sensor acquires the position information and movement speed of the hand, and the information the current hand movement is meant to express is then judged.
The motion posture of the dynamic subject is recognized: the processor analyzes and identifies motion postures such as swiping left, swiping right, swiping up, swiping down, and clicking from the limb movement data. These postures correspond to operations on the rendered image, which are likewise swiping left, swiping right, swiping up, swiping down, clicking, and so on.
The spatial orientation of the displayed image is adjusted according to the motion posture, or the function keys in the displayed image are operated according to it. During display, the orientation of the image in three-dimensional space is adjusted according to the motion posture, so the image moves in three-dimensional space. The displayed image also contains simulated function keys: a key is selected by the user's click, the processor obtains the function operation instruction pre-stored for that key and executes it, and the resulting operation is shown in the displayed image.
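A hypothetical handler shows how recognized postures might map onto operations on the displayed image; the posture names follow the description above, while the state dictionary, rotation increments, and key-execution placeholder are assumptions.

    # Hedged sketch: apply a recognised motion posture to the display state.
    def on_gesture(display_state, gesture):
        if gesture == "swipe_left":
            display_state["yaw"] -= 15      # rotate the image left (degrees)
        elif gesture == "swipe_right":
            display_state["yaw"] += 15
        elif gesture == "swipe_up":
            display_state["pitch"] += 15
        elif gesture == "swipe_down":
            display_state["pitch"] -= 15
        elif gesture == "click":
            key = display_state.get("focused_key")
            if key is not None:
                # Execute the function operation instruction pre-stored for
                # this simulated function key and record the result.
                display_state["last_action"] = f"executed {key}"
        return display_state

    state = {"yaw": 0, "pitch": 0, "focused_key": "screenshot"}
    state = on_gesture(state, "swipe_left")
    state = on_gesture(state, "click")
    print(state)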
In this embodiment, operation information of a user operating the VR glasses is acquired, a corresponding virtual operation presentation requirement is generated according to the operation information, and the requirement is fed back to a server to acquire the corresponding virtual operation presentation information, where the server comprises a communication system and/or a GPS system and/or a processor, and the virtual operation presentation information is presented as an image. Thus, when the user wears the VR glasses, the displayed picture is combined with the real scene, so the VR glasses are not limited to displaying virtual game pictures during gaming: they can also virtualize and present the real scene, which broadens the uses of the VR glasses and makes them more intelligent and versatile.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a cloud server, VR glasses, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image presentation method of VR glasses, the method comprising:
acquiring operation information of a user for operating VR glasses;
generating a corresponding virtual operation presentation requirement according to the operation information;
feeding back a virtual operation presentation requirement to a server to acquire virtual operation presentation information corresponding to the virtual operation presentation requirement, wherein the server comprises a communication system and/or a GPS system and/or a processor;
presenting the virtual operation presentation information in an image mode, wherein the virtual operation presentation information at least comprises virtual picture information corresponding to a real picture acquired in front of a current VR glasses lens.
2. The image presentation method of VR glasses as claimed in claim 1, wherein the step of feeding back the virtual operation presentation requirement to the server to obtain the virtual operation presentation information corresponding to the virtual operation presentation requirement includes:
feeding back the virtual operation presentation requirement to the server;
acquiring a communication signal and/or a GPS positioning signal and/or a processing data signal fed back by a server;
and matching the communication signal and/or the GPS positioning signal and/or the processing data signal with the virtual operation presentation requirement, and generating virtual operation presentation information corresponding to the virtual operation presentation requirement.
3. The image presentation method of VR glasses according to claim 1, wherein the step of acquiring operation information of the user operating the VR glasses includes:
determining a pupil center point, and determining, according to the pupil center point, the original focus point projected onto the VR glasses when the user looks straight ahead;
performing color transmission by taking the original focus point as a center and presenting a display image;
acquiring the offset information of the pupil center point;
calculating a second focus point emitted to VR glasses by the pupil sight according to the offset information of the pupil center point;
moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image.
4. The image presentation method of VR glasses according to claim 3, wherein the step of calculating, according to the offset information of the pupil center point, the second focus point at which the pupil's line of sight strikes the VR glasses includes:
acquiring the distance between the original focus point and the pupil center point;
acquiring a spatial track of pupil center point offset according to the offset information of the pupil center point;
calculating the moving track of the original focus point from the distance and the spatial track of the pupil center point offset;
and moving the original focus point according to the moving track of the original focus point and obtaining a second focus point.
5. The image presentation method of VR glasses according to claim 3, wherein after the step of moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image, the method further comprises:
Switching operation modes and determining a dynamic main body;
recognizing the motion posture of the dynamic body;
and adjusting the spatial orientation of the displayed image according to the motion posture, or operating the function keys in the displayed image according to the motion posture.
6. The image presentation method of VR glasses as in claim 5, wherein after the step of operating a function key in the displayed image according to the motion posture, the method further comprises:
determining an operated function key;
acquiring a function operation instruction corresponding to a function key;
and executing the functional operation instruction and presenting a display image corresponding to the functional operation instruction.
7. The image presentation method of VR glasses according to claim 3, wherein the step of moving the image rendered with color transmission centered at the original focus point to color transmission centered at the second focus point and rendering the image comprises:
acquiring a shortest path track from an original focus point to a second focus point;
moving the original focus point to a second focus point according to the shortest path track;
judging whether the actual moving track from the original focus point to the second focus point deviates from the shortest path track;
if it deviates, adjusting the angular velocity and acceleration during the movement until the actual moving track coincides with the shortest path track;
if not, continuing to execute the step of moving the original focus point to the second focus point according to the shortest path track.
8. The image presentation method of VR glasses as in claim 3, wherein the step of determining the pupil center point comprises:
obtaining eye visual information;
filtering and separating the eye muscle data;
determining the current eyeball state according to the eye muscle data;
comparing the current eyeball state with a pre-stored eyeball state database, and acquiring current eyeball information corresponding to the current eyeball state;
and determining the pupil center point according to the current eyeball information.
9. VR glasses for performing the image presentation method of VR glasses as claimed in any one of claims 1 to 8, the VR glasses comprising:
the visual acquisition module is used for acquiring visual images and converting the visual images into visual image data for transmission;
the processor, which is used for processing the visual image data collected and transmitted by the visual acquisition module, determining a pupil center point, and outputting image data to be displayed according to the pupil center point;
and the transparent display screen is a lens and is used for displaying the image data output by the processor.
10. The VR glasses of claim 9, wherein a gyroscope is further installed in the frame of the VR glasses and is connected in series between the processor and the transparent display screen, for detecting and collecting shake data of the transparent display screen and transmitting it to the processor for processing; the processor generates a corresponding anti-shake operation according to the shake data and converts it into anti-shake data for output.
CN202110940842.5A 2021-08-16 2021-08-16 VR glasses and image presentation method thereof Pending CN113660477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110940842.5A CN113660477A (en) 2021-08-16 2021-08-16 VR glasses and image presentation method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110940842.5A CN113660477A (en) 2021-08-16 2021-08-16 VR glasses and image presentation method thereof

Publications (1)

Publication Number Publication Date
CN113660477A true CN113660477A (en) 2021-11-16

Family

ID=78480429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110940842.5A Pending CN113660477A (en) 2021-08-16 2021-08-16 VR glasses and image presentation method thereof

Country Status (1)

Country Link
CN (1) CN113660477A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120302289A1 (en) * 2011-05-27 2012-11-29 Kang Heejoon Mobile terminal and method of controlling operation thereof
CN104216508A (en) * 2013-05-31 2014-12-17 中国电信股份有限公司 Method and device for operating function key through eye movement tracking technique
US20180129278A1 (en) * 2014-07-27 2018-05-10 Alexander Luchinskiy Interactive Book and Method for Interactive Presentation and Receiving of Information
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
CN107562199A (en) * 2017-08-31 2018-01-09 北京金山安全软件有限公司 Page object setting method and device, electronic equipment and storage medium
CN109271030A (en) * 2018-09-25 2019-01-25 华南理工大学 Blinkpunkt track various dimensions comparative approach under a kind of three-dimensional space
CN112835445A (en) * 2019-11-25 2021-05-25 华为技术有限公司 Interaction method, device and system in virtual reality scene
CN112507799A (en) * 2020-11-13 2021-03-16 幻蝎科技(武汉)有限公司 Image identification method based on eye movement fixation point guidance, MR glasses and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Ming: "Research on Several Key Issues of Virtual-Real Interaction in Augmented Reality", Master's Theses Electronic Journal, 15 January 2011 (2011-01-15), page 95 *

Similar Documents

Publication Publication Date Title
US9898075B2 (en) Visual stabilization system for head-mounted displays
JP6393367B2 (en) Tracking display system, tracking display program, tracking display method, wearable device using them, tracking display program for wearable device, and operation method of wearable device
CN108170279B (en) Eye movement and head movement interaction method of head display equipment
US9311718B2 (en) Automated content scrolling
WO2022066578A1 (en) Touchless photo capture in response to detected hand gestures
US11487354B2 (en) Information processing apparatus, information processing method, and program
US11575877B2 (en) Utilizing dual cameras for continuous camera capture
CN111630477A (en) Apparatus for providing augmented reality service and method of operating the same
US11506902B2 (en) Digital glasses having display vision enhancement
KR20180004112A (en) Eyeglass type terminal and control method thereof
US11656471B2 (en) Eyewear including a push-pull lens set
US20240110807A1 (en) Navigation assistance for the visually impaired
US11982814B2 (en) Segmented illumination display
US20220375172A1 (en) Contextual visual and voice search from electronic eyewear device
US20230185090A1 (en) Eyewear including a non-uniform push-pull lens set
CN113660477A (en) VR glasses and image presentation method thereof
US20230168522A1 (en) Eyewear with direction of sound arrival detection
US20240103684A1 (en) Methods for displaying objects relative to virtual surfaces
WO2021044732A1 (en) Information processing device, information processing method, and storage medium
US20240233288A1 (en) Methods for controlling and interacting with a three-dimensional environment
US20240103685A1 (en) Methods for controlling and interacting with a three-dimensional environment
US20240103677A1 (en) User interfaces for managing sharing of content in three-dimensional environments
CN115981481A (en) Interface display method, device, equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination