CN114327032A - Virtual reality device and VR (virtual reality) picture display method

Info

Publication number: CN114327032A
Application number: CN202110184636.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 郑美燕
Assignee (current and original): Hisense Visual Technology Co., Ltd.
Priority date / filing date: 2021-02-08
Legal status: Pending
Prior art keywords: value, display, user, virtual reality, area

Classifications

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual reality device and a VR picture display method that can receive control instructions input by a user in real time. After the user inputs a control instruction for setting the display area, the custom area specified by the user is extracted from the instruction, and the picture to be displayed is scaled according to that area, so that the final picture is presented on the display within the custom area. The method lets users set the display area themselves, so that the displayed picture matches the viewing habits of different users, appears clearer to the user, and improves the user experience.

Description

Virtual reality device and VR (virtual reality) picture display method
Technical Field
The application relates to the technical field of virtual reality, and in particular to a virtual reality device and a VR picture display method.
Background
Virtual reality (VR) technology is a display technology in which a computer simulates a virtual environment, giving the viewer a sense of immersion. A virtual reality device uses virtual display technology to present a virtual picture to the user. Generally, a virtual reality device includes two display screens for presenting virtual picture content, one for each of the user's left and right eyes. When the two screens show images of the same object captured from different viewing angles, the user gains a stereoscopic viewing experience.
A virtual reality device is worn on the user's face so that the two display screens sit directly in front of the eyes. Because the screens are so close to the eyes, the device places an optical assembly in the area between the screens and the eyes so that the user can see the displayed content clearly. The optical assembly consists of several lenses that bend the incoming light so the eye can focus on it, thereby widening the viewing angle, magnifying the picture, and strengthening the stereoscopic effect.
Take a convex lens as an example: it is generally thin at the edges and thick at the center. During use, a lens of this shape distorts the picture the user views through it; that is, the picture near the edge is deformed. To improve the viewing experience, a virtual reality device may show only a fixed display area of the image delivered to the screen and hide the edge region to counteract the distortion. However, because the distance between the eyes differs from user to user, the picture in a fixed display area is unclear or uncomfortable for some users.
Disclosure of Invention
The application provides a virtual reality device and a VR picture display method, aiming to solve the problem that the conventional fixed-display-area approach produces an unclear picture for some users.
In one aspect, the present application provides a virtual reality device comprising a display and a controller. The display is used for presenting a user interface; the controller is configured to perform the following program steps:
receiving a control instruction input by the user for setting the display area;
in response to the control instruction, extracting the custom area specified in the instruction;
scaling the picture to be displayed according to the custom area to obtain a custom-area image;
and controlling the display to display the custom-area image.
In another aspect, the application also provides a VR picture display method applied to the virtual reality device, including the following steps:
receiving a control instruction input by the user for setting the display area;
in response to the control instruction, extracting the custom area specified in the instruction;
scaling the picture to be displayed according to the custom area to obtain a custom-area image;
and controlling the display to display the custom-area image.
According to the above technical scheme, the virtual reality device and the VR picture display method can receive control instructions input by the user in real time. After the user inputs a control instruction for setting the display area, the custom area specified by the user is extracted from the instruction, and the picture to be displayed is scaled according to that area, so that the final picture is presented on the display within the custom area. The method lets users set the display area themselves, so that the displayed picture matches the viewing habits of different users, appears clearer to the user, and improves the user experience.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; it is apparent that those skilled in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the present application;
FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the present application;
FIG. 3 is a schematic diagram of the recommended content area of the global interface in an embodiment of the present application;
FIG. 4 is a schematic diagram of the application shortcut operation entry area of the global interface in an embodiment of the present application;
FIG. 5 is a schematic diagram of the floating element of the global interface in an embodiment of the present application;
FIG. 6 is a schematic diagram of a rendered scene in an embodiment of the present application;
FIG. 7 is a schematic view of a display area in an embodiment of the present application;
FIG. 8 is a flowchart of a VR picture display method in an embodiment of the present application;
FIG. 9 is a schematic flowchart of generating a custom area in an embodiment of the present application;
FIG. 10 is a schematic diagram of an input interface in an embodiment of the present application;
FIG. 11 is a diagram illustrating the effect of inputting vertex coordinates in an embodiment of the present application;
FIG. 12 is a flowchart of determining whether an input error value is outside the error value range in an embodiment of the present application;
FIG. 13 is a schematic flowchart of performing scaling on the picture to be displayed in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments are described clearly and completely below with reference to the drawings. It is evident that the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments shown in this application without inventive effort fall within the scope of protection of this application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently of the other aspects to form a complete technical solution.
It should be understood that the terms "first," "second," "third," and the like in the description and in the claims of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances and can be implemented in sequences other than those illustrated or otherwise described herein with respect to the embodiments of the application, for example.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments of the present application, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide an immersive experience, including but not limited to VR glasses, augmented reality (AR) devices, VR game devices, mobile computing devices, and other wearable computers. The technical solutions of the embodiments are described taking VR glasses as an example, and it should be understood that they apply equally to other types of virtual reality devices. The virtual reality device 500 may operate independently, or may be connected to another intelligent display device as an external device, where the display device may be a smart television, a computer, a tablet computer, a server, and the like.
The virtual reality device 500 is worn on the user's face and displays a media image close to the user's eyes, providing an immersive experience. To present the display and be worn on the face, the virtual reality device 500 may include a number of components. Taking VR glasses as an example, the virtual reality device 500 may include a housing, temples, an optical system, a display assembly, a posture detection circuit, an interface circuit, and the like. In practice, the optical system, display assembly, posture detection circuit, and interface circuit can be arranged inside the housing to present a specific display picture, with the temples attached to the two sides of the housing so the device can be worn on the face.
The posture detection circuit contains posture sensing elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's posture and passes the detected posture data to a processing element such as the controller, which adjusts the specific picture content in the display assembly accordingly.
Note that the way specific picture content is produced varies with the type of virtual reality device 500. For example, as shown in fig. 1, for some thin and light VR glasses, the built-in controller generally does not participate directly in generating the displayed content; instead, it sends the posture data to an external device, such as a computer, which processes the data, determines the specific picture content to be displayed, and returns it to the VR glasses, where the final picture is shown.
In some embodiments, the virtual reality device 500 may access the display device 200, and a network-based display system is constructed among the virtual reality device 500, the display device 200, and the server 400, so that data can be exchanged among them in real time. For example, the display device 200 may obtain media data from the server 400, play it, and transmit the specific picture content to the virtual reality device 500 for display.
The display device 200 may be, among others, a liquid crystal display, an OLED display, or a projection display device. The particular display device type, size, resolution, and so on are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as needed. The display device 200 may provide a broadcast-receiving television function and may additionally provide a smart network television function with computer support, including but not limited to network television, smart television, Internet Protocol television (IPTV), and the like.
The display device 200 and the virtual reality device 500 also communicate data with the server 400 through several communication methods. They may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information and through electronic program guide (EPG) interactions. The server 400 may be one cluster or several clusters, and may include one or more types of servers. Other web services such as video on demand and advertising are also provided through the server 400.
In the course of data interaction, the user may operate the display apparatus 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 in a direct wireless connection manner or in an indirect connection manner. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner such as bluetooth, infrared, etc. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may directly transmit the control command data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display apparatus 200 through a wireless router to establish indirect connection communication with the display apparatus 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as a handle in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and the drive circuitry associated with it. To present a specific picture with a stereoscopic effect, the display assembly may include two display screens, one for each of the user's left and right eyes. When a 3D effect is presented, the picture content on the left and right screens differs slightly; for example, the left-camera and right-camera views recorded when a 3D film source was shot can be displayed on the respective screens. Because each eye observes its own picture content, the user perceives a strongly stereoscopic display when wearing the glasses.
The optical system in the virtual reality device 500 is an optical module consisting of several lenses. It is arranged between the user's eyes and the display screen, and lengthens the optical path through the refraction of the lenses and the polarizing effect of the polarizers on them, so that the content shown by the display assembly appears clearly within the user's field of view. To accommodate users with different eyesight, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between them and hence the optical path, to adjust the sharpness of the picture.
The interface circuit of the virtual reality device 500 may be configured to transmit interactive data, and in addition to the above-mentioned transmission of the gesture data and the display content data, in practical applications, the virtual reality device 500 may further connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output a displayed screen to the display device in real time for display. As another example, the virtual reality device 500 may also be connected to a handle via an interface circuit, and the handle may be operated by a user's hand, thereby performing related operations in the VR user interface.
The VR user interface may be presented as several different types of UI layouts according to user operations. For example, the user interface may include a global UI. As shown in fig. 2, after the AR/VR terminal starts, the global UI may be displayed on the display screen of the AR/VR terminal or on the display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut operation entry area 3, and a floating element area 4.
The recommended content area 1 is used for configuring TAB columns of different classifications. Media assets, special topics, and the like can be selected and configured in a column; the media assets can include services with content such as 2D movies, education courses, travel, 3D, 360-degree panorama, live broadcast, 4K movies, program applications, and games. A column can use different template styles and can support simultaneous recommendation and arrangement of media assets and titles, as shown in FIG. 3.
In some embodiments, a status bar may also be placed at the top of the recommended content area 1, with several display controls in it, including common options such as time, network connection status, and battery level. The content of the status bar may be customized by the user; for example, weather, the user's avatar, and other content can be added. Items in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a time window on the current interface or jump to a calendar interface. When the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list on the current interface or jump to the network settings interface.
The content displayed in the status bar may take different forms depending on the state of the specific item. For example, the time control may be displayed directly as text showing the specific time, changing as time passes; the battery control may be displayed in different pattern styles according to the remaining battery level of the virtual reality device 500.
The status bar is used to enable the user to perform common control operations, enabling rapid setup of the virtual reality device 500. Since the setup program for the virtual reality device 500 includes many items, all commonly used setup options are typically not displayed in their entirety in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion option is selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further set in the expansion window for implementing other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "quick center" option may be set in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include "screen capture", "screen recording", and "screen projection" options for waking up corresponding functions, respectively.
The business class extension area 2 supports configuring extension classes of different categories. If a new business type appears, an independent TAB can be configured for it and the corresponding page content displayed. The extended classes in the business class extension area 2 can also be reordered, and offline business operations can be performed on them. In some embodiments, the business class extension area 2 may include the categories: movies & TV, education, travel, applications, my. In some embodiments, the business class extension area 2 is configured to expose the major business category TABs and supports configuring more categories, as shown in fig. 3.
The application shortcut operation entry area 3 can show specified pre-installed applications at the front for operation recommendation, and supports configuring a special icon style to replace the default icon; several pre-installed applications can be specified. In some embodiments, the application shortcut operation entry area 3 further includes left-hand and right-hand movement controls for moving the selection target, used for selecting different icons, as shown in fig. 4.
The floating element area 4 may be placed above the upper-left or upper-right corner of the fixed area, may be configured as an alternative character, or may be configured as a jump link. For example, after receiving a confirmation operation, the floating element jumps to an application or displays a designated function page, as shown in fig. 5. In some embodiments, the floating element may carry no jump link and be used solely for image presentation.
In some embodiments, the global UI further comprises a status bar at the top for displaying the time, network connection status, battery status, and more shortcut entries. After the handle of the AR/VR terminal is used, that is, once an icon is selected with the handheld controller, the icon displays a text prompt and the selected icon is stretched and expanded left and right according to its position.
For example, after the search icon is selected, it displays the text "search" together with the original icon, and clicking the icon or the text again jumps to the search page. As further examples, clicking the favorites icon jumps to the favorites TAB, clicking the history icon displays the history page at the default location, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, interaction may be performed through a peripheral; for example, the handle of the AR/VR terminal can operate the terminal's user interface. The handle includes a return button; a home key, whose long press triggers the reset function; volume up and down buttons; and a touch area that supports clicking, sliding, press-and-hold of the focus, and dragging.
In some embodiments, the virtual reality device 500 obtains the picture content corresponding to an interface through a rendering scene. A rendering scene is a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program. For example, as shown in fig. 6, the virtual reality device 500 may construct a unity3D scene based on the unity3D rendering engine when presenting a display picture. In a unity3D scene, various virtual objects and functional controls may be added to render a particular usage scene. For example, when displaying the global UI, the virtual reality device 500 may add UI controls such as the recommended content area 1, business class extension area 2, application shortcut operation entry area 3, and floating element area 4, together with a background pattern corresponding to the UI, such as a skybox, to the rendering scene.
Similarly, for other interfaces, the virtual reality device 500 may add other controls to the rendered scene. For example, when playing a media asset, a display panel can be added to the unity3D scene for presenting the media picture. Virtual object models such as seats, sound equipment, and characters can be added at the same time, creating a cinema effect.
To output the rendered picture, the virtual reality device 500 may also place virtual cameras in the unity3D scene. For example, it may set a left-eye camera and a right-eye camera according to the positional relationship of the user's two eyes; the two virtual cameras capture the objects in the unity3D scene simultaneously, outputting rendered pictures to the left and right displays respectively. For a better immersive experience, the angles of the two virtual cameras may be adjusted in real time using the pose sensor of the virtual reality device 500, so that as the user wearing the device moves, rendered pictures of the unity3D scene at the corresponding viewing angles are output in real time.
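For illustration only, the following minimal Python sketch shows one way such a stereo camera pair could be positioned from a head pose and an interpupillary distance; it is not code from the patent, and the 0.064 m IPD and the coordinate conventions are assumptions.

    import numpy as np

    def stereo_eye_positions(head_pos, head_rot, ipd=0.064):
        """Place the left/right virtual cameras half an interpupillary
        distance (IPD) to each side of the head, along the head's local
        x-axis. `head_rot` is a 3x3 rotation matrix from the pose sensor;
        the 0.064 m default IPD is an illustrative assumption."""
        half_offset = head_rot @ np.array([ipd / 2.0, 0.0, 0.0])
        left_eye = np.asarray(head_pos, dtype=float) - half_offset
        right_eye = np.asarray(head_pos, dtype=float) + half_offset
        return left_eye, right_eye

    # Head at 1.6 m height, looking straight ahead (identity rotation):
    left, right = stereo_eye_positions([0.0, 1.6, 0.0], np.eye(3))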
The pictures output by the rendered scene can be sent to the display for presentation. In some embodiments, optical components are disposed in the virtual reality device 500 to increase the optical distance from the display to the user's eyes, so that the user can see the displayed content clearly. The optical assembly can be composed of several convex lenses; the light emitted by the display is refracted through the convex lenses, letting the user clearly see the displayed picture content and gain an immersive experience.
However, a convex lens is thick in the middle and thin at the edge, so the part of the display picture the user views near the lens edge is deformed, that is, distorted. Therefore, for some virtual reality devices 500, a display area can be set: the picture inside the display area is shown normally, and the picture outside it is hidden, to lessen the effect of distortion on the edge picture. For example, as shown in fig. 7, the virtual reality device 500 may set an initial display area as a rectangular region within the 3840 × 2160 display, based on the picture content obtained from the rendered scene; this area can be determined from statistics of viewing-experience tests, so it carries a certain empirical validity. According to the set shape of the display area, the picture outside it is hidden: on the display, the picture content of the middle region is visible while the edge region is shown as pure black.
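As a minimal sketch of this hiding step (an illustration, not the device's actual implementation), assuming an H × W × 3 pixel array and a rectangle given in pixels from the upper-left corner:

    import numpy as np

    def mask_outside_display_area(frame, region):
        """Blank everything outside the display area to pure black.
        `frame` is an H x W x 3 uint8 array; `region` is (x, y, w, h)
        in pixels, measured from the upper-left corner."""
        x, y, w, h = region
        masked = np.zeros_like(frame)          # edge region: pure black
        masked[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
        return masked

    # Keep a centered 3200 x 1800 window of a 3840 x 2160 frame:
    frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
    visible = mask_outside_display_area(frame, (320, 180, 3200, 1800))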
The system's default display area when the virtual reality device 500 outputs an image is called the initial display area. Displaying the image according to the initial display area meets the needs of most users, but because the distance between the two eyes differs among users, the facial features of some users do not suit viewing the picture in the initial display area, so the picture they see is unclear and the user experience suffers. Therefore, to adapt to the facial features of different users, some embodiments of the present application provide a virtual reality device and a VR picture display method. The virtual reality device 500 includes a display for presenting various user interfaces, and a controller for processing the usage parameters and interaction data of the device to control what is presented on the display. As shown in fig. 8, the controller may further be configured to perform the VR picture display method, which includes:
S1: receive a control instruction input by the user for setting the display area.
When using the virtual reality device 500, the user can input various control instructions as needed to interact with the device. When the user inputs a control instruction for setting the display area, the virtual reality device 500 can run the program related to setting the display area, according to a VR picture display program preset in the operating system.
The control instruction for setting the display area may be input in different ways depending on the hardware configuration and operating system of the virtual reality device 500. For example, when the user uses the virtual reality device 500 for the first time, or for the first time after restoring factory settings, the device may by default treat this as the user inputting a control instruction for setting the display area.
For a virtual reality device 500 in normal use, the display-area-setting program can be launched through a specific entry. For example, an adjust-display-area control may be placed in the status bar of the settings interface or another UI interface; by clicking this control during use, the user issues the control instruction for setting the display area, that is, the instruction is input interactively through the UI.
The user can also input the control instruction through a specific shortcut-key interaction. The virtual reality device 500 may define a shortcut-key policy according to the keys it carries. For example, clicking the power key of the virtual reality device 500 three times in succession triggers the display-area-setting program; that is, the control instruction for setting the display area is the shortcut command of clicking the power key three times in a row.
For some virtual reality devices 500, the user may also deliver the control instruction through other interactive devices or systems. For example, an intelligent voice system may be built into the virtual reality device 500, and the user may input voice information, such as "set display area" or "I can't see the picture clearly", through an audio input device such as a microphone. The intelligent voice system identifies the meaning of the user's voice information by converting, analyzing, and processing it, and generates the control instruction according to the recognition result.
S2: and responding to the control instruction, and extracting the custom area specified in the control instruction.
When the user inputs the control instruction, the user-defined area can be specified in the control instruction. The virtual reality device 500 may extract a user-designated custom region from a control instruction input by a user after receiving the control instruction for setting the display region. For example, when the user inputs a control instruction for setting the display area, an input interface may be displayed in the display, and the user may click on four vertices of the display area to be set in the input interface, thereby generating the custom area from the four vertex coordinates.
The user may also specify the custom region in other ways, for example, the display may include a text box or a numerical scroll bar in the input interface, and the user may input the binocular distance (interpupillary distance) through the text box, so that the virtual reality apparatus 500 is self-matched to a custom region according to the numerical value input by the user. Or the user directly inputs the width and the height of the custom area, and the custom area is determined by the input width and the input height.
In some embodiments, a plurality of options may be preset in the virtual reality device 500, each option corresponds to one custom region, and when a user inputs a control instruction for setting a display region, an appropriate custom region may be selected from the set options. For example, when a user inputs a control command for setting a display area, an area selection interface may be displayed, and the area selection interface may include a plurality of options set according to conditions such as age, sex, and region, and each option corresponds to a custom area under the condition.
S3: and according to the self-defined area, executing scaling processing on the picture to be displayed so as to obtain a self-defined area image.
According to the user-defined region designated by the user when inputting the control instruction, the virtual reality device 500 may perform scaling processing on the to-be-displayed picture to obtain a user-defined region image. When the range of the user-defined area designated by the user is larger than the range of the initial display area, the image to be displayed can be amplified; when the user-defined area range specified by the user is smaller than the initial display area range, the reduction processing can be carried out on the picture to be displayed.
However, since the area defined by the user usually has no fixed aspect ratio, when the image to be displayed is zoomed, the width and the height of the area defined by the user can be compared with the width and the height of the initial display area respectively to determine the zoom ratio, so that the image to be displayed is zoomed in an equal ratio according to the determined zoom ratio.
S4: and controlling the display to display the self-defined area image.
After performing the scaling process on the screen to be displayed, the virtual reality device 500 may send the processed screen image to the display for display. Because the image picture received in the display is subjected to zooming processing, the picture finally presented in the display can be displayed according to the user-defined area, namely the displayed picture conforms to the facial features of the current user, so that the user can watch a clear picture and the uncomfortable viewing experience in the viewing process is relieved.
As shown in fig. 9, in some embodiments, to obtain the custom area in the step of receiving the control instruction input by the user for setting the display area, the controller is further configured to:
S110: extract the vertex coordinates of the display area input by the user;
S120: calculate a width value from the abscissa values in the vertex coordinates, and calculate a height value from the ordinate values;
S130: generate the custom area according to the width value and the height value.
After receiving the control instruction, the virtual reality device 500 may control the display to show, in sequence, an input interface prompting for each vertex coordinate, and record the vertex coordinates the user enters in that interface. The vertex coordinates comprise the upper-left, upper-right, lower-left, and lower-right corner coordinates.
For example, after the device is used for the first time (including after a factory reset) or the user clicks the adjust-display-area icon, the virtual reality device 500 may start the application for adjusting the display area. Once started, the application displays a grid map as shown in fig. 10 so the user can select a comfortable area. To guide the user through entering the vertex coordinates, the virtual reality device 500 may first prompt the user to click a comfortable point in the upper-left corner and record its coordinates (X1, Y1); then prompt for a comfortable point in the upper-right corner, recording (X2, Y2); then a comfortable point in the lower-left corner, recording (X3, Y3); and finally a comfortable point in the lower-right corner, recording (X4, Y4). With these four prompts the user completes the input, and the coordinates of a suitable area are obtained from the user input, as shown in fig. 11.
After the user inputs the vertex coordinates, the width and height of the custom area can be calculated from them, that is, the width value from the abscissa values and the height value from the ordinate values. For example, the width value W1 = X2 - X1 is calculated from the abscissas of the upper-left and upper-right vertices, and the height value H1 = Y3 - Y1 from the ordinates of the upper-left and lower-left vertices.
Finally, the custom area is generated from the calculated width and height values; that is, the custom area may be a rectangular region of width W1 and height H1. Clearly, the width and height can also be calculated from other vertex coordinates: for example, the width value W2 = X4 - X3 from the abscissas of the lower-left and lower-right vertices, and the height value H2 = Y4 - Y2 from the ordinates of the upper-right and lower-right vertices, generating the custom area from W2 and H2.
Similarly, other combinations of the width and height values may serve as the values finally used to generate the custom area: for example, W1 with H2, or W2 with H1, may be used as the width and height of the custom area. Which combination is used may be decided by the input manner of the vertex coordinates, or by the input result.
Because the user enters four vertex coordinates in the input interface, the top and bottom widths and the left and right heights can each be calculated when determining the custom width and height. In some embodiments, the difference of the abscissas of the upper-left and upper-right coordinates gives the first width value W1, and the difference of the abscissas of the lower-left and lower-right coordinates gives the second width value W2. Similarly, the difference of the ordinates of the upper-left and lower-left coordinates gives the first height value H1, and the difference of the ordinates of the upper-right and lower-right coordinates gives the second height value H2. The average width value is then calculated from the first and second width values as the width of the custom area, W = (W1 + W2)/2, and the average height value as its height, H = (H1 + H2)/2.
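For illustration, a minimal Python sketch of this averaging follows (an illustration under the stated conventions, not code from the patent), assuming pixel coordinates with the origin at the upper-left corner:

    def region_size(tl, tr, bl, br):
        """Average the top/bottom widths and the left/right heights of the
        four clicked vertices. Each argument is an (x, y) pair: upper-left,
        upper-right, lower-left, lower-right."""
        w1 = tr[0] - tl[0]    # top width:    W1 = X2 - X1
        w2 = br[0] - bl[0]    # bottom width: W2 = X4 - X3
        h1 = bl[1] - tl[1]    # left height:  H1 = Y3 - Y1
        h2 = br[1] - tr[1]    # right height: H2 = Y4 - Y2
        return (w1 + w2) / 2, (h1 + h2) / 2

    # Four clicked points (illustrative values only):
    width, height = region_size((100, 80), (3700, 90), (110, 2050), (3710, 2060))
    # width  = ((3700-100) + (3710-110)) / 2 = 3600.0
    # height = ((2050-80) + (2060-90)) / 2  = 1970.0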
Because the user inputs the vertex coordinates by clicking in the input interface, the input is constrained by the user's operation, and the entered coordinates may be too irregular for a proper custom area to be identified. For this reason, after the user inputs the vertex coordinates, their positions can be checked. That is, as shown in fig. 12, in some embodiments, the step of generating the custom area according to the width value and the height value further includes:
S131: calculate an input error value from the vertex coordinates;
S132: if the input error value exceeds the set error value range, control the display to show an input interface prompting the user to re-enter the vertex coordinates;
S133: if the input error value does not exceed the set error value range, calculate the average width value and average height value from the vertex coordinates to generate the custom area.
The input error value includes a width error value calculated from the abscissas and a height error value calculated from the ordinates. The virtual reality device 500 may generate the width error value by calculating the difference between W1 and W2. When the width error value exceeds the set error range, the area selected by the user is unsuitable and the setting is invalid, so the user can be prompted to reselect the displayed comfort area; when the width error value is within the set error range, the average width value of the selected area, W = (W1 + W2)/2, is calculated from W1 and W2 as the width of the custom area.
Similarly, the virtual reality device 500 may generate a height error value from the first and second height values, that is, calculate the error between H1 and H2. When the height error value exceeds the set error range, the area selected by the user is unsuitable and the setting is invalid, so the user can be prompted to reselect the displayed comfort area; when it is within the set error range, the average height value, H = (H1 + H2)/2, is calculated from H1 and H2 as the height of the custom area.
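A sketch of this validation step follows; the 50-pixel tolerance is an assumption, since the patent does not specify the error value range:

    def validate_region(tl, tr, bl, br, tolerance=50):
        """Reject the input when the top/bottom widths or the left/right
        heights disagree by more than `tolerance` pixels; otherwise return
        the averaged (width, height) of the custom area."""
        w1, w2 = tr[0] - tl[0], br[0] - bl[0]
        h1, h2 = bl[1] - tl[1], br[1] - tr[1]
        if abs(w1 - w2) > tolerance or abs(h1 - h2) > tolerance:
            return None    # outside the error range: prompt the user to re-enter
        return (w1 + w2) / 2, (h1 + h2) / 2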
As can be seen, in this embodiment the virtual reality device 500 can customize the display area by displaying the input interface and prompting the user to enter, in sequence, the points the user finds comfortable. By checking the vertex coordinates the user inputs, it determines the width and height of the custom area, helping the user enter a custom area that conforms to the display area specification.
After obtaining the width and height of the custom area, the virtual reality device 500 can determine the size of the custom area from the calculated values, but the position of the custom area still needs to be set to finally determine the display area. Therefore, in some embodiments, the step of generating the custom area according to the width value and the height value further includes: calculating the center point coordinates from the vertex coordinates; and generating the custom area with the center point coordinates as the reference.
The center point coordinates can be obtained by calculating from the abscissa and ordinate values of the vertex coordinates separately. For example, for the center point (X', Y'), the abscissa is X1' = (X1 + X2)/2 or X2' = (X3 + X4)/2, and the ordinate is Y1' = (Y1 + Y3)/2 or Y2' = (Y2 + Y4)/2. The average of the two computations can then be taken as the center point coordinates, that is, X' = (X1' + X2')/2 and Y' = (Y1' + Y2')/2.
After the center point coordinates are calculated, the virtual reality device 500 may generate the custom area with them as the reference, that is, with the center of the custom area located at the center point coordinates, so that the custom area retains more of the main content of the original video frame.
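A corresponding sketch of the center-point computation (illustrative only, same coordinate conventions as above):

    def region_center(tl, tr, bl, br):
        """Average the two candidate abscissas and ordinates to locate the
        center of the custom area."""
        x1 = (tl[0] + tr[0]) / 2    # X1' = (X1 + X2) / 2
        x2 = (bl[0] + br[0]) / 2    # X2' = (X3 + X4) / 2
        y1 = (tl[1] + bl[1]) / 2    # Y1' = (Y1 + Y3) / 2
        y2 = (tr[1] + br[1]) / 2    # Y2' = (Y2 + Y4) / 2
        return (x1 + x2) / 2, (y1 + y2) / 2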
After the custom area is extracted, the virtual reality device 500 performs scaling on the picture to be displayed to obtain the custom-area image. To this end, as shown in fig. 13, in some embodiments the step of performing scaling on the picture to be displayed further includes:
S310: acquire the initial display area;
S320: compare the initial display area with the custom area to generate the scaling ratio;
S330: perform scaling on the picture to be displayed according to the scaling ratio.
After extracting the custom area, the virtual reality device 500 may extract the initial display area of the picture to be displayed, that is, the system's default display area, and compare it with the custom area, generating the scaling ratio from the comparison result. The scaling ratio can be generated by comparing parameters such as the width, height, and area of the initial display area and the custom area, and is used to perform scaling on the picture to be displayed.
For example, the ratio of the custom area width to the initial display area width can be calculated to generate a first ratio value; that is, the ratio of the user-defined width W to the Width set by the VR application gives the first ratio value, Ratio1 = W/Width. Then the ratio of the custom area height to the initial display area height is calculated to generate a second ratio value; that is, the ratio of the user-defined height H to the Height set by the VR application gives the second ratio value, Ratio2 = H/Height.
To keep the aspect ratio of the content displayed by the VR application unchanged, the width and height must use the same scaling ratio, so the first and second ratio values are compared to choose which one performs the scaling. To ensure that all content is displayed within the comfort area set by the user, the smaller of Ratio1 and Ratio2 is taken as the scaling ratio. That is, if the first ratio value Ratio1 is greater than or equal to the second ratio value Ratio2, the second ratio value Ratio2 is determined to be the scaling ratio; if Ratio1 is less than Ratio2, Ratio1 is determined to be the scaling ratio.
Finally, scaling is performed on the picture to be displayed according to the scaling ratio. For example, the virtual reality device 500 may obtain the actual display width value by multiplying the initial display area Width by the scaling Ratio, that is, the width value actually displayed by the VR application, Wshow = Width × Ratio; and obtain the actual display height value by multiplying the initial display area Height by the scaling Ratio, that is, the actually displayed height value Hshow = Height × Ratio. A pixel interpolation or merging algorithm is then applied to the image to be displayed according to the actual display width and height values to generate the custom-area image.
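As a minimal sketch of this aspect-ratio-preserving step (the numeric values are illustrative, not from the patent):

    def scaled_display_size(init_w, init_h, custom_w, custom_h):
        """Choose the smaller of the width and height ratios so the scaled
        frame keeps its aspect ratio and still fits inside the custom area,
        then apply it to the initial display size."""
        ratio1 = custom_w / init_w    # Ratio1 = W / Width
        ratio2 = custom_h / init_h    # Ratio2 = H / Height
        ratio = min(ratio1, ratio2)   # keep everything inside the comfort area
        return round(init_w * ratio), round(init_h * ratio), ratio

    # A 3840 x 2160 source scaled into a 3600 x 1970 custom area:
    w_show, h_show, ratio = scaled_display_size(3840, 2160, 3600, 1970)
    # ratio = min(0.9375, 0.9120...) = 0.9120...; w_show = 3502, h_show = 1970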
As can be seen from the above embodiments, the virtual reality device 500 supports manually setting the display area according to the user's viewing comfort, achieving the best viewing effect. By comparing the initial display area with the custom area, it generates a scaling ratio, and performing scaling with that ratio keeps the aspect ratio of the VR application's displayed content unchanged.
Based on the virtual reality device 500, some embodiments of the present application further provide a VR picture display method applied to the virtual reality device 500, comprising:
S1: receiving a control instruction input by the user for setting the display area;
S2: in response to the control instruction, extracting the custom area specified in the instruction;
S3: scaling the picture to be displayed according to the custom area to obtain the custom-area image;
S4: controlling the display to display the custom-area image.
As can be seen, the VR picture display method provided by this embodiment can receive control instructions input by the user in real time. After the user inputs a control instruction for setting the display area, the custom area specified by the user is extracted from the instruction, and the picture to be displayed is scaled according to that area, so that the final picture is presented on the display within the custom area. The method lets users set the display area themselves, so that the displayed picture matches the viewing habits of different users, appears clearer to the user, and improves the user experience.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

Claims (10)

1. A virtual reality device, comprising:
a display;
a controller configured to:
receiving a control instruction input by the user for setting a display area;
in response to the control instruction, extracting the custom area specified in the instruction;
scaling the picture to be displayed according to the custom area to obtain a custom-area image;
and controlling the display to display the custom-area image.
2. The virtual reality device of claim 1, wherein the control instruction for setting the display area is input upon first startup or upon the user clicking an adjust-display-area control; in the step of receiving the control instruction input by the user for setting the display area, the controller is further configured to:
extracting the vertex coordinates of the display area input by the user;
calculating a width value from the abscissa values in the vertex coordinates, and a height value from the ordinate values;
and generating the custom area according to the width value and the height value.
3. The virtual reality device of claim 2, wherein in the step of generating a custom region according to the width value and the height value, the controller is further configured to:
calculating the center point coordinates from the vertex coordinates;
and generating the custom area with the center point coordinates as the reference.
4. The virtual reality device of claim 2, wherein in the step of extracting the coordinates of the vertices of the display area input by the user, the controller is further configured to:
after receiving the control instruction, controlling the display to sequentially display an input interface prompting for each vertex coordinate;
and recording the vertex coordinates input by the user in the input interface, wherein the vertex coordinates comprise the upper-left, upper-right, lower-left, and lower-right corner coordinates.
5. The virtual reality device of claim 4, wherein in the step of generating a custom region according to the width value and the height value, the controller is further configured to:
calculating an input error value from the vertex coordinates, wherein the input error value comprises a width error value calculated from the abscissas and a height error value calculated from the ordinates;
if the input error value exceeds the set error value range, controlling the display to display an input interface prompting the user to re-enter the vertex coordinates;
and if the input error value does not exceed the set error value range, calculating the average width value and average height value from the vertex coordinates to generate the custom area.
6. The virtual reality device of claim 5, wherein in the step of calculating an input error value from the vertex coordinates, the controller is further configured to:
calculate the difference between the abscissa of the upper left corner coordinates and the abscissa of the upper right corner coordinates to obtain a first width value, and calculate the difference between the abscissa of the lower left corner coordinates and the abscissa of the lower right corner coordinates to obtain a second width value;
generate a width error value from the first width value and the second width value, wherein the width error value is equal to the difference between the first width value and the second width value;
calculate the difference between the ordinate of the upper left corner coordinates and the ordinate of the lower left corner coordinates to obtain a first height value, and calculate the difference between the ordinate of the upper right corner coordinates and the ordinate of the lower right corner coordinates to obtain a second height value;
and generate a height error value from the first height value and the second height value, wherein the height error value is equal to the difference between the first height value and the second height value.
7. The virtual reality device of claim 1, wherein in the step of scaling the picture to be displayed, the controller is further configured to:
acquire an initial display area;
compare the initial display area with the custom area to generate a scaling ratio;
and scale the picture to be displayed according to the scaling ratio.
8. The virtual reality device of claim 7, wherein in the step of comparing the initial display area with the custom area to generate a scaling ratio, the controller is further configured to:
calculate the ratio of the width of the initial display area to the width of the custom area to generate a first proportional value;
calculate the ratio of the height of the initial display area to the height of the custom area to generate a second proportional value;
compare the first proportional value with the second proportional value;
determine the second proportional value as the scaling ratio if the first proportional value is greater than or equal to the second proportional value;
and determine the first proportional value as the scaling ratio if the first proportional value is less than the second proportional value.
9. The virtual reality device of claim 7, wherein in the step of scaling the picture to be displayed according to the scaling ratio, the controller is further configured to:
calculate an actual display width value, wherein the actual display width value is the product of the width of the initial display area and the scaling ratio;
calculate an actual display height value, wherein the actual display height value is the product of the height of the initial display area and the scaling ratio;
and perform a pixel interpolation or pixel merging algorithm on the picture to be displayed according to the actual display width value and the actual display height value to generate the custom area image.
10. A VR picture display method applied to a virtual reality device, wherein the virtual reality device comprises a display and a controller, and the VR picture display method comprises the following steps:
receiving a control instruction input by a user for setting a display area;
extracting, in response to the control instruction, a custom area specified in the control instruction;
scaling a picture to be displayed according to the custom area to obtain a custom area image;
and controlling the display to display the custom area image.
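For reference, the following is a minimal sketch, in Python, of the computations recited in claims 2 to 9. It is not part of the patent disclosure: the function and variable names are hypothetical, the tolerance max_error is an assumed value, and the proportional values are taken as custom-to-initial ratios so that the products recited in claim 9 land on the custom area (the translated text of claim 8 states the inverse ratio).

```python
# Hypothetical sketch of the claimed display-area computations.
# Vertices are (x, y) screen coordinates with y increasing downward.

def custom_area(top_left, top_right, bottom_left, bottom_right, max_error=10):
    """Derive a custom area from four user-entered vertices (claims 2-6)."""
    # First and second width values from the top and bottom edges (claim 6).
    width1 = top_right[0] - top_left[0]
    width2 = bottom_right[0] - bottom_left[0]
    # First and second height values from the left and right edges (claim 6).
    height1 = bottom_left[1] - top_left[1]
    height2 = bottom_right[1] - top_right[1]

    # Width and height error values (claim 6); when either exceeds the set
    # range, the device re-prompts the user for the vertices (claim 5).
    if abs(width1 - width2) > max_error or abs(height1 - height2) > max_error:
        raise ValueError("inconsistent vertex coordinates: input them again")

    # Average width/height and the center point define the area (claims 3, 5).
    width = (width1 + width2) / 2
    height = (height1 + height2) / 2
    center_x = (top_left[0] + top_right[0] + bottom_left[0] + bottom_right[0]) / 4
    center_y = (top_left[1] + top_right[1] + bottom_left[1] + bottom_right[1]) / 4
    return center_x, center_y, width, height

def scaling_ratio(initial_w, initial_h, area_w, area_h):
    """Compare the two proportional values and keep one ratio (claims 7-8)."""
    first = area_w / initial_w    # first proportional value (widths)
    second = area_h / initial_h   # second proportional value (heights)
    # Claim 8 keeps the second value when the first is greater than or equal
    # to it, i.e. the minimum, which preserves the picture's aspect ratio.
    return second if first >= second else first

def actual_display_size(initial_w, initial_h, ratio):
    """Actual display width and height as products with the ratio (claim 9)."""
    # The picture is then resampled to this size, by pixel interpolation
    # when enlarging or by pixel merging when shrinking.
    return round(initial_w * ratio), round(initial_h * ratio)
```

For vertex coordinates (100, 50), (900, 60), (110, 550) and (910, 555), the sketch yields a custom area of about 800 x 497.5 pixels; against a 1920 x 1080 initial display area the proportional values are about 0.417 and 0.461, so the width ratio is kept and the actual display size comes out as 800 x 450, which fits inside the custom area.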

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110184636.6A 2021-02-08 2021-02-08 Virtual reality equipment and VR (virtual reality) picture display method

Publications (1)

Publication Number Publication Date
CN114327032A 2022-04-12

Family

ID=81044409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110184636.6A Pending CN114327032A (en) 2021-02-08 2021-02-08 Virtual reality equipment and VR (virtual reality) picture display method

Country Status (1)

Country Link
CN (1) CN114327032A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897004A * 2017-02-27 2017-06-27 Nubia Technology Co., Ltd. Mobile terminal and display interface adjustment method
CN106997234A * 2016-01-22 2017-08-01 Alibaba Group Holding Ltd. Virtual reality mode processing method, device and system
CN109408011A * 2018-09-14 2019-03-01 Goertek Technology Co., Ltd. Display method, apparatus and device for a head-mounted display device
CN110456907A * 2019-07-24 2019-11-15 Guangdong Virtual Reality Technology Co., Ltd. Virtual screen control method, apparatus, terminal device and storage medium
US20190385371A1 * 2018-06-19 2019-12-19 Google Llc Interaction system for augmented reality objects
US20200068190A1 * 2018-08-21 2020-02-27 The Boeing Company System and method for foveated simulation
CN111103975A * 2019-11-30 2020-05-05 Huawei Technologies Co., Ltd. Display method, electronic device and system
CN111683281A * 2020-06-04 2020-09-18 Tencent Technology (Shenzhen) Co., Ltd. Video playing method and apparatus, electronic device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination