CN111158483B - Display method and electronic equipment - Google Patents

Display method and electronic equipment

Info

Publication number
CN111158483B
CN111158483B (application number CN201911396591.8A)
Authority
CN
China
Prior art keywords
input data
display content
user
condition
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911396591.8A
Other languages
Chinese (zh)
Other versions
CN111158483A (en
Inventor
李海岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911396591.8A priority Critical patent/CN111158483B/en
Publication of CN111158483A publication Critical patent/CN111158483A/en
Application granted granted Critical
Publication of CN111158483B publication Critical patent/CN111158483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a display method, which comprises the following steps: outputting first display content corresponding to a first direction of the virtual space; obtaining first input data and second input data; if the first input data does not meet a first condition and the second input data meets a second condition, obtaining and outputting second display content corresponding to a second direction of the virtual space; and if the first input data meets the first condition and the second input data meets the second condition, obtaining and outputting the first display content. The embodiment of the application also discloses the electronic equipment.

Description

Display method and electronic equipment
Technical Field
The present application relates to the field of electronics and information technologies, and in particular, to a display method and an electronic device.
Background
With the widespread adoption of electronic devices, these devices support more and more applications and offer increasingly powerful functions. In the related art, a user may change the display content of an electronic device in different display areas by operating its hardware, for example, by rotating a knob on the device or by pressing a physical key to move the display screen left, right, up, or down, thereby changing the display content.
Disclosure of Invention
The embodiments of the present application aim to provide a display method and an electronic device.
The technical solution of the present application is implemented as follows:
a display method, comprising:
outputting first display content corresponding to a first direction of the virtual space;
obtaining first input data and second input data;
if the first input data does not meet a first condition and the second input data meets a second condition, obtaining and outputting second display content corresponding to a second direction of the virtual space;
and if the first input data meets the first condition and the second input data meets the second condition, outputting the first display content.
Optionally, the display method is applied to an electronic device, wherein,
the first input data is data representing user behaviors obtained through an image acquisition device;
the second input data is data characterizing the motion of the electronic device obtained by a spatial acquisition device.
Optionally, the first input data includes behavior data of a body part of the user perceiving the first display content, and the second input data includes orientation data of the electronic device.
Optionally, if the first input data does not satisfy the first condition and the second input data satisfies the second condition, obtaining and outputting second display content corresponding to the second direction of the virtual space, including: if the first input data is not matched with the first target data and the second input data is matched with the second target data, obtaining and outputting the second display content;
accordingly, if the first input data satisfies the first condition and the second input data satisfies the second condition, obtaining and outputting the first display content includes: and if the first input data is matched with the first target data and the second input data is matched with the second target data, obtaining and outputting the first display content.
Optionally, the first input data not matching the first target data indicates that the user can perceive the display content;
the first input data matching the first target data indicates that the user does not perceive the display content.
Optionally, the first input data not matching the first target data further indicates that the user performs an eye-closing action whose duration is less than a first threshold;
the first input data matching the first target data further indicates that the user performs an eye-closing action whose duration is greater than or equal to the first threshold.
Optionally, the obtaining and outputting the first display content if the first input data matches the first target data and the second input data matches the second target data includes:
obtaining the first display content displayed at a first moment;
and if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content displayed at the first moment.
Optionally, the obtaining and outputting the first display content if the first input data matches the first target data and the second input data matches the second target data includes:
obtaining the first display content displayed at a first moment;
if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content at a second moment; the second time is after the first time and is separated from the first time by a first time length, and the first time length corresponds to the time length for obtaining the first input data.
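The timing relation above, in which the first display content is output at a second moment offset from the first moment by the duration over which the first input data was obtained, can be sketched as follows. This is an illustrative aside, not part of the patent; the function name and the use of seconds are assumptions.

```python
def second_output_moment(first_moment_s: float, first_input_duration_s: float) -> float:
    """Return the second moment t2 = t1 + the first time length.

    The first time length corresponds to the duration over which the first
    input data (e.g. an eye-closing action) was obtained, so the paused
    first display content resumes after exactly that offset.
    """
    return first_moment_s + first_input_duration_s
```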
An electronic device, comprising:
the display device is used for outputting first display content corresponding to a first direction of the virtual space;
the acquisition device is used for acquiring first input data and second input data;
processing means, configured to, if the first input data does not satisfy a first condition and the second input data satisfies a second condition, obtain second display content corresponding to a second direction of the virtual space, and enable the display to output the second display content;
the processing device is further configured to obtain the first display content and enable the display to output the first display content if the first input data meets the first condition and the second input data meets the second condition.
An electronic device, comprising:
the output module is used for outputting first display content corresponding to a first direction of the virtual space;
the obtaining module is used for obtaining first input data and second input data;
the output module is further configured to obtain and output second display content corresponding to a second direction of the virtual space if the first input data does not satisfy a first condition and the second input data satisfies a second condition;
the output module is further configured to output the first display content if the first input data satisfies the first condition and the second input data satisfies the second condition.
According to the display method and the electronic device, first display content corresponding to a first direction of the virtual space is output; first input data and second input data are obtained; if the first input data does not meet the first condition and the second input data meets the second condition, second display content corresponding to a second direction of the virtual space is obtained and output; and if the first input data meets the first condition and the second input data meets the second condition, the first display content is output. In this way, display content for different directions of the virtual space is output based on whether the first input data meets the first condition and whether the second input data meets the second condition, so the electronic device can automatically adjust the display content for different directions based on the obtained first and second input data without manual adjustment by the user, avoiding the inconvenience of adjusting the display content through hardware.
Drawings
FIG. 1 is a schematic diagram of a head orientation and display provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a display method according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of another head orientation and display provided by embodiments of the present application;
fig. 4 is a schematic flowchart of another display method provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of another display method provided in the embodiment of the present application;
fig. 6 is a schematic flowchart of a display method according to another embodiment of the present application;
FIG. 7 is a schematic view of yet another head orientation and display provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" or "an embodiment described previously" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in the embodiments of the present application" or "in the embodiments" in various places throughout this specification are not necessarily all referring to the same embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Unless otherwise specified, any step in the embodiments of the present application may be executed by the electronic device, specifically by its processor. The embodiments of the present application do not limit the order in which the electronic device executes the steps. In addition, the data may be processed in the same way or in different ways in different embodiments. It should be further noted that any step in the embodiments of the present application may be executed by the electronic device independently; that is, the electronic device may execute any step in the following embodiments without depending on the execution of other steps.
When a user uses a virtual reality (VR) device and wants to see the display content in a different direction of the virtual space, that is, to change the viewing-angle direction displayed by the VR device, the user needs to turn the head toward the new direction. As shown in fig. 1, fig. 1 is a schematic diagram of head orientation and display content provided by an embodiment of the present application. In fig. 1 (a), when the user wears the electronic device and faces directly forward, the car and the house are visible within the user's viewing angle. If the user then wants to see what lies to the right of the house, the user can turn the head to the right as shown in fig. 1 (b); with the head oriented to the right, the house and the tree are visible within the user's viewing angle. In the embodiments of the present application, the user's body may or may not rotate together with the head.
However, if the user keeps the head turned to the right for a long time, operating the electronic device may become inconvenient. The user can manually adjust the display content for different directions by rotating a knob on the electronic device, but this hardware-based adjustment is not only inconvenient for the user and prone to damaging the hardware, it can also cause a mismatch between the display content and the user's body motion, resulting in vertigo.
For the above reasons, an embodiment of the present application provides a display method applied to an electronic device, as shown in fig. 2, the method includes the following steps:
s101, outputting first display content corresponding to a first direction of the virtual space.
The electronic device in the embodiments of the present application may be a virtual reality (VR) device, such as VR glasses, a VR helmet, or another VR wearable device. In another embodiment, the electronic device may not be a VR device, as long as its display content can be changed based on the user's head rotation and/or change of gaze direction, which is not limited herein.
The electronic device may include a display. The display may be a head-mounted display, that is, a three-dimensional virtual reality (3D VR) graphics display and viewing device used in virtual reality applications, which can be connected to the processor to receive 3D VR graphics signals from it. The display of the electronic device outputs the first display content corresponding to the first direction of the virtual space.
Optionally, the electronic device may include a first acquisition device. The first acquisition device may be an image acquisition device, which may include a camera, a gaze tracker, or the like. The first acquisition device acquires an image of the user's eyes and sends it to the processor, which can determine the user's gaze direction from the image. For example, if the electronic device determines that the user's gaze direction is a first gaze direction, it outputs the first display content corresponding to the first direction of the virtual space, where the first direction matches the first gaze direction. Different directions of the virtual space correspond to different gaze directions of the user.
Optionally, the electronic device may further include a second acquisition device. The second acquisition device may be a spatial acquisition device, which may include an orientation sensor, an angle sensor, a rotation sensor, a gyroscope, a three-axis acceleration sensor, or the like, and is used to acquire the user's head orientation or head position. For example, when the electronic device determines that the user's head orientation is a first head orientation, it outputs the first display content corresponding to the first direction of the virtual space, where the first direction matches the first head orientation. Different directions of the virtual space correspond to different head orientations of the user.
Alternatively, the electronic device may jointly determine the first display content to output, corresponding to the first direction of the virtual space, based on both the user's head orientation and gaze direction, the first direction matching both.
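As a minimal sketch of how head orientation and gaze direction might jointly determine a virtual-space direction, assuming an additive yaw model in which gaze is measured relative to the head (the function and the model are illustrative assumptions, not specified in the patent):

```python
def virtual_space_yaw(head_yaw_deg: float, gaze_yaw_deg: float) -> float:
    """Combine the head yaw (from the spatial acquisition device) with the
    gaze yaw relative to the head (from the image acquisition device) into
    a single virtual-space viewing direction in [0, 360) degrees."""
    return (head_yaw_deg + gaze_yaw_deg) % 360.0
```

Under this sketch, distinct combined yaws select distinct directions of the virtual space, consistent with the mapping described above.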
The first display content may be an image or a video, and the display content may characterize a display scene. It should be noted that if the first display content is a video, it is not a fixed piece of content but the content corresponding to the first direction of the virtual space, and the content in the first direction may change continuously over time.
S102, obtaining first input data and second input data.
A first acquisition device of the electronic equipment acquires first input data and transmits the first input data to the processor. And a second acquisition device of the electronic equipment acquires second input data and transmits the second input data to the processor.
The first input data may be data characterizing a user's behavior and the second input data may be data characterizing a movement of the electronic device. Alternatively, the data representing the behavior of the user may be behavior data of the user, such as eye movement data, nose movement data or other facial movement data, which can be acquired by the first acquisition device. The processor may derive a motion profile of the electronic device, such as motion information, rotation information, or tilt information of the electronic device, based on the data characterizing the motion of the electronic device.
S103, if the first input data do not meet the first condition and the second input data meet the second condition, second display content corresponding to the second direction of the virtual space is obtained and output.
After receiving the first input data and the second input data, the processor may determine whether the first input data satisfies a first condition, and determine whether the second input data satisfies a second condition. The first input data does not satisfy the first condition, and may include: the first input data does not match the first target data. The second input data satisfies a second condition, which may include: the second input data matches the second target data. The first target data may represent data of a target behavior of the user, and the second target data may represent data of a target motion of the electronic device.
If the electronic device determines that the first input data does not satisfy the first condition and the second input data satisfies the second condition, it may determine that the user's gaze direction has changed, for example to a second gaze direction. The electronic device then obtains the second direction of the virtual space corresponding to the second gaze direction, obtains the second display content corresponding to that direction, and outputs it through the display.
For example, as shown in fig. 1 (a), the first display content in the first direction of the virtual space, corresponding to the user's head facing directly forward, may be the car and the house; as shown in fig. 1 (b), the second display content in the second direction of the virtual space, corresponding to the user's head facing right, may be the house and the tree.
It should be noted that fig. 1 is only for describing the content viewed from the user's perspective, and does not set any limit to the first display content and the second display content. The second display content in the embodiment of the present application is not a determined or fixed content, but corresponds to the display content in the second direction of the virtual space, and the display content in the second direction may be changed continuously with time.
And S104, if the first input data meet the first condition and the second input data meet the second condition, outputting the first display content.
The first input data satisfies a first condition, which may include: the first input data matches the first target data.
For example, as shown in fig. 3, fig. 3 is a schematic view of another head orientation and display content provided by an embodiment of the present application. In fig. 3 (a), the first display content in the first direction of the virtual space, corresponding to the user's head facing directly forward, may be the car and the house; as shown in fig. 3 (b), even when the user's head faces right, the electronic device still outputs the first display content including the car and the house.
If the second input data does not satisfy the second condition, the electronic device obtains and outputs second display content corresponding to a second direction of the virtual space regardless of whether the first input data satisfies the first condition.
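The branching of steps S103 and S104, together with the case just described, can be sketched as follows. This is a hypothetical illustration; the function name and boolean inputs are assumptions, with "condition met" meaning the input data matches its target data as defined above.

```python
def select_display_content(first_condition_met: bool,
                           second_condition_met: bool,
                           first_content: str,
                           second_content: str) -> str:
    """Choose which content to output.

    first_condition_met:  the first input data matches the first target data
                          (the user does not perceive the display).
    second_condition_met: the second input data matches the second target data
                          (the electronic device has moved, e.g. a head turn).
    """
    if first_condition_met and second_condition_met:
        # S104: eyes closed while the head turned, keep the first content.
        return first_content
    # S103 (first condition unmet, second met) and the case where the second
    # condition is unmet: output the content for the second direction.
    return second_content
```

The key consequence is that the first display content is retained only when both conditions hold simultaneously.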
According to the display method provided by the embodiments of the present application, first display content corresponding to a first direction of the virtual space is output; first input data and second input data are obtained; if the first input data does not meet the first condition and the second input data meets the second condition, second display content corresponding to a second direction of the virtual space is obtained and output; and if the first input data meets the first condition and the second input data meets the second condition, the first display content is output. In this way, display content for different directions of the virtual space is output based on whether the first input data meets the first condition and whether the second input data meets the second condition, so the electronic device can automatically adjust the display content for different directions based on the obtained first and second input data without manual adjustment by the user, avoiding the inconvenience of adjusting the display content through hardware.
Based on the foregoing embodiments, an embodiment of the present application provides a display method, as shown in fig. 4, where fig. 4 is a schematic flow chart of another display method provided in the embodiment of the present application, and the method includes the following steps:
s201, the electronic device outputs first display content corresponding to a first direction of the virtual space.
S202, the electronic equipment obtains first input data and second input data.
The first input data is data representing user behavior obtained through an image acquisition device; the second input data is data characterizing the motion of the electronic device obtained through a spatial acquisition device.
The first input data includes behavior data of a body part of the user perceiving the first display content, and the second input data includes orientation data of the electronic device. A body part of the user may move while perceiving the first display content; for example, the eyes may perform related actions, such as rotating, closing, opening, or blinking, in which case the first input data may be the user's eye-action data. As another example, the user's facial expression may change as the first display content continues to be displayed, in which case the first input data may be facial-action data. The first input data may also be other data, which is not limited herein.
The type of the second input data obtained also varies with the sensor. For example, if the second acquisition device is an orientation sensor, the second input data is orientation data; if it is an angle sensor, the second input data is angle data; and if it is a three-axis acceleration sensor, the second input data is three-axis acceleration data. The present application does not limit the specific type of the second input data, as long as the orientation of the electronic device can be derived from it.
S203, if the first input data is not matched with the first target data and the second input data is matched with the second target data, the electronic equipment obtains and outputs second display content.
The first input data not matching the first target data indicates that the user can perceive the display content, meaning that the user can follow what is played on the screen within a given time span. The time span may be set according to actual conditions (for example, the sensitivity of the electronic device): the higher the sensitivity, the shorter the time span; the lower the sensitivity, the longer the time span.
The first input data not matching the first target data further indicates that the user performs an eye-closing action whose duration is less than a first threshold. In another embodiment, it may instead indicate that the user performs no eye-closing action at all. It should be appreciated that the user can perceive the display if the eyes remain continuously open over a time span. If the user blinks at least once within that span, the duration of a single blink is so short that the user can still perceive the display content; the first threshold is therefore very small and may equal the duration of one blink, for example in the range of 0.2 to 0.4 seconds (such as 0.2, 0.3, or 0.4 seconds).
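A minimal sketch of the blink-versus-deliberate-closure test implied by the first threshold follows. The 0.3 s value is one choice within the 0.2 to 0.4 s range given in the text; the function name is hypothetical.

```python
BLINK_THRESHOLD_S = 0.3  # the first threshold; the text gives a 0.2-0.4 s range

def user_perceives_display(eye_closed_duration_s: float) -> bool:
    """A closure shorter than the first threshold is treated as a blink,
    during which the user is still considered able to perceive the display.
    A duration of 0 means the eyes stayed open throughout the time span."""
    return eye_closed_duration_s < BLINK_THRESHOLD_S
```

Closures at or above the threshold correspond to the first input data matching the first target data, i.e. the user not perceiving the display.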
S204, if the first input data is matched with the first target data and the second input data is matched with the second target data, the electronic equipment outputs first display content.
The first input data matching the first target data indicates that the user does not perceive the display content, meaning that the user cannot follow what is played on the screen within a given time span.
The first input data matching the first target data further indicates that the user performs an eye-closing action whose duration is greater than or equal to the first threshold. It should be appreciated that if the user's eyes remain continuously closed over a time span, the user cannot perceive the display.
If the electronic device determines that the first input data matches the first target data and the second input data matches the second target data, it may determine the user's current gaze direction based on the first and second input data, match that gaze direction with the first direction of the virtual space, and output the first display content.
In one embodiment, S204 may be implemented as follows: if the electronic device determines, based on the first and second input data, that the user first closes the eyes, then turns the head, and then opens the eyes, it determines that the first input data matches the first target data and the second input data matches the second target data, and therefore displays the first display content. The eye-closing and eye-opening actions may be determined from the first input data, and the head-turning action from the second input data. Alternatively, if the electronic device determines that the user turns the head by less than a first angle, or that the angle between the user's current gaze direction and the initial gaze direction before the action is less than the first angle, the electronic device may obtain and output the second display content corresponding to the second direction of the virtual space. This prevents the mapping between the user's gaze direction and the virtual space direction from being changed when the user turns the head only slightly, making the electronic device better fit the user's habits.
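The close-eyes, turn-head, open-eyes sequence described in this embodiment can be sketched as a simple gesture check. The event encoding, function name, and default thresholds (0.3 s closure, 15-degree turn) are illustrative assumptions; the patent specifies only that a first threshold and a first angle exist.

```python
def is_view_hold_gesture(events,
                         min_close_s: float = 0.3,
                         min_turn_deg: float = 15.0) -> bool:
    """Detect the close-eyes -> turn-head -> open-eyes sequence.

    events: ordered (kind, value) pairs, e.g.
        [("close", 0.5), ("turn", 30.0), ("open", 0.0)]
    A closure shorter than min_close_s is treated as a blink, and a turn
    smaller than min_turn_deg (the first angle) does not hold the view.
    """
    if len(events) != 3:
        return False
    (k1, close_s), (k2, turn_deg), (k3, _) = events
    return (k1 == "close" and close_s >= min_close_s
            and k2 == "turn" and abs(turn_deg) >= min_turn_deg
            and k3 == "open")
```

When the gesture is detected, the device keeps outputting the first display content despite the head turn; otherwise the normal direction mapping applies.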
Optionally, the electronic device may further determine whether the user performs another action to change the mapping between the user's gaze direction and the virtual space direction, so that the gaze direction after the close-eyes-and-turn gesture is matched with the virtual space direction from before the gesture. For example, if data representing user confirmation (such as a nod or a related utterance) can be obtained, the electronic device changes the mapping; if not, the mapping is left unchanged.
Optionally, after S204, the electronic device may further reset the matching relationship between the user's gaze direction and the virtual space direction and re-match the first direction with the first gaze direction, so that the next time the user uses the electronic device while facing the orientation shown in fig. 1 (a), the user sees the automobile and the house rather than the house and the tree. The electronic device may reset this matching relationship based on operations performed by the user on related keys or screens of the electronic device, or may reset it after determining that the electronic device has been powered off, which is not limited herein.
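The remap-then-reset behavior can be sketched as a running yaw offset between the gaze direction and the virtual space direction; the class, its method names, and the angle arithmetic are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch: remap the virtual-space direction after the
# close-turn-open gesture, and reset the mapping (e.g. on power-off).

class DirectionMapper:
    def __init__(self):
        self.offset_deg = 0.0  # offset added to gaze yaw to get virtual yaw

    def remap(self, gaze_before_deg: float, gaze_after_deg: float):
        """Keep showing the old virtual direction after the gesture."""
        self.offset_deg += gaze_before_deg - gaze_after_deg

    def virtual_yaw(self, gaze_deg: float) -> float:
        return (gaze_deg + self.offset_deg) % 360.0

    def reset(self):
        """Restore the default first-direction/first-gaze matching."""
        self.offset_deg = 0.0

m = DirectionMapper()
m.remap(gaze_before_deg=0.0, gaze_after_deg=180.0)  # turned from north to south
print(m.virtual_yaw(180.0))  # 0.0: facing south still shows the north content
m.reset()
print(m.virtual_yaw(180.0))  # 180.0: default mapping restored
```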
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the display method provided by this embodiment of the application, if it is determined that the user does not perceive the display content and, based on the second input data, that the user's head has rotated, the first display content is still output once the user perceives the display content again. The user therefore does not need to change the display content for different directions of the virtual space through a physical knob of the electronic device; instead, the display content is changed automatically based on the user's usage habits, which is convenient for the user.
Based on the foregoing embodiments, an embodiment of the present application provides a display method, as shown in fig. 5, where fig. 5 is a schematic flow chart of another display method provided in the embodiment of the present application, and the method includes the following steps:
S301, the electronic device outputs first display content corresponding to the first direction of the virtual space.
S302, the electronic equipment obtains first input data and second input data.
After S302, the electronic device may obtain the first display content displayed at a first time. The first time may be the time at which the user's eye closure is detected. The first display content may be obtained either when it is detected that the user closes the eyes, or when it is determined that the first input data matches the first target data.
S303, if the first input data does not match the first target data and the second input data matches the second target data, the electronic device obtains and outputs the second display content.
S304, if the first input data matches the first target data and the second input data matches the second target data, the electronic device outputs the first display content displayed at the first time.
In a scenario in which the user plays a video on the electronic device, if the electronic device keeps playing while the user does not perceive the display content, the user misses the video displayed during that period. Based on this, after determining that the user has performed the eye-closing, head-turning and eye-opening operations in sequence, the electronic device can resume playing the video from the picture the user saw before those operations. For example, referring to fig. 3, if the electronic device determines that the content displayed before the operations is a first picture including a car and a house, then when it determines that the user can perceive the display content, or that the user's eyes are open, it continues playing the video from that first picture, so that the user watches the video continuously.
For example, in one implementation scenario, the user's head faces north and the current frame of the displayed video is a first frame image. If the user needs to change the head orientation without changing the display content, the user closes the eyes, turns the head to face south, and opens the eyes in sequence. The electronic device stops displaying when the eyes are closed, and when the eyes are opened the user sees the electronic device continue playing the video from the first frame image.
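The pause-and-resume behavior in this scenario can be sketched with a toy player; the class and its callback names are illustrative assumptions rather than the patent's implementation:

```python
# Hypothetical sketch: pause playback at the frame shown when eye closure is
# detected, and resume from that same frame once the eyes reopen.

class VideoPlayer:
    def __init__(self, total_frames: int):
        self.frame = 0          # index of the currently displayed frame
        self.paused = False
        self.total_frames = total_frames

    def tick(self):
        """Advance one frame unless paused."""
        if not self.paused and self.frame < self.total_frames - 1:
            self.frame += 1

    def on_eyes_closed(self):
        self.paused = True      # remember the displayed frame by pausing

    def on_eyes_opened(self):
        self.paused = False     # resume from the remembered frame

player = VideoPlayer(total_frames=100)
player.tick(); player.tick()    # frames advance while eyes are open
player.on_eyes_closed()
player.tick(); player.tick()    # head turn happens here; nothing advances
player.on_eyes_opened()
print(player.frame)             # 2: playback resumes where it paused
```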
It will be appreciated that the solution provided in any of the embodiments of the present application is also applicable when the head of the user is turned up or down.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the display method provided by this embodiment of the application, upon determining that the user has performed the eye-closing, head-turning and eye-opening operations in sequence, the electronic device resumes playing from the content displayed before those operations; that is, the electronic device stops or pauses the display while the operations are performed, so that the user can watch the video continuously even though the display is not perceived during the operations.
Based on the foregoing embodiments, an embodiment of the present application provides a display method, as shown in fig. 6, where fig. 6 is a schematic flow chart of a display method provided in another embodiment of the present application, and the method includes the following steps:
S401, the electronic device outputs first display content corresponding to a first direction of the virtual space.
S402, the electronic equipment obtains first input data and second input data.
After S402, the electronic device may obtain first display content displayed at a first time. The first time may be a time when the user's eye closure is detected.
S403, if the first input data does not match the first target data and the second input data matches the second target data, the electronic device obtains and outputs the second display content.
S404, if the first input data matches the first target data and the second input data matches the second target data, the electronic device outputs the first display content at a second time.
The second time is after the first time and is separated from the first time by a first time length, and the first time length corresponds to the time length for obtaining the first input data.
In a scenario in which the user watches a live broadcast or plays a game on the electronic device, the electronic device continues playing the display content even if the user does not perceive it, thereby ensuring the continuity of the live broadcast or the game.
For example, referring to fig. 7, fig. 7 is a schematic diagram of another head orientation and display content provided by an embodiment of the present application. If the content displayed before the operations is (a) in fig. 7, a first picture including a car and a house, and the electronic device does not stop playing during the user's operations, then by the time the electronic device determines that the user can perceive the display content, or that the user's eyes are open, the video has been playing continuously the whole time. The user therefore sees second display content corresponding to the first direction of the virtual space; for example, after performing the operations the user sees (b) in fig. 7, i.e., a car and a house that has caught fire and is smoking.
For example, in one implementation scenario, the user's head faces north and the current frame of the displayed video is a first frame image. If the user needs to change the head orientation without changing the display content, the user closes the eyes, turns the head to face south, and opens the eyes in sequence. The electronic device keeps playing the video throughout the eye-closing, head-turning and eye-opening operations, and after opening the eyes the user sees the video continue from a second frame image.
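Since playback never stops, the frame shown at the second time is offset from the frame at the first time by the duration of the operations (the first time length described in S404). A small sketch, with an assumed frame rate and function name:

```python
# Hypothetical sketch: the video keeps playing during the eye-closing,
# head-turning and eye-opening operations, so the frame shown at the second
# time is offset from the first time by the duration of those operations.

FPS = 30  # assumed frame rate

def frame_at_second_time(frame_at_first_time: int,
                         first_duration_s: float) -> int:
    """Frame shown when the eyes reopen, given continuous playback."""
    return frame_at_first_time + round(first_duration_s * FPS)

# Eyes close at frame 60; the operations take 2 seconds; playback never
# stops, so the user reopens the eyes on frame 120.
print(frame_at_second_time(60, 2.0))  # 120
```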
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
In the display method provided by this embodiment of the application, although it is determined that the user performs the eye-closing, head-turning and eye-opening operations in sequence, the electronic device does not pause the display in response to these operations but keeps playing the video continuously, so that video playback is uninterrupted.
Based on the foregoing embodiments, an embodiment of the present application provides an electronic device 5, as shown in fig. 8, fig. 8 is a schematic structural diagram of an electronic device provided in the embodiment of the present application, and the electronic device 5 may be applied to a display method provided in embodiments corresponding to fig. 2, 4, 5, and 6, and referring to fig. 8, the electronic device 5 may include:
A display device 51, configured to output first display content corresponding to a first direction of the virtual space.
An acquisition device 52, configured to obtain the first input data and to obtain the second input data. Optionally, the acquisition device 52 may include an image acquisition device for obtaining the first input data and a spatial acquisition device for obtaining the second input data.
A processing device 53, configured to, if the first input data does not satisfy the first condition and the second input data satisfies the second condition, obtain second display content corresponding to the second direction of the virtual space and enable the display to output the second display content.
The processing device 53 is further configured to, if the first input data satisfies the first condition and the second input data satisfies the second condition, obtain the first display content and enable the display to output the first display content.
Optionally, the first input data is data characterizing the user's behavior obtained by the image acquisition device;
the second input data is data characterizing the motion of the electronic device obtained by the spatial acquisition device.
Optionally, the first input data comprises behavioral data of a body part of the user perceiving the first display content, and the second input data comprises orientation data of the electronic device.
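Because the second input data is orientation data, the second condition amounts to detecting a change in the device's orientation. A sketch over assumed yaw samples (e.g. from an inertial sensor); the function name and threshold are illustrative, not from the patent:

```python
# Hypothetical sketch of the second condition: decide from orientation data
# (yaw angles sampled over the gesture) whether the electronic device's
# orientation has changed.

YAW_CHANGE_THRESHOLD_DEG = 10.0  # assumed threshold

def orientation_changed(yaw_samples_deg: list[float]) -> bool:
    """True if the yaw swung by more than the threshold over the samples."""
    if len(yaw_samples_deg) < 2:
        return False
    return max(yaw_samples_deg) - min(yaw_samples_deg) > YAW_CHANGE_THRESHOLD_DEG

print(orientation_changed([0.0, 45.0, 170.0]))  # True: head turned
print(orientation_changed([0.0, 1.5, 0.5]))     # False: essentially still
```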
Optionally, the processing device 53 is further configured to obtain and output a second display content if the first input data does not match the first target data and the second input data matches the second target data;
accordingly, the processing device 53 is further configured to output the first display content if the first input data matches the first target data and the second input data matches the second target data.
Optionally, the first input data not matching the first target data characterizes that the user can perceive the display content;
the first input data matching the first target data characterizes that the user does not perceive the display content.
Optionally, the first input data not matching the first target data further characterizes that the user performs an eye-closing action whose duration is less than a first threshold;
the first input data matching the first target data further characterizes that the user performs an eye-closing action whose duration is greater than or equal to the first threshold.
Correspondingly, the processing device 53 is further configured to obtain first display content displayed at the first time;
and if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content displayed at the first moment.
Correspondingly, the processing device 53 is further configured to obtain first display content displayed at the first time;
if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting first display content at a second moment; the second time is after the first time and is separated from the first time by a first time length, and the first time length corresponds to the time length for obtaining the first input data.
It should be noted that, for a specific implementation process of the steps executed by the processing apparatus in this embodiment, reference may be made to implementation processes in the display method provided in embodiments corresponding to fig. 2, 4, 5, and 6, and details are not described here again.
The electronic device provided by this embodiment of the application outputs first display content corresponding to a first direction of a virtual space; obtains first input data and second input data; if the first input data does not satisfy a first condition and the second input data satisfies a second condition, obtains and outputs second display content corresponding to a second direction of the virtual space; and if the first input data satisfies the first condition and the second input data satisfies the second condition, outputs the first display content. In this way, display content for different directions of the virtual space is output based on whether the first input data satisfies the first condition and whether the second input data satisfies the second condition, so that the electronic device can automatically adjust the display content for different directions based on the obtained first and second input data without manual adjustment by the user, avoiding the inconvenience of adjusting the display content through hardware.
Based on the foregoing embodiments, an embodiment of the present application provides an electronic device 6, as shown in fig. 9, fig. 9 is a schematic structural diagram of another electronic device provided in the embodiment of the present application, and the electronic device 6 may be applied to a display method provided in the embodiments corresponding to fig. 2, 4, 5, and 6, and referring to fig. 9, the electronic device 6 may include:
the output module is used for outputting first display content corresponding to a first direction of the virtual space;
the obtaining module is used for obtaining first input data and second input data;
the output module is further used for obtaining and outputting second display content corresponding to a second direction of the virtual space if the first input data does not meet the first condition and the second input data meets the second condition;
the output module is further used for outputting the first display content if the first input data meets the first condition and the second input data meets the second condition.
Optionally, the first input data is data characterizing user behavior obtained by the image acquisition device;
the second input data is data characterizing the motion of the electronic device obtained by the spatial acquisition device.
Optionally, the first input data comprises behavioral data of a body part of the user perceiving the first display content, and the second input data comprises orientation data of the electronic device.
Optionally, the output module is further configured to obtain and output second display content if the first input data does not match the first target data and the second input data matches the second target data;
and the output module is also used for acquiring and outputting the first display content if the first input data is matched with the first target data and the second input data is matched with the second target data.
Optionally, the first input data not matching the first target data characterizes that the user can perceive the display content;
the first input data matching the first target data characterizes that the user does not perceive the display content.
Optionally, the first input data not matching the first target data further characterizes that the user performs an eye-closing action whose duration is less than a first threshold;
the first input data matching the first target data further characterizes that the user performs an eye-closing action whose duration is greater than or equal to the first threshold.
The output module is also used for obtaining first display content displayed at a first moment; and if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content displayed at the first moment.
The output module is also used for obtaining first display content displayed at a first moment; if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting first display content at a second moment; the second time is after the first time and is separated from the first time by a first time length, and the first time length corresponds to the time length for obtaining the first input data.
The electronic device provided by this embodiment of the application outputs first display content corresponding to a first direction of a virtual space; obtains first input data and second input data; if the first input data does not satisfy a first condition and the second input data satisfies a second condition, obtains and outputs second display content corresponding to a second direction of the virtual space; and if the first input data satisfies the first condition and the second input data satisfies the second condition, outputs the first display content. In this way, display content for different directions of the virtual space is output based on whether the first input data satisfies the first condition and whether the second input data satisfies the second condition, so that the electronic device can automatically adjust the display content for different directions based on the obtained first and second input data without manual adjustment by the user, avoiding the inconvenience of adjusting the display content through hardware.
Based on the foregoing embodiments, embodiments of the present application may also provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the display method according to any one of the above.
The processor and the processing device may be the same device; that is, the processor may also be referred to as a processing device. The processor or processing device may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor function may also be another electronic device, which is not specifically limited in the embodiments of the present application.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be any of various terminals including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A display method is applied to an electronic device and comprises the following steps:
outputting first display content corresponding to a first direction of the virtual space; the first direction is at least matched with a first sight line direction which is characterized by a user;
obtaining first input data and second input data;
if the first input data does not meet a first condition and the second input data meets a second condition, obtaining and outputting second display content corresponding to a second direction of the virtual space; the second direction matches at least a second gaze direction characteristic of the user;
if the first input data meets the first condition and the second input data meets the second condition, outputting the first display content; wherein the first condition characterizes that the user performs an eye-closing action lasting less than a time threshold or performs no eye-closing action; and the second condition characterizes a change in orientation of the electronic device.
2. The method according to claim 1, wherein,
the first input data is data representing user behaviors obtained through an image acquisition device;
the second input data is data characterizing the motion of the electronic device obtained by a spatial acquisition device.
3. The method of claim 1 or 2, wherein the first input data comprises behavioral data of a body part of the user perceiving the first display content, and the second input data comprises orientation data of the electronic device.
4. The method of claim 3, wherein the obtaining and outputting second display content corresponding to a second orientation of the virtual space if the first input data does not satisfy a first condition and the second input data satisfies a second condition comprises: if the first input data is not matched with the first target data and the second input data is matched with the second target data, obtaining and outputting the second display content;
accordingly, if the first input data satisfies the first condition and the second input data satisfies the second condition, obtaining and outputting the first display content includes: and if the first input data is matched with the first target data and the second input data is matched with the second target data, obtaining and outputting the first display content.
5. The method of claim 4, wherein the first input data not matching the first target data characterizes that the user can perceive the display content;
the first input data matching the first target data characterizes that the user does not perceive the display content.
6. The method of claim 5, wherein the first input data does not match the first target data, further characterizing: the user generates the eye closing action, and the duration of the eye closing action is less than a first threshold;
the first input data is matched with the first target data, and the first input data further represents that the user generates the eye closing action and the duration of the eye closing action is greater than or equal to the first threshold.
7. The method of any of claims 4 to 6, wherein said obtaining and outputting said first display content if said first input data matches said first target data and said second input data matches said second target data comprises:
obtaining the first display content displayed at a first moment;
and if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content displayed at the first moment.
8. The method of any of claims 4 to 6, wherein said obtaining and outputting said first display content if said first input data matches said first target data and said second input data matches said second target data comprises:
obtaining the first display content displayed at a first moment;
if the first input data is matched with the first target data and the second input data is matched with the second target data, outputting the first display content at a second moment; the second time is after the first time and is separated from the first time by a first time length, and the first time length corresponds to the time length for obtaining the first input data.
9. An electronic device, comprising:
the display is used for outputting first display content corresponding to a first direction of the virtual space; the first direction is at least matched with a first sight line direction which is characterized by a user;
the acquisition device is used for acquiring first input data and second input data;
processing means, configured to, if the first input data does not satisfy a first condition and the second input data satisfies a second condition, obtain second display content corresponding to a second direction of the virtual space, and enable the display to output the second display content; the second direction matches at least a second gaze direction characteristic of the user;
the processing device is further configured to obtain the first display content and enable the display to output the first display content if the first input data meets the first condition and the second input data meets the second condition; wherein the first condition characterizes that the user performs an eye-closing action lasting less than a time threshold or performs no eye-closing action; and the second condition characterizes a change in orientation of the electronic device.
10. An electronic device, comprising:
the output module is used for outputting first display content corresponding to a first direction of the virtual space; the first direction is at least matched with a first sight line direction which is characterized by a user;
the obtaining module is used for obtaining first input data and second input data;
the output module is further configured to obtain and output second display content corresponding to a second direction of the virtual space if the first input data does not satisfy a first condition and the second input data satisfies a second condition; the second direction matches at least a second gaze direction characteristic of the user;
the output module is further configured to output the first display content if the first input data satisfies the first condition and the second input data satisfies the second condition; wherein the first condition characterizes that the user performs an eye-closing action lasting less than a time threshold or performs no eye-closing action; and the second condition characterizes a change in orientation of the electronic device.
CN201911396591.8A 2019-12-30 2019-12-30 Display method and electronic equipment Active CN111158483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911396591.8A CN111158483B (en) 2019-12-30 2019-12-30 Display method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111158483A CN111158483A (en) 2020-05-15
CN111158483B true CN111158483B (en) 2021-10-22

Family

ID=70559092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911396591.8A Active CN111158483B (en) 2019-12-30 2019-12-30 Display method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111158483B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593044A (en) * 2012-08-13 2014-02-19 Hon Hai Precision Industry (Shenzhen) Co., Ltd. Electronic device correction system and method
CN104133550B (en) * 2014-06-27 2017-05-24 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN104238983B (en) * 2014-08-05 2020-03-24 Lenovo (Beijing) Co., Ltd. Control method and electronic device
CN105320280B (en) * 2015-09-23 2018-07-03 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105892630A (en) * 2015-11-02 2016-08-24 LeTV Zhixin Electronic Technology (Tianjin) Co., Ltd. List content display method and device
CN105867619B (en) * 2016-03-28 2019-02-05 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
WO2018129664A1 (en) * 2017-01-10 2018-07-19 Shenzhen Royole Technologies Co., Ltd. Display content adjustment method and system, and head-mounted display device
CN107515670B (en) * 2017-01-13 2019-06-07 Vivo Mobile Communication Co., Ltd. Method for implementing automatic page turning, and mobile terminal
WO2018165278A1 (en) * 2017-03-07 2018-09-13 vGolf, LLC Mixed reality golf simulation and training system
CN106951316B (en) * 2017-03-20 2021-07-09 Beijing Anyun Century Technology Co., Ltd. Virtual mode and real mode switching method and device, and virtual reality device
CN110531859A (en) * 2019-09-02 2019-12-03 Changsha University of Science and Technology Human-computer interaction method and device based on VR head-mounted display recognition of user operation actions

Also Published As

Publication number Publication date
CN111158483A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
EP3163422B1 (en) Information processing device, information processing method, computer program, and image processing system
US9123158B2 (en) Image processing device, image processing method, and image processing system
CN106873778B (en) Application operation control method and device and virtual reality equipment
US20210350762A1 (en) Image processing device and image processing method
KR20180073327A (en) Display control method, storage medium and electronic device for displaying image
CN109416562B (en) Apparatus, method and computer readable medium for virtual reality
CN110546601B (en) Information processing device, information processing method, and program
US20180357817A1 (en) Information processing method, program, and computer
EP3671408B1 (en) Virtual reality device and content adjusting method therefor
US10687051B1 (en) Movable display for viewing and interacting with computer generated environments
EP3528024B1 (en) Information processing device, information processing method, and program
US20220291744A1 (en) Display processing device, display processing method, and recording medium
EP3697086A1 (en) Information processing device, information processing method, and program
CN111544897A (en) Video clip display method, device, equipment and medium based on virtual scene
US11287881B2 (en) Presenting images on a display device
CN111158483B (en) Display method and electronic equipment
JP6580624B2 (en) Method for providing virtual space, program for causing computer to execute the method, and information processing apparatus for executing the program
CN106126148B (en) Display control method and electronic equipment
CN112119451A (en) Information processing apparatus, information processing method, and program
US20180239420A1 (en) Method executed on computer for providing virtual space to head mount device, program for executing the method on the computer, and computer apparatus
JP6718928B2 (en) Video output system
CN109561297B (en) Visual angle processing method and device based on virtual reality environment
JP2022015647A (en) Information processing apparatus and image display method
JP2020039012A (en) Program, information processing device, and method
US20230221794A1 (en) Head mounted display device and display content control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant