CN108932058B - Display method and device and electronic equipment

Info

Publication number
CN108932058B
Authority
CN
China
Prior art keywords
image
viewer
scene
area
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810698654.4A
Other languages
Chinese (zh)
Other versions
CN108932058A (en)
Inventor
佟妍妍 (Tong Yanyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201810698654.4A
Publication of CN108932058A
Priority to US16/457,342 (US11113857B2)
Application granted
Publication of CN108932058B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a display method, a display device, and an electronic device. A first image is displayed based on a first scene; whether a first condition is satisfied is detected; and a second image is displayed based at least on satisfaction of the first condition. The first image corresponds to a first portion of the first scene, and the second image includes a person image representing a viewer together with an image of a second, different portion of the first scene. Because the second image contains both the person image and the second partial image of the first scene, the viewer can see himself or herself blended into the first scene, i.e., can see a view of himself or herself experiencing the virtual or augmented reality scene, which increases the viewer's immersion in the virtual scene.

Description

Display method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a display method and apparatus, and an electronic device.
Background
At present, many electronic devices can present a virtual scene to a user based on virtual reality (VR) or augmented reality (AR) technology; such a device may be a television, a notebook computer, a desktop computer, a smartphone, or a wearable device.
The user can experience the virtual scene through the electronic device, but the user's sense of immersion while experiencing the virtual scene remains limited.
Disclosure of Invention
In view of the above, the present application provides a display method, a display device and an electronic device.
In order to achieve the above purpose, the present application provides the following technical solutions:
a display method, comprising:
displaying a first image based on a first scene;
detecting whether a first condition is met;
displaying a second image based on at least satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person characterizing a viewer and an image of a second part in the first scene, the first part and the second part being different.
Wherein said displaying a second image based at least on satisfaction of the first condition comprises:
triggering an image switching operation in response to satisfaction of the first condition;
switching the first image of a first orientation in the first scene to the second local image of a second orientation in the first scene, wherein the first orientation is different from the second orientation;
obtaining a person image of the viewer;
and displaying the second image.
Wherein the first orientation and the second orientation are opposite orientations of the same location.
Wherein the detecting whether a first condition is satisfied includes:
detecting whether an input action of the viewer belongs to a preset action; or
detecting whether the viewer observes, in the first image, a virtual object set to be capable of reflecting light; or
detecting whether the viewer observes, in the first image, a virtual device set to be capable of capturing and displaying images.
Wherein obtaining the person image of the viewer includes at least one of:
acquiring a real image of the viewer captured by a camera to obtain the person image; or
acquiring a real image of the viewer captured by a camera, and correcting the real image to obtain the person image; or
obtaining an account image of the viewer's user account to obtain the person image; or
obtaining an account image of the viewer's user account, and correcting the account image to obtain the person image.
Wherein the displaying the second image comprises:
obtaining a character orientation representing the viewer in the first scene;
and fusing, based on the character orientation and the first orientation, the person image into the corresponding position of the image of the second part of the first scene to obtain the second image.
Wherein the displaying the second image comprises:
and displaying the second image in a second area of the display area, wherein the second area is a full-screen area or a local area of the display area.
Wherein the displaying the second image in a second area of the display area comprises:
displaying the first image in a first area of the display area and displaying the second image in a second area of the display area; the first area and the second area are different local areas in the display area, or the second area is a local area in the first area, or the first area is a local area in the second area.
A display device, comprising:
a first display module for displaying a first image based on a first scene;
the detection module is used for detecting whether a first condition is met;
a second display module to display a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person characterizing a viewer and an image of a second part in the first scene, the first part and the second part being different.
An electronic device, comprising:
a display for displaying an image;
a memory for storing a program;
a processor configured to execute the program, the program specifically configured to:
controlling the display to display a first image based on a first scene;
detecting whether a first condition is met;
controlling the display to display a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person characterizing a viewer and an image of a second part in the first scene, the first part and the second part being different.
As can be seen from the foregoing technical solutions, compared with the prior art, the embodiments of the present application provide a display method in which a first image is displayed based on a first scene; whether a first condition is satisfied is detected; and a second image is displayed based at least on satisfaction of the first condition. The first image corresponds to a first portion of the first scene, and the first portion and the second portion are different. Because the second image contains both a person image representing the viewer and the image of the second portion of the first scene, the viewer can see himself or herself blended into the first scene, i.e., can see a view of himself or herself experiencing the virtual or augmented reality scene, which increases the viewer's immersion in the virtual scene.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an implementation manner of a display method provided by an embodiment of the present application;
fig. 2a to fig. 2d are exemplary diagrams of an image related to a first scene according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a first scene experienced by a viewer carrying a wearable device according to an embodiment of the present application;
FIG. 4 is a flow chart of one implementation of displaying a second image based at least on satisfaction of the first condition as provided by an embodiment of the present application;
fig. 5a to 5b are another exemplary diagrams of a first scene-related image according to an embodiment of the present disclosure;
fig. 6a to fig. 6e are still another exemplary diagrams of a first scene-related image provided in an embodiment of the present application;
fig. 7 is a structural diagram of an implementation manner of a display device provided in an embodiment of the present application;
fig. 8 is a block diagram of an implementation manner of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
With the rapid development of virtual reality (VR) and augmented reality (AR) technology, a user may experience a virtual scene (either a virtual scene added to a real image of the real world, or a virtual scene generated purely by virtual reality technology); for example, the user may experience an Antarctic virtual scene or a virtual scene of the age of dinosaurs.
At present, a user can only experience a virtual scene, that is, observe the scenery or environment within it; the user cannot observe himself or herself experiencing the virtual scene or the augmented reality scene. For example, if there is a virtual mirror in the virtual scene, the user cannot see himself in the virtual mirror and can only see the mirror itself (the real-world effect of "looking into a mirror" cannot be achieved). For another example, the user cannot take a selfie in the first scene; even if a selfie is taken, it yields an image of the user in the real world rather than an image of the user blended into the first scene. The user therefore cannot be completely immersed in the virtual scene.
To solve the above problem, an embodiment of the present application provides a display method. The display method may be applied to an electronic device such as a head-mounted display device, a laptop computer, a desktop computer, a PAD, or a television.
As shown in fig. 1, a flowchart of an implementation manner of a display method provided in an embodiment of the present application is provided, where the method includes:
step S101: a first image is displayed based on a first scene.
The first image may be an augmented reality image, i.e., an image obtained, based on augmented reality technology, by superimposing a virtual image onto a real image of the real world.
In an alternative embodiment, the electronic device comprises a display that can display a virtual image, the display not displaying a real image of the real world, the user being able to see the real image because the light of the real image of the real world is projected to the user's eyes.
In an alternative embodiment, the electronic device may include a camera that may record real images of the real world in real time, a scene generator that may generate virtual images, and a display that may display images of the virtual images superimposed on the real images in the real world.
The first image may be a virtual image, and the virtual image refers to an image displayed on a display screen of the electronic device based on a virtual reality technology. If the first image is a virtual image, the first scene may be pre-stored in the electronic device, and the electronic device may not include a camera for recording a real image of the real world in real time.
The virtual image is not typically a real image of the current real world.
In an alternative embodiment, the first image is an image corresponding to a first part of said first scene.
Step S102: it is detected whether a first condition is fulfilled.
In an alternative embodiment, the first condition may be the acquisition of information indicating a need to show the viewer blended into the background image.
The background image will be explained below.
In an alternative embodiment, assume that the first image is an image of a first part of the first scene. In the embodiments of the present application, the partial images of the first scene other than the image of the first part are referred to as background images. Fig. 2a to 2d are exemplary diagrams of images related to a first scene according to an embodiment of the present disclosure.
Assume that, in the first scene shown in fig. 2a, the viewer 21 is observing the first partial image of the first scene, i.e., the pond image. The viewer 21 currently sees only the first image shown in fig. 2b (the pond image) and cannot observe the background image behind the viewer in the first scene, shown in fig. 2c.
In summary, the background image may be the part of the first scene that the viewer cannot see while the first image is currently displayed.
There are various implementations of "detecting whether the first condition is satisfied", and the embodiments of the present application provide, but are not limited to, the following.
First, the manner of detecting whether the first condition is satisfied includes: detecting whether an input action of the viewer belongs to a preset action.
Optionally, the first condition is that the input action of the viewer belongs to a preset action. This may apply to an application scenario in which the viewer takes a selfie in the first scene. In this scenario, the preset action may take many forms; the embodiments of the present application provide, but are not limited to, the following.
First preset action: an action of the viewer pretending to hold a device.
It can be understood that, in the real world, a viewer who takes a selfie needs to take out a device having a camera, such as a smartphone, a camera, or a PAD, and hold it in the hand. By analogy, if the viewer wants to take a selfie in the first scene, the viewer can mime the act of holding such a device.
Optionally, the detecting whether the first condition is satisfied includes the following steps (a minimal sketch follows this list):
acquiring action information of the viewer while the viewer experiences the first scene;
detecting whether the action information includes an action of pretending to hold a device;
if the action information includes such an action, determining that the first condition is satisfied;
if the action information does not include such an action, determining that the first condition is not satisfied.
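A minimal Python sketch of this detection flow is given below; `ActionSource`, `get_viewer_actions`, and `is_fake_holding` are assumed names standing in for whatever motion source (camera or smart glove) and gesture classifier an implementation uses, none of which the patent prescribes.

```python
from typing import Iterable, Protocol

class ActionSource(Protocol):
    """Anything that yields the viewer's action records, e.g. a camera
    pipeline or a smart-glove driver (hypothetical interface)."""
    def get_viewer_actions(self) -> Iterable[dict]: ...

def is_fake_holding(action: dict) -> bool:
    # Hypothetical classifier: a real system would run a trained gesture
    # model over hand-pose data; here an action record is a plain dict.
    return action.get("gesture") == "fake_holding_device"

def first_condition_met(source: ActionSource) -> bool:
    """The first condition is met iff the action information includes
    an action of pretending to hold a device."""
    return any(is_fake_holding(a) for a in source.get_viewer_actions())
```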
Optionally, there are two implementations of "acquiring the action information of the viewer while experiencing the first scene": in the first, the information is acquired by the electronic device to which the display method provided in the embodiments of the present application is applied; in the second, it is acquired by another electronic device and transmitted to that electronic device.
For the first, optionally, the electronic device to which the display method provided in the embodiments of the present application is applied may include a camera or a smart glove (the viewer wears the smart glove while experiencing the first scene), and the camera or smart glove acquires the action information of the viewer during the experience.
For example, the electronic device to which the display method provided by the embodiment of the application is applied may be a wearable device, and the wearable device may include a head-mounted display device and a smart glove, as shown in fig. 3, which is a schematic diagram of a viewer carrying the wearable device to experience a first scene provided by the embodiment of the application.
The wearable device in fig. 3 comprises a head-mounted display device 31 and a smart glove 32.
The head-mounted display device 31 may display an image in a first scene, and the smart glove may detect motion information of the viewer's hand.
For the second, optionally, the electronic device to which the display method is applied may lack a camera for acquiring the viewer's action information; another electronic device that does include a camera acquires the action information of the viewer while the viewer experiences the first scene and transmits it to the electronic device to which the display method provided in the embodiments of the present application is applied.
For example, the electronic device to which the display method provided in the embodiments of the present application is applied is a wearable device (one without a smart glove or camera), and the other electronic device may be a device having a camera, such as a laptop, a smart television, a PAD, or a smartphone.
Second preset action: an action of the viewer holding a virtual device.
Optionally, the first scene may include a virtual device set to be capable of capturing and displaying images. For example, the virtual device may be a virtual camera, a virtual smartphone, a virtual PAD, a virtual laptop, a virtual desktop computer, or another virtual device with a camera and a display; it may also be such a device mounted on a selfie stick.
If the viewer wants to take a selfie, the viewer can hold the virtual device.
Optionally, the detecting whether the first condition is satisfied includes:
determining that the first image contains a virtual device set to be capable of capturing and displaying images;
acquiring action information of the viewer while the viewer experiences the first scene;
determining whether the action information includes an action of holding the virtual device, based on the position information of the virtual device in the first scene and the action information of the viewer in the first scene.
Optionally, the action information includes: the virtual position in the first scene of the body part (e.g., a hand or a foot) with which the viewer performs the action, and/or action posture information. For example, if the virtual position of that body part coincides with the virtual position of the virtual device in the first scene and the action posture is a holding posture, it is determined that the action information includes an action of holding the virtual device, as sketched below.
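A sketch of this position-and-posture check follows; the tolerance value and the posture label are illustrative assumptions, since the patent only requires that the body part occupy the device's virtual position with a holding posture.

```python
import math

def holds_virtual_device(part_pos: tuple, device_pos: tuple,
                         posture: str, tol: float = 0.05) -> bool:
    """The action counts as 'holding the virtual device' when the acting
    body part (e.g. the hand) is at the virtual device's position in the
    first scene and the action posture is a holding posture. Positions
    are (x, y, z) scene coordinates; `tol` is an assumed tolerance,
    since exact coordinate equality never occurs in practice."""
    return math.dist(part_pos, device_pos) <= tol and posture == "holding"
```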
Optionally, there are two implementations of "acquiring the action information of the viewer while experiencing the first scene", which are the same as the two manners corresponding to the first preset action and are not described herein again.
Third preset action: an action of the viewer holding a real object.
It can be understood that, in the real world, a viewer may hold a device with a camera, such as a selfie stick, a smartphone, or a PAD, to take a selfie. While experiencing the first scene, if the viewer wants to take a selfie, the viewer can hold a real object that actually exists in the real world. The real object may be a real device with a camera, or an object resembling such a device; for example, a stick can stand in for a selfie stick, and a pencil box can stand in for a smartphone.
Optionally, the detecting whether the first condition is satisfied includes:
acquiring action information of a viewer in the process of experiencing the first scene;
detecting whether the action information comprises an action of the viewer holding a real object;
and if the action information comprises the action of holding the real object by the viewer, determining that the first condition is met, otherwise, determining that the first condition is not met.
Optionally, there are two implementations of "acquiring the action information of the viewer while experiencing the first scene", which are the same as the two manners corresponding to the first preset action and are not described herein again.
Fourth preset action: a selfie action preset by the viewer.
Optionally, the viewer may set, in advance, a selfie action in the electronic device. The preset action may be an action the viewer often performs when taking a selfie, for example a V-sign pose, a pout, a hand resting on the cheek, or a jump; the viewer may set one or more such frequently performed actions as the selfie action in advance.
Optionally, the detecting whether the first condition is satisfied includes:
acquiring action information of the viewer while the viewer experiences the first scene;
detecting whether the action information belongs to a preset selfie action;
if the action information belongs to the preset selfie action, determining that the first condition is satisfied; otherwise, determining that the first condition is not satisfied.
Optionally, there are two implementations of "acquiring the action information of the viewer while experiencing the first scene", which are the same as the two manners corresponding to the first preset action and are not described herein again.
Optionally, the preset selfie action may instead be an action the viewer does not often perform when taking a selfie; it may be any action, and details are not repeated here.
Second, the manner of detecting whether the first condition is satisfied includes: detecting whether a preset key is touched.
The preset key may be a physical key located in the real world and/or a virtual key located in the first scene.
Third, the manner of detecting whether the first condition is satisfied includes: detecting whether the viewer observes, in the first image, a virtual object set to have a light-reflecting function.
In an alternative embodiment, the virtual object set to reflect light may be determined according to the specific first scene; for example, it may be a virtual mirror or a virtual body of water (e.g., a river, a lake, or an ocean).
Fourth, the manner of detecting whether the first condition is satisfied includes: detecting that the viewer observes, in the first image, a virtual object set to have a light-reflecting function; acquiring the virtual distance between the viewer and the virtual object in the first scene; and determining, based on the virtual distance, whether the first condition is satisfied.
Optionally, determining whether the first condition is satisfied based on the virtual distance may include: if the virtual distance is less than or equal to a preset threshold, determining that the first condition is satisfied; if the virtual distance is greater than the preset threshold, determining that the first condition is not satisfied.
It can be understood that, in a real scene, if a viewer is far from a real object capable of reflecting light (e.g., a mirror), the viewer cannot make out the image the object shows; as the distance decreases, the viewer comes to see an image of himself or herself in the object. By analogy, if the virtual distance between the viewer and the virtual object in the first scene is greater than the preset threshold (i.e., the viewer is far from the virtual object), the first condition is not satisfied even if the viewer observes the virtual object in the first image. If the virtual distance is less than or equal to the preset threshold and the viewer observes the virtual object, the first condition is satisfied.
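The distance-gated condition reduces to a short predicate; the default threshold below is an assumed value for illustration, not one taken from the patent.

```python
def mirror_condition_met(observes_object: bool,
                         virtual_distance: float,
                         threshold: float = 1.5) -> bool:
    """Satisfied only when the viewer both observes the reflective
    virtual object in the first image and is within the preset
    threshold of it in the first scene (threshold in scene units)."""
    return observes_object and virtual_distance <= threshold
```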
Fifth, the manner of detecting whether the first condition is satisfied includes: detecting whether the viewer observes, in the first image, a virtual device set to be capable of capturing and displaying images.
Optionally, the virtual device may be a virtual device having a camera and a display, such as a virtual camera, a virtual smartphone, a virtual PAD, a virtual laptop, or a virtual desktop computer; it may also be such a device mounted on a selfie stick.
Step S103: displaying a second image based on at least satisfaction of the first condition.
Wherein the second image comprises: an image of a person characterizing a viewer and an image of a second part in the first scene, the first part and the second part being different.
Still taking fig. 2a to 2c as an example, fig. 2d is a second image.
Although fig. 2d shows the whole-body person image, it is to be understood that the second image may include only part of the person image, for example only the head or the upper half of the person.
The embodiment of the present application thus provides a display method in which a first image is displayed based on a first scene; whether a first condition is satisfied is detected; and a second image is displayed based at least on satisfaction of the first condition. The first image corresponds to a first portion of the first scene, and the first portion and the second portion are different. Because the second image contains both a person image representing the viewer and the image of the second portion of the first scene, the viewer can see himself or herself blended into the first scene, i.e., can see a view of himself or herself experiencing the virtual or augmented reality scene, which increases the viewer's immersion in the virtual scene.
As shown in fig. 4, a flowchart of one implementation manner for displaying a second image based on at least satisfaction of the first condition is provided in the embodiment of the present application, and the method includes:
step S401: triggering an image switching operation in response to satisfaction of the first condition.
Step S402: switching the first image of a first orientation in the first scene to the second local image of a second orientation in the first scene, wherein the first orientation is different from the second orientation.
There are various methods for determining the second orientation, and the embodiments of the present application provide, but are not limited to, the following methods.
First, the second orientation may be any orientation different from the first orientation.
For example, if the first orientation is in front of the viewer, the second orientation may be behind the viewer, or to the left of the viewer, or to the right of the viewer, or above the viewer, or below the viewer, etc.
Second, the second orientation is determined based on the first condition.
Suppose the first condition is that the input action of the viewer belongs to a preset action. In an alternative embodiment, the preset action may itself indicate the second orientation. For example, if the preset action is the viewer pretending to hold a device, the second orientation is the direction in which the viewer's palm faces a body part of the viewer (e.g., the head or the feet) in the first scene; if the preset action is the viewer holding a virtual device, the second orientation is the direction the display screen of the virtual device faces; if the preset action is the viewer holding a real object, the second orientation is the direction from the virtual position of the real object in the first scene toward the virtual position of the viewer in the first scene. If the preset action is a selfie action preset by the viewer, the second orientation may be any orientation other than the first orientation.
Optionally, if the first condition is that the viewer touches the preset key, the second orientation may be any orientation other than the first orientation.
If the first condition is that the viewer observes, in the first image, a virtual device set to be capable of capturing and displaying images, the direction in which the image-capturing camera of the virtual device faces is the second orientation.
If the first condition is that the viewer observes, in the first image, a virtual object set to have a light-reflecting function, the second orientation may be obtained based on the virtual positional relationship between the virtual object and the viewer in the first scene.
Third, the second orientation is determined based on the first orientation.
The first orientation and the second orientation are opposite orientations at the same position.
The "same position" will be described below.
In an alternative embodiment, "same location" refers to a virtual location of the object in the first scene.
The target object may be a virtual object configured to reflect light, or a virtual device configured to capture and display a capture function, or a real object held by a viewer, or a body part of the viewer performing a predetermined motion.
It will be appreciated that even at the same second orientation, the position of the object in the first scene is different and the resulting second image contains different amounts and/or sizes of content. For example, in the second local image corresponding to the second position in the first scene, the closer the object is to the target object, the larger the object is in the second image, and the farther the object is from the target object is in the second image, the smaller the object is.
It can be understood that, assuming that the target object is a virtual device, the virtual device includes a virtual camera, because the fov (field angle) of the virtual camera is limited, the range of the image in the second position in the first scene that can be covered by the virtual camera is limited, the image that can be covered by the virtual camera can be included in the virtual camera, the image that cannot be covered by the virtual camera cannot be included in the camera, the virtual position of the virtual camera in the first scene is different, the image in the second position in the first scene that can be covered by the virtual camera is different, and the obtained second image is different.
It can be understood that when a user looks into a mirror in the real world, the direction of the line of sight of the user observing the mirror is different, and the image seen from the mirror is different; the distance between the user and the mirror is different, and even if the user observes the mirror in the same sight line direction, images seen from the mirror are different; assuming that the target object is a virtual object, a second image is obtained by the principle that the virtual object simulates reflected light rays, and the virtual position of the virtual object in the first scene is different and the obtained second image is different in analogy with the example of 'looking at a mirror' in the real world.
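As a sketch under these analogies: the simplest reading takes the second orientation as the exact opposite of the first at the target object's position, while a mirror-style implementation could reflect the viewing direction about the mirror normal with the standard formula r = d - 2(d·n)n; the reflection variant is a named extension for illustration, not something the patent spells out.

```python
def opposite(orientation: tuple) -> tuple:
    """Second orientation as the exact opposite of the first."""
    return tuple(-c for c in orientation)

def reflect(view_dir: tuple, unit_normal: tuple) -> tuple:
    """Mirror-style alternative: reflect the viewing direction about the
    virtual mirror's unit-length surface normal, r = d - 2(d.n)n."""
    dot = sum(d * n for d, n in zip(view_dir, unit_normal))
    return tuple(d - 2.0 * dot * n for d, n in zip(view_dir, unit_normal))
```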
Fig. 5a to 5b are diagrams illustrating another example of a first scene-related image according to an embodiment of the present disclosure.
The application scenario shown in fig. 5a is: the viewer observes the virtual mirror 51 in the first scene, and the virtual mirror 51 is a virtual object configured to reflect light.
The dots filled with a mesh shape in fig. 5a represent the virtual position of the virtual mirror 51 in the first scene, the first orientation may be the direction of the line of sight of the eyes of the viewer (referred to as the virtual viewpoint in this embodiment of the application) looking into the mirror (i.e. the orientation in which the virtual viewpoint points to the virtual object), and the second orientation and the first orientation are opposite orientations at the virtual mirror 51, as shown in fig. 5 a.
Suppose the viewer moves closer to the virtual mirror 51: for example, the virtual distance between the viewer and the virtual mirror 51 in the first scene is larger in fig. 5a than in fig. 5b.
Because the virtual distance between the virtual mirror 51 and the viewer in the first scene is small, the viewer's feet cannot be shown in the virtual mirror 51, and only part of the viewer's arms can be shown.
Step S403: obtaining a person image of the viewer.
There are various ways to obtain the person image of the viewer; the embodiments of the present application provide, but are not limited to, the following.
The first mode: acquiring a real image of the viewer captured by a camera to obtain the person image.
If the display method provided in the embodiments of the present application is applied to a head-mounted display device, the real image of the viewer captured by the camera shows the viewer wearing the head-mounted display device. If the display method is applied to a television, a notebook computer, a desktop computer, or the like, the real image of the viewer captured by the camera does not show a head-mounted display device.
The second mode: acquiring a real image of the viewer captured by a camera, and correcting the real image to obtain the person image.
If the display method is applied to a head-mounted display device, the real image of the viewer captured by the camera shows the viewer wearing the head-mounted display device; this image can be corrected to obtain an image of the viewer without the head-mounted display device.
In an alternative embodiment, regardless of whether the real image shows the viewer wearing a head-mounted display device, the viewer may also modify the ornamentation of the viewer in the real image (e.g., one or more of glasses, necklaces, headwear, and hats), and/or modify the personalized styling of the viewer in the real image (e.g., makeup and/or clothing).
The third mode: obtaining an account image of the viewer's user account to obtain the person image.
The account image may be an image of the viewer or an image of another user, so the viewer can also obtain a second image in which a friend is blended into the first scene.
The fourth mode: obtaining an account image of the viewer's user account, and correcting the account image to obtain the person image.
In an alternative embodiment, the viewer may also alter the ornamentation of the person in the account image (e.g., one or more of glasses, necklaces, headwear, and hats), and/or alter the personalized styling in the account image (e.g., makeup and/or clothing).
The fifth mode: acquiring a pre-stored image to obtain the person image.
The pre-stored image may be an image of the viewer or an image of another user, who need not be the viewer.
The sixth mode: acquiring a pre-stored image, and correcting it to obtain the person image.
In an alternative embodiment, the viewer may also alter the ornamentation of the person in the image (e.g., one or more of glasses, necklaces, headwear, and hats), and/or alter the personalized styling of the person in the image (e.g., makeup and/or clothing).
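The six modes collapse into one selection-plus-correction step; the sketch below uses hypothetical callables (`camera.capture()`, `correct`), since the patent names no APIs.

```python
def obtain_person_image(camera=None, account_image=None,
                        stored_image=None, correct=None):
    """Person image from a live capture (modes 1-2), a user-account
    image (modes 3-4), or a pre-stored image (modes 5-6), with an
    optional correction step, e.g. removing the head-mounted display
    or adjusting ornamentation and styling (the even-numbered modes)."""
    if camera is not None:
        image = camera.capture()
    elif account_image is not None:
        image = account_image
    else:
        image = stored_image
    if image is not None and correct is not None:
        image = correct(image)
    return image
```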
Step S404: displaying the second image.
In an alternative embodiment, the viewer may also save the second image so that the second image may be shared.
An optional embodiment further includes: previewing the second image, detecting an instruction to save the second image, and storing the second image.
In an alternative embodiment, step S404 may include:
obtaining a character orientation representing the viewer in the first scene; and fusing, based on the character orientation and the first orientation, the person image into the corresponding position of the image of the second part of the first scene to obtain the second image.
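A minimal compositing sketch of this fusion step, using Pillow and assuming the two orientations have already been reduced to a pixel anchor (how that anchor is computed is implementation-specific):

```python
from PIL import Image

def compose_second_image(second_part: Image.Image,
                         person: Image.Image,
                         anchor_xy: tuple) -> Image.Image:
    """Paste the person image into the image of the second part of the
    first scene at `anchor_xy`. `person` should be RGBA so that only
    the figure, not its rectangular bounding box, is blended in."""
    second = second_part.copy()
    second.paste(person, anchor_xy, mask=person)  # alpha-masked paste
    return second
```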
In an alternative embodiment, the electronic device may have a display screen. Displaying the second image may include:
the first image is switched to the second image, or the second image is displayed on the currently displayed first image in an overlapping manner, or the first image is displayed in a first area of a display area, and the second image is displayed in a second area of the display area.
In an alternative embodiment, displaying the second image superimposed on the currently displayed first image includes:
displaying the first image full screen in the display area and displaying the second image in a second area that is a partial area of the display area (the second image then blocks part of the first image);
or,
displaying the first image full screen in the display area and displaying the second image full screen in the display area (the second image then completely blocks the first image).
In an alternative embodiment, displaying the first image in a first area of the display area and the second image in a second area of the display area includes:
displaying the second image full screen in the display area and displaying the first image in a first area of the display area (the whole display area is then the second area, and the first area is a partial area of the second area);
or,
displaying the second image in a second area of the display area and the first image in a first area of the display area, where the first area and the second area are both partial areas of the display area and are different from each other.
In an alternative embodiment, the "displaying the second image" further includes: determining a second area of the display area in which the second image is displayed; determining a range for displaying a second image, and displaying the second image with the corresponding range in the second area.
Alternatively, the "determining a second area in the display area in which the second image is displayed" includes:
determining a virtual object set to have a light reflection function in the first image, or determining that a virtual device set to have a collection function in the first image is located at a virtual position of the first scene;
determining a target display area for displaying an image in the virtual object or the virtual equipment in the first image;
determining the target display area as the second area.
Alternatively, "determining the range in which the second image is displayed" includes:
determining a range in which to display a second image based on a virtual positional relationship of the virtual object with a virtual viewpoint of a viewer in the first scene and/or a virtual position of the virtual viewpoint in the first scene.
Or the like, or, alternatively,
determining a range in which a second image is displayed based on a virtual field angle of a target display area of the virtual device in a first scene and a virtual position relationship of the virtual device and a viewer in the first scene.
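For the device-based alternative, the covered range follows from elementary trigonometry; the sketch below illustrates that relationship, with the example numbers chosen arbitrarily.

```python
import math

def covered_half_width(fov_deg: float, distance: float) -> float:
    """At virtual distance `distance`, a target display area with
    horizontal field angle `fov_deg` covers a half-width of
    distance * tan(fov / 2) in the first scene; content outside that
    extent falls outside the second image."""
    return distance * math.tan(math.radians(fov_deg) / 2.0)

# Example: a 60-degree virtual camera 2.0 scene units away covers
# about 2 * covered_half_width(60, 2.0) = 2.3 scene units across.
```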
To make the process of determining the second area and the second image easier for those skilled in the art to understand, specific examples are described below.
The application scenario shown in fig. 5a is: the viewer observes the virtual mirror 51 in the first scene, and the virtual mirror 51 is a virtual object configured to reflect light.
The target display area (outlined by a dotted line) for displaying the image in the virtual mirror 51 is the second area, i.e., the second image is displayed in the target display area.
When the viewer looks at the virtual mirror 51, it can be seen that the target display area in the virtual mirror 51 displays the second image including the viewer.
It can be seen from fig. 5a that the first image is still displayed in the areas of the electronic device's display other than the target display area. That is, the first image is displayed full screen in the display area, and the second image is displayed in the second area of the display area.
In summary, fig. 5a illustrates the process of determining the second area.
Suppose the viewer moves closer to the virtual mirror 51: for example, the virtual distance between the viewer and the virtual mirror 51 in the first scene is larger in fig. 5a than in fig. 5b.
Due to the restriction of the field angle of the target display area in the virtual mirror 51, the viewer's feet cannot be shown in the virtual mirror 51, and only part of the viewer's arms can be shown.
The change in the second image displayed by the virtual mirror 51 between fig. 5a and fig. 5b shows that when the virtual positions of the virtual object and the viewer in the first scene differ, the displayed range of the second image differs.
Fig. 6a to 6e are schematic diagrams of still another implementation of images related to a first scene according to an embodiment of the present application.
The application scenario shown in fig. 6a (corresponding to fig. 2a) is: the viewer 60 observes, in the first scene, a virtual device 61 set to be capable of capturing and displaying images (assume the virtual device is a virtual mobile phone), with the virtual device 61 at the position shown in fig. 6a.
The target display area of the virtual device 61 is a second area, i.e., the second area displays a second image.
For the viewer, the first image observed by the viewer is shown in fig. 2 b.
The range in which the second image is displayed, shown in fig. 2d, is determined based on the field angle of the target display area of the virtual device 61 and the virtual positional relationship between the virtual device 61 and the viewer in the first scene.
Fig. 6b shows a schematic relationship between the first region 63 and the second region 62.
The second area 62 is a target display area of the virtual device 61 in the first scene, and the second area 62 shows a second image as shown in fig. 2 d. The first area 63 shows a first image as shown in fig. 2 b. Namely, the display area displays the first image in a full screen mode, the whole display area is the first area, and the second area is a local area of the first area.
In an alternative embodiment, the relationship between the first area 63 and the second area 62 is as shown in fig. 6c to 6e.
In fig. 6c, the display area displays the first image full screen; the whole display area is the first area 63, and the second area 62 is a partial area within the first area.
In fig. 6d, the display area displays the second image full screen; the whole display area is the second area 62, and the first area 63 is a partial area within the second area 62.
In fig. 6e, the first area 63 and the second area 62 are both partial areas of the display area 64, and the first area and the second area are different.
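The three relationships of figs. 6c to 6e can be captured by a small layout helper; the inset size and placement below are illustrative assumptions, as the patent fixes only the containment relationships.

```python
from dataclasses import dataclass
from enum import Enum, auto

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

class Layout(Enum):
    SECOND_INSIDE_FIRST = auto()  # fig. 6c: first image full screen
    FIRST_INSIDE_SECOND = auto()  # fig. 6d: second image full screen
    SIDE_BY_SIDE = auto()         # fig. 6e: two disjoint partial areas

def regions(w: int, h: int, layout: Layout) -> tuple:
    """Return (first_area, second_area) for the chosen layout."""
    full = Region(0, 0, w, h)
    inset = Region(3 * w // 4, 0, w // 4, h // 4)  # top-right thumbnail
    if layout is Layout.SECOND_INSIDE_FIRST:
        return full, inset
    if layout is Layout.FIRST_INSIDE_SECOND:
        return inset, full
    return Region(0, 0, w // 2, h), Region(w // 2, 0, w - w // 2, h)
```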
The method is described in detail in the embodiments disclosed above. The method of the present application can be implemented by various types of apparatus; therefore, an apparatus is also disclosed in the present application, and specific embodiments are described in detail below.
Fig. 7 is a structural diagram of an implementation manner of a display device according to an embodiment of the present application. The display device includes:
a first display module 71, configured to display a first image based on a first scene;
a detection module 72 for detecting whether a first condition is satisfied;
a second display module 73 for displaying a second image based on at least satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person characterizing a viewer and an image of a second part in the first scene, the first part and the second part being different.
Optionally, the second display module 73 includes:
a triggering unit configured to trigger an image switching operation in response to satisfaction of the first condition;
a switching unit configured to switch the first image of a first orientation in the first scene to the second local image of a second orientation in the first scene, wherein the first orientation is different from the second orientation;
a first acquisition unit configured to acquire a person image of the viewer;
and the first display unit is used for displaying the second image.
Optionally, the first orientation and the second orientation are opposite orientations of the same location.
Optionally, the detection module includes:
a first detection unit, configured to detect whether an input action of the viewer belongs to a preset action;
or,
a second detection unit, configured to detect whether the viewer observes, in the first image, a virtual object set to have a light-reflecting function;
or,
a third detection unit, configured to detect whether the viewer observes, in the first image, a virtual device set to be capable of capturing and displaying images.
Optionally, the first acquisition unit includes:
a first acquisition subunit, configured to acquire a real image of the viewer captured by a camera to obtain the person image;
or,
a second acquisition subunit, configured to acquire a real image of the viewer captured by a camera, and correct the real image to obtain the person image;
or,
a third acquisition subunit, configured to obtain an account image of the viewer's user account to obtain the person image;
or,
a fourth acquisition subunit, configured to obtain an account image of the viewer's user account, and correct the account image to obtain the person image.
Optionally, the first display unit includes:
a fifth obtaining subunit, configured to obtain a character orientation representing the viewer in the first scene;
and an image fusion subunit, configured to fuse, based on the character orientation and the first orientation, the person image into the corresponding position of the image of the second part of the first scene to obtain the second image.
Optionally, the second display module includes:
and the second display unit is used for displaying the second image in a second area of a display area, wherein the second area is a full-screen area or a local area of the display area.
Optionally, when the second image is displayed in a local area of the display area, the second display unit includes:
a display subunit configured to display the first image in a first area of the display area and the second image in a second area of the display area, where the first area and the second area are different local areas of the display area, or the second area is a local area within the first area, or the first area is a local area within the second area (the sketch below illustrates these layouts).
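These layouts can be illustrated with a small hypothetical helper that returns the second area as an (x, y, width, height) rectangle:

    # Hypothetical layout helper for the second area; rectangle is (x, y, w, h).
    def second_area(display_w, display_h, mode):
        if mode == "full_screen":
            return (0, 0, display_w, display_h)
        if mode == "side_by_side":   # first and second areas are disjoint
            return (display_w // 2, 0, display_w // 2, display_h)
        if mode == "inset":          # second area is a local area of the first
            return (display_w * 3 // 4, 0, display_w // 4, display_h // 4)
        raise ValueError("unknown mode: " + mode)

    print(second_area(1920, 1080, "inset"))  # (1440, 0, 480, 270)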
Fig. 8 is a structural diagram of an implementation manner of an electronic device provided in an embodiment of the present application. The electronic device includes:
a display 81 for displaying an image;
a memory 82 for storing programs;
a processor 83 configured to execute the program, the program being specifically configured to:
controlling the display to display a first image based on a first scene;
detecting whether a first condition is met;
controlling the display to display a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person representing the viewer and an image of a second part of the first scene, the first part and the second part being different.
The memory 82 may include high-speed RAM, and may also include non-volatile memory, such as at least one disk storage device.
The processor 83 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Optionally, the electronic device may further include a communication bus 84 and a communication interface 85, where the display 81, the memory 82, the processor 83, and the communication interface 85 communicate with one another through the communication bus 84.
Optionally, the communication interface 85 may be an interface of a communication module, such as an interface of a GSM module.
An embodiment of the present application further provides a readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements each step of any one of the display methods described above.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
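Before the claims, the overall flow can be tied together in one sketch that reuses the hypothetical helpers above; it is an illustration of the described steps, not the disclosed implementation.

    # End-to-end sketch of the display method; all names hypothetical.
    def display_method(display, scene, viewer, camera=None):
        # Display the first image: the first part of the first scene.
        first_image = scene.render(scene.first_orientation)
        display.show(first_image)
        # Detect the first condition (preset action, virtual mirror, virtual camera).
        if first_condition_met(viewer, first_image):
            # Render the second part of the scene at the opposite orientation,
            # obtain the person image, and fuse it in at the matching position.
            position, yaw = scene.first_orientation
            second_part = scene.render(opposite_orientation(position, yaw))
            person = obtain_person_image(viewer, camera)
            display.show(fuse(second_part, person, viewer.yaw_deg, yaw))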

Claims (10)

1. A display method, comprising:
displaying a first image based on a first scene;
detecting whether a first condition is met, where the first condition is that information indicating that the viewer needs to be shown blended into a background image has been acquired;
displaying a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person representing the viewer and an image of a second part of the first scene, enabling the viewer to see the person image blended into the first scene, the first part and the second part being different.
2. The display method according to claim 1, wherein the displaying of the second image based at least on satisfaction of the first condition comprises:
triggering an image switching operation in response to satisfaction of the first condition;
switching from the first image, which corresponds to a first orientation in the first scene, to the second image, which corresponds to a second orientation in the first scene, wherein the first orientation is different from the second orientation;
obtaining a person image of the viewer;
and displaying the second image.
3. The display method according to claim 2, wherein the first orientation and the second orientation are opposite orientations of the same position.
4. The display method according to any one of claims 1 to 3, wherein the detecting whether the first condition is met comprises:
detecting whether an input action of the viewer belongs to a preset action;
or detecting whether the viewer observes, in the first image, a virtual object that is set to be capable of reflecting light;
or detecting whether the viewer observes, in the first image, a virtual device that is set to be capable of capturing and displaying.
5. The display method according to claim 2, wherein the obtaining the person image of the viewer comprises at least one of:
acquiring a real image of the viewer captured by a camera, to obtain the person image;
or acquiring a real image of the viewer captured by a camera, and correcting the real image to obtain the person image;
or obtaining an account image of the viewer's user account, to obtain the person image;
or obtaining an account image of the viewer's user account, and correcting the account image to obtain the person image.
6. The display method according to claim 2, wherein the displaying the second image comprises:
obtaining an orientation, in the first scene, of the person representing the viewer;
and fusing, based on the person orientation and the first orientation, the person image into the corresponding position of the image of the second part of the first scene, to obtain the second image.
7. The method of claim 1, wherein displaying the second image comprises:
and displaying the second image in a second area of the display area, wherein the second area is a full-screen area or a local area of the display area.
8. The method according to claim 7, wherein the displaying the second image in a second area of the display area comprises:
displaying the first image in a first area of the display area and displaying the second image in a second area of the display area; the first area and the second area are different local areas in the display area, or the second area is a local area in the first area, or the first area is a local area in the second area.
9. A display device, comprising:
a first display module for displaying a first image based on a first scene;
a detection module configured to detect whether a first condition is met, where the first condition is that information indicating that the viewer needs to be shown blended into a background image has been acquired;
a second display module configured to display a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person representing the viewer and an image of a second part of the first scene, enabling the viewer to see the person image blended into the first scene, the first part and the second part being different.
10. An electronic device, comprising:
a display for displaying an image;
a memory for storing a program;
a processor configured to execute the program, the program specifically configured to:
controlling the display to display a first image based on a first scene;
detecting whether a first condition is met, where the first condition is that information indicating that the viewer needs to be shown blended into a background image has been acquired;
controlling the display to display a second image based at least on satisfaction of the first condition;
wherein the first image is an image corresponding to a first part of the first scene, and the second image comprises: an image of a person representing the viewer and an image of a second part of the first scene, enabling the viewer to see the person image blended into the first scene, the first part and the second part being different.
CN201810698654.4A 2018-06-29 2018-06-29 Display method and device and electronic equipment Active CN108932058B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810698654.4A CN108932058B (en) 2018-06-29 2018-06-29 Display method and device and electronic equipment
US16/457,342 US11113857B2 (en) 2018-06-29 2019-06-28 Display method and apparatus and electronic device thereof

Publications (2)

Publication Number Publication Date
CN108932058A CN108932058A (en) 2018-12-04
CN108932058B true CN108932058B (en) 2021-05-18

Family

ID=64447334

Country Status (2)

Country Link
US (1) US11113857B2 (en)
CN (1) CN108932058B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11277685B1 (en) * 2018-11-05 2022-03-15 Amazon Technologies, Inc. Cascaded adaptive interference cancellation algorithms
US11650484B1 (en) 2019-08-07 2023-05-16 Apple Inc. Electronic device with camera status indicator
WO2021134575A1 (en) * 2019-12-31 2021-07-08 深圳市大疆创新科技有限公司 Display control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106406537A (en) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Display method and device
CN106681503A (en) * 2016-12-19 2017-05-17 惠科股份有限公司 Display control method, terminal and display device
CN107329259A (en) * 2013-11-27 2017-11-07 奇跃公司 Virtual and augmented reality System and method for

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101812585B1 (en) * 2012-01-02 2017-12-27 삼성전자주식회사 Method for providing User Interface and image photographing apparatus thereof
US8548778B1 (en) * 2012-05-14 2013-10-01 Heartflow, Inc. Method and system for providing information from a patient-specific model of blood flow
KR102510395B1 (en) * 2015-12-01 2023-03-16 삼성디스플레이 주식회사 Display apparatus system
US20170289533A1 (en) * 2016-03-30 2017-10-05 Seiko Epson Corporation Head mounted display, control method thereof, and computer program
ITUA20162920A1 (en) * 2016-04-27 2017-10-27 Consiglio Nazionale Ricerche Method to correct and / or mitigate visual defects due to a degenerative retinal pathology and related system.
US10261749B1 (en) * 2016-11-30 2019-04-16 Google Llc Audio output for panoramic images
CN107517372B (en) * 2017-08-17 2022-07-26 腾讯科技(深圳)有限公司 VR content shooting method, related equipment and system
CN107589846A (en) * 2017-09-20 2018-01-16 歌尔科技有限公司 Method for changing scenes, device and electronic equipment

Also Published As

Publication number Publication date
CN108932058A (en) 2018-12-04
US11113857B2 (en) 2021-09-07
US20200005507A1 (en) 2020-01-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant