WO2023035911A1 - Display method and electronic device

Display method and electronic device

Info

Publication number: WO2023035911A1
Application number: PCT/CN2022/113692
Authority: WIPO (PCT)
Prior art keywords: image, camera, area, offset, display screen
Other languages: French (fr), Chinese (zh)
Inventors: 李昱霄, 毛春静, 沈钢
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023035911A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of electronic technology, in particular to a display method and electronic equipment.
  • VR technology is a means of human-computer interaction created with the help of computer and sensor technology.
  • VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other science and technology to create a virtual world. Users can immerse themselves in the virtual world by wearing VR wearable devices (eg, VR glasses, VR helmets, etc.).
  • the objects in the virtual world can be all fictitious objects, and can also include three-dimensional models of real objects, so that the virtual world seen by the user includes both fictitious objects and real objects, and the experience is more realistic.
  • a camera can be set on a VR wearable device to capture an image of a real object, and based on the image, a three-dimensional model of the real object can be constructed and displayed in the virtual world.
  • FIG. 1 is a schematic diagram of VR glasses.
  • the VR glasses include a camera and a display screen.
  • The camera is not set at the position of the display screen, but is usually set below the display screen, as shown in Figure 1.
  • This arrangement causes the viewing direction of the human eye to be inconsistent with the viewing direction (or shooting direction) of the camera: the shooting direction of the camera faces downward, while the viewing direction of the human eye is forward. If the images captured by the camera are displayed directly to the human eyes through the display screen, the user feels a sense of discomfort; over a long time this causes dizziness, and the experience is poor.
  • the purpose of the present application is to provide a display method and an electronic device for improving VR experience.
  • In a first aspect, a display method is provided, applied to a wearable device that includes at least one display screen and at least one camera. The method includes: presenting a first image to the user through the display screen, where at least one of the display position or the form of a first object on the first image is different from that of the first object on a second image, while the display position and form of a second object on the first image are the same as those of the second object on the second image; the second image is an image collected by the camera; the first object is in the area where the user's gaze point is located, and the second object is in an area other than the area where the user's gaze point is located.
  • the wearable device may reconstruct the viewing angle of the area where the user's gaze point is located on the second image captured by the camera, and not perform viewing angle reconstruction for areas other than the area where the user's gaze point is located.
  • In this way, the dizziness caused by the mismatch between the camera's shooting direction and the human eye's viewing direction (which arises because the display screen and the camera are at different positions) can be alleviated, and the VR experience is improved.
  • In addition, because viewing angle reconstruction is performed only on the area where the gaze point is located, the reconstruction workload is smaller, and the probability or degree of picture distortion can be reduced.
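To make the asymmetry concrete, here is a minimal sketch (not the patent's algorithm, which reconstructs the viewing angle through a 3D point cloud as described later) of producing a first image from a camera-captured second image: pixels inside the gaze area are reprojected, crudely approximated here by a fixed pixel shift, while pixels outside it are copied unchanged. All function and parameter names are illustrative.

```python
import numpy as np

def reconstruct_gaze_region(captured, gaze_xy, radius, offset_xy):
    """Return a "first image" from a camera-captured "second image": pixels inside
    the circular gaze area are read from a shifted location (a crude stand-in for
    viewpoint reconstruction), pixels outside it are copied unchanged.

    captured  : H x W x 3 array (the second image).
    gaze_xy   : (x, y) gaze point in image coordinates.
    radius    : radius of the gaze area in pixels.
    offset_xy : integer (dx, dy) shift approximating the camera-to-eye viewpoint change.
    """
    h, w = captured.shape[:2]
    out = captured.copy()                         # second object (outside the area): unchanged
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    src_x = np.clip(xs - offset_xy[0], 0, w - 1)  # first object (inside the area): shifted
    src_y = np.clip(ys - offset_xy[1], 0, h - 1)
    out[inside] = captured[src_y[inside], src_x[inside]]
    return out
```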
  • The displacement offset between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the distance between the camera and the display screen. For example, the greater the distance between the camera and the display screen, the greater the displacement offset.
  • That is, the displacement offset between the first display position and the second display position increases as the distance between the camera and the display screen increases, and decreases as that distance decreases.
  • For example, when the distance between the camera and the display screen is a first distance, the displacement offset between the first display position and the second display position is a first displacement offset; when the distance is a second distance, the displacement offset is a second displacement offset.
  • If the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset; if the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset.
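One way to see why the offset grows with the camera-to-display distance is a simple pinhole-camera approximation: an object at depth Z, viewed from two positions separated by a baseline b, shifts in the image by roughly f·b/Z pixels. The sketch below only illustrates this monotonic relationship; it is not a formula given in the patent, and the numbers are illustrative.

```python
def display_offset_px(baseline_m, depth_m, focal_px):
    """Approximate image offset (in pixels) of an object at depth_m metres when the
    viewpoint moves by baseline_m metres (the camera-to-display/eye distance), under
    a pinhole model: d = f * b / Z. The offset grows as the camera-to-display
    distance grows and shrinks as it shrinks, matching the relationship above."""
    return focal_px * baseline_m / depth_m

# e.g. camera 3 cm below the display, object 1 m away, f = 1000 px  ->  ~30 px offset
print(display_offset_px(0.03, 1.0, 1000))
```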
  • The offset direction between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the positional relationship between the camera and the display screen.
  • For example, when the camera is located on the left side of the display screen, the first object on the second image is shifted to the left to reach the position of the first object on the first image.
  • That is, the offset direction between the first display position and the second display position changes as the direction between the camera and the display screen changes: when the camera is in one direction relative to the display screen, the offset direction between the second display position and the first display position is a first direction; when the camera is in another direction relative to the display screen, the offset direction is a second direction.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area; the second offset is smaller than the first offset. That is to say, the offset of the first object at the center of the area where the user's gaze point is located is larger than that of the third object at the edge, so that a smooth transition can be achieved between the area where the user's gaze point is located and other areas.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is located in an area other than the area where the user's gaze point is located, and that area surrounds the edge of the area where the user's gaze point is located; the second offset is smaller than the first offset. That is, the offset of the first object in the area where the user's gaze point is located is larger than the offset of the third object in the surrounding peripheral area, so that the edge of the area where the user's gaze point is located transitions smoothly to other areas.
  • The degree of morphological change of the first object on the first image relative to the first object on the second image is greater than the degree of morphological change of a third object on the first image relative to the third object on the second image; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area. That is to say, from the center of the area where the user's gaze point is located toward its edge, the degree of shape change of objects becomes smaller. In this way, a smooth transition can be achieved between the area where the user's gaze point is located and other areas.
  • The degree of morphological change of the first object on the first image relative to the first object on the second image is greater than the degree of morphological change of a third object on the first image relative to the third object on the second image; the third object is located in an area other than the area where the user's gaze point is located, and that area surrounds the edge of the area where the gaze point is located. That is to say, going outward from the area where the user's gaze point is located into the surrounding peripheral area, the degree of shape change of objects becomes smaller, so that the area where the user's gaze point is located transitions smoothly to other areas.
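The patent does not fix a particular falloff function for this smooth transition. A minimal sketch, assuming a simple linear ramp of the correction strength with distance from the gaze point, could look like the following (names and numbers are illustrative assumptions):

```python
import numpy as np

def offset_weight(dist_to_gaze, region_radius, blend_width):
    """Weight in [0, 1] applied to the full viewpoint-correction offset:
    1.0 near the gaze point, ramping down to 0.0 just outside the gaze area,
    so that corrected and uncorrected regions join without a visible seam."""
    t = (dist_to_gaze - (region_radius - blend_width)) / (2.0 * blend_width)
    return float(np.clip(1.0 - t, 0.0, 1.0))

full_offset = 30.0                       # pixels of correction at the area centre
for d in (0, 80, 100, 120):              # distances from the gaze point (radius 100, blend 20)
    print(d, full_offset * offset_weight(d, 100, 20))
```

With these numbers, an object at the centre keeps the full 30-pixel offset, an object at the edge of the gaze area gets about half of it, and objects just outside the area get none, which is one way to realise the "smaller offset closer to the edge" behaviour described above.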
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and is within a first direction range of the first object, where the first direction range includes the direction of the position offset of the first object on the first image relative to the first object on the second image; the second offset is greater than the first offset.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is located in an area other than the area where the user's gaze point is located, and that area surrounds the edge of the area where the user's gaze point is located; the third object is within a first direction range of the first object, where the first direction range includes the direction of the position offset of the first object on the first image relative to the first object on the second image; the second offset is greater than the first offset.
  • For example, the offset of objects in the lower-left range of the area where the user's gaze point is located is smaller than the offset of objects in the lower-left range of the peripheral area surrounding the area where the user's gaze point is located. That is to say, going toward the lower left from the area where the user's gaze point is located, the farther away an object is, the larger its offset, so that the image in that direction can transition smoothly with other areas.
  • The first image includes a first pixel point, a second pixel point, and a third pixel point. The first pixel point and the second pixel point are located in the area where the user's gaze point is located, and the first pixel point is closer than the second pixel point to the edge of that area; the third pixel point is located in the area outside the area where the user's gaze point is located. The image information of the first pixel point lies between the image information of the second pixel point and the image information of the third pixel point.
  • That is, the image information of a pixel in the edge region of the first area (the first pixel point) takes an intermediate value between the image information of a pixel in its central region (the second pixel point) and the image information of a pixel outside the first area (the third pixel point), so that the first area transitions smoothly to other areas. For example, going from outside the first area into it, the color, brightness, resolution, etc. of the pixels change gradually.
  • the image information includes: at least one of resolution, color, brightness, and color temperature. It should be noted that the image information may also include more information, which is not limited in this embodiment of the present application.
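As an illustration of the intermediate-value idea, the sketch below blends a pixel's image information (for example an RGB colour) between the value used inside the gaze area and the value used outside it, according to the pixel's distance from the area's edge. The linear blend and the parameter names are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def blend_edge_pixel(inside_value, outside_value, dist_to_edge, blend_width):
    """Image information (e.g. an RGB colour or a brightness value) for a pixel near
    the edge of the gaze area: an intermediate value between the value used inside
    the area (inside_value) and the value used outside it (outside_value).
    dist_to_edge > 0 means the pixel lies inside the area, < 0 outside."""
    alpha = float(np.clip(0.5 + dist_to_edge / (2.0 * blend_width), 0.0, 1.0))
    return alpha * np.asarray(inside_value, float) + (1 - alpha) * np.asarray(outside_value, float)

# A "first pixel" exactly on the edge gets the midpoint of the inside ("second pixel")
# and outside ("third pixel") values:
print(blend_edge_pixel([200, 180, 160], [120, 110, 100], dist_to_edge=0, blend_width=10))
```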
  • the at least one camera includes a first camera and a second camera
  • the at least one display screen includes a first display screen and a second display screen
  • the first display screen is configured to display the image collected by the first camera
  • the second display screen is configured to display the image collected by the second camera
  • If the positions of the first display screen and the first camera are different, the first object on the image displayed on the first display screen differs in at least one of display position or form from the first object on the image captured by the first camera, while the second object on the image displayed on the first display screen and the second object on the second image captured by the first camera have the same display position and form. If the positions of the second display screen and the second camera are different, the first object on the image displayed on the second display screen differs in at least one of display position or form from the first object on the image captured by the second camera, and the second object on the image displayed on the second display screen and the second object on the image captured by the second camera have the same display position and form.
  • The form of the first object on the first image being different from that of the first object on the second image includes: the edge contour of the first object on the second image is smoother than the edge contour of the first object on the first image. Because the first object on the first image has undergone perspective reconstruction, its edge may be uneven, while the first object on the second image has not undergone perspective reconstruction, so its edge is smooth. Since the first object has been reconstructed from the viewing angle, the user does not feel dizzy when seeing the first object while wearing the wearable device (the dizziness otherwise caused by the mismatch between the camera's shooting angle and the human eye's viewing angle due to the positions of the display screen and the camera), which improves the VR experience.
  • In a second aspect, a display method is also provided, applied to a wearable device that includes at least one display screen, at least one camera, and a processor. The camera is configured to transmit the image it collects to the processor, and the image is displayed on the display screen via the processor. The method includes: displaying a first image to the user through the display screen, where at least one of the display position or form of a first object on the first image is different from that of the first object on a second image, and the display position and form of a second object on the first image are the same as those of the second object on the second image; the second image is the image collected by the camera; the first object is located in the area where the user's gaze point is located, and the second object is located in an area other than the area where the user's gaze point is located.
  • the wearable device may reconstruct the viewing angle of the area where the user's gaze point is located on the second image captured by the camera, and not perform viewing angle reconstruction for areas other than the area where the user's gaze point is located.
  • That is, if another camera were set at the location of the camera, the image collected by that other camera would be the same as the image collected by the camera; in other words, the image observed at the location of the camera (whether observed by a person or captured by another camera) is the same as the image collected by the camera.
  • The displacement offset between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the distance between the camera and the display screen.
  • The displacement offset between the first display position and the second display position increases as the distance between the camera and the display screen increases, and decreases as that distance decreases.
  • When the distance between the camera and the display screen is a first distance, the displacement offset between the first display position and the second display position is a first displacement offset; when the distance is a second distance, the displacement offset is a second displacement offset.
  • If the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset; if the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset.
  • The offset direction between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the positional relationship between the camera and the display screen.
  • The offset direction between the first display position and the second display position changes as the direction between the camera and the display screen changes: when the camera is in one direction relative to the display screen, the offset direction between the first display position and the second display position is a first direction; when the camera is in another direction relative to the display screen, the offset direction is a second direction.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area; the second offset is smaller than the first offset.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in a first area, the first area is outside the area where the user's gaze point is located and surrounds the edge of that area; the second offset is smaller than the first offset.
  • The degree of morphological change of the first object on the first image relative to the first object on the second image is greater than the degree of morphological change of a third object on the first image relative to the third object on the second image; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area.
  • The degree of morphological change of the first object on the first image relative to the first object on the second image is greater than the degree of morphological change of a third object on the first image relative to the third object on the second image; the third object is in a first area, and the first area is outside the area where the user's gaze point is located and surrounds the edge of that area.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and is within a first direction range of the first object, where the first direction range includes the direction of the position offset of the first object on the first image relative to the first object on the second image; the second offset is greater than the first offset.
  • The position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in a first area, the first area is outside the area where the user's gaze point is located and surrounds the edge of that area; the third object is within a first direction range of the first object, where the first direction range includes the direction of the position offset of the first object on the first image relative to the first object on the second image; the second offset is greater than the first offset.
  • The first image includes a first pixel point, a second pixel point, and a third pixel point. The first pixel point and the second pixel point are located in the area where the user's gaze point is located, and the first pixel point is closer than the second pixel point to the edge of that area; the third pixel point is located in the area outside the area where the user's gaze point is located. The image information of the first pixel point lies between the image information of the second pixel point and the image information of the third pixel point.
  • That is, the image information of a pixel in the edge region of the first area (the first pixel point) takes an intermediate value between the image information of a pixel in its central region (the second pixel point) and the image information of a pixel outside the first area (the third pixel point), so that the edge of the first area transitions smoothly.
  • the image information includes: at least one of resolution, color, brightness, and color temperature.
  • image information may also include more information, which is not limited in this embodiment of the present application.
  • the at least one camera includes a first camera and a second camera
  • the at least one display screen includes a first display screen and a second display screen
  • the first display screen is configured to display the image collected by the first camera
  • the second display screen is configured to display the image collected by the second camera
  • If the positions of the first display screen and the first camera are different, the first object on the image displayed on the first display screen differs in at least one of display position or form from the first object on the image captured by the first camera, while the second object on the image displayed on the first display screen and the second object on the second image captured by the first camera have the same display position and form. If the positions of the second display screen and the second camera are different, the first object on the image displayed on the second display screen differs in at least one of display position or form from the first object on the image captured by the second camera, and the second object on the image displayed on the second display screen and the second object on the image captured by the second camera have the same display position and form.
  • the technical solutions provided in the embodiments of the present application can be applied to wearable devices including two display screens and two cameras.
  • The form of the first object on the first image being different from that of the first object on the second image includes: the edge contour of the first object on the second image is smoother than the edge contour of the first object on the first image.
  • An electronic device is provided, including: a processor, a memory, and one or more programs; the one or more programs are stored in the memory and include instructions which, when executed by the processor, cause the electronic device to perform the method steps described in the first aspect or the second aspect above.
  • A computer-readable storage medium is further provided, which is used to store a computer program; when the computer program is run on a computer, the computer is caused to execute the method described in the first aspect or the second aspect above.
  • a computer program product including a computer program, which, when the computer program is run on a computer, causes the computer to execute the method as described in the first aspect or the second aspect above.
  • A graphical user interface on an electronic device is further provided; the electronic device has a display screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory; the graphical user interface includes a graphical user interface displayed when the electronic device executes the method described in the first aspect or the second aspect.
  • The embodiment of the present application further provides a chip; the chip is coupled with the memory in the electronic device, and is used to call the computer program stored in the memory and execute the technical solutions of the first aspect to the second aspect of the embodiments of the present application. "Coupling" in the embodiments of the present application means that two components are directly or indirectly combined with each other.
  • FIG. 1 is a schematic diagram of VR glasses provided by an embodiment of the present application.
  • FIG. 2A is a schematic diagram of a VR system provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of a VR wearable device provided by an embodiment of the present application.
  • FIG. 2C is a schematic diagram of eye tracking provided by an embodiment of the present application.
  • Fig. 3 is a schematic structural diagram of a human eye provided by an embodiment of the present application.
  • FIG. 4A is a schematic diagram of naked-eye observation of an object provided by an embodiment of the present application.
  • FIG. 4B is a schematic diagram of human eyes wearing VR glasses to observe objects provided by an embodiment of the present application.
  • FIG. 4C is a schematic diagram of human eyes wearing VR glasses to observe objects provided by an embodiment of the present application.
  • FIGS. 5A to 5B are schematic diagrams of an application scenario provided by an embodiment of the present application.
  • FIGS. 6A to 6B are schematic diagrams of a visual reconstruction process provided by an embodiment of the present application.
  • FIGS. 7 to 8 are schematic diagrams of visual reconstruction provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first coordinate system and a second coordinate system provided by an embodiment of the present application.
  • FIG. 10 to FIG. 11 are schematic diagrams of viewing angle reconstruction in the first area provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a display method provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Fig. 14 is a schematic diagram of a planar two-dimensional image provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a convergence angle provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of converting a plane two-dimensional image into a three-dimensional point cloud according to an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a virtual camera provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an image before reconstruction and an image after reconstruction provided by an embodiment of the present application.
  • Fig. 19 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more.
  • Words such as "first" and "second" are only used for the purpose of distinguishing descriptions, and cannot be understood as expressing or implying relative importance, nor as expressing or implying order.
  • the first area and the second area do not represent the importance of the two, or represent the order of the two, but are only for distinguishing the areas.
  • "and/or” is just a kind of relationship describing the relationship between related objects, which means that there may be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, and A and B exist at the same time. B, there are three situations of B alone.
  • the character "/" in this article generally indicates that the contextual objects are an "or" relationship.
  • VR technology is a means of human-computer interaction created with the help of computer and sensor technology.
  • VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other science and technology to create a virtual environment.
  • the virtual environment includes three-dimensional realistic images generated by computers and dynamically played in real time to bring visual perception to users; moreover, in addition to the visual perception generated by computer graphics technology, there are also perceptions such as hearing, touch, force, and movement.
  • the user can see the VR game interface by wearing the VR wearable device, and can interact with the VR game interface through gestures, handles, and other operations, as if in a game.
  • Augmented Reality (AR) technology refers to superimposing computer-generated virtual objects on real-world scenes to enhance the real world.
  • AR technology needs to collect real-world scenes, and then add a virtual environment to the real world.
  • VR technology creates a completely virtual environment, and everything the user sees is virtual; AR technology superimposes virtual objects on the real world, so what the user sees includes both real-world objects and virtual objects.
  • the user wears transparent glasses, through which the real environment around can be seen, and virtual objects can also be displayed on the glasses, so that the user can see both real objects and virtual objects.
  • Mixed reality (MR) technology builds a bridge of interactive feedback information between the virtual environment, the real world, and the user by introducing real-scene information into the virtual environment, thereby enhancing the realism of the user experience.
  • For example, the real object is virtualized (for example, a camera scans the real object for 3D reconstruction to generate a virtual object), and the virtualized real object is introduced into the virtual environment, so that the user can see the real object in the virtual environment.
  • FIG. 2A is a schematic diagram of a VR system according to an embodiment of the present application.
  • the VR system includes a VR wearable device 100 and an image processing device 200 .
  • the image processing device 200 may include a host (such as a VR host) or a server (such as a VR server).
  • the VR wearable device 100 is connected (wired connection or wireless connection) with a VR host or a VR server.
  • the VR host or VR server may be a device with relatively large computing capabilities.
  • the VR host can be a device such as a mobile phone, a tablet computer, or a notebook computer, and the VR server can be a cloud server, etc.
  • the VR wearable device 100 may be a head mounted device (Head Mounted Display, HMD), such as glasses, a helmet, and the like.
  • the VR wearable device 100 is provided with at least one camera and at least one display screen.
  • Two display screens, i.e., a display screen 110 and a display screen 112, are set on the VR wearable device 100 as an example.
  • the display screen 110 is used to display images to the user's right eye.
  • the display screen 112 is used to present images to the user's left eye.
  • the display screen 110 and the display screen 112 are wrapped inside the VR glasses, so the arrows indicating the display screen 110 and the display screen 112 in FIG. 2A are represented by dotted lines.
  • the display screen 110 and the display screen 112 may be two independent display screens or may be two different display areas on the same display screen, which is not limited in this application.
  • Two cameras, i.e., a camera 120 and a camera 122, are set on the VR wearable device 100 as an example.
  • the camera 120 and the camera 122 are respectively used to collect images of the real world.
  • the image collected by the camera 120 can be displayed through the display screen 110 .
  • Images collected by the camera 122 can be displayed on the display screen 112 .
  • The human eyes are located close to the display screens: the right eye is close to the display screen 110 to view the images on the display screen 110, and the left eye is close to the display screen 112 to view the images on the display screen 112.
  • the shooting angle of view of the camera 120 is different from that of the right eye, and the shooting angle of view of the camera 122 is different from that of the left eye.
  • Displaying images captured from the shooting angle of view of the camera 120 (or the camera 122) directly to the user can therefore cause discomfort; over a long time this can cause dizziness, and the experience is poor.
  • the VR wearable device 100 may send the image collected by the camera to the image processing device 200 for processing.
  • the image processing device 200 uses the perspective reconstruction scheme provided in this application to reconstruct the perspective of the image (the specific implementation process will be introduced later), and sends the reconstructed image to the VR wearable device 100 for display.
  • the VR wearable device 100 sends the image 1 captured by the camera 120 to the image processing device 200 to perform perspective reconstruction to obtain an image 2, and then the display screen 110 displays the image 2.
  • the VR wearable device 100 sends the image 3 collected by the camera 122 to the image processing device 200 to perform perspective reconstruction to obtain an image 4 , and then the display screen 112 displays the image 4 .
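Put together, the per-frame flow described above (camera 120 → image 1 → reconstruction → image 2 → display screen 110, and camera 122 → image 3 → reconstruction → image 4 → display screen 112) might be organised as in the sketch below. The object names and the `reconstruct` call are hypothetical stand-ins; the reconstruction itself can run on the image processing device 200 or locally on the wearable device.

```python
# Hypothetical per-frame flow; camera/display/processor objects are illustrative stand-ins.
def render_frame(camera_120, camera_122, processor, display_110, display_112):
    image_1 = camera_120.capture()            # right-eye camera
    image_3 = camera_122.capture()            # left-eye camera
    # Perspective reconstruction, on the image processing device 200 or on-device.
    image_2 = processor.reconstruct(image_1, eye="right")
    image_4 = processor.reconstruct(image_3, eye="left")
    display_110.show(image_2)                 # right-eye display
    display_112.show(image_4)                 # left-eye display
```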
  • the VR system in FIG. 2A may not include the image processing device 200 .
  • the VR wearable device 100 locally has image processing capabilities (for example, the ability to reconstruct the viewing angle of images), and does not need to be processed by the image processing device 200 (VR host or VR server).
  • the following takes the VR wearable device 100 to locally perform perspective reconstruction as an example for illustration, and the following mainly takes the VR wearable device 100 as VR glasses as an example.
  • FIG. 2B shows a schematic structural diagram of a VR wearable device 100 provided by an embodiment of the present application.
  • the VR wearable device 100 may include a processor 111, a memory 101, a sensor module 130 (which may be used to obtain the user's posture), a microphone 140, a button 150, an input and output interface 160, a communication module 170, a camera 180, battery 190 , optical display module 1100 , eye tracking module 1200 and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the VR wearable device 100 .
  • the VR wearable device 100 may include more or fewer components than shown in the illustration, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 111 is generally used to control the overall operation of the VR wearable device 100, and may include one or more processing units, for example: the processor 111 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), image signal processor (image signal processor, ISP), video processing unit (video processing unit, VPU) controller, memory, video codec, digital signal processor (digital signal processor, DSP ), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 111 for storing instructions and data.
  • the memory in processor 111 is a cache memory.
  • the memory may hold instructions or data that the processor 111 has just used or recycled. If the processor 111 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 111 is reduced, thus improving the efficiency of the system.
  • the processor 111 may be used to control the optical power of the VR wearable device 100 .
  • the processor 111 may be used to control the optical power of the optical display module 1100 to realize the function of adjusting the optical power of the wearable device 100 .
  • For example, the processor 111 can adjust the relative positions of the optical devices (such as lenses), so that the position of the virtual image plane formed for the human eye can be adjusted; in this way, the effect of controlling the optical power of the wearable device 100 is achieved.
  • processor 111 may include one or more interfaces.
  • The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI) interface, etc.
  • the processor 111 may perform blurring processing to different degrees on objects at different depths of field, so that objects at different depths of field have different sharpness.
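As a rough illustration of depth-dependent blurring (the patent only states the goal, not a method), the sketch below assumes OpenCV is available, a depth map normalised to [0, 1], and a handful of blur levels; the thresholds and kernel sizes are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available

def depth_of_field_blur(image, depth_map, focus_depth):
    """Blur objects more the farther their (normalised, 0..1) depth is from
    focus_depth, so objects at different depths of field have different sharpness."""
    out = image.copy()
    error = np.abs(depth_map - focus_depth)          # 0 = in focus
    # Three illustrative blur levels: mildly, moderately and strongly out of focus.
    for lower, upper, kernel in [(0.2, 0.5, 5), (0.5, 0.8, 9), (0.8, np.inf, 15)]:
        mask = (error >= lower) & (error < upper)
        blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)
        out[mask] = blurred[mask]
    return out
```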
  • The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 111 may include multiple sets of I2C buses.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 111 and the communication module 170 .
  • the processor 111 communicates with the Bluetooth module in the communication module 170 through the UART interface to realize the Bluetooth function.
  • the MIPI interface can be used to connect the processor 111 with the display screen in the optical display module 1100 , the camera 180 and other peripheral devices.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 111 with the camera 180 , the display screen in the optical display module 1100 , the communication module 170 , the sensor module 130 , the microphone 140 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the camera 180 can capture images including real objects, and the processor 111 can fuse the images captured by the camera with the virtual objects, and display the fused images through the optical display module 1100 .
  • the camera 180 can also capture images including human eyes.
  • the processor 111 performs eye tracking through the images.
  • the USB interface is an interface that conforms to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface can be used to connect a charger to charge the VR wearable device 100, and can also be used to transmit data between the VR wearable device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as mobile phones.
  • the USB interface may be USB3.0, which is compatible with high-speed display port (DP) signal transmission, and can transmit video and audio high-speed data.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the wearable device 100 .
  • the wearable device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the VR wearable device 100 may include a wireless communication function, for example, the VR wearable device 100 may receive images from other electronic devices (such as a VR host) for display.
  • the communication module 170 may include a wireless communication module and a mobile communication module.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), and a baseband processor (not shown).
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the VR wearable device 100, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • The mobile communication module can provide wireless communication solutions applied on the VR wearable device 100, including second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • At least part of the functional modules of the mobile communication module may be set in the processor 111 . In some embodiments, at least part of the functional modules of the mobile communication module and at least part of the modules of the processor 111 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, etc.), or displays images or videos through the display screen in the optical display module 1100 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 111, and be set in the same device as the mobile communication module or other functional modules.
  • The wireless communication module can provide wireless communication solutions applied on the VR wearable device 100, such as wireless local area networks (WLAN) (for example, wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves through the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 111 .
  • the wireless communication module can also receive the signal to be sent from the processor 111 , frequency-modulate it, amplify it, and convert it into electromagnetic wave and radiate it through the antenna.
  • the antenna of the VR wearable device 100 is coupled to the mobile communication module, so that the VR wearable device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the VR wearable device 100 realizes the display function through the GPU, the optical display module 1100 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the optical display module 1100 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 111 may include one or more GPUs that execute program instructions to generate or change display information.
  • the memory 101 may be used to store computer-executable program code including instructions.
  • the processor 111 executes various functional applications and data processing of the VR wearable device 100 by executing instructions stored in the memory 101 .
  • the memory 101 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the data storage area can store data created during use of the wearable device 100 (such as audio data, phonebook, etc.) and the like.
  • the memory 101 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash memory (universal flash storage, UFS) and the like.
  • the VR wearable device 100 can implement audio functions through an audio module, a speaker, a microphone 140, an earphone interface, and an application processor. Such as music playback, recording, etc.
  • the audio module is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be set in the processor 111 , or some functional modules of the audio module may be set in the processor 111 . Loudspeakers, also called “horns", are used to convert audio electrical signals into sound signals.
  • the wearable device 100 can listen to music through the speaker, or listen to hands-free calls.
  • The microphone 140, also called a "mic", is used to convert sound signals into electrical signals.
  • the VR wearable device 100 may be provided with at least one microphone 140 .
  • the VR wearable device 100 can be provided with two microphones 140, which can also implement a noise reduction function in addition to collecting sound signals.
  • the VR wearable device 100 can also be provided with three, four or more microphones 140 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the headphone jack is used to connect wired headphones.
  • The headphone interface can be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the VR wearable device 100 may include one or more buttons 150 , and these buttons may control the VR wearable device and provide users with access to functions on the VR wearable device 100 .
  • Keys 150 may be in the form of buttons, switches, dials, and touch or near-touch sensing devices such as touch sensors. Specifically, for example, the user can turn on the optical display module 1100 of the VR wearable device 100 by pressing a button.
  • the keys 150 include a power key, a volume key and the like.
  • the key 150 may be a mechanical key. It can also be a touch button.
  • the wearable device 100 can receive key input and generate key signal input related to user settings and function control of the wearable device 100 .
  • the VR wearable device 100 may include an input-output interface 160, and the input-output interface 160 may connect other devices to the VR wearable device 100 through suitable components.
  • Components may include, for example, audio/video jacks, data connectors, and the like.
  • the optical display module 1100 is used for presenting images to the user under the control of the processor 111 .
  • The optical display module 1100 can convert the real pixel image display into a near-eye projected virtual image display through one or several optical devices such as mirrors, transmission mirrors, or optical waveguides, so as to realize a virtual interactive experience or an interactive experience combining virtuality and reality.
  • the optical display module 1100 receives image data information sent by the processor 111 and presents corresponding images to the user.
  • the VR wearable device 100 may further include an eye tracking module 1200, which is used to track the movement of human eyes, and then determine the point of gaze of the human eyes.
  • the position of the pupil can be located by image processing technology, the coordinates of the center of the pupil can be obtained, and then the gaze point of the person can be calculated.
  • The eye tracking system can determine the position of the user's fixation point (or determine the direction of the user's line of sight) through methods such as the video eye-image method, the photodiode response method, or the pupil-corneal reflection method, so as to track the user's eye movement.
  • the eye tracking system may include one or more near-infrared light-emitting diodes (Light-Emitting Diode, LED) and one or more near-infrared cameras.
  • the NIR LED and NIR camera are not shown in Figure 2B.
  • the near-infrared LEDs can be positioned around the eyepiece so as to fully illuminate the human eye.
  • the near-infrared LED may have a center wavelength of 850 nm or 940 nm.
  • For example, the eye tracking system can obtain the user's line-of-sight direction as follows: the human eye is illuminated by a near-infrared LED, and a near-infrared camera captures an image of the eyeball; then, according to the position in the eyeball image of the reflection of the near-infrared LED on the cornea (i.e., the image of the LED spot on the near-infrared camera in FIG. 2C) and the position of the pupil center (i.e., the image of the pupil center on the near-infrared camera in FIG. 2C), the direction of the optical axis of the eyeball is determined, thereby obtaining the direction of the user's line of sight.
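A highly simplified sketch of the pupil-corneal-reflection idea: take the 2D vector from the corneal glint (the LED reflection) to the pupil centre in the eye camera image, and map it to a gaze point through a per-user calibration. The mapping form and all numbers below are assumptions for illustration; a real implementation models the eyeball's optical axis in 3D as described above.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_px, glint_px, calib_matrix, calib_bias):
    """Simplified pupil-corneal-reflection mapping: the 2D vector from the corneal
    glint (the near-infrared LED reflection) to the pupil centre, passed through a
    per-user calibration, yields a gaze point in display coordinates.
    calib_matrix (2x2) and calib_bias (2,) would come from a calibration step."""
    v = np.asarray(pupil_px, float) - np.asarray(glint_px, float)
    return calib_matrix @ v + calib_bias

# Illustrative numbers only: pupil at (412, 300), glint at (400, 296).
print(gaze_from_pupil_and_glint((412, 300), (400, 296), np.eye(2) * 20, np.array([960.0, 540.0])))
```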
  • eye-tracking systems corresponding to the two eyes of the user may be set respectively, so as to perform eye-tracking on the two eyes synchronously or asynchronously.
  • Alternatively, an eye-tracking system can be set near only one eye; the line-of-sight direction of that eye is obtained through the eye-tracking system, and, according to the relationship between the gaze points of the two eyes (for example, when the user observes an object with both eyes, the fixation point positions of the two eyes are generally similar or the same), combined with the user's interocular distance, the line-of-sight direction or fixation point position of the other eye can be determined.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the VR wearable device 100 .
  • The VR wearable device 100 may include more or fewer components than those shown in FIG. 2A, or combine certain components, or split certain components, or arrange the components differently; this is not limited here.
  • Figure 3 is a schematic diagram of the composition of the human eye.
  • the human eye can include a lens, a ciliary muscle, and a retina located in the fundus.
  • the lens can function as a zoom lens to converge the light rays entering the human eye, so that the incident light rays can be converged on the retina of the human eye fundus, so that the scene in the actual scene can form a clear image on the retina.
  • the ciliary muscle can be used to adjust the shape of the lens.
  • the ciliary muscle can adjust the diopter of the lens by contracting or relaxing, so as to achieve the effect of adjusting the focal length of the lens. Therefore, objects at different distances in the actual scene can be clearly imaged on the retina through the lens.
  • in the real world, when a user (not wearing VR glasses) views an object, the perspectives of the left eye and the right eye are different.
  • the user's brain can determine the depth of the object based on the parallax of the same object in the left and right eyes, so the world seen by the human eye is three-dimensional.
  • the real world includes an observed object 400 (take a triangle as an example).
  • the left eye captures an image 401 , in which the triangle is located at the position (A1, B1).
  • the right eye captures an image 402, in which the triangle is located at the position (A2, B2).
  • the brain can determine the position of the object in the real world through the pixel difference (or parallax) of the same object (such as a triangle) on the image 401 and the image 402 .
  • the brain determines that the position of the triangle in the real world is (A3, B3, L1), where L1 is the Depth, which is the distance between the triangle and the user's eyes. That is to say, the distance between the triangle seen by the user without VR glasses and the user's eyes is L1.
  • that is, the distance between the triangle and the user's eyes in the real world is equal to the distance between the triangle perceived by the brain and the user's eyes.
  • the user keeps his position still and puts on the VR glasses, and watches the same observed object 400 (ie, a triangle) through the VR glasses.
  • the position of the observed object 400 seen by the user wearing the VR glasses is different from the position of the observed object 400 seen when the user does not wear the VR glasses.
  • the camera 120 on the VR glasses is located at the lower right of the display screen 110, and the camera 122 is located at the lower left of the display screen 112, so the distance B' between the two cameras is greater than the interocular distance B of the human eye.
  • the camera 122 is further to the left than the person's left eye, and the camera 120 is further to the right than the person's right eye.
  • the triangle on the image 422 collected by the camera 122 is located at the position (A1', B1').
  • the position of the triangle on the image 422 collected by the camera 122 is further to the right than the position of the triangle on the image 401 (see FIG. 4A) collected by the left eye when not wearing VR glasses; that is, (A1', B1') is to the right of (A1, B1).
  • the triangle on the image 420 collected by the camera 120 is located at the position (A2', B2'). Since the camera 120 is further to the right than the person's right eye, the position of the triangle on the image 420 collected by the camera 120 is further to the left than the position of the triangle on the image 402 (see FIG. 4A) collected by the right eye when not wearing VR glasses; that is, (A2', B2') is to the left of (A2, B2).
  • the object seen when wearing VR glasses is closer to the user than the object seen when not wearing VR glasses.
  • for example, without VR glasses the human eye sees an object that is 1 meter away from the user, but when the user wears the VR glasses, the same object appears to be 0.7 meters away, which is closer to the user and inconsistent with the real situation.
  • one observed object is taken as an example (that is, a triangle), and below, two observed objects are taken as an example, as shown in FIG. 4C , an observed object 400 (triangle) and an observed object 401 (square).
  • take the square as an observed object located at an infinite distance, such as the sun. If the VR glasses are not worn, the image 460 should be seen by the left eye, and the image 470 should be seen by the right eye. Since the square is at infinity (and effectively equidistant from the left and right eyes), it appears at the center of both the image 460 and the image 470. In this way, the brain can see the real environment based on the image 460 and the image 470.
  • the distance between the triangle and the square on the image 480 observed at the position of the camera 122 is larger than the distance between the triangle and the square on the image 460 observed at the position of the left eye .
  • similarly, the distance between the triangle and the square on the image 490 observed at the position of the camera 120 is larger than the distance between the triangle and the square on the image 470 observed at the position of the right eye. Therefore, when wearing the VR glasses, the triangle seen by the brain based on the image 480 and the image 490 is closer to the user, which is inconsistent with the real situation.
  • the description above takes as an example the case where the camera 120 is located at the lower right of the display screen 110 and the camera 122 is located at the lower left of the display screen 112 on the VR glasses. It can be understood that, in other embodiments, the camera 120 and the camera 122 can also be located at other positions; for example, the camera 120 is located above the display screen 110 and the camera 122 is located above the display screen 112, or the distance between the two cameras is smaller than the distance between the two display screens, and so on. As long as the position of a camera is different from that of the corresponding display screen, the distance between an object and the human eye perceived when wearing the VR glasses will differ from the distance seen when not wearing the VR glasses.
  • the application scenario takes a user wearing VR glasses to play a game at home as an example, as shown in FIG. 5A .
  • the VR glasses can show the real scene to the user, so when the user wears the VR glasses, he can see the environment at home, such as sofas and tables at home.
  • the VR glasses can display real scenes and virtual objects to the user, so when the user wears the VR glasses, what he will see is the environment and virtual objects at home (such as game characters, game interfaces, etc., the virtual Objects are not objects in the real scene), in this way, users can play virtual games in a familiar environment, and the experience is better.
  • as shown in FIG. 5B, what the user sees when not wearing VR glasses should be the real world 501 shown in (a) in FIG. 5B.
  • when the user wears the VR glasses, what the human eyes see is the virtual world 502 shown in (b) in FIG. 5B. It can be seen that all objects in the virtual world 502 are closer to the user, especially objects that are already close to the user in the real world, such as a table. After the user wears the VR glasses, the table appears even closer to the user, which is inconsistent with the real situation.
  • angle reconstruction can be simply understood as angle adjustment/reconstruction, etc.
  • perspective reconstruction refers to adjusting the shooting perspective of the camera to the observation perspective of the human eye.
  • the shooting angle of the camera is difficult to adjust, for example, if the camera is fixed at a certain position on the VR glasses, adjusting the shooting angle requires corresponding hardware/mechanical structure, which is not only expensive, but also not conducive to thinning the device.
  • image perspective reconstruction is to adjust the display position of pixels on the image collected by the camera, so that the objects seen by the human eye based on the adjusted image conform to the real situation.
  • the image perspective reconstruction may include adjusting the position (A1', B1') of the triangle in the image 422 in FIG. 4B to the position (A1, B1), and adjusting the position (A2', B2') of the triangle in the image 420 to the position (A2, B2), so that the reconstructed images correspond to the image 401 and the image 402 seen by the naked eyes.
  • the screen of the VR glasses can display the reconstructed image (ie display image 401 and image 402 ), so that the human brain can accurately determine the real position of the object (ie the triangle) based on the image 401 and image 402 .
  • when performing perspective reconstruction on an image, perspective reconstruction may be performed on the entire image (which may be called global perspective reconstruction).
  • the scene in FIG. 5A is taken as an example, and the reconstruction of the global viewing angle of an image captured by a camera on the VR glasses is taken as an example.
  • the image collected by the camera is the image of (a) in FIG. 6A .
  • the image is divided into four regions, which are region 601 , region 602 , region 603 and region 604 .
  • the display positions of the regions 602 and 604 move down after the viewing angle reconstruction, and the display positions of the regions 601 and 603 move up after the viewing angle reconstruction.
  • the complete image formed by the four regions after perspective reconstruction is shown in (c) in Figure 6A. It can be seen that objects such as walls, sofas, and tables are deformed (or distorted, dislocated, etc.).
  • Figure 6A is an example of dividing the image into four regions for viewing angle reconstruction.
  • the regions on the image may be divided at a finer granularity, for example into 9, 16, or a larger number of regions; the perspective may even be reconstructed for each pixel individually.
  • the deformation of the object on the image becomes more serious when the perspective is reconstructed for a finer-grained area or for each pixel.
  • the wall surface is distorted (for example, distorted in a wavy line) and the edge of the table is also distorted (for example, distorted in a wavy line) on the image reconstructed from the global perspective. Therefore, the global viewing angle reconstruction solution not only has a huge workload, but also the picture is seriously distorted after the viewing angle reconstruction, which has a great impact on user experience.
  • perspective reconstruction does not need to be performed on the entire image.
  • the first area may be the area where the user's gaze point is located, the user's interest area, the default area, the user-specified area, and the like.
  • performing perspective reconstruction on the first region on the image may be referred to as regional perspective reconstruction. Since the viewing angle reconstruction is only performed on the first area and not on the second area, the workload is reduced, and as mentioned above, the viewing angle reconstruction may cause picture distortion.
  • the image in the second area will not be distorted. That is to say, the probability or degree of image distortion in regional perspective reconstruction is much lower than that in global perspective reconstruction, which helps to improve the picture distortion in global perspective reconstruction.
  • the image captured by the VR glasses is the image shown in (a) in FIG. 7 .
  • the area where the user's gaze point is located is the area surrounded by the dotted line.
  • the viewing angle is reconstructed for this area only and is not reconstructed for the other areas, so the display positions and/or shapes of objects in the other areas do not change, as shown in (b) in FIG. 7. Therefore, the degree of distortion of objects on the image after this perspective reconstruction is obviously lower than that of objects on the image after global perspective reconstruction.
  • Figure 6B is the reconstructed image from the global perspective
  • Figure 7 is the reconstructed image using the technical solution of this application
  • the area surrounded by the dotted line may be the smallest circumscribing rectangle of the table, or an area greater than or equal to the smallest circumscribing rectangle of the table. It can be understood that the area may also be the smallest circumscribed square, the smallest circumscribed circle, etc. of the table; the shape is not limited. In some other embodiments, the area where the user's gaze point is located may also be a partial area of the table.
  • FIG. 7 takes the reconstruction of the regional perspective of an image captured by a camera as an example. It can be understood that when the VR glasses include two cameras, the reconstruction of the regional perspective of the image captured by each camera can be performed separately.
  • the camera 122 on the VR glasses captures an image 622
  • the camera 120 captures an image 620
  • the VR glasses can reconstruct the area angle of view of the dotted line area on the image 622 to obtain an image 624 .
  • the display position and/or shape of the object in the dotted line area in image 624 and the object in the dotted line area in image 622 are different.
  • the display position of the table in image 624 is to the left of the display position of the table in image 622, and/or the table is deformed to a certain extent.
  • the VR glasses can also perform regional perspective reconstruction on the dotted line area on the image 620 to obtain an image 626 .
  • Objects within the dotted area in image 626 are displayed in different positions and/or shapes from those within the dotted area in image 620 .
  • the display position of the table in image 626 is to the right of the display position of the table in image 620, and/or the table is deformed to a certain extent.
  • perspective reconstruction is not performed, so objects in other areas on image 626 have the same display position and shape as objects in other areas on image 620.
  • the display screen 112 of the VR glasses displays an image 624
  • the display screen 110 displays an image 626.
  • the left eye sees the image 624
  • the right eye sees the image 626.
  • based on the parallax of the table on the image 624 and the image 626, the depth information of the table determined by the brain is accurate. This is because the display positions of the table on the image 624 and the image 626 have been adjusted so that the parallax of the table on the two images becomes smaller; based on the smaller parallax, the determined depth is larger, so the user will no longer feel that the table is approaching the user, and the scene seen is in line with the real situation.
  • the first coordinate system (X1-O1-Y1) is a coordinate system established based on the display screen.
  • the first coordinate system takes the center of the display screen 112 as the coordinate origin, and the display direction is the Y-axis direction.
  • the first coordinate system may also be a coordinate system established based on human eyes, for example, the first coordinate system established based on the left eye in FIG. 9 . Considering that it is difficult to establish a coordinate system based on the human eye, it is less difficult to create a coordinate system based on the display screen.
  • the position of the display screen is close to the position of the human eye, so the coordinate system created based on the display screen and the coordinate system created based on the human eye can, to a certain extent, be considered the same.
  • the second coordinate system (X2-O2-Y2) is established based on the camera 122 .
  • the second coordinate system (X2-O2-Y2) is created based on the camera 122, that is, when the camera 122 shoots an object, it is imaged in the second coordinate system (X2-O2-Y2). Since the image collected by the camera 122 and the image displayed on the display screen 112 are not in the same coordinate system, the viewing angle of the camera is different from the viewing angle of human eyes. Therefore, performing perspective reconstruction on the image captured by the camera 122 can be understood as performing coordinate transformation on the image captured by the camera 122, that is, transforming from the second coordinate system to the first coordinate system.
  • the offset includes an offset direction and/or an offset distance (the offset distance may also be referred to as a displacement offset).
  • the offset distance may be the distance from the origin of the second coordinate system to the origin of the first coordinate system. That is to say, the offset distance is related to the distance between the display screen 112 and the camera 122 . For example, when the distance between the display screen 112 and the camera 122 is greater, the distance between the first coordinate system and the second coordinate system is greater, that is, the offset distance is greater. In some embodiments, the offset distance increases as the distance between the camera 122 and the display screen 112 increases, and decreases as the distance between the camera 122 and the display screen 112 decreases. Exemplarily, when the distance between the camera 122 and the display screen 112 is the first distance, the offset distance is the first displacement offset.
  • when the distance between the camera 122 and the display screen 112 is the second distance, the offset distance is the second displacement offset. If the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset; if the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset. For example, taking the previous FIG. 4B as an example, if the distance between the camera 122 and the display screen 112 increases, the displacement offset between the position (A1', B1') of the triangle on the image 422 collected by the camera 122 and the position (A1, B1) of the triangle in FIG. 4A increases.
  • the offset direction may be a direction from the origin of the second coordinate system to the origin of the first coordinate system. That is to say, the offset direction is related to the positional relationship between the display screen and the camera.
  • the offset direction changes as the orientation between the camera and the display screen changes.
  • for example, when the camera is located in a first direction of the display screen, the offset direction is the first direction.
  • when the camera is located in a second direction of the display screen, the offset direction is the second direction.
  • for the camera 122, the offset direction is a leftward offset: taking the previous FIG. 4B as an example, the position (A1', B1') of the triangle on the image 422 captured by the camera 122 is shifted to the left to the position (A1, B1) of the triangle in FIG. 4A.
  • for the camera 120, the offset direction is a rightward offset: the position (A2', B2') of the triangle on the image 420 collected by the camera 120 is shifted to the right to the position (A2, B2) of the triangle in FIG. 4A.
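  • The relation between the camera/display placement and the offset described above can be sketched as a simple vector difference between the two coordinate origins, assuming both origins are expressed in one common device frame; the layout numbers below are made up for illustration only.
```python
import numpy as np

def compute_offset(display_origin, camera_origin):
    # Offset from the camera coordinate system (second) to the display coordinate system (first),
    # with both origins assumed to be expressed in one common device frame (millimetres here).
    delta = np.asarray(display_origin, dtype=float) - np.asarray(camera_origin, dtype=float)
    distance = float(np.linalg.norm(delta))                   # offset distance (displacement offset)
    direction = delta / distance if distance > 0 else delta   # offset direction (unit vector)
    return distance, direction

# Made-up layout: the camera sits below and to the left of the display,
# so the offset direction points up and to the right, toward the display origin.
dist_mm, direction = compute_offset(display_origin=(-32.0, 0.0), camera_origin=(-45.0, -20.0))
```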
  • the first coordinate system, the second coordinate system, the offset, etc. may be stored in the VR glasses in advance.
  • the offset can be changed.
  • the relative position between the display screen and the camera can be changed.
  • the display screen can be moved on the VR glasses, and/or the camera can be moved on the VR glasses.
  • in this case, the offset between the first coordinate system corresponding to the display screen and the second coordinate system corresponding to the camera changes correspondingly. Or, with the adjustment of the distance between the two display screens on the VR glasses, and/or the adjustment of the distance between the two cameras, the offset changes accordingly.
  • the distance between the two display screens and/or the distance between the two cameras can be adjusted along with the distance between the user's left eye pupil and right eye pupil.
  • This solution can be applied to VR glasses with adjustable display screen and/or camera position.
  • This type of VR glasses can be applied to various groups of people. For example, when the VR glasses are used by users with a wider eye distance, the relative distance between the display screen and the camera can be adjusted to be larger; when the VR glasses are used by users with a narrower eye distance, The relative distance between the display screen and the camera can be adjusted to be smaller and so on. Therefore, one VR glasses can be suitable for multiple users, for example, one VR glasses can be used by the whole family. No matter how the position of the display screen and/or camera is adjusted, the offset is adjusted accordingly, and the VR glasses can realize perspective reconstruction based on the adjusted offset.
  • the VR glasses can shift all pixels on the image captured by the camera to the target position according to the offset amount (that is, reconstruct the global perspective).
  • the VR glasses may first determine the first area on the image, and shift the pixels in the first area to the target position according to the offset. That is, only the pixels in the first area are offset, and the pixels in other areas may not be moved.
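  • A minimal sketch of this regional offset, assuming the first area is given as a rectangle and the offset is a pure 2D translation (the depth-aware reprojection described later is ignored here):
```python
import numpy as np

def shift_region(image, region, dx, dy):
    # Shift only the pixels inside `region` = (x0, y0, x1, y1) by (dx, dy);
    # pixels outside the region keep their original display position.
    # This is a nearest-pixel translation that only illustrates
    # "offset the first area, leave the other areas unchanged".
    out = image.copy()
    h, w = image.shape[:2]
    x0, y0, x1, y1 = region
    patch = image[y0:y1, x0:x1]
    tx0, ty0 = x0 + dx, y0 + dy                      # target top-left corner
    tx1, ty1 = tx0 + patch.shape[1], ty0 + patch.shape[0]
    sx0, sy0 = max(0, -tx0), max(0, -ty0)            # crop if the target runs off the image
    tx0, ty0 = max(0, tx0), max(0, ty0)
    tx1, ty1 = min(w, tx1), min(h, ty1)
    out[ty0:ty1, tx0:tx1] = patch[sy0:sy0 + (ty1 - ty0), sx0:sx0 + (tx1 - tx0)]
    return out
```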
  • the first area may be the area where the user's gaze point is located on the image.
  • the VR glasses include an eye tracking module, through which the user's gaze point can be located.
  • One implementation method is that the VR glasses determine that the user's gaze point is located at a point on an object (such as the table in FIG. 7 ), and then determine the smallest circumscribed rectangle of the object (such as the table) as the first area. It can be understood that the smallest circumscribing rectangle may also be the smallest circumscribing square, the smallest circumscribing circle, and so on.
  • the VR glasses when the VR glasses determine that the user's gaze point is located at a point on a certain object (such as a table), it may determine that a rectangle with the point as the center and a preset length as the side length is the first area, or , with the point as the center and a circle with a preset radius as the first area, and so on.
  • the preset length, preset radius, etc. may be set by default.
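  • A possible sketch of deriving the first area from the gaze point as a fixed-size square clipped to the image bounds; the function name and the 256-pixel side length are illustrative assumptions.
```python
def first_area_from_gaze(gaze_x, gaze_y, img_w, img_h, side=256):
    # First area as (x0, y0, x1, y1): a square of preset side length centred on the
    # gaze point and clipped to the image bounds (the image is assumed larger than `side`).
    half = side // 2
    x0 = max(0, min(gaze_x - half, img_w - side))
    y0 = max(0, min(gaze_y - half, img_h - side))
    return x0, y0, x0 + side, y0 + side
```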
  • the area where the user's gaze point is located may be a partial area of the object.
  • the first area may also be all areas whose depth is located at the depth of the user's gaze point.
  • the first area may also be the user's interest area on the image.
  • the region of interest of the user may be the region where the object of interest of the user is located on the image.
  • an object of interest to the user (e.g., a person, an animal, etc.) may be stored in the VR glasses, and when it is recognized that an object of interest to the user exists on the image captured by the camera, the area where that object is located is determined to be the first area.
  • the object of interest to the user can be manually stored in the VR glasses by the user; alternatively, since the user can interact with objects in the virtual world, the object of interest to the user can also be an object for which the number of interactions recorded by the VR glasses is greater than a preset number of times and/or the interaction duration is longer than a preset duration, and so on.
  • the first area may also be a default area, such as a central area on the image. Considering that the user generally pays attention to the central area of the image first, the first area is defaulted to the central area.
  • the first area may also be a user-specified area.
  • the user may set the first area on the VR glasses or set the first area on an electronic device (such as a mobile phone) connected to the VR glasses, and so on.
  • the first area may also be determined according to different scenarios. Taking the VR game scene as an example, if user A participates in the game as a game player, the area where the game character corresponding to user A is located is the first area; or, if user A watches the game played by user B, then the The area where the corresponding game character (that is, the player being watched) is located is the first area.
  • the VR driving scene as an example. User A wears VR glasses and sees that he is driving a virtual vehicle on the road.
  • the first area can be the area where the vehicle driven by user A is located, or the area where the steering wheel, the windshield, or the like is located; alternatively, the area where the vehicle in front of the vehicle driven by user A on the road is located is the first area.
  • the first area is an area on the image captured by the camera, and specific methods for determining the first area include but are not limited to the above methods, which are not listed in this application.
  • the offsets of all pixels in the first area may be the same.
  • for example, the offset distance of all pixels is the distance between the origin of the first coordinate system and the origin of the second coordinate system mentioned above, and the offset direction of all pixels is the direction from the origin of the second coordinate system to the origin of the first coordinate system.
  • the offsets of different pixel points in the first area may be different.
  • the first area 1000 includes a central area 1010 and an edge area 1020 (area drawn with oblique lines).
  • the area of the edge area 1020 may be default, such as the area formed from the edge of the first area to the preset width in the first area.
  • the offset of the pixels in the central area 1010 is greater than the offset of the pixels in the edge area 1020 .
  • for example, the offset distance of the pixels in the central area 1010 is equal to L, while the offset distance of the pixels in the edge area 1020 is less than L, such as L/2, L/3, and so on.
  • the displacement of the pixels at the center of the first area is relatively large, and the displacement of the pixels at the edge is small, because the edge is connected to the other areas; in this way, the transition between them can be relatively smooth, avoiding an obvious misalignment at the edge of the area where the fixation point is located.
  • in this case, the degree of deformation (that is, the shape change) of the object in the central area is relatively large, and the degree of deformation of the object in the edge area is small. That is to say, the degree of deformation of objects in the first region decreases gradually from the center to the edge.
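  • One way to realize this center-to-edge falloff is a per-pixel weight applied to the offset; the linear falloff, the band width, and the 0.5 floor (i.e., an edge offset of about L/2) below are illustrative assumptions rather than values from the application.
```python
import numpy as np

def offset_weights(region_h, region_w, edge=16):
    # Weight applied to the offset of each pixel of the first area: 1.0 in the central
    # area, decreasing linearly to 0.5 within an edge band `edge` pixels wide.
    weights = np.ones((region_h, region_w), dtype=float)
    for y in range(region_h):
        for x in range(region_w):
            border_dist = min(x, y, region_w - 1 - x, region_h - 1 - y)
            if border_dist < edge:
                weights[y, x] = 0.5 + 0.5 * border_dist / edge
    return weights

# Per-pixel offset distance = offset_weights(h, w) * L, where L is the full central offset.
```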
  • the offset of the pixels in the first area is greater than the offset of the pixels in the second area
  • the second area may be an area outside the first area and surround the outer edge of the first area.
  • the area of the second area is not limited, for example, it may be an area formed by a preset width outward from the outer edge of the first area.
  • the degree of deformation of the object in the first area is large, and the degree of deformation of the object in the second area is small. That is to say, the degree of deformation of the object from the first region to the second region gradually decreases.
  • the offsets of different pixels on the edge area 1020 may also be different.
  • the edge area 1020 includes a first edge area 1022 (hatched part) and a second edge area 1024 (black part). Assuming that the offset direction is the direction shown by the arrow in the figure, that is, the first area 1000 is shifted to the lower left, then the first edge area 1022 is in the offset direction (ie, at the lower left of the first area 1000), and the second edge area 1024 is in the direction opposite to the offset direction (ie, at the upper right of the first area 1000).
  • the offsets of pixels in the two edge regions are different.
  • for example, when the offset direction is the direction shown by the arrow, the offset of the pixels in the first edge area 1022 is greater than or equal to the offset of the pixels in the central area 1010, which in turn is greater than or equal to the offset of the pixels in the second edge area 1024. That is to say, the objects within the offset-direction range of the first area (ie, the objects in the first edge area 1022) have a large offset, and the objects within the range opposite to the offset direction (ie, the objects in the second edge area 1024) have a small offset. In this way, when the first region is shifted according to the offset direction, the edge of the first region opposite to the offset direction can transition smoothly with the other regions.
  • the first image information of the first pixel in the edge area of the first area on the image after perspective reconstruction may be an intermediate value between the second image information and the third image information, such as an average value.
  • the second image information is the image information of the second pixel in the central area of the first area
  • the third image information is the image information of the third pixel in other areas. For example, as shown in FIG. 11 , pixel A is located in the edge area 1020 of the first area 1000 , pixel B is located in other areas, and pixel C is located in the central area 1010 of the first area 1000 .
  • the image information of pixel point A may be the average value of the image information of pixel point B and pixel point C, and the image information includes one or more of resolution, color, color temperature, or brightness.
  • the pixel point C and the pixel point B may be pixel points close to the pixel point A. Since the edge area 1020 of the first area is a transition area between the first area and other areas, when the resolution, color, color temperature, brightness, etc. of pixels in the edge area 1020 are intermediate values, the Smooth transitions between other areas.
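  • A minimal sketch of this edge blending, assuming the image information of each pixel is available as a small dictionary of values (brightness, a colour channel, etc.); a simple average is used, as in the example above.
```python
def blend_edge_pixel(center_info, outside_info):
    # Image information of an edge pixel A taken as the average of a nearby central
    # pixel C and a nearby pixel B outside the first area, so the first area
    # transitions smoothly into the surrounding area.
    return {key: (center_info[key] + outside_info[key]) / 2 for key in center_info}

# Example with illustrative values: brightness and one colour channel.
pixel_a = blend_edge_pixel({"brightness": 200, "red": 180}, {"brightness": 120, "red": 90})
```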
  • the present application provides a display method.
  • the method is applicable to an electronic device including at least one camera and at least one display screen, such as VR glasses, where the positions of the camera and the display screen are different.
  • the position of the camera on the VR glasses is different from the position of the display screen, so when the user wears the VR glasses, there will be a phenomenon that the viewing angle of the human eye is different from the shooting angle of the camera.
  • FIG. 12 is a schematic flowchart of a display method provided by an embodiment of the present application. The flow of the method includes:
  • the camera collects a second image.
  • the camera may be any camera on the VR glasses shown in FIG. 4C , such as the left camera 122 or the right camera 120 .
  • taking the left camera 122 as an example, as shown in FIG. 4C, within the viewing angle range at the position of the left camera, the triangle is located to the front left of the square (because the square is at infinity, such as the sun).
  • the imaging surface of the left camera 122 includes the triangle and the square, as shown in FIG. 13, which is a schematic diagram of the plane two-dimensional image taken by the camera 122; in this image the triangle is to the left of the square.
  • suppose another camera is set at the location of the camera 122; the image collected by that camera would be the same as the image collected by the camera 122. That is to say, the image observed at the location of the camera 122 (whether observed by a person or captured by another camera) is the same as the image collected by the camera 122.
  • the first area is the dotted line area on the plane two-dimensional image collected by the camera.
  • reconstructing the viewing angle of the first area includes performing coordinate transformation on the image in the first area, that is, transforming from the coordinate system corresponding to the camera to the coordinate system corresponding to the display screen or human eyes.
  • the image collected by the camera is a plane two-dimensional image
  • one implementation of coordinate transformation can be to convert the plane two-dimensional image collected by the camera into a three-dimensional point cloud, and the three-dimensional point cloud can reflect the position of each object in the real environment (including Depth), and then create a virtual camera by simulating the human eye.
  • by capturing the three-dimensional point cloud with the virtual camera, the image seen from the observation angle of the human eye can be obtained, and the reconstruction from the observation perspective at the position of the camera to the observation perspective at the position of the human eye can be realized.
  • specifically, in step S3, the way to reconstruct the viewing angle of the first region includes the following steps:
  • the first step is to determine the depth information of the pixels in the first area.
  • the manner of determining the depth information of the pixels in the first area includes at least one of manner 1 and manner 2.
  • Method 1 Determine the depth information of a pixel according to the pixel difference of the same pixel on two images captured by two cameras on the VR glasses.
  • the depth information of the pixel satisfies the following formula: d = f × B / disparity, where f is the focal length of the camera, B is the distance between the two cameras, disparity is the pixel difference between the same pixel on the two images, and d is the depth information of the pixel.
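  • A small numeric sketch of this formula; the focal length, baseline, and disparity values below are made up for illustration.
```python
def depth_from_disparity(f_pixels, baseline, disparity_pixels):
    # d = f * B / disparity: f in pixels, B in metres, disparity in pixels -> d in metres.
    return f_pixels * baseline / disparity_pixels

# Illustrative numbers only: f = 700 px, B = 0.09 m, disparity = 42 px -> d = 1.5 m.
d = depth_from_disparity(700.0, 0.09, 42.0)
```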
  • Method 2 Determining the depth information of the pixel point according to the convergence angle between the user's left eye and the right eye, and the corresponding relationship between the convergence angle and the depth information.
  • the convergence angle α of the user's eyes is determined.
  • the VR glasses can store a database, which stores the correspondence between the convergence angle α and the depth information.
  • the database may be obtained based on experience and stored in the VR glasses in advance; or it may be determined based on deep learning.
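  • A possible sketch of method 2, assuming the database is a small lookup table from convergence angle to depth that is interpolated linearly; the table values below are illustrative (roughly consistent with an interocular distance of about 63 mm), not data from the application.
```python
import numpy as np

# Hypothetical database: convergence angle (degrees) -> depth (metres). A real table
# would be obtained from experience or deep learning and stored in advance.
ANGLES_DEG = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
DEPTHS_M = np.array([3.60, 1.80, 0.90, 0.45, 0.22])

def depth_from_convergence(angle_deg):
    # Linear interpolation in the stored correspondence table.
    return float(np.interp(angle_deg, ANGLES_DEG, DEPTHS_M))
```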
  • the second step is to determine the 3D point cloud data corresponding to the first area according to the depth information of the pixels in the first area.
  • the 3D point cloud of pixels is shown in Figure 15.
  • the three-dimensional point cloud corresponding to the first area can map the position of each pixel in the first area in the real world.
  • the point cloud corresponding to the triangle in the 3D point cloud is in front left of the point cloud corresponding to the square, because the scene observed at the position of the camera 122 is that the triangle is in front left of the square.
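  • A sketch of this second step under a pinhole-camera assumption: each first-area pixel (u, v) with depth d is back-projected into a 3D point using the physical camera's intrinsics (fx, fy, cx, cy), which are assumed to be known from calibration.
```python
import numpy as np

def backproject(pixels_uv, depths, fx, fy, cx, cy):
    # Pinhole back-projection of first-area pixels into a 3D point cloud:
    # X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.
    uv = np.asarray(pixels_uv, dtype=float)
    d = np.asarray(depths, dtype=float)
    x = (uv[:, 0] - cx) * d / fx
    y = (uv[:, 1] - cy) * d / fy
    return np.stack([x, y, d], axis=1)
```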
  • the third step is to create a virtual camera.
  • the image acquisition principle of the human eye is similar to the image capture principle of the camera.
  • a virtual camera is created.
  • the virtual camera simulates the human eye.
  • the position of the virtual camera is the same as the position of the human eye, and/or, the field of view of the virtual camera is the same as that of the human eye.
  • the angle of view of the human eye is 110 degrees up and down, and 110 degrees left and right, so the field of view of the virtual camera is 110 degrees up and down, 110 degrees left and right.
  • the VR glasses can determine the position of the human eye, so the virtual camera is set at the position of the human eye.
  • there are multiple ways to determine the position of the human eye. In method 1, the position of the display screen is determined first, and then the position of the human eye is estimated by adding a distance A to the position of the display screen, where the distance A is the distance between the display screen and the human eye and may be stored in advance; the human eye position determined in this way is more accurate. In method 2, the position of the human eye is taken to be equal to the position of the display screen.
  • This method is relatively simple, and setting the virtual camera at the display screen can also alleviate the discomfort caused by the difference between the shooting angle of view and the angle of view of the human eye.
  • the virtual camera is at the position of the human eye.
  • the fourth step is to use the virtual camera to capture the 3D point cloud data corresponding to the first area to obtain an image, which is an image reconstructed from the angle of view of the first area.
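  • A simplified sketch of this fourth step, assuming the virtual camera differs from the physical camera by a pure translation (no rotation) and shares its intrinsics; a real device would use the full calibrated extrinsics between the camera and the eye/display position.
```python
import numpy as np

def render_with_virtual_camera(points_cam, t_eye_in_cam, fx, fy, cx, cy):
    # Express the point cloud (given in the physical camera's frame) in the virtual
    # eye-camera frame, assuming a pure translation t_eye_in_cam between the two,
    # then project with the same pinhole intrinsics.
    p = np.asarray(points_cam, dtype=float) - np.asarray(t_eye_in_cam, dtype=float)
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)

# Made-up example: the left-eye virtual camera sits 2 cm to the right of and 3 cm above
# the left physical camera (image y pointing down), i.e. t_eye_in_cam = (0.02, -0.03, 0.0).
```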
  • the virtual camera corresponding to the left eye captures a three-dimensional point cloud (a three-dimensional point cloud converted from a two-dimensional image collected by the left camera 122), since the virtual camera corresponding to the left eye is closer to the right than the left camera 122 , so the distance between the triangle and the square on the image captured by the virtual camera corresponding to the left eye is small.
  • image 1701 is a plane two-dimensional image taken by camera 122
  • image 1702 is an image taken by the virtual camera corresponding to the left eye (that is, the image obtained after perspective reconstruction of the first area on the plane two-dimensional image taken by the camera 122).
  • the distance between two objects on image 1702 is smaller than the distance between two objects on image 1701 .
  • Image 1702 corresponds to an image captured by a person's left eye.
  • image 1703 is a planar two-dimensional image captured by camera 120
  • image 1704 is an image captured by a virtual camera corresponding to the right eye (that is, the image after perspective reconstruction of the first region).
  • the distance between two objects on image 1704 is smaller than the distance between two objects on image 1703 . This is because the virtual camera corresponding to the right eye is more left than the camera 120 . Therefore, image 1704 is equivalent to an image captured by a person's right eye.
  • the image captured by the virtual camera only includes the first area, not the second area, so the workload is small.
  • the second area is an area other than the first area on the second image.
  • perspective reconstruction is not performed on the second area, and the image captured by the virtual camera is the image after perspective reconstruction of the first area. Therefore, compared with the second image, the first image has the perspective of the first area reconstructed, while the perspective of the second area is not reconstructed.
  • the above S2 to S4 can be executed by the processor in the VR glasses; that is, after the camera captures the second image (i.e., S1), the second image is sent to the processor, and the processor executes S2 to S4 to obtain the first image and then displays the first image through the display screen.
  • FIG. 19 shows an electronic device 1900 provided by this application.
  • the electronic device 1900 may be the aforementioned VR wearable device (eg, VR glasses).
  • an electronic device 1900 may include: one or more processors 1901; one or more memories 1902; a communication interface 1903, and one or more computer programs 1904, and each of the above devices may communicate through one or more bus 1905 connection.
  • the one or more computer programs 1904 are stored in the memory 1902 and are configured to be executed by the one or more processors 1901, the one or more computer programs 1904 include instructions, and the instructions can be used to perform the above-mentioned Related steps of the VR wearable device in the corresponding embodiment.
  • the communication interface 1903 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided in the embodiments of the present application are introduced from the perspective of an electronic device (for example, a VR wearable device) as an execution subject.
  • the electronic device may include a hardware structure and/or a software module, and realize the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the terms “when” or “after” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting ".
  • the phrases “in determining” or “if detected (a stated condition or event)” may be interpreted to mean “if determining" or “in response to determining" or “on detecting (a stated condition or event)” or “in response to detecting (a stated condition or event)”.
  • relational terms such as first and second are used to distinguish one entity from another, without limiting any actual relationship and order between these entities.
  • references to "one embodiment” or “some embodiments” or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • software When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, all or part of the processes or functions described in this embodiment will be generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website, computer, server or data center Transmission to another website site, computer, server, or data center by wired (eg, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (eg, infrared, wireless, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Abstract

A display method and an electronic device. The method comprises: displaying a first image to a user by means of a display screen, at least one of the display position or state of a first object on the first image being different from that of the first object on a second image, and the display position and state of a second object on the first image being the same as those of the second object on the second image; the second image being an image acquired by a camera, wherein the first object is located in a region where the gaze point of the user is located, and the second object is located in a region other than the region where the gaze point of the user is located. In this way, the problem that the photography angle of the camera is different from the viewing angle of human eyes caused by different positions of the camera and the display screen can be solved.

Description

A display method and electronic device
Cross References to Related Applications
This application claims priority to the Chinese patent application with application number 202111056782.7, entitled "A Display Method and Electronic Device", filed with the China Patent Office on September 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of electronic technology, and in particular to a display method and electronic equipment.
Background
Virtual Reality (VR) technology is a means of human-computer interaction created with the help of computer and sensor technology. VR technology integrates computer graphics technology, computer simulation technology, sensor technology, display technology and other science and technology to create a virtual world. Users can immerse themselves in the virtual world by wearing VR wearable devices (e.g., VR glasses, VR helmets, etc.).
The objects in the virtual world can all be fictitious objects, and can also include three-dimensional models of real objects, so that the virtual world seen by the user includes both fictitious and real objects, and the experience is more realistic. For example, a camera can be set on a VR wearable device to capture an image of a real object, and based on the image, a three-dimensional model of the real object can be constructed and displayed in the virtual world. Taking VR glasses as an example, FIG. 1 is a schematic diagram of VR glasses. The VR glasses include a camera and a display screen. Generally, in order to keep the appearance of the VR glasses thin and light, the camera is not set at the position of the display screen, but is usually set below the display screen, as shown in FIG. 1.
This arrangement causes the viewing angle direction of the human eye to be inconsistent with the viewing angle direction (or shooting direction) of the camera. For example, in FIG. 1, the shooting direction of the camera faces downward, while the viewing direction of the human eye is straight ahead. If the images captured by the camera are displayed directly to the human eyes through the display screen, the user will feel discomfort, and over a long time will feel dizzy; the experience is poor.
Summary of the Invention
The purpose of the present application is to provide a display method and an electronic device for improving the VR experience.
第一方面,提供一种显示方法,应用于穿戴设备,所述穿戴设备上包括至少一个显示屏和至少一个摄像头;包括:通过所述显示屏向用户展示第一图像;所述第一图像上第一对象与第二图像上所述第一对象的显示位置或形态中的至少一项不同,且,所述第一图像上第二对象与所述第二图像上所述第二对象的显示位置和形态均相同;所述第二图像是所述摄像头采集的图像;其中,所述第一对象处于用户注视点所在区域,所述第二对象处于用户注视点所在区域以外的区域。In a first aspect, a display method is provided, which is applied to a wearable device, and the wearable device includes at least one display screen and at least one camera; including: presenting a first image to the user through the display screen; displaying a first image on the first image The first object is different from at least one of the display position or form of the first object on the second image, and the display of the second object on the first image is different from the display of the second object on the second image The position and shape are the same; the second image is an image collected by the camera; wherein, the first object is in the area where the user's gaze point is located, and the second object is in an area other than the area where the user's gaze point is located.
在本申请实施例中,穿戴设备可以对摄像头采集的第二图像上用户注视点所在区域作视角重构,不对用户注视点所在区域以外的区域作视角重构。通过对用户注视点所在区域的视角重构可以缓解眩晕感(显示屏与摄像头的位置导致摄像头拍摄视角与人眼观察视角而带来的眩晕感),提升VR体验,而且,只对注视点所在区域作视角重构工作量少,并且可以降低发生画面扭曲的概率或程度。In the embodiment of the present application, the wearable device may reconstruct the viewing angle of the area where the user's gaze point is located on the second image captured by the camera, and not perform viewing angle reconstruction for areas other than the area where the user's gaze point is located. By reconstructing the viewing angle of the area where the user's gaze point is located, it can alleviate the feeling of vertigo (the position of the display screen and the camera causes the sense of vertigo caused by the shooting angle of the camera and the viewing angle of the human eye), and improve the VR experience. The workload of viewing angle reconstruction in the area is less, and the probability or degree of picture distortion can be reduced.
在一种可能的设计中,所述第一图像上所述第一对象的第一显示位置与第二图像上所述第一对象的第二显示位置之间的位移偏移量与所述摄像头和所述显示屏之间的距离相关。比如,摄像头与显示屏之间的距离越大,第一对象和第二对象之间的偏移量越大。In a possible design, the displacement offset between the first display position of the first object on the first image and the second display position of the first object on the second image is the same as that of the camera It is related to the distance between the display screens. For example, the greater the distance between the camera and the display screen, the greater the offset between the first object and the second object.
在一种可能的设计中,所述第一显示位置与所述第二显示位置之间的位移偏移量随着所述摄像头与所述显示屏之间的距离的增大而增大,随着所述摄像头与所述显示屏之间的距离的减少而减少。In a possible design, the displacement offset between the first display position and the second display position increases as the distance between the camera and the display screen increases. Decreases as the distance between the camera and the display screen decreases.
示例性的,当所述摄像头与所述显示屏之间的距离为第一距离时,所述第一显示位置与所述第二显示位置之间的位移偏移量为第一位移偏移量。当所述摄像头与所述显示屏之间的距离为第二距离时,所述第一显示位置与所述第二显示位置之间的位移偏移量为第二位移偏移量。所述第一距离大于或等于所述第二距离时,所述第一位移偏移量大于或等于所述第二位移偏移量。所述第一距离小于所述第二距离时,所述第一位移偏移量小于所述第二位移偏移量。Exemplarily, when the distance between the camera and the display screen is the first distance, the displacement offset between the first display position and the second display position is the first displacement offset . When the distance between the camera and the display screen is the second distance, the displacement offset between the first display position and the second display position is the second displacement offset. When the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset. When the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset.
在一种可能的设计中,所述第一图像上所述第一对象的第一显示位置与第二图像上所述第一对象的第二显示位置之间的偏移方向与所述摄像头和所述显示屏之间的位置关系相关。比如,摄像头位于显示屏的左侧,第二图像上第一对象向左偏移到第一图像上第一对象的位置。In a possible design, the offset direction between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the camera and The positional relationship between the display screens is related. For example, the camera is located on the left side of the display screen, and the first object on the second image shifts to the left to the position of the first object on the first image.
在一种可能的设计中,所述第一显示位置与所述第二显示位置之间的偏移方向随着所述摄像头与所述显示屏之间的方向的变化而变化。In a possible design, the offset direction between the first display position and the second display position changes as the direction between the camera and the display screen changes.
示例性的,当所述摄像头位于所述显示屏的第一方向时,所述第二显示位置与所述第一显示位置之间的偏移方向为所述第一方向。当所述摄像头位于所述显示屏的第二方向时,所述第二显示位置与所述第一显示位置之间的偏移方向为所述第二方向。Exemplarily, when the camera is located in the first direction of the display screen, the offset direction between the second display position and the first display position is the first direction. When the camera is located in the second direction of the display screen, the offset direction between the second display position and the first display position is the second direction.
在一种可能的设计中,所述第一图像上所述第一对象相对于所述第二图像上所述第二对象的位置偏移量为第一偏移量;所述第一图像上第三对象相对于所述第二图像上所述第三对象的位置偏移量为第二偏移量;所述第三对象处于所述用户注视点所在区域内,且比所述第一对象靠近所述注视点所在区域的边缘;所述第二偏移量小于所述第一偏移量。也就是说,用户注视点所在区域内中心位置处的第一对象比边缘位置处的第三对象的偏移量大,这样可以实现用户注视点所在区域与其它区域的边缘平滑过渡。In a possible design, the position offset of the first object on the first image relative to the second object on the second image is a first offset; The position offset of the third object relative to the third object on the second image is the second offset; the third object is in the area where the user's gaze point is located, and is larger than the first object. Close to the edge of the area where the gaze point is located; the second offset is smaller than the first offset. That is to say, the offset of the first object at the center of the area where the user's gaze point is located is larger than that of the third object at the edge position, so that a smooth edge transition between the area where the user's gaze point is located and other areas can be achieved.
In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in an area outside the area where the user's gaze point is located, and that area surrounds the edge of the gaze-point area; the second offset is smaller than the first offset. That is, the first object in the gaze-point area is offset by more than the third object in the peripheral area (the area outside the gaze-point area that surrounds its edge), so that the gaze-point area transitions smoothly into the other areas at its edge.

In a possible design, the degree of morphological change of the first object on the first image relative to the first object on the second image is greater than that of a third object on the first image relative to the third object on the second image; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area. That is, from the center of the gaze-point area toward its edge, the degree of shape change of objects decreases, so that the gaze-point area transitions smoothly into the other areas at its edge.

In a possible design, the degree of morphological change of the first object on the first image relative to the first object on the second image is greater than that of a third object on the first image relative to the third object on the second image; the third object is in an area outside the area where the user's gaze point is located, and that area surrounds the edge of the gaze-point area. That is, going outward from the gaze-point area into the peripheral area (the area outside the gaze-point area that surrounds its edge), the degree of shape change of objects decreases, so that the gaze-point area transitions smoothly into the other areas at its edge.

In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and lies within a first direction range of the first object, where the first direction range includes the direction in which the position of the first object on the first image is offset relative to the first object on the second image; the second offset is greater than the first offset. Taking an offset toward the lower left as an example, within the gaze-point area the offset of objects in the lower-left range is large and the offset of objects in the upper-right range is small. In this way, when the gaze-point area is shifted toward the lower left, the image in the upper-right area transitions smoothly into the other areas.

In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in an area outside the area where the user's gaze point is located, and that area surrounds the edge of the gaze-point area; the third object lies within a first direction range of the first object, where the first direction range includes the direction in which the position of the first object on the first image is offset relative to the first object on the second image; the second offset is greater than the first offset. Taking an offset toward the lower left as an example, the offset of objects in the lower-left range inside the gaze-point area is smaller than the offset of objects in the lower-left range of the peripheral area surrounding the gaze-point area. That is, going outward from the lower left of the gaze-point area, the farther an object is, the larger its offset, while the image in the upper-right area transitions smoothly into the other areas.
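The designs above taper the reconstruction offset across space so that the gaze-point area blends into its surroundings. As a purely illustrative sketch (not part of the original disclosure), the following Python snippet shows one way to weight a camera-to-eye offset so that it is applied fully inside the gaze-point area and falls off across the surrounding ring, in line with the first design above; the function names, the linear falloff, and the circular region shape are all assumptions.

```python
# Illustrative sketch only: taper the reconstruction offset from the gaze-point area
# into the surrounding ring so the two regions join smoothly. Names and the linear
# falloff are assumptions, not the claimed method.
import numpy as np

def offset_weight(dist_to_gaze, inner_radius, outer_radius):
    """Weight in [0, 1]: full offset inside the gaze-point area, zero beyond the ring."""
    if dist_to_gaze <= inner_radius:      # inside the gaze-point area
        return 1.0
    if dist_to_gaze >= outer_radius:      # beyond the surrounding ring
        return 0.0
    # linear falloff across the ring around the gaze-point area
    return (outer_radius - dist_to_gaze) / (outer_radius - inner_radius)

def displaced_position(pixel_xy, gaze_xy, full_offset_xy, inner_radius, outer_radius):
    """Shift a pixel by the camera-to-eye offset, scaled by its distance from the gaze point."""
    dist = np.linalg.norm(np.asarray(pixel_xy, float) - np.asarray(gaze_xy, float))
    w = offset_weight(dist, inner_radius, outer_radius)
    return np.asarray(pixel_xy, float) + w * np.asarray(full_offset_xy, float)
```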
In a possible design, the first image includes a first pixel, a second pixel, and a third pixel; the first pixel and the second pixel are in the area where the user's gaze point is located, and the first pixel is closer than the second pixel to the edge of that area; the third pixel is in an area outside the area where the user's gaze point is located; the image information of the first pixel lies between the image information of the second pixel and the image information of the third pixel.

That is, the image information of a pixel in the edge portion of the first area (the first pixel) takes an intermediate value between the image information of a pixel in the central portion (the second pixel) and the image information of a pixel outside the first area (the third pixel), so that the first area transitions smoothly into the other areas. For example, moving from the area outside the first area into the first area, the color, brightness, resolution, and so on of the pixels change gradually.

In a possible design, the image information includes at least one of resolution, color, brightness, and color temperature. It should be noted that the image information may also include other information, which is not limited in the embodiments of this application.
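As a purely illustrative sketch (not part of the original disclosure), the snippet below blends the image information of pixels in the edge band of the gaze-point area toward the value used outside the area, so that edge pixels take intermediate values as described above; the mask, the single blending factor, and the use of a representative outside value are all assumptions.

```python
# Illustrative sketch only: give edge-band pixels of the gaze-point area image
# information between the in-area value and the outside value.
import numpy as np

def blend_edge_pixels(image, edge_band, outside_value, alpha=0.5):
    """image: HxW or HxWxC array; edge_band: bool HxW mask, True for the edge portion
    of the gaze-point area; outside_value: representative image information (e.g., mean
    color) of the area outside; alpha: assumed blending weight (0.5 = midpoint)."""
    out = image.astype(np.float32)
    # edge pixels take a value between the in-area value and the outside value
    out[edge_band] = alpha * out[edge_band] + (1.0 - alpha) * np.asarray(outside_value, np.float32)
    return out
```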
In a possible design, the at least one camera includes a first camera and a second camera, and the at least one display screen includes a first display screen and a second display screen; the first display screen is configured to display images captured by the first camera, and the second display screen is configured to display images captured by the second camera. When the position of the first display screen differs from that of the first camera, a first object on the image displayed by the first display screen differs in at least one of display position or form from the first object on the image captured by the first camera, while a second object on the image displayed by the first display screen has the same display position and form as the second object on the second image captured by the first camera. When the position of the second display screen differs from that of the second camera, a first object on the image displayed by the second display screen differs in at least one of display position or form from the first object on the image captured by the second camera, while a second object on the image displayed by the second display screen has the same display position and form as the second object on the image captured by the second camera. That is, the technical solutions provided in the embodiments of this application can be applied to a wearable device that includes two display screens and two cameras, for example, VR glasses.

In a possible design, the form of the first object on the first image differing from that of the first object on the second image includes: the edge contour of the first object on the second image is smoother than the edge contour of the first object on the first image. Because the first object on the first image has undergone viewing-angle reconstruction, its edge may be uneven, whereas the first object on the second image has not undergone viewing-angle reconstruction, so its edge is smooth. Since the first object has undergone viewing-angle reconstruction, the user does not feel dizzy when viewing it while wearing the wearable device (the dizziness otherwise caused by the camera's shooting angle differing from the human eye's viewing angle due to the positions of the display screen and the camera), which improves the VR experience.
In a second aspect, a display method is further provided, applied to a wearable device, where the wearable device includes at least one display screen, at least one camera, and a processor; the camera is configured to transmit the images it captures to the processor, and the images are displayed on the display screen via the processor. The method includes: presenting a first image to a user through the display screen, where a first object on the first image differs in at least one of display position or form from the first object on a second image, a second object on the first image has the same display position and form as the second object on the second image, and the second image is an image captured at the camera; the first object is in an area where the user's gaze point is located, and the second object is in an area outside that area.

In this embodiment of the application, the wearable device may perform viewing-angle reconstruction on the area of the second image captured by the camera where the user's gaze point is located, without performing viewing-angle reconstruction on the areas outside it. Reconstructing the viewing angle of the gaze-point area can alleviate dizziness (the dizziness caused by the camera's shooting angle differing from the human eye's viewing angle due to the positions of the display screen and the camera) and improve the VR experience.

In a possible design, if another camera is arranged at the position of the camera, the image captured by that other camera is the same as the image captured by the camera. That is, the image observed at the position of the camera (by a person, or captured by another camera) is the same as the image captured by the camera.
In a possible design, the displacement offset between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the distance between the camera and the display screen.

In a possible design, the displacement offset between the first display position and the second display position increases as the distance between the camera and the display screen increases, and decreases as that distance decreases.

Exemplarily, when the distance between the camera and the display screen is a first distance, the displacement offset between the first display position and the second display position is a first displacement offset; when the distance is a second distance, the displacement offset is a second displacement offset. When the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset; when the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset.
In a possible design, the offset direction between the first display position of the first object on the first image and the second display position of the first object on the second image is related to the positional relationship between the camera and the display screen.

In a possible design, the offset direction between the first display position and the second display position changes as the direction between the camera and the display screen changes.

Exemplarily, when the camera is located in a first direction of the display screen, the offset direction between the first display position and the second display position is the first direction; when the camera is located in a second direction of the display screen, the offset direction is the second direction.
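As a purely illustrative sketch (not part of the original disclosure), the snippet below encodes the two relationships above: the offset's magnitude grows with the camera-to-display distance, and its direction follows the camera-to-display direction. The coordinate convention and the pixels-per-meter scale factor are assumptions.

```python
# Illustrative sketch only: derive an on-image offset from the camera/display geometry.
import numpy as np

def image_offset(camera_pos, display_pos, pixels_per_meter=2000.0):
    """camera_pos, display_pos: 3D positions in a common device frame (meters).
    Returns a 2D pixel offset whose direction follows the in-plane camera-to-display
    direction and whose magnitude grows with the camera-to-display distance."""
    delta = np.asarray(display_pos, float)[:2] - np.asarray(camera_pos, float)[:2]
    return delta * pixels_per_meter
```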
In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area; the second offset is smaller than the first offset.

In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in a first area, which is outside the area where the user's gaze point is located and surrounds the edge of that area; the second offset is smaller than the first offset.

In a possible design, the degree of morphological change of the first object on the first image relative to the first object on the second image is greater than that of a third object on the first image relative to the third object on the second image; the third object is in the area where the user's gaze point is located and is closer than the first object to the edge of that area.

In a possible design, the degree of morphological change of the first object on the first image relative to the first object on the second image is greater than that of a third object on the first image relative to the third object on the second image; the third object is in a first area, which is outside the area where the user's gaze point is located and surrounds the edge of the area where the gaze point is located.

In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in the area where the user's gaze point is located and lies within a first direction range of the first object, where the first direction range includes the direction in which the position of the first object on the first image is offset relative to the first object on the second image; the second offset is greater than the first offset.

In a possible design, the position offset of the first object on the first image relative to the first object on the second image is a first offset, and the position offset of a third object on the first image relative to the third object on the second image is a second offset; the third object is in a first area, which is outside the area where the user's gaze point is located and surrounds the edge of that area; the third object lies within a first direction range of the first object, where the first direction range includes the direction in which the position of the first object on the first image is offset relative to the first object on the second image; the second offset is greater than the first offset.
In a possible design, the first image includes a first pixel, a second pixel, and a third pixel; the first pixel and the second pixel are in the area where the user's gaze point is located, and the first pixel is closer than the second pixel to the edge of that area; the third pixel is in an area outside the area where the user's gaze point is located; the image information of the first pixel lies between the image information of the second pixel and the image information of the third pixel.

That is, the image information of a pixel in the edge portion of the first area (the first pixel) takes an intermediate value between the image information of a pixel in the central portion (the second pixel) and the image information of a pixel in the area outside the first area (the third pixel), so that the edge of the first area transitions smoothly.

In a possible design, the image information includes at least one of resolution, color, brightness, and color temperature.

It should be noted that the image information may also include other information, which is not limited in the embodiments of this application.
In a possible design, the at least one camera includes a first camera and a second camera, and the at least one display screen includes a first display screen and a second display screen; the first display screen is configured to display images captured by the first camera, and the second display screen is configured to display images captured by the second camera. When the position of the first display screen differs from that of the first camera, a first object on the image displayed by the first display screen differs in at least one of display position or form from the first object on the image captured by the first camera, while a second object on the image displayed by the first display screen has the same display position and form as the second object on the second image captured by the first camera. When the position of the second display screen differs from that of the second camera, a first object on the image displayed by the second display screen differs in at least one of display position or form from the first object on the image captured by the second camera, while a second object on the image displayed by the second display screen has the same display position and form as the second object on the image captured by the second camera.

That is, the technical solutions provided in the embodiments of this application can be applied to a wearable device that includes two display screens and two cameras.

In a possible design, the form of the first object on the first image differing from that of the first object on the second image includes: the edge contour of the first object on the second image is smoother than the edge contour of the first object on the first image.
In a third aspect, an electronic device is further provided, including:

a processor, a memory, and one or more programs;

where the one or more programs are stored in the memory and include instructions that, when executed by the processor, cause the electronic device to perform the method steps of the first aspect or the second aspect.

In a fourth aspect, a computer-readable storage medium is further provided, which is used to store a computer program; when the computer program runs on a computer, the computer is caused to perform the method of the first aspect or the second aspect.

In a fifth aspect, a computer program product is further provided, including a computer program; when the computer program runs on a computer, the computer is caused to perform the method of the first aspect or the second aspect.

In a sixth aspect, a graphical user interface on an electronic device is further provided. The electronic device has a display screen, a memory, and a processor, where the processor is configured to execute one or more computer programs stored in the memory, and the graphical user interface includes the graphical user interface displayed when the electronic device performs the method of the first aspect or the second aspect.

In a seventh aspect, an embodiment of this application further provides a chip. The chip is coupled to a memory in an electronic device and is configured to invoke a computer program stored in the memory and execute the technical solutions of the first aspect and the second aspect of the embodiments of this application. In the embodiments of this application, "coupled" means that two components are combined with each other directly or indirectly.

For the beneficial effects of the second to seventh aspects, refer to the beneficial effects of the first aspect; details are not repeated here.
Description of drawings

FIG. 1 is a schematic diagram of VR glasses according to an embodiment of this application;

FIG. 2A is a schematic diagram of a VR system according to an embodiment of this application;

FIG. 2B is a schematic diagram of a VR wearable device according to an embodiment of this application;

FIG. 2C is a schematic diagram of eye tracking according to an embodiment of this application;

FIG. 3 is a schematic structural diagram of a human eye according to an embodiment of this application;

FIG. 4A is a schematic diagram of a human eye observing an object with the naked eye according to an embodiment of this application;

FIG. 4B is a schematic diagram of a human eye observing an object while wearing VR glasses according to an embodiment of this application;

FIG. 4C is a schematic diagram of a human eye observing an object while wearing VR glasses according to an embodiment of this application;

FIG. 5A and FIG. 5B are schematic diagrams of an application scenario according to an embodiment of this application;

FIG. 6A and FIG. 6B are schematic diagrams of a visual reconstruction process according to an embodiment of this application;

FIG. 7 and FIG. 8 are schematic diagrams of visual reconstruction according to an embodiment of this application;

FIG. 9 is a schematic diagram of a first coordinate system and a second coordinate system according to an embodiment of this application;

FIG. 10 and FIG. 11 are schematic diagrams of viewing-angle reconstruction of a first area according to an embodiment of this application;

FIG. 12 is a schematic flowchart of a display method according to an embodiment of this application;

FIG. 13 is a schematic diagram of an application scenario according to an embodiment of this application;

FIG. 14 is a schematic diagram of a planar two-dimensional image according to an embodiment of this application;

FIG. 15 is a schematic diagram of a vergence angle according to an embodiment of this application;

FIG. 16 is a schematic diagram of converting a planar two-dimensional image into a three-dimensional point cloud according to an embodiment of this application;

FIG. 17 is a schematic diagram of a virtual camera according to an embodiment of this application;

FIG. 18 is a schematic diagram of an image before reconstruction and an image after reconstruction according to an embodiment of this application;

FIG. 19 is a schematic diagram of an electronic device according to an embodiment of this application.
Detailed description of embodiments

In the following, some terms used in the embodiments of this application are explained to facilitate understanding by those skilled in the art.

(1) "At least one" in the embodiments of this application includes one or more, where "a plurality of" means two or more. In addition, it should be understood that in the description of this application, terms such as "first" and "second" are used only to distinguish the objects described and cannot be understood as indicating or implying relative importance or order. For example, "first area" and "second area" do not indicate the importance or the order of the two areas; they merely distinguish them. In the embodiments of this application, "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that only A exists, that both A and B exist, or that only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects.

(2) Virtual reality (VR) technology is a means of human-computer interaction created with the help of computer and sensor technology. VR technology integrates computer graphics, computer simulation, sensor technology, display technology, and other sciences and technologies to create a virtual environment. The virtual environment includes computer-generated, three-dimensional, realistic images played dynamically in real time, which provide visual perception to the user; besides the visual perception generated by computer graphics, there are also auditory, tactile, force, and motion perceptions, and even smell and taste, which is also referred to as multi-perception. In addition, the user's head rotation, eyes, gestures, or other body movements can be detected, and a computer processes data corresponding to the user's movements, responds to them in real time, and feeds the responses back to the user's senses, thereby forming the virtual environment. Exemplarily, a user wearing a VR wearable device can see a VR game interface and can interact with it through gestures, a handle, and other operations, as if inside the game.

(3) Augmented reality (AR) technology superimposes computer-generated virtual objects onto real-world scenes, thereby augmenting the real world. That is, AR technology needs to capture the real-world scene and then add a virtual environment on top of the real world.

Therefore, the difference between VR and AR is that VR creates a completely virtual environment in which everything the user sees is a virtual object, whereas AR superimposes virtual objects on the real world, so that both real-world objects and virtual objects are included. For example, a user wears transparent glasses through which the surrounding real environment can be seen, and virtual objects can also be displayed on the glasses, so the user sees both real objects and virtual objects.

(4) Mixed reality (MR) introduces real-scene information into the virtual environment to build an interactive feedback bridge among the virtual environment, the real world, and the user, thereby enhancing the realism of the user experience. Specifically, real objects are virtualized (for example, a camera scans a real object and a three-dimensional reconstruction generates a virtual object), and the virtualized real objects are introduced into the virtual environment, so that the user can see the real objects in the virtual environment.

It should be noted that the technical solutions provided in the embodiments of this application may be applied to VR, AR, or MR scenarios, or to other scenarios besides VR, AR, and MR; in short, to any scenario in which an image whose shooting angle differs from the human eye's viewing angle needs to be presented to the user.

For ease of understanding, the following description mainly uses a VR scenario as an example.
Exemplarily, refer to FIG. 2A, which is a schematic diagram of a VR system according to an embodiment of this application. The VR system includes a VR wearable device 100 and an image processing device 200.

The image processing device 200 may include a host (for example, a VR host) or a server (for example, a VR server). The VR wearable device 100 is connected to the VR host or the VR server (through a wired or wireless connection). The VR host or the VR server may be a device with relatively large computing capability. For example, the VR host may be a mobile phone, a tablet computer, a notebook computer, or the like, and the VR server may be a cloud server or the like.

The VR wearable device 100 may be a head mounted display (HMD), such as glasses or a helmet. The VR wearable device 100 is provided with at least one camera and at least one display screen. FIG. 2A takes as an example a VR wearable device 100 provided with two display screens, namely a display screen 110 and a display screen 112. The display screen 110 is used to present images to the user's right eye, and the display screen 112 is used to present images to the user's left eye. It should be noted that the display screen 110 and the display screen 112 are enclosed inside the VR glasses, so the arrows indicating them in FIG. 2A are drawn as dashed lines. The display screen 110 and the display screen 112 may be two independent display screens, or they may be two different display areas on the same display screen, which is not limited in this application. FIG. 2A also takes as an example a VR wearable device 100 provided with two cameras, namely a camera 120 and a camera 122, which are respectively used to capture images of the real world. Images captured by the camera 120 may be displayed on the display screen 110, and images captured by the camera 122 may be displayed on the display screen 112. Generally, when the user wears the VR wearable device 100, the eyes are positioned close to the display screens; for example, the right eye is close to the display screen 110 to view the images on it, and the left eye is close to the display screen 112 to view the images on it. Because each camera is positioned differently from its display screen (for example, the camera 120 is located at the lower right of the display screen 110, and the camera 122 is located at the lower left of the display screen 112), the positions of the cameras differ from those of the human eyes. In that case, the shooting angle of a camera differs from the viewing angle of the human eye. For example, still referring to FIG. 2A, the shooting angle of the camera 120 differs from the viewing angle of the right eye, and the shooting angle of the camera 122 differs from the viewing angle of the left eye. This causes discomfort to the user and, over a long period, dizziness, resulting in a poor experience.

In this embodiment of the application, the VR wearable device 100 may send the images captured by the cameras to the image processing device 200 for processing. For example, the image processing device 200 performs viewing-angle reconstruction on the images using the viewing-angle reconstruction solution provided in this application (the specific implementation process is described later) and sends the reconstructed images to the VR wearable device 100 for display. For example, the VR wearable device 100 sends image 1 captured by the camera 120 to the image processing device 200 for viewing-angle reconstruction to obtain image 2, and the display screen 110 then displays image 2; the VR wearable device 100 sends image 3 captured by the camera 122 to the image processing device 200 for viewing-angle reconstruction to obtain image 4, and the display screen 112 then displays image 4. In this way, the user's eyes see images whose viewing angle has been reconstructed, which can alleviate the discomfort (described later). In some embodiments, the VR system in FIG. 2A may not include the image processing device 200; for example, the VR wearable device 100 has local image processing capability (for example, the capability to perform viewing-angle reconstruction on images), so the processing does not need to be performed by the image processing device 200 (the VR host or VR server). For ease of understanding, the following description takes the case where the VR wearable device 100 performs viewing-angle reconstruction locally as an example, and mainly takes the case where the VR wearable device 100 is VR glasses as an example.
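As a purely illustrative sketch (not part of the original disclosure), the snippet below strings together the per-eye flow described above: capture image 1 and image 3, reconstruct them into image 2 and image 4, and display them on the corresponding screens. The function and parameter names are hypothetical placeholders; `reconstruct_view` stands for the viewing-angle reconstruction detailed later, whether it runs on the headset or on a connected host.

```python
# Illustrative sketch only: per-eye passthrough flow with viewing-angle reconstruction.
def passthrough_frame(camera_120, camera_122, display_110, display_112, reconstruct_view):
    image1 = camera_120.capture()                  # image 1 from camera 120 (right side)
    image3 = camera_122.capture()                  # image 3 from camera 122 (left side)
    image2 = reconstruct_view(image1, eye="right")  # viewing-angle reconstruction
    image4 = reconstruct_view(image3, eye="left")
    display_110.show(image2)                       # display screen 110 (right eye)
    display_112.show(image4)                       # display screen 112 (left eye)
```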
Exemplarily, refer to FIG. 2B, which shows a schematic structural diagram of a VR wearable device 100 according to an embodiment of this application. As shown in FIG. 2B, the VR wearable device 100 may include a processor 111, a memory 101, a sensor module 130 (which may be used to obtain the user's posture), a microphone 140, buttons 150, an input/output interface 160, a communication module 170, a camera 180, a battery 190, an optical display module 1100, an eye tracking module 1200, and the like.

It can be understood that the structure illustrated in this embodiment of the application does not constitute a specific limitation on the VR wearable device 100. In other embodiments of this application, the VR wearable device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 111 is generally used to control the overall operation of the VR wearable device 100 and may include one or more processing units. For example, the processor 111 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU) controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. Different processing units may be independent devices or may be integrated into one or more processors.

A memory may also be provided in the processor 111 for storing instructions and data. In some embodiments, the memory in the processor 111 is a cache, which may hold instructions or data that the processor 111 has just used or uses cyclically. If the processor 111 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are avoided and the waiting time of the processor 111 is reduced, which improves the efficiency of the system.

In some embodiments of this application, the processor 111 may be used to control the optical power of the VR wearable device 100. Exemplarily, the processor 111 may control the optical power of the optical display module 1100 to implement the function of adjusting the optical power of the wearable device 100. For example, the processor 111 may adjust the relative positions of the optical devices (such as lenses) in the optical display module 1100 so that the optical power of the optical display module 1100 is adjusted; in turn, when the optical display module 1100 forms an image for the human eye, the position of the corresponding virtual image plane can be adjusted, thereby achieving the effect of controlling the optical power of the wearable device 100.

In some embodiments, the processor 111 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or a serial peripheral interface (SPI), among others.

In some embodiments, the processor 111 may blur objects at different depths of field to different degrees, so that objects at different depths of field have different sharpness.
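As a purely illustrative sketch (not part of the original disclosure), the snippet below blurs objects by an amount that grows with their distance from an assumed focal depth, which is one possible reading of the depth-of-field processing mentioned above; OpenCV, the linear sigma mapping, and the per-level loop are all assumptions.

```python
# Illustrative sketch only: depth-dependent blur so objects at different depths of
# field end up with different sharpness. Assumes a per-pixel depth map is available.
import cv2
import numpy as np

def depth_dependent_blur(image, depth_map, focal_depth, max_sigma=5.0):
    """image: HxWx3 uint8; depth_map: HxW float (meters); focal_depth: depth kept sharp."""
    levels = np.round(np.clip(np.abs(depth_map - focal_depth), 0, max_sigma)).astype(int)
    out = image.copy()
    for s in np.unique(levels):
        if s == 0:
            continue  # pixels near the focal depth stay sharp
        blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=float(s))
        out[levels == s] = blurred[levels == s]
    return out
```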
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 111 may include multiple sets of I2C buses.

The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is generally used to connect the processor 111 and the communication module 170. For example, the processor 111 communicates with the Bluetooth module in the communication module 170 through the UART interface to implement the Bluetooth function.

The MIPI interface may be used to connect the processor 111 with peripheral devices such as the display screen in the optical display module 1100 and the camera 180.

The GPIO interface can be configured by software, as either a control signal or a data signal. In some embodiments, the GPIO interface may be used to connect the processor 111 with the camera 180, the display screen in the optical display module 1100, the communication module 170, the sensor module 130, the microphone 140, and so on. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like. In some embodiments, the camera 180 may capture images that include real objects, and the processor 111 may fuse the images captured by the camera with virtual objects and present the fused images through the optical display module 1100. In some embodiments, the camera 180 may also capture images that include the human eye, and the processor 111 performs eye tracking based on those images.

The USB interface is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the VR wearable device 100, and may also be used to transmit data between the VR wearable device 100 and peripheral devices. It may also be used to connect earphones and play audio through them, and to connect other electronic devices such as mobile phones. The USB interface may be USB 3.0, which is compatible with high-speed DisplayPort (DP) signal transmission and can carry high-speed video and audio data.

It can be understood that the interface connection relationships among the modules illustrated in this embodiment of the application are only schematic and do not constitute a structural limitation on the wearable device 100. In other embodiments of this application, the wearable device 100 may also adopt interface connection manners different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
In addition, the VR wearable device 100 may include a wireless communication function; for example, the VR wearable device 100 may receive images from another electronic device (such as a VR host) for display. The communication module 170 may include a wireless communication module and a mobile communication module. The wireless communication function may be implemented through an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), a baseband processor (not shown), and the like. The antenna is used to transmit and receive electromagnetic wave signals. The VR wearable device 100 may include multiple antennas, and each antenna may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization; for example, antenna 1 may be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.

The mobile communication module may provide solutions for wireless communication applied to the VR wearable device 100, including second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks. The mobile communication module may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module may receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module may also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave radiated through the antenna. In some embodiments, at least some functional modules of the mobile communication module may be provided in the processor 111. In some embodiments, at least some functional modules of the mobile communication module may be provided in the same device as at least some modules of the processor 111.

The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to a speaker or the like) or displays an image or video through the display screen in the optical display module 1100. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 111 and provided in the same device as the mobile communication module or other functional modules.

The wireless communication module may provide solutions for wireless communication applied to the VR wearable device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 111. The wireless communication module may also receive a signal to be sent from the processor 111, frequency-modulate and amplify it, and convert it into an electromagnetic wave radiated through the antenna.

In some embodiments, the antenna of the VR wearable device 100 is coupled to the mobile communication module, so that the VR wearable device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The VR wearable device 100 implements the display function through the GPU, the optical display module 1100, the application processor, and so on. The GPU is a microprocessor for image processing and connects the optical display module 1100 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 111 may include one or more GPUs, which execute program instructions to generate or change display information.

The memory 101 may be used to store computer-executable program code, where the executable program code includes instructions. The processor 111 executes various functional applications and data processing of the VR wearable device 100 by running the instructions stored in the memory 101. The memory 101 may include a program storage area and a data storage area. The program storage area may store an operating system and an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the wearable device 100 (such as audio data or a phone book), and the like. In addition, the memory 101 may include a high-speed random access memory and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
VR穿戴设备100可以通过音频模块,扬声器,麦克风140,耳机接口,以及应用处理器等实现音频功能。例如音乐播放,录音等。音频模块用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块还可以用于对音频信号编码和解码。在一些实施例中,音频模块可以设置于处理器111中,或将音频模块的部分功能模块设置于处理器111中。扬声器,也称“喇叭”,用于将音频电信号转换为声音信号。穿戴设备100可以通过扬声器收听音乐,或收听免提通话。The VR wearable device 100 can implement audio functions through an audio module, a speaker, a microphone 140, an earphone interface, and an application processor. Such as music playback, recording, etc. The audio module is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal. The audio module can also be used to encode and decode audio signals. In some embodiments, the audio module may be set in the processor 111 , or some functional modules of the audio module may be set in the processor 111 . Loudspeakers, also called "horns", are used to convert audio electrical signals into sound signals. The wearable device 100 can listen to music through the speaker, or listen to hands-free calls.
麦克风140,也称“话筒”,“传声器”,用于将声音信号转换为电信号。VR穿戴设备100可以设置至少一个麦克风140。在另一些实施例中,VR穿戴设备100可以设置两个麦克风140,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,VR穿戴设备100还可以设置三个,四个或更多麦克风140,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 140, also called "microphone" or "microphone", is used to convert sound signals into electrical signals. The VR wearable device 100 may be provided with at least one microphone 140 . In other embodiments, the VR wearable device 100 can be provided with two microphones 140, which can also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the VR wearable device 100 can also be provided with three, four or more microphones 140 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
The headphone interface is used to connect wired headphones. The headphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
In some embodiments, the VR wearable device 100 may include one or more keys 150. These keys can control the VR wearable device and provide the user with access to functions on the VR wearable device 100. The keys 150 may take the form of buttons, switches, dials, and touch or near-touch sensing devices (such as touch sensors). Specifically, for example, the user can turn on the optical display module 1100 of the VR wearable device 100 by pressing a button. The keys 150 include a power key, a volume key, and the like. A key 150 may be a mechanical key or a touch key. The wearable device 100 can receive key input and generate key signal input related to user settings and function control of the wearable device 100.
In some embodiments, the VR wearable device 100 may include an input/output interface 160, and the input/output interface 160 may connect other apparatuses to the VR wearable device 100 through suitable components. The components may include, for example, an audio/video jack, a data connector, and the like.
The optical display module 1100 is used to present images to the user under the control of the processor 111. The optical display module 1100 can convert a real pixel image display into a near-eye projected virtual image display through one or more optical devices such as a reflective mirror, a transmissive mirror, or an optical waveguide, so as to realize a virtual interactive experience or an interactive experience combining the virtual and the real. For example, the optical display module 1100 receives image data information sent by the processor 111 and presents the corresponding image to the user.
In some embodiments, the VR wearable device 100 may further include an eye tracking module 1200. The eye tracking module 1200 is used to track the movement of the human eyes and then determine the gaze point of the human eyes. For example, the pupil position can be located through image processing technology, the coordinates of the pupil center can be obtained, and the person's gaze point can then be calculated. In some embodiments, the eye tracking system may determine the position of the user's gaze point (or determine the direction of the user's line of sight) through methods such as the video oculography method, the photodiode response method, or the pupil-corneal reflection method, thereby implementing eye tracking of the user.
In some embodiments, determining the user's line-of-sight direction by using the pupil-corneal reflection method is taken as an example. As shown in FIG. 2C, the eye tracking system may include one or more near-infrared light-emitting diodes (LEDs) and one or more near-infrared cameras. The near-infrared LED and the near-infrared camera are not shown in FIG. 2B. In different examples, the near-infrared LEDs may be arranged around the eyepiece so as to fully illuminate the human eye. In some embodiments, the center wavelength of the near-infrared LED may be 850 nm or 940 nm. The eye tracking system may obtain the user's line-of-sight direction as follows: the near-infrared LED illuminates the human eye, the near-infrared camera captures an image of the eyeball, and then, based on the position of the reflection of the near-infrared LED on the cornea in the eyeball image (that is, the image of the LED glint on the near-infrared camera in FIG. 2C) and the center of the pupil (that is, the image of the pupil center on the near-infrared camera in FIG. 2C), the optical axis direction of the eyeball is determined, thereby obtaining the user's line-of-sight direction.
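As a rough illustration only (not part of the patent's method), the following Python sketch shows a common simplification of the pupil-corneal reflection idea: the vector from the glint to the pupil center is mapped to a gaze point through a per-user calibration. The function name, the affine calibration matrix, and the mapping itself are assumptions for illustration; the patent text only describes deriving the eyeball's optical axis from the glint and pupil positions.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_center, glint_center, calib_matrix):
    """Estimate a gaze point from one near-infrared eye image (illustrative sketch).

    pupil_center, glint_center: (x, y) pixel coordinates in the NIR camera image.
    calib_matrix: hypothetical 2x3 affine mapping obtained from a per-user
    calibration, mapping the pupil-glint vector to a gaze point on the display.
    """
    # The pupil-glint difference vector is largely invariant to small head
    # movements, which is why it is used instead of the raw pupil position.
    v = np.asarray(pupil_center, dtype=float) - np.asarray(glint_center, dtype=float)
    gaze = calib_matrix @ np.append(v, 1.0)  # affine mapping: A @ [vx, vy, 1]
    return gaze  # (x, y) gaze point in display coordinates
```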
It should be noted that, in some embodiments of the present application, a corresponding eye tracking system may be provided for each of the user's two eyes, so that eye tracking can be performed on the two eyes synchronously or asynchronously. In other embodiments of the present application, an eye tracking system may be provided near only one eye; the line-of-sight direction of that eye is obtained through the eye tracking system, and based on the relationship between the gaze points of the two eyes (for example, when a user observes an object with both eyes, the gaze point positions of the two eyes are generally similar or the same) and the user's interpupillary distance, the line-of-sight direction or gaze point position of the other eye can be determined.
It can be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the VR wearable device 100. In other embodiments of the present application, the VR wearable device 100 may include more or fewer components than those shown in FIG. 2A, or combine some components, or split some components, or arrange the components differently; this is not limited in the embodiments of the present application.
In order to clearly describe the technical solution of the present application, the mechanism of human vision is first briefly described below.
FIG. 3 is a schematic diagram of the composition of the human eye. As shown in FIG. 3, the human eye may include a crystalline lens, ciliary muscles, and a retina located at the fundus. The lens functions as a zoom lens, converging the light entering the human eye so that the incident light is focused on the retina at the fundus and the scene in the real environment forms a clear image on the retina. The ciliary muscles are used to adjust the shape of the lens; for example, by contracting or relaxing, the ciliary muscles adjust the diopter of the lens and thereby its focal length. As a result, objects at different distances in the real scene can all be clearly imaged on the retina through the lens.
In the real world, when a user (not wearing VR glasses) views an object, the viewing angles of the left eye and the right eye differ. The user's brain can determine the depth of an object based on the parallax of the same object between the left and right eyes, so the world seen by the human eyes is three-dimensional. Generally, the larger the parallax, the smaller the depth, and the smaller the parallax, the larger the depth. For example, referring to FIG. 4A, the real world includes an observed object 400 (a triangle is taken as an example). When the human eyes observe the observed object 400, the left eye captures an image 401 in which the triangle is located at position (A1, B1), and the right eye captures an image 402 in which the triangle is located at position (A2, B2). The brain can determine the position of the object in the real world from the pixel difference (or parallax) of the same object (such as the triangle) between the image 401 and the image 402. For example, based on the position (A1, B1) of the triangle in the image 401 and the position (A2, B2) of the triangle in the image 402, the brain determines that the position of the triangle in the real world is (A3, B3, L1), where L1 is the depth of the triangle, that is, the distance between the triangle and the user's eyes. In other words, the distance between the triangle seen by the user without VR glasses and the user's eyes is L1; in this case, the real-world distance between the triangle and the user's eyes equals the distance between the triangle and the user's eyes as perceived by the brain.
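The inverse relationship between parallax and depth described above can be illustrated with the standard pinhole stereo model. This is an illustration only, not text from the patent; the symbols f (focal length), B (baseline between the two viewpoints), and d (disparity) are assumptions introduced here.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: depth Z = f * B / d.

    A larger disparity (pixel difference between the two views) yields a
    smaller depth, matching the parallax/depth relationship described above.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: with the same focal length and baseline, doubling the disparity
# halves the perceived depth.
# depth_from_disparity(1000, 0.063, 20)  -> 3.15 m
# depth_from_disparity(1000, 0.063, 40)  -> 1.575 m
```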
Now suppose the user stays at the same position, puts on the VR glasses, and observes the same observed object 400 (that is, the triangle) through the VR glasses. The position of the observed object 400 seen by the user wearing the VR glasses differs from the position of the observed object 400 seen when the user is not wearing the VR glasses.
For example, referring to FIG. 4B together with FIG. 2A, in some embodiments the camera 120 on the VR glasses is located at the lower right of the display screen 110, and the camera 122 is located at the lower left of the display screen 112, so the spacing B' between the two cameras is greater than the interpupillary distance B of the human eyes. Simply put, the camera 122 is farther to the left than the person's left eye, and the camera 120 is farther to the right than the person's right eye. As shown in FIG. 4B, the triangle in the image 422 captured by the camera 122 is located at position (A1', B1'). Since the camera 122 is farther to the left than the left eye, the triangle in the image 422 captured by the camera 122 is farther to the right than the triangle in the image 401 captured by the left eye without VR glasses (see FIG. 4A); that is, (A1', B1') is to the right of (A1, B1). Continuing with FIG. 4B, the triangle in the image 420 captured by the camera 120 is located at position (A2', B2'). Since the camera 120 is farther to the right than the right eye, the triangle in the image 420 captured by the camera 120 is farther to the left than the triangle in the image 402 captured by the right eye without VR glasses (see FIG. 4A); that is, (A2', B2') is to the left of (A2, B2). Still referring to FIG. 4B, assume that the image 422 is displayed on the display screen 112 and the image 420 is displayed on the display screen 110; based on the image 422 and the image 420, the brain sees the triangle at position (A3, B3, L2), that is, the distance between the triangle seen by the user wearing the VR glasses and the user's eyes is L2. Because (A1', B1') is farther to the right than (A1, B1) (in other words, (A1, B1) is closer to the image center than (A1', B1')), and (A2', B2') is farther to the left than (A2, B2) (in other words, (A2, B2) is closer to the image center than (A2', B2')), the pixel difference between (A2', B2') and (A1', B1') in FIG. 4B is greater than the pixel difference between (A2, B2) and (A1, B1) in FIG. 4A. Therefore, the depth L2 of the triangle perceived by the user based on the pixel difference between (A1', B1') and (A2', B2') is smaller than the depth L1 perceived based on the pixel difference between (A1, B1) and (A2, B2).
That is to say, when the user is at the same position, an object seen while wearing the VR glasses appears closer to the user than the same object seen without the VR glasses. For example, in the real world the human eye (without VR glasses) sees an object 1 meter away from the user, but when the user wears the VR glasses the same object appears 0.7 meters away, closer to the user, which does not match reality. Moreover, for objects that are already relatively close to the user in the real world, the user wearing the VR wearable device will see these objects looming even closer, which feels uncomfortable and oppressive; over a long period of time this causes dizziness and a poor experience.
The above embodiment takes a single observed object (the triangle) as an example. The following takes two observed objects as an example: as shown in FIG. 4C, an observed object 400 (a triangle) and an observed object 401 (a square). For ease of understanding, assume that the observed object 401 is located infinitely far away, such as the sun. Without VR glasses, the left eye should see the image 460 and the right eye should see the image 470. Since the square is at infinity and roughly equidistant from the left and right eyes, the square is located at the image center in both the image 460 and the image 470. In this way, the brain can see the real environment based on the image 460 and the image 470. When the user wears the VR glasses, the left eye sees the image 480 captured by the camera 122 and the right eye sees the image 490 captured by the camera 120. Because the camera 122 is farther to the left than the human eye, the distance between the triangle and the square in the image 480, observed from the position of the camera 122, is greater than the distance between the triangle and the square in the image 460, observed from the position of the left eye. Similarly, because the camera 120 is farther to the right than the right eye, the distance between the triangle and the square in the image 490, observed from the position of the camera 120, is greater than the distance between the triangle and the square in the image 470, observed from the position of the right eye. Therefore, when the VR glasses are worn, the triangle that the brain sees based on the image 480 and the image 490 is closer to the user, which does not match the real situation.
In the above embodiments, the camera 120 on the VR glasses being located at the lower right of the display screen 110 and the camera 122 being located at the lower left of the display screen 112 is taken as an example. It can be understood that, in other embodiments, the camera 120 and the camera 122 may also be located at other positions; for example, the camera 120 may be located above the display screen 110 and the camera 122 above the display screen 112, or the distance between the camera 120 and the camera 122 may be smaller than the distance between the two display screens, and so on. As long as the camera positions differ from the display screen positions, the distance to an object seen when wearing the VR glasses will differ from the distance to that object seen when not wearing the VR glasses.
For ease of understanding, an application scenario is described below as an example: a user wearing VR glasses plays a game at home, as shown in FIG. 5A. In this application scenario, the VR glasses can show the user the real scene, so while wearing the VR glasses the user can see the home environment, such as the sofa and the table. In other embodiments, the VR glasses can show the user the real scene together with virtual objects; in that case, while wearing the VR glasses the user sees the home environment as well as virtual objects (for example, game characters and game interfaces, which are not objects in the real scene). In this way, the user can play a virtual game in a familiar environment, which provides a better experience.
As shown in FIG. 5B, what the user should see without VR glasses is the real world 501 shown in (a) of FIG. 5B. When the user wears the VR glasses, what the human eyes see is the virtual world 502 shown in (b) of FIG. 5B. It can be seen that every object in the virtual world 502 is closer to the user; in particular, for objects that are already close to the user in the real world, such as the table, the user wearing the VR glasses will see the table looming even closer, which does not match reality.
To solve this problem, the embodiments of the present application provide a solution: perspective reconstruction. Perspective reconstruction can be simply understood as viewing-angle adjustment or rebuilding. As mentioned above, because the camera's shooting angle differs from the human eye's observation angle, the scene the user sees when wearing VR glasses differs from the scene seen without them. Simply put, perspective reconstruction means adjusting the camera's shooting angle to the human eye's observation angle. However, physically adjusting the camera's shooting angle is difficult; for example, the camera is fixed at a certain position on the VR glasses, and adjusting its shooting angle requires corresponding hardware/mechanical structures, which is not only costly but also unfavorable to making the device thin and light. Therefore, to avoid increasing hardware cost, the effect of adjusting the camera's shooting angle to the human eye's observation angle can be achieved through image post-processing: the image captured by the camera is processed, and when the processed image is displayed on the VR glasses, the difference between the scene the user sees and the scene seen without VR glasses is reduced. This image processing is called image perspective reconstruction. Simply put, image perspective reconstruction adjusts the display positions of pixels in the image captured by the camera so that the objects the human eyes see based on the adjusted image match the real situation. Taking FIG. 4B and FIG. 4A as an example, image perspective reconstruction may include adjusting the position (A1', B1') of the triangle in the image 422 in FIG. 4B to (A1, B1), and adjusting the position (A2', B2') of the triangle in the image 420 in FIG. 4B to (A2, B2). That is, the images before perspective reconstruction are the image 422 and the image 420 in FIG. 4B, and the images after perspective reconstruction are the image 401 and the image 402 in FIG. 4A. In this case, the display screens of the VR glasses can display the images after perspective reconstruction (that is, display the image 401 and the image 402), so that the human brain can accurately determine the real position of the object (that is, the triangle) based on the image 401 and the image 402.
In one implementation, when performing perspective reconstruction on an image, the reconstruction may be applied to the entire image (which may be called global perspective reconstruction).
The following takes the scenario of FIG. 5A as an example, and takes global perspective reconstruction of an image captured by one camera on the VR glasses as an example. Assume that the image captured by the camera is the image in (a) of FIG. 6A. As shown in (b) of FIG. 6A, in some embodiments the image is divided into four regions: a region 601, a region 602, a region 603, and a region 604. Assume that after perspective reconstruction the display positions of the region 602 and the region 604 move down, while the display positions of the region 601 and the region 603 move up. The complete image composed of the four regions after perspective reconstruction is shown in (c) of FIG. 6A; it can be seen that objects such as the wall, the sofa, and the table are deformed (or distorted, misaligned, etc.).
It should be noted that FIG. 6A takes dividing the image into four regions for perspective reconstruction as an example. In practice, global perspective reconstruction divides the image into finer-grained regions, for example into 9, 16, or more regions, or even reconstructs each pixel individually. It can be understood that when perspective reconstruction is performed on finer-grained regions or on each pixel, the deformation of objects in the image becomes more severe. For example, as shown in FIG. 6B, in the image after global perspective reconstruction, the wall is distorted (for example, in a wavy pattern) and the edge of the table is also distorted (for example, in a wavy pattern). Therefore, the global perspective reconstruction solution not only involves a huge workload, but also produces a severely distorted picture after reconstruction, which greatly affects user experience.
In other implementations, perspective reconstruction does not need to be performed on the entire image. For example, it is sufficient to perform perspective reconstruction only on a first area of the image (the image captured by the camera), and not on a second area of the image (the area other than the first area). The first area may be the area where the user's gaze point is located, an area of interest to the user, a default area, a user-specified area, and so on. For ease of understanding, performing perspective reconstruction on the first area of the image may be called regional perspective reconstruction. Since perspective reconstruction is performed only on the first area and not on the second area, the workload is reduced; moreover, as described above, perspective reconstruction may cause picture distortion, and since the second area does not need to be reconstructed, the image in the second area will not be distorted. In other words, the probability or degree of picture distortion in regional perspective reconstruction is far smaller than that in global perspective reconstruction, which helps alleviate the picture distortion that occurs in global perspective reconstruction.
For example, continuing with the scenario of FIG. 5A, assume that the image captured by the VR glasses is the image shown in (a) of FIG. 7. Assume that the area where the user's gaze point is located is the area enclosed by the dashed line; then perspective reconstruction is performed only on the area enclosed by the dashed line and not on the other areas. Therefore, in the image after perspective reconstruction, the display positions and/or shapes of the objects inside the dashed-line area change, while the display positions and/or shapes of the objects in the other areas do not change, as shown in (b) of FIG. 7. Consequently, the degree of object distortion in the image after this reconstruction is clearly lower than in the image after global perspective reconstruction. For example, compare FIG. 6B with (b) of FIG. 7: FIG. 6B is the image after global perspective reconstruction, and FIG. 7 shows the image reconstructed using the technical solution of the present application. It can be seen that, in the image reconstructed with the technical solution of the present application, the other areas (the areas outside the area where the gaze point is located) are stable; for example, the sofa and the wall are not distorted, which significantly reduces the degree and probability of image distortion.
It should be noted that FIG. 7 takes the area enclosed by the dashed line as the area where the user's gaze point is located; the area enclosed by the dashed line may be the minimum bounding rectangle of the table, or may be greater than or equal to the minimum bounding rectangle of the table. It can be understood that it may also be the minimum bounding square, the minimum bounding circle, and so on of the table; the shape is not limited. In other embodiments, the area where the user's gaze point is located may also be a partial area of the table.
FIG. 7 takes regional perspective reconstruction of the image captured by one camera as an example. It can be understood that when the VR glasses include two cameras, regional perspective reconstruction may be performed separately on the image captured by each camera.
For example, as shown in FIG. 8, the camera 122 on the VR glasses captures an image 622, and the camera 120 captures an image 620. The VR glasses may perform regional perspective reconstruction on the dashed-line area of the image 622 to obtain an image 624. The display positions and/or shapes of the objects in the dashed-line area of the image 624 differ from those of the objects in the dashed-line area of the image 622; for example, the display position of the table in the image 624 is farther to the left than in the image 622, and/or the table is deformed to some extent. Perspective reconstruction is not performed on the other areas of the image 622 (the areas outside the dashed line), so the objects in those other areas of the image 624 have the same display positions and shapes as the objects in the other areas of the image 622. The VR glasses may also perform regional perspective reconstruction on the dashed-line area of the image 620 to obtain an image 626. The display positions and/or shapes of the objects in the dashed-line area of the image 626 differ from those of the objects in the dashed-line area of the image 620; for example, the display position of the table in the image 626 is farther to the right than in the image 620, and/or the table is deformed to some extent. Perspective reconstruction is not performed on the other areas of the image 620 (the areas outside the dashed line, for example the area where the sofa is located), so the objects in those other areas of the image 626 have the same display positions and shapes as the objects in the other areas of the image 620.
The display screen 112 of the VR glasses displays the image 624, and the display screen 110 displays the image 626. In this way, after the user puts on the VR glasses, the left eye sees the image 624 and the right eye sees the image 626. Based on the parallax of the table between the image 624 and the image 626, the depth information determined for the table is accurate, because the display positions of the table in the image 624 and the image 626 have been adjusted: after the adjustment, the parallax of the table between the two images becomes smaller, and based on the smaller parallax the determined depth is larger. The user therefore no longer perceives the table as looming toward them, and the scene seen matches the real situation. In addition, it should be noted that because perspective reconstruction is performed on the area enclosed by the dashed line, the table the user sees while wearing the VR glasses is deformed to some extent; but because the other areas are not reconstructed, the objects the user sees in the other areas are not distorted, and compared with global perspective reconstruction the degree of distortion/deformation is reduced. Moreover, because the other areas are not reconstructed, the display positions of the objects in the other areas seen by the user wearing the VR glasses are inaccurate and differ from the real world; however, since the other areas are not the area the user is gazing at, the user pays little attention to them, so inaccurate display positions of objects in the other areas have little impact on user experience, while workload is saved and efficiency is improved.
The implementation principle of the above perspective reconstruction is described below with reference to the accompanying drawings.
First, two coordinate systems are described: a first coordinate system (X1-O1-Y1) and a second coordinate system (X2-O2-Y2). The first coordinate system (X1-O1-Y1) is established based on the display screen. For example, as shown in FIG. 9, the first coordinate system takes the center of the display screen 112 as the coordinate origin, with the display direction as the Y-axis direction. It can be understood that the first coordinate system may also be established based on the human eye, for example based on the left eye in FIG. 9. Considering that establishing a coordinate system based on the human eye is difficult while creating one based on the display screen is easier, and that the position of the display screen is close to the position of the human eye, the coordinate system created based on the display screen can, to a certain extent, be regarded as the same as one created based on the human eye. The second coordinate system (X2-O2-Y2) is established based on the camera 122. For example, as shown in FIG. 9, the second coordinate system (X2-O2-Y2) is created based on the camera 122; that is, when the camera 122 photographs an object, the object is imaged in the second coordinate system (X2-O2-Y2). Since the image captured by the camera 122 and the image displayed on the display screen 112 are not in the same coordinate system, the camera's shooting angle differs from the human eye's observation angle. Therefore, performing perspective reconstruction on the image captured by the camera 122 can be understood as performing a coordinate transformation on the image captured by the camera 122, that is, transforming from the second coordinate system to the first coordinate system.
Transforming from the second coordinate system to the first coordinate system requires an offset. The offset refers to the offset between the second coordinate system and the first coordinate system, and can also be understood as the distance between the center of the camera 122 and the center of the display screen 112. Performing perspective reconstruction on the image captured by the camera 122 includes: shifting the pixels in the image to target positions according to the offset. For example, taking the earlier FIG. 4B as an example, the position of the triangle in the image 422 captured by the camera 122 is (A1', B1'); then (A1', B1') + offset = (A1, B1), which yields the position (A1, B1) of the triangle in FIG. 4A and completes the perspective reconstruction of the triangle.
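A minimal sketch of this pixel-shifting step is given below, assuming the offset is expressed in pixels and applied uniformly to a rectangular first area; the function and parameter names are illustrative and the shifted region is assumed to stay inside the image bounds.

```python
import numpy as np

def shift_region_by_offset(image: np.ndarray, region: tuple, offset: tuple) -> np.ndarray:
    """Shift the pixels inside `region` by `offset` (dx, dy) pixels.

    image:  H x W x C array captured by the camera (second coordinate system).
    region: (x0, y0, x1, y1) bounding box of the first area in pixel coordinates.
    offset: (dx, dy) displacement mapping camera coordinates toward the
            display/eye coordinate system.
    The vacated pixels keep their original values in this simplified sketch.
    """
    x0, y0, x1, y1 = region
    dx, dy = offset
    out = image.copy()
    patch = image[y0:y1, x0:x1].copy()
    out[y0 + dy : y1 + dy, x0 + dx : x1 + dx] = patch
    return out
```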
In some embodiments, the offset includes an offset direction and/or an offset distance (the offset distance may also be called a displacement offset).
The offset distance may be the distance from the origin of the second coordinate system to the origin of the first coordinate system. In other words, the offset distance is related to the distance between the display screen 112 and the camera 122. For example, the greater the distance between the display screen 112 and the camera 122, the greater the distance between the first coordinate system and the second coordinate system, that is, the greater the offset distance. In some embodiments, the offset distance increases as the distance between the camera 122 and the display screen 112 increases, and decreases as that distance decreases. For example, when the distance between the camera 122 and the display screen 112 is a first distance, the offset distance is a first displacement offset; when the distance between the camera 122 and the display screen 112 is a second distance, the offset distance is a second displacement offset. If the first distance is greater than or equal to the second distance, the first displacement offset is greater than or equal to the second displacement offset; if the first distance is smaller than the second distance, the first displacement offset is smaller than the second displacement offset. For example, taking the earlier FIG. 4B as an example, if the distance between the camera 122 and the display screen 112 increases, the displacement offset between the position (A1', B1') of the triangle in the image 422 captured by the camera 122 and the position (A1, B1) of the triangle in FIG. 4A increases.
The offset direction may be the direction from the origin of the second coordinate system to the origin of the first coordinate system. In other words, the offset direction is related to the positional relationship between the display screen and the camera. In some embodiments, the offset direction changes as the direction between the camera and the display screen changes. For example, when the camera is located in a first direction relative to the display screen, the offset direction is the first direction; when the camera is located in a second direction relative to the display screen, the offset direction is the second direction. For example, when the camera 122 is located to the left of the display screen 112, that is, the second coordinate system is to the left of the first coordinate system, the offset direction is to the left; taking the earlier FIG. 4B as an example, the position (A1', B1') of the triangle in the image 422 captured by the camera 122 is shifted leftward to the position (A1, B1) of the triangle in FIG. 4A. Similarly, the camera 120 is located to the right of the display screen 110, so the offset direction is to the right; taking the earlier FIG. 4B as an example, the position (A2', B2') of the triangle in the image 420 captured by the camera 120 is shifted rightward to the position (A2, B2) of the triangle in FIG. 4A.
The first coordinate system, the second coordinate system, the offset, and the like may be stored in the VR glasses in advance.
In other embodiments, the offset may change. In some embodiments, the relative position between the display screen and the camera may change; for example, the display screen may be movable on the VR glasses, and/or the camera may be movable on the VR glasses. For example, as the position of the display screen on the VR glasses is adjusted, and/or the camera position is adjusted or its shooting angle changes, the offset between the first coordinate system corresponding to the display screen and the second coordinate system corresponding to the camera changes accordingly; or, as the distance between the two display screens on the VR glasses and/or the distance between the two cameras is adjusted, the offset changes accordingly. For example, the distance between the two display screens and/or the distance between the two cameras may be adjusted according to the distance between the user's left-eye pupil and right-eye pupil. This solution is applicable to VR glasses whose display screen and/or camera positions are adjustable. Such VR glasses are suitable for various groups of people; for example, when used by a user with a wider interpupillary distance, the relative distance between the display screen and the camera can be adjusted to be larger, and when used by a user with a narrower interpupillary distance, it can be adjusted to be smaller, and so on. Therefore, one pair of VR glasses can serve multiple users; for example, one pair can be used by the whole family. No matter how the display screen and/or camera positions are adjusted, the offset is adjusted correspondingly, and the VR glasses can perform perspective reconstruction based on the adjusted offset.
In some embodiments, the VR glasses may shift all the pixels in the image captured by the camera to target positions according to the offset (that is, global perspective reconstruction).
In other embodiments, the VR glasses may first determine a first area of the image and shift the pixels in the first area to target positions according to the offset. That is, only the pixels in the first area are shifted, and the pixels in the other areas may remain unchanged.
For example, the first area may be the area of the image where the user's gaze point is located (a simple sketch of this gaze-centered option is given after the list of alternatives below). In some embodiments, the VR glasses include an eye tracking module, through which the user's gaze point can be located. In one implementation, if the VR glasses determine that the user's gaze point is located at a point on an object (for example, the table in FIG. 7), the minimum bounding rectangle of that object (for example, the table) is determined as the first area. It can be understood that the minimum bounding rectangle may instead be a minimum bounding square, a minimum bounding circle, and so on. In other embodiments, when the VR glasses determine that the user's gaze point is located at a point on an object (for example, the table), the first area may be determined as a rectangle centered at that point with a preset side length, or a circle centered at that point with a preset radius, and so on, where the preset side length, preset radius, and the like may be set by default. In that case, the area where the user's gaze point is located may be a partial area of the object. In still other embodiments, the first area may also be the entire area whose depth is at the depth of the user's gaze point.
Alternatively, the first area may be an area of interest to the user in the image. The area of interest may be the area of the image where an object of interest to the user is located. For example, the VR glasses may store objects of interest to the user (such as people, animals, and so on); when such an object is recognized in the image captured by the camera, the area where the object is located is determined as the first area. The objects of interest may be stored in the VR glasses manually by the user; alternatively, since the user can interact with objects in the virtual world, an object of interest may also be an object for which the number of interactions recorded by the VR glasses exceeds a preset number and/or the interaction duration exceeds a preset duration, and so on.
Alternatively, the first area may be a default area, such as the central area of the image. Considering that users generally pay attention to the central area of the image first, the first area defaults to the central area.
Alternatively, the first area may be a user-specified area. For example, the user may set the first area on the VR glasses or on an electronic device (such as a mobile phone) connected to the VR glasses, and so on.
In other embodiments, the first area may also be determined according to different scenarios. Taking a VR game scenario as an example, if user A participates in the game as a player, the area where user A's game character is located is the first area; or, if user A is spectating user B's game, the area where user B's game character (that is, the player being watched) is located is the first area. As another example, in a VR driving scenario, user A wearing the VR glasses sees a virtual vehicle being driven on a road; the first area may be the area where the vehicle driven by user A is located, or the area where the steering wheel and windshield of that vehicle are located, or the area where the vehicle ahead of user A's vehicle on the road is located.
In short, the first area is an area of the image captured by the camera. The specific manners of determining the first area include but are not limited to the above, and are not enumerated one by one in this application.
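As a minimal sketch of the gaze-centered option mentioned above, the following helper builds a rectangular first area around a gaze point reported by the eye tracking module. The half-width/half-height defaults stand in for the "preset side length" and are illustrative assumptions.

```python
def gaze_region(gaze_xy: tuple, image_size: tuple,
                half_width: int = 128, half_height: int = 128) -> tuple:
    """Return a rectangular first area centered on the gaze point.

    gaze_xy:    (x, y) gaze point in image pixel coordinates.
    image_size: (width, height) of the camera image.
    The rectangle is clamped to the image bounds; returns (x0, y0, x1, y1).
    """
    gx, gy = gaze_xy
    w, h = image_size
    x0 = max(0, gx - half_width)
    y0 = max(0, gy - half_height)
    x1 = min(w, gx + half_width)
    y1 = min(h, gy + half_height)
    return (x0, y0, x1, y1)
```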
In some embodiments, the offsets of all the pixels in the first area may be the same. For example, the offset distance of all the pixels is the distance between the origin of the first coordinate system and the origin of the second coordinate system described above, and the offset direction of all the pixels is the direction from the origin of the second coordinate system to the origin of the first coordinate system.
In other embodiments, the offsets of different pixels in the first area may be different. For example, as shown in (a) of FIG. 10, the first area 1000 includes a central area 1010 and an edge area 1020 (the hatched area). The size of the edge area 1020 may be default, for example the area formed by a preset width extending inward from the edge of the first area. The offset of the pixels in the central area 1010 is greater than the offset of the pixels in the edge area 1020. For example, taking the distance L between the origin of the first coordinate system and the origin of the second coordinate system described above as an example, the offset distance of the pixels in the central area 1010 is equal to L, while the offset distance of the pixels in the edge area 1020 is smaller than L, for example L/2, L/3, and so on. In this way, the pixels at the center of the first area are displaced by a larger amount and the pixels at the edge by a smaller amount; because the edge connects to the other areas, a small displacement at the edge makes the junction with the other areas smoother and avoids obvious misalignment at the edge of the area where the gaze point is located.
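The center-large/edge-small offset pattern can be sketched as a per-pixel weight on the full offset L. The linear falloff used below is an assumption for illustration; the text only requires the edge offset to be smaller than the central offset (for example L/2 or L/3).

```python
import numpy as np

def offset_weights(region_height: int, region_width: int, edge_width: int) -> np.ndarray:
    """Per-pixel weight (0..1) multiplying the full offset L inside the first area.

    Pixels in the central area get weight 1.0 (full offset); pixels inside the
    edge band of width `edge_width` get a smaller weight that falls off toward
    the boundary, keeping the junction with the surrounding, non-reconstructed
    area smooth.
    """
    ys = np.arange(region_height)[:, None]
    xs = np.arange(region_width)[None, :]
    # Distance of each pixel to the nearest border of the region.
    border_dist = np.minimum.reduce(
        [ys, region_height - 1 - ys, xs, region_width - 1 - xs]
    )
    return np.clip(border_dist / float(edge_width), 0.0, 1.0)
```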
It can be understood that when the offset of the central area is large and the offset of the edge area is small, objects in the central area are deformed (that is, change shape) to a greater degree, while objects in the edge area are deformed to a lesser degree. In other words, within the first area, the degree of deformation of objects gradually decreases from the center to the edge.
In other cases, the offset of the pixels in the first area is greater than the offset of the pixels in the second area, where the second area may be an area outside the first area that surrounds the outer edge of the first area. The size of the second area is not limited; for example, it may be the area formed by a preset width extending outward from the outer edge of the first area. Correspondingly, since the offset in the first area is large and the offset in the second area is small, objects in the first area are deformed to a greater degree and objects in the second area to a lesser degree. In other words, the degree of deformation of objects gradually decreases outward from the first area to the second area.
In other cases, the offsets of different pixels in the edge area 1020 may also differ. For example, as shown in (b) of FIG. 10, the edge area 1020 includes a first edge area 1022 (the hatched portion) and a second edge area 1024 (the black portion). Assume that the offset direction is the direction indicated by the arrow in the figure, that is, the first area 1000 is shifted toward the lower left; then the first edge area 1022 lies in the offset direction (that is, at the lower left of the first area 1000), and the second edge area lies in the direction opposite to the offset direction (that is, at the upper right of the first area 1000). The offsets of the pixels in the two edge areas differ. Continuing with (b) of FIG. 10, assuming the offset direction is the direction indicated by the arrow, the offset of the pixels in the first edge area 1022 (the black area) < the offset of the pixels in the central area 1010 < the offset of the pixels in the second edge area 1024 (the hatched area). That is, objects in the first area that lie in the offset direction (that is, objects in the first edge area 1022) have a large offset, and objects that lie in the direction opposite to the offset direction (that is, in the second edge area 1024) have a small offset. In this way, when the first area is shifted in the offset direction, the edge of the first area on the side opposite to the offset direction can transition smoothly with the other areas.
In some embodiments, in the image after perspective reconstruction, first image information of a first pixel in the edge area of the first area may be an intermediate value of second image information and third image information, such as their average, where the second image information is the image information of a second pixel in the central area of the first area and the third image information is the image information of a third pixel in the other areas. For example, as shown in FIG. 11, pixel A is located in the edge area 1020 of the first area 1000, pixel B is located in the other areas, and pixel C is located in the central area 1010 of the first area 1000. The image information of pixel A may be the average of the image information of pixel B and pixel C, where the image information includes one or more of resolution, color, color temperature, brightness, and the like; pixel C and pixel B may be pixels close to pixel A. Since the edge area 1020 of the first area is a transition area between the first area and the other areas, when the resolution, color, color temperature, brightness, and the like of the pixels in the edge area 1020 take intermediate values, a smooth transition between the first area and the other areas is achieved.
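A minimal sketch of this blending follows, assuming the "intermediate value" is the plain average and that the image information is a per-channel pixel value (for example color or brightness); other weights could equally be used.

```python
import numpy as np

def blend_edge_pixel(center_pixel: np.ndarray, outside_pixel: np.ndarray) -> np.ndarray:
    """Blend an edge-area pixel as the average of a nearby central-area pixel
    (pixel C in FIG. 11) and a nearby outside-area pixel (pixel B in FIG. 11)."""
    return (center_pixel.astype(float) + outside_pixel.astype(float)) / 2.0

# Example: pixel C = [200, 180, 160], pixel B = [100, 120, 140]
# -> edge pixel A = [150, 150, 150]
```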
The above description takes the perspective reconstruction of the image captured by the camera 122 in FIG. 8 as an example. It can be understood that perspective reconstruction may likewise be performed on the image captured by the camera 120; the implementation principle is the same and is not repeated here.
In other embodiments, the present application provides a display method. The method is applicable to an electronic device that includes at least one camera and at least one display screen, such as VR glasses, where the camera position differs from the display screen position. For example, as shown in FIG. 4C, the camera positions on the VR glasses differ from the display screen positions, so when the user wears the VR glasses, the human eye's observation angle differs from the camera's shooting angle. FIG. 12 is a schematic flowchart of the display method provided by an embodiment of the present application. The flow of the method includes:
S1: The camera captures a second image.
The camera may be any camera on the VR glasses shown in FIG. 4C, for example the left camera 122 or the right camera 120. Taking the left camera 122 as an example, as shown in FIG. 4C, within the observation angle at the position of the left camera, the triangle is at the front left of the square (because the square is at an infinite distance, like the sun). Because the image captured by the left camera 122 is a planar two-dimensional image, the imaging plane of the left camera 122 includes the triangle and the square, with the triangle on the left of the square. FIG. 13 is a schematic diagram of the planar two-dimensional image captured by the camera 122; in this image the triangle is on the left of the square. It can be understood that if another camera were placed at the position of the camera 122, the image captured by that camera would be the same as the image captured by the camera 122. That is, the image observed at the position of the camera 122 (whether observed by a person or captured by another camera) is the same as the image captured by the camera 122.
S2: Determine a first area on the second image.
The first area may be determined in multiple ways; refer to the foregoing description, which is not repeated here. For example, as shown in FIG. 13, the first area is the dashed-line area on the planar two-dimensional image captured by the camera.
S3: Perform viewing-angle reconstruction on the first area.
As described above, performing viewing-angle reconstruction on the first area includes performing coordinate conversion on the image in the first area, that is, converting from the coordinate system corresponding to the camera to the coordinate system corresponding to the display screen or the human eye. Because the image captured by the camera is a planar two-dimensional image, one implementation of the coordinate conversion is to convert the planar two-dimensional image captured by the camera into a three-dimensional point cloud, where the three-dimensional point cloud reflects the position (including depth) of each object in the real environment, and then create a virtual camera that simulates the human eye. Shooting the three-dimensional point cloud with the virtual camera yields the image seen from the observation angle at the position of the human eye, thereby reconstructing the observation angle at the position of the camera into the observation angle at the position of the human eye. Specifically, in step S3, the viewing-angle reconstruction of the first area includes the following steps.
Step 1: Determine depth information of the pixels in the first area. The depth information of the pixels in the first area may be determined in at least one of manner 1 and manner 2.
Manner 1: Determine the depth information of a pixel according to the pixel difference (disparity) of the same pixel in two images captured by the two cameras on the VR glasses. For example, the depth information of the pixel satisfies the following formula:
d = (f × B) / disparity
where f is the focal length of the camera, B is the spacing (baseline) between the two cameras, disparity is the pixel difference between the same pixel in the two images, and d is the depth information of the pixel.
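For illustration, the formula above can be applied per pixel to a disparity map as follows; the variable names and the handling of zero disparity are assumptions of the sketch, not requirements of this application.

```python
# Sketch of d = f * B / disparity applied to a whole disparity map.
# f and B are assumed known from stereo calibration; disparity of 0 is
# treated as "unknown" and left at depth 0.
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

d = depth_from_disparity(np.array([[8.0, 16.0]]), focal_length_px=800.0, baseline_m=0.064)
# -> [[6.4, 3.2]] metres
```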
Manner 2: Determine the depth information of the pixel according to the vergence angle between the user's left eye and right eye and the correspondence between the vergence angle and depth information.
Referring to (a) of FIG. 14, in a real environment, the angle formed between the line of sight of the left eye and the line of sight of the right eye when both eyes observe an object is called the vergence angle θ. It can be understood that the closer the observed object is to the human eyes, the larger the vergence angle θ and the smaller the vergence depth; correspondingly, the farther the observed object is from the human eyes, the smaller the vergence angle θ and the larger the vergence depth. As shown in (b) of FIG. 14, when the user wears the VR glasses, everything the user sees in the virtual environment presented by the VR glasses is displayed by the display screen of the VR glasses. The light emitted by the screen carries no depth difference, so after accommodation the eyes focus on the screen, that is, the vergence angle θ becomes the angle at which the lines of sight of the eyes point at the display screen. However, the depth of an object in the virtual environment actually seen by the user is not the same as the distance from the display screen to the user. Therefore, in this embodiment of this application, after the user puts on the VR glasses, the vergence angle θ of the user's eyes is determined. The VR glasses may store a database that records the correspondence between the vergence angle θ and depth information; when the VR glasses determine the vergence angle θ, the corresponding depth information is determined based on that correspondence. The database may be obtained from experience and stored in the VR glasses in advance, or may be determined by deep learning.
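One possible form of the stored correspondence is a small table that is interpolated at runtime; the angles, depths, and interpolation rule below are purely illustrative assumptions rather than data from this application.

```python
# Hedged sketch of the look-up in manner 2: a hypothetical calibration table
# mapping vergence angle (degrees) to depth (metres), queried by interpolation.
import numpy as np

ANGLES = np.array([10.0, 5.0, 2.0, 1.0, 0.5])   # assumed table
DEPTHS = np.array([0.35, 0.7, 1.8, 3.6, 7.2])

def depth_from_vergence(theta_deg):
    order = np.argsort(ANGLES)                    # np.interp needs ascending x
    return float(np.interp(theta_deg, ANGLES[order], DEPTHS[order]))

print(depth_from_vergence(3.0))  # interpolated depth for a 3-degree vergence angle
```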
Step 2: Determine, according to the depth information of the pixels in the first area, three-dimensional point cloud data corresponding to the first area.
For example, taking the planar two-dimensional image captured by the camera 122 in FIG. 13 as an example, after the depth information of the pixels in the first area (the dashed-line area) of the planar two-dimensional image is determined, a three-dimensional point cloud of the pixels in the first area can be obtained, as shown in FIG. 15. The three-dimensional point cloud corresponding to the first area maps the position of each pixel in the first area in the real world. For example, in FIG. 15, the point cloud corresponding to the triangle is at the front left of the point cloud corresponding to the square, because the scene observed at the position of the camera 122 is that the triangle is at the front left of the square.
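As a hedged sketch, the conversion from the first area of the planar image to a point cloud can be done by back-projecting each masked pixel with its depth through a pinhole model; the intrinsic parameters fx, fy, cx, cy are assumed to be known from calibration and are not specified in this application.

```python
# Illustrative back-projection of the first area into a 3-D point cloud,
# assuming a simple pinhole camera model.
import numpy as np

def region_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """depth: HxW depth map (metres); mask: HxW bool, True inside the first area."""
    v, u = np.nonzero(mask)            # pixel rows/columns inside the first area
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # N x 3 points in the camera frame
```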
Step 3: Create a virtual camera.
It can be understood that the image acquisition principle of the human eye is similar to the image capture principle of a camera. To simulate the image acquisition process of the human eye, a virtual camera that simulates the human eye is created; for example, the position of the virtual camera is the same as the position of the human eye, and/or the field of view of the virtual camera is the same as the field of view of the human eye.
For example, in general the viewing angle of the human eye is about 110 degrees vertically and 110 degrees horizontally, so the field of view of the virtual camera may be 110 degrees vertically and 110 degrees horizontally. For another example, the VR glasses may determine the position of the human eye, and the virtual camera is then set at that position. The position of the human eye may be determined in multiple ways. Manner 1: first determine the position of the display screen, and then estimate the position of the human eye as the position of the display screen plus a spacing A, where the spacing A is the distance between the display screen and the human eye and may be stored in advance; the eye position determined in this way is relatively accurate. Manner 2: take the position of the human eye to be the position of the display screen; this is simpler, and placing the virtual camera at the display screen also alleviates the discomfort caused by the difference between the shooting angle and the viewing angle of the human eye. For example, as shown in FIG. 16, the virtual camera is at the position of the human eye.
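A possible way to set up such a virtual camera is sketched below, assuming manner 1 (eye position estimated as the display position plus a stored spacing A) and a 110-degree field of view; the spacing, resolution, axis convention, and pinhole intrinsics are illustrative assumptions rather than values given in this application.

```python
# Sketch of creating a virtual camera that simulates the eye: position offset
# from the display by an assumed eye relief, pinhole intrinsics derived from
# an assumed 110-degree field of view.
import numpy as np

def make_virtual_camera(display_pos, eye_relief_m=0.02, fov_deg=110.0,
                        width=1440, height=1600):
    # Assumes the optical axis is the +Z axis of the device frame.
    eye_pos = np.asarray(display_pos, dtype=float) + np.array([0.0, 0.0, eye_relief_m])
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels
    K = np.array([[f, 0.0, width / 2.0],
                  [0.0, f, height / 2.0],
                  [0.0, 0.0, 1.0]])
    return {'position': eye_pos, 'K': K, 'fov_deg': fov_deg}

cam = make_virtual_camera(display_pos=[0.032, 0.0, 0.0])
```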
Step 4: Use the virtual camera to shoot the three-dimensional point cloud data corresponding to the first area to obtain an image, which is the image of the first area after viewing-angle reconstruction.
For example, as shown in FIG. 17, the virtual camera corresponding to the left eye shoots the three-dimensional point cloud (the point cloud converted from the two-dimensional image captured by the left camera 122). Because the virtual camera corresponding to the left eye is farther to the right than the left camera 122, the distance between the triangle and the square in the image captured by the left-eye virtual camera is smaller. For ease of comparison, as shown in FIG. 18, image 1701 is the planar two-dimensional image captured by the camera 122, and image 1702 is the image captured by the virtual camera corresponding to the left eye (that is, the image obtained after viewing-angle reconstruction of the first area of the planar two-dimensional image captured by the camera 122). The distance between the two objects in image 1702 is smaller than the distance between the two objects in image 1701. Image 1702 is equivalent to an image acquired by the person's left eye.
The above description uses the camera 122 as an example; the same principle applies to the camera 120, that is, the planar two-dimensional image captured by the camera 120 is converted into a three-dimensional point cloud, a virtual camera corresponding to the right eye is created, and that virtual camera is used to shoot the three-dimensional point cloud. For example, as shown in FIG. 17, image 1703 is the planar two-dimensional image captured by the camera 120, and image 1704 is the image captured by the virtual camera corresponding to the right eye (that is, the image after viewing-angle reconstruction of the first area). The distance between the two objects in image 1704 is smaller than the distance between the two objects in image 1703, because the virtual camera corresponding to the right eye is farther to the left than the camera 120. Therefore, image 1704 is equivalent to an image acquired by the person's right eye.
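A minimal sketch of "shooting" the point cloud with a virtual camera follows; for simplicity it assumes the virtual camera shares the orientation of the physical camera and differs only in position, and it omits hole filling, so it illustrates the idea rather than a prescribed implementation.

```python
# Project each 3-D point of the first area through the virtual camera and keep
# the nearest point per pixel (simple z-buffer). Inputs are assumed to be
# NumPy arrays; K is the virtual camera's intrinsic matrix.
import numpy as np

def render_point_cloud(points, colors, cam_pos, K, width, height):
    img = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    pts = points - np.asarray(cam_pos, dtype=float)   # assumed axis-aligned camera
    z = pts[:, 2]
    keep = z > 1e-6                                    # points in front of the camera
    uvw = (K @ pts[keep].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi, ci in zip(u[inside], v[inside], z[keep][inside], colors[keep][inside]):
        if zi < zbuf[vi, ui]:                          # nearest point wins
            zbuf[vi, ui] = zi
            img[vi, ui] = ci
    return img
```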
In this application, three-dimensional point cloud mapping is performed only for the first area and not for the second area, so the image captured by the virtual camera includes only the first area and not the second area, and the workload is relatively small.
S4: Combine the image block in the second area of the second image with the image block of the first area after viewing-angle reconstruction into a first image, where the second area is the area of the second image other than the first area.
No viewing-angle reconstruction is performed on the second area, and the image captured by the virtual camera is the image of the first area after viewing-angle reconstruction. Therefore, relative to the second image, the first image has the viewing angle of the first area reconstructed and the viewing angle of the second area unchanged.
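As a rough illustration of step S4, the composition can be expressed as replacing the pixels of the first area in the camera image with the re-projected pixels while keeping the second area untouched; the mask and array names below are assumptions of the sketch.

```python
# Sketch of step S4: pixels of the first area (region_mask == True) are taken
# from the re-projected image, all other pixels come from the camera image.
import numpy as np

def compose_first_image(second_image, reprojected_first_area, region_mask):
    first_image = second_image.copy()
    first_image[region_mask] = reprojected_first_area[region_mask]
    return first_image
```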
S5: Display the first image.
In some embodiments, the foregoing S2 to S4 may be executed by a processor in the VR glasses; that is, after the camera captures the second image (S1), the second image is sent to the processor, the processor executes S2 to S4 to obtain the first image, and the processor displays the first image on the display screen.
Based on the same concept, FIG. 19 shows an electronic device 1900 provided by this application. The electronic device 1900 may be the foregoing VR wearable device (for example, VR glasses). As shown in FIG. 19, the electronic device 1900 may include one or more processors 1901, one or more memories 1902, a communication interface 1903, and one or more computer programs 1904, where these components may be connected through one or more communication buses 1905. The one or more computer programs 1904 are stored in the memory 1902 and are configured to be executed by the one or more processors 1901; the one or more computer programs 1904 include instructions, and the instructions may be used to perform the related steps of the VR wearable device in the corresponding embodiments above. The communication interface 1903 is used to communicate with other devices; for example, the communication interface may be a transceiver.
In the embodiments provided above, the methods provided by the embodiments of this application are described from the perspective of an electronic device (for example, a VR wearable device) as the execution body. To implement the functions in the methods provided by the foregoing embodiments of this application, the electronic device may include a hardware structure and/or a software module, and implement the foregoing functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a given function is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
As used in the foregoing embodiments, depending on the context, the term "when" or "after" may be interpreted as meaning "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when determining" or "if (a stated condition or event) is detected" may be interpreted as meaning "if determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)". In addition, in the foregoing embodiments, relational terms such as first and second are used to distinguish one entity from another, and do not limit any actual relationship or order between these entities.
Reference in this specification to "one embodiment", "some embodiments", and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
In the foregoing embodiments, the implementation may be realized entirely or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be entirely or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in this embodiment are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)). Where there is no conflict, the solutions of the foregoing embodiments may be used in combination.
It should be noted that a portion of this patent application document contains material that is subject to copyright protection. The copyright owner reserves all copyright rights except for the making of copies of the patent documents or the recorded contents of the patent files of the Patent Office.

Claims (20)

  1. A display method, applied to a wearable device, wherein the wearable device includes at least one display screen and at least one camera, and the method comprises:
    presenting a first image to a user through the display screen, wherein
    at least one of a display position or a form of a first object in the first image is different from that of the first object in a second image, and a display position and a form of a second object in the first image are both the same as those of the second object in the second image, the second image being an image captured by the camera; and
    the first object is located in a region where the user's gaze point is located, and the second object is located in a region other than the region where the user's gaze point is located.
  2. The method according to claim 1, wherein
    a displacement offset between a first display position of the first object in the first image and a second display position of the first object in the second image is related to a distance between the camera and the display screen.
  3. The method according to claim 1 or 2, wherein
    an offset direction between a first display position of the first object in the first image and a second display position of the first object in the second image is related to a positional relationship between the camera and the display screen.
  4. The method according to any one of claims 1 to 3, wherein
    a position offset of the first object in the first image relative to the second object in the second image is a first offset;
    a position offset of a third object in the first image relative to the third object in the second image is a second offset;
    the third object is located in the region where the user's gaze point is located and is closer than the first object to an edge of the region where the gaze point is located; and
    the second offset is smaller than the first offset.
  5. The method according to any one of claims 1 to 4, wherein
    a degree of form change of the first object in the first image relative to the first object in the second image is greater than a degree of form change of a third object in the first image relative to the third object in the second image; and
    the third object is located in the region where the user's gaze point is located, and the third object is closer than the first object to an edge of the region where the gaze point is located.
  6. The method according to any one of claims 1 to 5, wherein
    a position offset of the first object in the first image relative to the first object in the second image is a first offset;
    a position offset of a third object in the first image relative to the third object in the second image is a second offset;
    the third object is located in the region where the user's gaze point is located and within a first direction range of the first object, the first direction range including the direction of the position offset of the first object in the first image relative to the first object in the second image; and
    the second offset is greater than the first offset.
  7. The method according to any one of claims 1 to 6, wherein
    the first image includes a first pixel, a second pixel, and a third pixel, the first pixel and the second pixel are located in the region where the user's gaze point is located, the first pixel is closer than the second pixel to an edge of the region where the user's gaze point is located, and the third pixel is located in a region other than the region where the user's gaze point is located; and
    image information of the first pixel lies between image information of the second pixel and image information of the third pixel.
  8. The method according to claim 7, wherein the image information includes at least one of resolution, color, brightness, or color temperature.
  9. The method according to any one of claims 1 to 8, wherein the at least one camera includes a first camera and a second camera, the at least one display screen includes a first display screen and a second display screen, the first display screen is configured to display an image captured by the first camera, and the second display screen is configured to display an image of the second camera;
    when the position of the first display screen is different from the position of the first camera, at least one of a display position or a form of a first object in the image displayed by the first display screen is different from that of the first object in the image captured by the first camera, and a display position and a form of a second object in the image displayed by the first display screen are both the same as those of the second object in the second image captured by the first camera; and
    when the position of the second display screen is different from the position of the second camera, at least one of a display position or a form of a first object in the image displayed by the second display screen is different from that of the first object in the image captured by the second camera, and a display position and a form of a second object in the image displayed by the second display screen are both the same as those of the second object in the image captured by the second camera.
  10. The method according to any one of claims 1 to 9, wherein the form of the first object in the first image being different from the form of the first object in the second image comprises:
    an edge contour of the first object in the second image being smoother than an edge contour of the first object in the first image.
  11. A display method, applied to a wearable device, wherein the wearable device includes at least one display screen, at least one camera, and a processor, the camera is configured to transmit a captured image to the processor, and the image is displayed on the display screen via the processor, the method comprising:
    presenting a first image to a user through the display screen, wherein
    at least one of a display position or a form of a first object in the first image is different from that of the first object in a second image, and a display position and a form of a second object in the first image are both the same as those of the second object in the second image, the second image being an image captured by the camera; and
    the first object is located in a region where the user's gaze point is located, and the second object is located in a region other than the region where the user's gaze point is located.
  12. The method according to claim 11, wherein
    a position offset of the first object in the first image relative to the second object in the second image is a first offset;
    a position offset of a third object in the first image relative to the third object in the second image is a second offset;
    the third object is located in the region where the user's gaze point is located and is closer than the first object to an edge of the region where the gaze point is located; and
    the second offset is smaller than the first offset.
  13. The method according to claim 11 or 12, wherein
    a degree of form change of the first object in the first image relative to the first object in the second image is greater than a degree of form change of a third object in the first image relative to the third object in the second image; and
    the third object is located in the region where the user's gaze point is located, and the third object is closer than the first object to an edge of the region where the gaze point is located.
  14. The method according to any one of claims 11 to 13, wherein
    a position offset of the first object in the first image relative to the first object in the second image is a first offset;
    a position offset of a third object in the first image relative to the third object in the second image is a second offset;
    the third object is located in the region where the user's gaze point is located and within a first direction range of the first object, the first direction range including the direction of the position offset of the first object in the first image relative to the first object in the second image; and
    the second offset is greater than the first offset.
  15. The method according to any one of claims 11 to 14, wherein
    the first image includes a first pixel, a second pixel, and a third pixel, the first pixel and the second pixel are located in the region where the user's gaze point is located, the first pixel is closer than the second pixel to an edge of the region where the user's gaze point is located, and the third pixel is located in a region other than the region where the user's gaze point is located; and
    image information of the first pixel lies between image information of the second pixel and image information of the third pixel.
  16. The method according to claim 15, wherein the image information includes at least one of resolution, color, brightness, or color temperature.
  17. The method according to any one of claims 11 to 16, wherein the at least one camera includes a first camera and a second camera, the at least one display screen includes a first display screen and a second display screen, the first display screen is configured to display an image captured by the first camera, and the second display screen is configured to display an image of the second camera;
    when the position of the first display screen is different from the position of the first camera, at least one of a display position or a form of a first object in the image displayed by the first display screen is different from that of the first object in the image captured by the first camera, and a display position and a form of a second object in the image displayed by the first display screen are both the same as those of the second object in the second image captured by the first camera; and
    when the position of the second display screen is different from the position of the second camera, at least one of a display position or a form of a first object in the image displayed by the second display screen is different from that of the first object in the image captured by the second camera, and a display position and a form of a second object in the image displayed by the second display screen are both the same as those of the second object in the image captured by the second camera.
  18. An electronic device, comprising:
    a processor, a memory, and one or more programs,
    wherein the one or more programs are stored in the memory and include instructions that, when executed by the processor, cause the electronic device to perform the method steps according to any one of claims 1 to 17.
  19. A computer-readable storage medium, configured to store a computer program, wherein when the computer program runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 17.
  20. A computer program product, comprising a computer program, wherein when the computer program runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 17.
PCT/CN2022/113692 2021-09-09 2022-08-19 Display method and electronic device WO2023035911A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111056782.7A CN115793841A (en) 2021-09-09 2021-09-09 Display method and electronic equipment
CN202111056782.7 2021-09-09

Publications (1)

Publication Number Publication Date
WO2023035911A1 true WO2023035911A1 (en) 2023-03-16

Family

ID=85473500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113692 WO2023035911A1 (en) 2021-09-09 2022-08-19 Display method and electronic device

Country Status (2)

Country Link
CN (1) CN115793841A (en)
WO (1) WO2023035911A1 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160179193A1 (en) * 2013-08-30 2016-06-23 Beijing Zhigu Rui Tuo Tech Co., Ltd. Content projection system and content projection method
CN106484116A (en) * 2016-10-19 2017-03-08 腾讯科技(深圳)有限公司 The treating method and apparatus of media file
CN106959759A (en) * 2017-03-31 2017-07-18 联想(北京)有限公司 A kind of data processing method and device
US20170358141A1 (en) * 2016-06-13 2017-12-14 Sony Interactive Entertainment Inc. HMD Transitions for Focusing on Specific Content in Virtual-Reality Environments
US20180227470A1 (en) * 2013-09-03 2018-08-09 Tobii Ab Gaze assisted field of view control
US20180270417A1 (en) * 2017-03-15 2018-09-20 Hiroshi Suitoh Image processing apparatus, image capturing system, image processing method, and recording medium
US20180350032A1 (en) * 2017-06-05 2018-12-06 Google Llc Smoothly varying foveated rendering
CN109478345A (en) * 2016-07-13 2019-03-15 株式会社万代南梦宫娱乐 Simulation system, processing method and information storage medium
US20190139246A1 (en) * 2016-07-14 2019-05-09 Tencent Technology (Shenzhen) Company Limited Information processing method, wearable electronic device, and processing apparatus and system
CN110121885A (en) * 2016-12-29 2019-08-13 索尼互动娱乐股份有限公司 For having recessed video link using the wireless HMD video flowing transmission of VR, the low latency of watching tracking attentively
US20190286227A1 (en) * 2018-03-14 2019-09-19 Apple Inc. Image Enhancement Devices With Gaze Tracking
CN112468796A (en) * 2020-11-23 2021-03-09 平安科技(深圳)有限公司 Method, system and equipment for generating fixation point

Also Published As

Publication number Publication date
CN115793841A (en) 2023-03-14

Similar Documents

Publication Publication Date Title
WO2020192458A1 (en) Image processing method and head-mounted display device
US10009542B2 (en) Systems and methods for environment content sharing
US10908421B2 (en) Systems and methods for personal viewing devices
US9626564B2 (en) System for enabling eye contact in electronic images
JP2014115457A (en) Information processor and recording medium
CN106302132A (en) A kind of 3D instant communicating system based on augmented reality and method
WO2019053997A1 (en) Information processing device, information processing method, and program
EP2583131B1 (en) Personal viewing devices
JPWO2019031005A1 (en) Information processing apparatus, information processing method, and program
CN114255204A (en) Amblyopia training method, device, equipment and storage medium
CN103018914A (en) Glasses-type head-wearing computer with 3D (three-dimensional) display
WO2023001113A1 (en) Display method and electronic device
WO2023035911A1 (en) Display method and electronic device
WO2020044916A1 (en) Information processing device, information processing method, and program
EP3402410A1 (en) Detection system
CN114416237A (en) Display state switching method, device and system, electronic equipment and storage medium
WO2023082980A1 (en) Display method and electronic device
CN116194792A (en) Connection evaluation system
WO2023016302A1 (en) Display method for virtual input element, electronic device, and readable storage medium
CN116934584A (en) Display method and electronic equipment
CN112558847B (en) Method for controlling interface display and head-mounted display
WO2023116541A1 (en) Eye tracking apparatus, display device, and storage medium
WO2022247482A1 (en) Virtual display device and virtual display method
WO2022233256A1 (en) Display method and electronic device
CN115063565B (en) Wearable article try-on method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866384

Country of ref document: EP

Kind code of ref document: A1