WO2023142264A1 - An image display method and device, AR head-mounted device, and storage medium - Google Patents

An image display method and device, AR head-mounted device, and storage medium

Info

Publication number
WO2023142264A1
WO2023142264A1 (PCT/CN2022/084579, CN2022084579W)
Authority
WO
WIPO (PCT)
Prior art keywords
reference object
shadow
straight line
line segment
image
Prior art date
Application number
PCT/CN2022/084579
Other languages
English (en)
French (fr)
Inventor
刘云
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司
Publication of WO2023142264A1 publication Critical patent/WO2023142264A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models

Definitions

  • the present application relates to the technical field of image processing, and in particular to an image display method, device, AR head-mounted device, and storage medium.
  • users interact with electronic devices mainly through visual information, and good human-computer interaction can greatly improve the user experience. Electronic devices such as AR (Augmented Reality) headsets, VR (Virtual Reality) headsets, and smartphones usually display text, images, and other information on a display device to provide users with the information they need.
  • the electronic device can create and display virtual objects. When displaying a virtual object, the combination effect of the virtual object and the real environment is particularly important.
  • the purpose of this application is to provide an image display method, device, AR head-mounted device and storage medium, which can improve the imaging quality of displayed virtual objects and reduce the difference between virtual objects and real environments.
  • the present application provides an image display method, comprising:
  • acquiring an environment image, and determining the reference object body and the reference object shadow in the environment image;
  • determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
  • irradiating a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and displaying the target image.
  • determining the reference object body and the reference object shadow in the environment image includes:
  • Image recognition is performed on the environment image, and the upright object in the environment image is set as the reference object body according to the image recognition result, and the shadow of the upright object is set as the reference object shadow.
  • determining the reference object body and the reference object shadow in the environment image includes:
  • transmitting the environment image to a human-computer interaction interface, and receiving the user's annotation information on the environment image; wherein, the annotation information includes the outline of the reference object body and the outline of the reference object shadow;
  • determining the reference object body and the reference object shadow in the environment image according to the annotation information.
  • determining a light source elevation angle and a shadow offset direction angle according to a positional relationship between the reference object body and the reference object shadow in the environment image includes:
  • calculating, with a straight line detection algorithm, a first straight line segment corresponding to the reference object body and a second straight line segment corresponding to the reference object shadow;
  • determining the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image;
  • determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
  • determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment includes:
  • a space coordinate system is established according to the positional relationship between the first straight line segment and the second straight line segment; wherein, the origin of the space coordinate system is the intersection point of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the space coordinate system, and the plane where the first straight line segment and the second straight line segment are located coincides with the XY plane of the space coordinate system;
  • the angle between the target straight line segment and the Y axis is set as the light source elevation angle; wherein, the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and the first target endpoint and the second target endpoint are not the intersection point of the first straight line segment and the second straight line segment;
  • the shadow offset direction angle is calculated according to the projection length of the second straight line segment on the X axis and the length of the second straight line segment.
  • before irradiating the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain the target image, the method further includes: determining the ambient light intensity according to the environment image, and adjusting the illumination intensity of the scene light according to the ambient light intensity.
  • the present application also provides an image display device, which includes:
  • a reference object determination module configured to acquire an environment image, and determine the reference object body and the reference object shadow in the environment image;
  • An angle calculation module configured to determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
  • a display module configured to irradiate the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
  • the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed, the steps performed by the above-mentioned image display method are realized.
  • the present application also provides an AR head-mounted device, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps performed by the above image display method when calling the computer program in the memory.
  • the present application provides an image display method, including: acquiring an environment image, and determining the reference object body and the reference object shadow in the environment image; determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image; and irradiating a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and displaying the target image.
  • This application acquires an environment image and determines the reference object body and the reference object shadow in it. According to the positional relationship between the reference object body and the reference object shadow in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined.
  • the corresponding scene light can then be determined according to the light source elevation angle and the shadow offset direction angle, and the scene light is cast on the virtual object to obtain the image to be displayed.
  • the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, which can improve the imaging quality of the displayed virtual object and reduce the difference between the virtual object and the real environment.
  • the present application also provides an image display device, a storage medium, and an AR head-mounted device, which have the above beneficial effects and are not repeated here.
  • FIG. 1 is a flowchart of an image display method provided by an embodiment of the present application
  • FIG. 2 is a comparison diagram before and after straight line detection provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a method for determining a light source elevation angle and a shadow offset direction angle provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of imaging of an AR scene provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image display device provided by an embodiment of the present application.
  • Please refer to FIG. 1, which is a flow chart of an image display method provided by an embodiment of the present application. The specific steps may include:
  • S101: Acquire an environment image, and determine the reference object body and the reference object shadow in the environment image;
  • This embodiment can be applied to electronic devices such as AR head-mounted devices, VR head-mounted devices, and smartphones; using this solution can improve the display realism of the virtual models of these electronic devices, making their display effect more lifelike.
  • In this step, an environment image can be captured, and the reference object body and the reference object shadow can be determined from the environment image. To improve the image display effect, the reference object in this embodiment may be a slender upright object, such as a flagpole, a utility pole, or a tree trunk.
  • S102: Determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
  • Because the reference object shadow is produced by light striking the reference object body, this step can deduce the light source elevation angle and the shadow offset direction angle from the positional relationship between the reference object body and the reference object shadow in the environment image.
  • The light source elevation angle is the angle between the incident light of the light source and the vertical direction in the environment image;
  • the shadow offset direction angle is the angle between the reference object shadow and the horizontal direction in the environment image.
  • S103: Irradiate the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
  • The orientation of the light source in the real scene can be determined from the light source elevation angle and the shadow offset direction angle; based on this orientation, a virtual light source is added to the virtual scene where the virtual object is located, so that the scene light of the virtual scene is the same as the light in the real scene.
  • The target image can be obtained by rendering the virtual object under scene light according to the light source elevation angle and the shadow offset direction angle, and is then shown on the display screen. Because the virtual object is illuminated by scene light, the target image contains both the body and the shadow of the virtual object.
  • This embodiment acquires an environment image and determines the reference object body and the reference object shadow in it; from their positional relationship in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined. The corresponding scene light can then be determined from these two angles and cast on the virtual object to obtain the image to be displayed. Because the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, this solution improves the imaging quality of the displayed virtual object and reduces the difference between the virtual object and the real environment.
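  • The flow of steps S101-S103 can be sketched end to end as follows. This is an illustrative sketch under stated assumptions, not the disclosed implementation: the segment endpoints are taken as given (S101's detection is not reproduced), the angle relations tan α = l/h and cos β = d/l are a reconstruction from the geometry described later, and the render step merely returns the computed light direction instead of driving a real renderer. All function names are hypothetical.

```python
import math

# End-to-end sketch of S101-S103. All names are illustrative assumptions;
# the patent describes the steps abstractly.

def determine_reference(segments):
    """S101 stand-in: the (body, shadow) segments are assumed already found."""
    return segments["body"], segments["shadow"]

def angles_from_segments(body, shadow):
    """S102: elevation angle from body height h and shadow length l,
    offset direction angle from the shadow's projection d on the X axis.
    Segments are (x1, y1, x2, y2) endpoint tuples."""
    h = math.hypot(body[2] - body[0], body[3] - body[1])
    l = math.hypot(shadow[2] - shadow[0], shadow[3] - shadow[1])
    d = abs(shadow[2] - shadow[0])
    alpha = math.degrees(math.atan2(l, h))  # tan(alpha) = l / h
    beta = math.degrees(math.acos(d / l))   # cos(beta) = d / l
    return alpha, beta

def render_with_scene_light(alpha, beta):
    """S103 stand-in: a real system would place a virtual light source at
    these angles and render the virtual object together with its shadow."""
    return {"elevation_deg": alpha, "azimuth_deg": beta}

segments = {"body": (0, 0, 0, 100), "shadow": (0, 100, 80, 160)}
body, shadow = determine_reference(segments)
print(render_with_scene_light(*angles_from_segments(body, shadow)))
```

  • Using `atan2` rather than dividing l by h keeps the elevation computation well defined even for a degenerate body segment of zero height.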
  • the above embodiment can determine the reference object body and the reference object shadow in at least the following two ways:
  • Mode 1: Perform image recognition on the environment image, set an upright object in the environment image as the reference object body according to the image recognition result, and set the shadow of the upright object as the reference object shadow.
  • Specifically, target detection models such as Faster R-CNN and YOLO can be used to recognize upright objects in the environment image.
  • Mode 2: Transmit the environment image to a human-computer interaction interface, and receive the user's annotation information on the environment image; wherein, the annotation information includes the outline of the reference object body and the outline of the reference object shadow; determine the reference object body and the reference object shadow in the environment image according to the annotation information.
  • In Mode 2, the user can mark the outline of the reference object body and the outline of the reference object shadow in the environment image, which reduces the amount of machine computation.
  • the embodiment corresponding to FIG. 1 can determine the light source elevation angle and the shadow offset direction angle in the following manner: using a straight line detection algorithm to calculate the first straight line segment corresponding to the reference object body and the second straight line segment corresponding to the reference object shadow; determining the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image; and determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
  • Specifically, the above process can establish a space coordinate system according to the positional relationship between the first straight line segment and the second straight line segment; wherein, the origin of the space coordinate system is the intersection point of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the space coordinate system, and the plane where the first straight line segment and the second straight line segment are located coincides with the XY plane (that is, the plane XOY spanned by the X axis and the Y axis) of the space coordinate system;
  • the angle between the target straight line segment and the Y axis is set as the light source elevation angle; wherein, the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and the first target endpoint and the second target endpoint are not the intersection point of the first straight line segment and the second straight line segment;
  • the shadow offset direction angle is calculated according to the projection length of the second straight line segment on the X axis and the length of the second straight line segment.
  • Please refer to FIG. 2, a comparison diagram before and after straight line detection provided by an embodiment of the present application. An upright object in the displayed scene can be set as the reference object body: A is the reference object body, and A' is the reference object shadow. Straight line detection yields a line detection result for the reference object body and a line detection result for the reference object shadow.
  • The upright object and its shadow in the scene are detected with the Hough transform straight line detection algorithm; by measuring the lengths of the detected line segments, the height h of the reference object body and the length l of the reference object shadow can be determined.
  • Please refer to FIG. 3, a schematic diagram of a method for determining the light source elevation angle and the shadow offset direction angle provided by an embodiment of the present application. As shown in FIG. 3, this embodiment establishes the space coordinate system, measures the length d of the projection of the reference object shadow on the X coordinate axis, and calculates the corresponding light source elevation angle α and the offset azimuth angle β in the XZ coordinate system as α = arctan(l/h) and β = arccos(d/l).
  • Because the position of a natural light source changes over time, this embodiment can capture a new environment picture and re-determine the light source elevation angle and the shadow offset direction angle in the manner described above.
  • the ambient light intensity may also be determined from the environment image, and the illumination intensity of the scene light adjusted according to the ambient light intensity to improve the imaging fidelity of the target image.
  • the illumination intensity of the scene light is positively correlated with the ambient light intensity.
  • Because illumination consistency is critical to the realism of fusing virtual and real scenes, and the light source direction is the basis of illumination information, the above image display scheme based on scene-light direction detection can be used in AR headsets, giving users a more realistic experience when using an AR headset.
  • Specifically, an ambient light direction detection method can be added to the AR scene to improve the fusion of the virtual and the real.
  • The Hough straight line detection algorithm detects the line segments of the reference object and its shadow; combining the detection results, the incidence angle of the ambient light is calculated with the corresponding algorithm and applied to AR scene rendering and fusion.
  • The shadows cast by the light are incorporated into the 3D modeling of the virtual object so that it has a light-and-dark state consistent with the objects in the actual scene, improving the authenticity of the augmented reality and thus the user experience.
  • FIG. 4 is a schematic diagram of imaging of an AR scene provided by an embodiment of the present application.
  • the cube in FIG. 4 is a virtual object, and the shadow part is the shadow of the virtual object.
  • By setting a consistent illumination elevation angle α and azimuth angle β in the AR scene, the 3D model of the virtual object is combined with the projection direction of the light to generate realistic shadows and light-and-dark states, which are rendered into the real scene, achieving a more realistic user experience.
  • When this solution is applied to an AR head-mounted device, the lighting settings in its scene can be matched with the real lighting conditions, making 3D modeling and rendering in the real scene more realistic and improving the user's product experience.
  • FIG. 5 is a schematic structural diagram of an image display device provided in an embodiment of the present application.
  • the device may include:
  • a reference object determination module 501 configured to acquire an environment image, and determine the reference object body and reference object shadow in the environment image;
  • An angle calculation module 502 configured to determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
  • the display module 503 is configured to irradiate the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
  • This embodiment acquires an environment image and determines the reference object body and the reference object shadow in it; from their positional relationship in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined. The corresponding scene light can then be determined from these two angles and cast on the virtual object to obtain the image to be displayed. Because the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, this solution improves the imaging quality of the displayed virtual object and reduces the difference between the virtual object and the real environment.
  • Further, the reference object determination module 501 is configured to perform image recognition on the environment image, set an upright object in the environment image as the reference object body according to the image recognition result, and set the shadow of the upright object as the reference object shadow.
  • Further, the reference object determination module 501 is configured to transmit the environment image to a human-computer interaction interface and receive the user's annotation information on the environment image, wherein the annotation information includes the outline of the reference object body and the outline of the reference object shadow; and is further configured to determine the reference object body and the reference object shadow in the environment image according to the annotation information.
  • Further, the angle calculation module 502 is configured to calculate, with a straight line detection algorithm, the first straight line segment corresponding to the reference object body and the second straight line segment corresponding to the reference object shadow; to determine the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image; and to determine the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
  • The process by which the angle calculation module 502 determines the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment includes: establishing a space coordinate system according to the positional relationship between the first straight line segment and the second straight line segment, wherein the origin of the space coordinate system is the intersection point of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the space coordinate system, and the plane where the first straight line segment and the second straight line segment are located coincides with the XY plane of the space coordinate system; setting the angle between the target straight line segment and the Y axis as the light source elevation angle, wherein the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and the first target endpoint and the second target endpoint are not the intersection point of the first straight line segment and the second straight line segment; and calculating the shadow offset direction angle according to the projection length of the second straight line segment on the X axis and the length of the second straight line segment.
  • an update module configured to determine, after the target image is displayed, whether the light source in the environment image is a natural light source; if so, to determine a new light source elevation angle and a new shadow offset direction angle, so as to irradiate the virtual object with scene light according to the new light source elevation angle and the new shadow offset direction angle to obtain a new target image and display the new target image.
  • a light intensity adjustment module configured to determine the ambient light intensity according to the environment image before the virtual object is irradiated with scene light according to the light source elevation angle and the shadow offset direction angle to obtain the target image, and to adjust the illumination intensity of the scene light according to the ambient light intensity.
  • the present application also provides a storage medium on which a computer program is stored. When the computer program is executed, the steps provided in the above-mentioned embodiments can be realized.
  • the storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the present application also provides an AR head-mounted device, which may include a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps provided by the above embodiments can be realized.
  • the AR head-mounted device may also include various network interfaces, a power supply, and other components.

Abstract

An image display method and device, a storage medium, and an AR head-mounted device. The method includes: acquiring an environment image, and determining the reference object body and the reference object shadow in the environment image (S101); determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image (S102); and irradiating a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and displaying the target image (S103). The method can improve the imaging quality of displayed virtual objects and reduce the difference between virtual objects and the real environment.

Description

An image display method and device, AR head-mounted device, and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on January 28, 2022, with application number 202210108601.9 and invention title "An image display method and device, AR head-mounted device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image processing, and in particular to an image display method and device, an AR head-mounted device, and a storage medium.
Background
Users interact with electronic devices mainly through visual information, and good human-computer interaction can greatly improve the user experience. Electronic devices such as AR (Augmented Reality) head-mounted devices, VR (Virtual Reality) head-mounted devices, and smartphones usually display text, images, and other information on a display device to provide users with the information they need. To improve the display effect, an electronic device can create and display virtual objects. When a virtual object is displayed, the effect of combining the virtual object with the real environment is particularly important.
Therefore, how to improve the imaging quality of displayed virtual objects and reduce the difference between virtual objects and the real environment is a technical problem that those skilled in the art currently need to solve.
Summary
The purpose of this application is to provide an image display method and device, an AR head-mounted device, and a storage medium, which can improve the imaging quality of displayed virtual objects and reduce the difference between virtual objects and the real environment.
To solve the above technical problem, this application provides an image display method, including:
acquiring an environment image, and determining the reference object body and the reference object shadow in the environment image;
determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
irradiating a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and displaying the target image.
Optionally, determining the reference object body and the reference object shadow in the environment image includes:
performing image recognition on the environment image, setting an upright object in the environment image as the reference object body according to the image recognition result, and setting the shadow of the upright object as the reference object shadow.
Optionally, determining the reference object body and the reference object shadow in the environment image includes:
transmitting the environment image to a human-computer interaction interface, and receiving the user's annotation information on the environment image, wherein the annotation information includes the outline of the reference object body and the outline of the reference object shadow;
determining the reference object body and the reference object shadow in the environment image according to the annotation information.
Optionally, determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image includes:
calculating, with a straight line detection algorithm, a first straight line segment corresponding to the reference object body and a second straight line segment corresponding to the reference object shadow;
determining the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image;
determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
Optionally, determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment includes:
establishing a space coordinate system according to the positional relationship between the first straight line segment and the second straight line segment, wherein the origin of the space coordinate system is the intersection point of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the space coordinate system, and the plane where the first straight line segment and the second straight line segment are located coincides with the XY plane of the space coordinate system;
setting the angle between a target straight line segment and the Y axis as the light source elevation angle, wherein the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and the first target endpoint and the second target endpoint are not the intersection point of the first straight line segment and the second straight line segment;
calculating the shadow offset direction angle according to the projection length of the second straight line segment on the X axis and the length of the second straight line segment.
Optionally, after displaying the target image, the method further includes:
determining whether the light source in the environment image is a natural light source;
if so, after a preset delay, determining a new light source elevation angle and a new shadow offset direction angle, so as to irradiate the virtual object with scene light according to the new light source elevation angle and the new shadow offset direction angle to obtain a new target image and display the new target image.
Optionally, before irradiating the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain the target image, the method further includes:
determining the ambient light intensity according to the environment image, and adjusting the illumination intensity of the scene light according to the ambient light intensity.
This application also provides an image display device, including:
a reference object determination module, configured to acquire an environment image and determine the reference object body and the reference object shadow in the environment image;
an angle calculation module, configured to determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
a display module, configured to irradiate a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
This application also provides a storage medium on which a computer program is stored; when the computer program is executed, the steps performed by the above image display method are realized.
This application also provides an AR head-mounted device, including a memory and a processor; a computer program is stored in the memory, and the processor implements the steps performed by the above image display method when invoking the computer program in the memory.
This application provides an image display method, including: acquiring an environment image, and determining the reference object body and the reference object shadow in the environment image; determining a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image; and irradiating a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and displaying the target image.
This application acquires an environment image and determines the reference object body and the reference object shadow in it; from their positional relationship in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined. The corresponding scene light can then be determined from these two angles and cast on the virtual object to obtain the image to be displayed. Because the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, this solution improves the imaging quality of displayed virtual objects and reduces the difference between virtual objects and the real environment. This application also provides an image display device, a storage medium, and an AR head-mounted device, which have the above beneficial effects and are not repeated here.
Brief Description of the Drawings
To explain the embodiments of this application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of an image display method provided by an embodiment of this application;
FIG. 2 is a comparison diagram before and after straight line detection provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a method for determining a light source elevation angle and a shadow offset direction angle provided by an embodiment of this application;
FIG. 4 is a schematic diagram of imaging of an AR scene provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of an image display device provided by an embodiment of this application.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Please refer to FIG. 1, which is a flow chart of an image display method provided by an embodiment of this application.
The specific steps may include:
S101: Acquire an environment image, and determine the reference object body and the reference object shadow in the environment image;
This embodiment can be applied to electronic devices such as AR head-mounted devices, VR head-mounted devices, and smartphones; using this solution can improve the display realism of the virtual models of these electronic devices, making the display effect of the virtual models more lifelike.
In this step, an environment image can be captured, and the reference object body and the reference object shadow can be determined from the environment image. To improve the image display effect, the reference object in this embodiment may be a slender upright object, such as a flagpole, a utility pole, or a tree trunk.
S102: Determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
After the reference object body and the reference object shadow in the environment image are determined, because the reference object shadow is produced by light striking the reference object body, this step can deduce the light source elevation angle and the shadow offset direction angle from the positional relationship between the reference object body and the reference object shadow. The light source elevation angle is the angle between the incident light of the light source and the vertical direction in the environment image; the shadow offset direction angle is the angle between the reference object shadow and the horizontal direction in the environment image.
S103: Irradiate the virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
The orientation of the light source in the real scene can be determined from the light source elevation angle and the shadow offset direction angle; based on this orientation, a virtual light source is added to the virtual scene where the virtual object is located, so that the scene light of the virtual scene is the same as the light in the real scene. In this embodiment, the virtual object can be rendered under scene light according to the light source elevation angle and the shadow offset direction angle to obtain the target image, which is then displayed on the display screen; because the virtual object is illuminated by scene light, the target image contains both the body and the shadow of the virtual object.
This embodiment acquires an environment image and determines the reference object body and the reference object shadow in it; from their positional relationship in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined. The corresponding scene light can then be determined from these two angles and cast on the virtual object to obtain the image to be displayed. Because the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, this solution improves the imaging quality of displayed virtual objects and reduces the difference between virtual objects and the real environment.
As a further introduction to the embodiment corresponding to FIG. 1, the above embodiment can determine the reference object body and the reference object shadow in at least the following two ways:
Mode 1: Perform image recognition on the environment image, set an upright object in the environment image as the reference object body according to the image recognition result, and set the shadow of the upright object as the reference object shadow. Specifically, target detection models such as Faster R-CNN and YOLO can be used to recognize upright objects in the environment image.
Mode 2: Transmit the environment image to a human-computer interaction interface, and receive the user's annotation information on the environment image, wherein the annotation information includes the outline of the reference object body and the outline of the reference object shadow; determine the reference object body and the reference object shadow in the environment image according to the annotation information. In Mode 2, the user marks the outline of the reference object body and the outline of the reference object shadow in the environment image, which reduces the amount of machine computation.
As a feasible implementation, the embodiment corresponding to FIG. 1 can determine the light source elevation angle and the shadow offset direction angle as follows: calculating, with a straight line detection algorithm, a first straight line segment corresponding to the reference object body and a second straight line segment corresponding to the reference object shadow; determining the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image; and determining the light source elevation angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
Specifically, the above process can establish a space coordinate system according to the positional relationship between the first straight line segment and the second straight line segment, wherein the origin of the space coordinate system is the intersection point of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the space coordinate system, and the plane where the first straight line segment and the second straight line segment are located coincides with the XY plane of the space coordinate system (that is, the plane XOY spanned by the X axis and the Y axis); the angle between a target straight line segment and the Y axis is set as the light source elevation angle, wherein the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and the first target endpoint and the second target endpoint are not the intersection point of the first straight line segment and the second straight line segment; the shadow offset direction angle is calculated according to the projection length of the second straight line segment on the X axis and the length of the second straight line segment.
Please refer to FIG. 2, a comparison diagram before and after straight line detection provided by an embodiment of this application. An upright object in the displayed scene can be set as the reference object body: A is the reference object body, and A' is the reference object shadow. Straight line detection yields a line detection result for the reference object body and a line detection result for the reference object shadow. The upright object and its shadow in the scene are detected with the Hough transform straight line detection algorithm; by measuring the lengths of the detected line segments, the height h of the reference object body and the length l of the reference object shadow can be determined.
Please refer to FIG. 3, a schematic diagram of a method for determining the light source elevation angle and the shadow offset direction angle provided by an embodiment of this application. As shown in FIG. 3, this embodiment establishes the space coordinate system, measures the length d of the projection of the reference object shadow on the X coordinate axis, and calculates the corresponding light source elevation angle α and the offset azimuth angle β in the XZ coordinate system. The calculation formulas are as follows:
α = arctan(l / h),  β = arccos(d / l)
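The formula itself is published only as an image, so the relations used here (tan α = l/h from the target segment's angle with the Y axis, and cos β = d/l from the X-axis projection of the shadow) are a reconstruction from the geometry described above, not a verbatim copy. A numeric sketch:

```python
import math

# Sketch of S102's angle computation under the geometry described in the
# text: h is the body height, l the shadow length, and d the projection of
# the shadow on the X axis. The published formula is only available as an
# image, so these expressions are a reconstruction.

def light_source_angles(h, l, d):
    """Return (alpha, beta) in degrees.

    alpha: light source elevation angle, the angle between the line joining
           the body's top endpoint to the shadow's tip and the Y axis.
    beta:  shadow offset direction angle, recovered from the X-axis
           projection d of a shadow of length l.
    """
    alpha = math.degrees(math.atan2(l, h))  # tan(alpha) = l / h
    beta = math.degrees(math.acos(d / l))   # cos(beta) = d / l
    return alpha, beta

alpha, beta = light_source_angles(h=2.0, l=2.0, d=1.0)
print(round(alpha, 1), round(beta, 1))  # 45.0 60.0
```

A shadow as long as the body gives α = 45°, and a shadow lying entirely along the X axis (d = l) gives β = 0°, matching the intuition that β measures how far the shadow is rotated away from the X axis.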
As a feasible implementation, after the target image is displayed, it can also be determined whether the light source in the environment image is a natural light source; if so, after a preset delay, a new light source elevation angle and a new shadow offset direction angle are determined, so that the virtual object is irradiated with scene light according to the new light source elevation angle and the new shadow offset direction angle to obtain a new target image, which is then displayed. Because the position of a natural light source changes over time, this embodiment can capture a new environment picture and re-determine the light source elevation angle and the shadow offset direction angle in the manner described above.
As a feasible implementation, before the virtual object is irradiated with scene light according to the light source elevation angle and the shadow offset direction angle to obtain the target image, the ambient light intensity can also be determined from the environment image, and the illumination intensity of the scene light adjusted according to the ambient light intensity to improve the imaging fidelity of the target image. The illumination intensity of the scene light is positively correlated with the ambient light intensity.
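The text does not specify how the ambient intensity is measured; one plausible sketch, in which the mean grayscale brightness stands in for the ambient estimate and a linear mapping realizes the required positive correlation (both are assumptions), is:

```python
# Sketch of the light-intensity adjustment step. Estimating ambient light as
# mean grayscale brightness and mapping it linearly to the virtual light's
# intensity are both assumptions; the text only requires that scene-light
# intensity be positively correlated with ambient intensity.

def ambient_intensity(gray_image):
    """Mean brightness of a grayscale image given as rows of 0-255 pixels."""
    total = sum(sum(row) for row in gray_image)
    count = sum(len(row) for row in gray_image)
    return total / count

def scene_light_intensity(ambient, max_intensity=1.0):
    """Linear, monotonically increasing mapping from ambient (0-255)."""
    return max_intensity * ambient / 255.0

img = [[0, 128], [255, 128]]  # tiny 2x2 grayscale frame
amb = ambient_intensity(img)
print(amb, round(scene_light_intensity(amb), 3))
```

Any monotonically increasing mapping (for example, a gamma curve) would equally satisfy the "positively correlated" requirement; the linear form is just the simplest choice.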
Because illumination consistency is critical to the realism of fusing the virtual scene with the real scene, and the light source direction is the basis of illumination information, the above image display scheme based on scene-light direction detection can be applied in AR head-mounted devices, so that users obtain a more realistic experience when using an AR headset. Specifically, an ambient light direction detection method can be added to the AR scene to improve the fusion of the virtual and the real. The Hough straight line detection algorithm is used to detect the line segments of the reference object and its shadow; combining the detection results, the incidence angle of the ambient light is calculated with the corresponding algorithm and applied to AR scene rendering and fusion. The shadows cast by the light are incorporated into the 3D modeling of the virtual object so that it has a light-and-dark state consistent with the objects in the actual scene, improving the authenticity of the augmented reality and thus the user experience.
Please refer to FIG. 4, a schematic diagram of imaging of an AR scene provided by an embodiment of this application. The cube in FIG. 4 is a virtual object, and the shaded part is the shadow of the virtual object. By setting a consistent illumination elevation angle α and azimuth angle β in the AR scene, the 3D model of the virtual object is combined with the projection direction of the light to generate realistic shadows and light-and-dark states, which are rendered into the real scene, achieving a more realistic user experience. When this solution is applied to an AR head-mounted device, the lighting settings in its scene can be matched with the real lighting conditions, making 3D modeling and its rendering in the real scene more realistic and improving the user's product experience.
Please refer to FIG. 5, a schematic structural diagram of an image display device provided by an embodiment of this application. The device may include:
a reference object determination module 501, configured to acquire an environment image and determine the reference object body and the reference object shadow in the environment image;
an angle calculation module 502, configured to determine a light source elevation angle and a shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image;
a display module 503, configured to irradiate a virtual object with scene light according to the light source elevation angle and the shadow offset direction angle to obtain a target image, and display the target image.
This embodiment acquires an environment image and determines the reference object body and the reference object shadow in it; from their positional relationship in the environment image, the light source elevation angle and the shadow offset direction angle at the time the environment image was captured can be determined. The corresponding scene light can then be determined from these two angles and cast on the virtual object to obtain the image to be displayed. Because the scene light cast on the virtual object has the same azimuth and incidence angle as the light source in the real world, this solution improves the imaging quality of displayed virtual objects and reduces the difference between virtual objects and the real environment.
Further, the reference object determination module 501 is configured to perform image recognition on the environment image, set an upright object in the environment image as the reference object body according to the image recognition result, and set the shadow of the upright object as the reference object shadow.
Further, the reference object determination module 501 is configured to transmit the environment image to a human-computer interaction interface and receive the user's annotation information for the environment image, where the annotation information includes the outline of the reference object body and the outline of the reference object shadow; and is further configured to determine the reference object body and the reference object shadow in the environment image according to the annotation information.
Further, the angle computation module 502 is configured to use a straight-line detection algorithm to compute the first straight line segment corresponding to the reference object body and the second straight line segment corresponding to the reference object shadow; to determine the positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image; and to determine the light source altitude angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
Further, the process by which the angle computation module 502 determines the light source altitude angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment includes: establishing a spatial coordinate system according to the positional relationship between the first straight line segment and the second straight line segment, where the origin of the spatial coordinate system is the intersection of the first straight line segment and the second straight line segment, the first straight line segment coincides with the Y axis of the spatial coordinate system, and the plane containing the first straight line segment and the second straight line segment coincides with the XY plane of the spatial coordinate system; setting the angle between a target straight line segment and the Y axis as the light source altitude angle, where the target straight line segment is the line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and neither the first target endpoint nor the second target endpoint is the intersection of the first straight line segment and the second straight line segment; and computing the shadow offset direction angle according to the projected length of the second straight line segment on the X axis and the length of the second straight line segment.
Further, the apparatus also includes:
an update module, configured to determine, after the target image is displayed, whether the light source in the environment image is a natural light source; and if so, to determine a new light source altitude angle and a new shadow offset direction angle after a preset delay, so that the virtual object is illuminated with scene light according to the new light source altitude angle and the new shadow offset direction angle to obtain a new target image, which is then displayed.
Further, the apparatus also includes:
a light intensity adjustment module, configured to determine, before the virtual object is illuminated with scene light according to the light source altitude angle and the shadow offset direction angle to obtain the target image, the ambient light intensity according to the environment image, and to adjust the illumination intensity of the scene light according to the ambient light intensity.
Since the embodiments of the apparatus part correspond to those of the method part, reference is made to the description of the method embodiments for the apparatus embodiments, which will not be repeated here.
The present application also provides a storage medium on which a computer program is stored; when the computer program is executed, the steps provided by the above embodiments can be implemented. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The present application also provides an AR head-mounted device, which may include a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps provided by the above embodiments can be implemented. Of course, the AR head-mounted device may also include components such as various network interfaces and a power supply.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the embodiments may refer to one another for the same or similar parts. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and reference may be made to the description of the method part for relevant details. It should be noted that those of ordinary skill in the art may make several improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

Claims (10)

  1. An image display method, characterized by comprising:
    acquiring an environment image, and determining a reference object body and a reference object shadow in the environment image;
    determining a light source altitude angle and a shadow offset direction angle according to a positional relationship between the reference object body and the reference object shadow in the environment image; and
    illuminating a virtual object with scene light according to the light source altitude angle and the shadow offset direction angle to obtain a target image, and displaying the target image.
  2. The image display method according to claim 1, characterized in that determining the reference object body and the reference object shadow in the environment image comprises:
    performing image recognition on the environment image, setting an upright object in the environment image as the reference object body according to an image recognition result, and setting a shadow of the upright object as the reference object shadow.
  3. The image display method according to claim 1, characterized in that determining the reference object body and the reference object shadow in the environment image comprises:
    transmitting the environment image to a human-computer interaction interface, and receiving annotation information from a user for the environment image, wherein the annotation information comprises an outline of the reference object body and an outline of the reference object shadow; and
    determining the reference object body and the reference object shadow in the environment image according to the annotation information.
  4. The image display method according to claim 1, characterized in that determining the light source altitude angle and the shadow offset direction angle according to the positional relationship between the reference object body and the reference object shadow in the environment image comprises:
    computing, with a straight-line detection algorithm, a first straight line segment corresponding to the reference object body and a second straight line segment corresponding to the reference object shadow;
    determining a positional relationship between the first straight line segment and the second straight line segment according to the positional relationship between the reference object body and the reference object shadow in the environment image; and
    determining the light source altitude angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment.
  5. The image display method according to claim 4, characterized in that determining the light source altitude angle and the shadow offset direction angle according to the positional relationship between the first straight line segment and the second straight line segment comprises:
    establishing a spatial coordinate system according to the positional relationship between the first straight line segment and the second straight line segment, wherein an origin of the spatial coordinate system is an intersection of the first straight line segment and the second straight line segment, the first straight line segment coincides with a Y axis of the spatial coordinate system, and a plane containing the first straight line segment and the second straight line segment coincides with an XY plane of the spatial coordinate system;
    setting an angle between a target straight line segment and the Y axis as the light source altitude angle, wherein the target straight line segment is a line connecting a first target endpoint of the first straight line segment and a second target endpoint of the second straight line segment, and neither the first target endpoint nor the second target endpoint is the intersection of the first straight line segment and the second straight line segment; and
    computing the shadow offset direction angle according to a projected length of the second straight line segment on an X axis and a length of the second straight line segment.
  6. The image display method according to claim 1, characterized by further comprising, after displaying the target image:
    determining whether a light source in the environment image is a natural light source; and
    if so, after a preset delay, determining a new light source altitude angle and a new shadow offset direction angle, so as to illuminate the virtual object with scene light according to the new light source altitude angle and the new shadow offset direction angle to obtain a new target image and display the new target image.
  7. The image display method according to any one of claims 1 to 6, characterized by further comprising, before illuminating the virtual object with scene light according to the light source altitude angle and the shadow offset direction angle to obtain the target image:
    determining an ambient light intensity according to the environment image, and adjusting an illumination intensity of the scene light according to the ambient light intensity.
  8. An image display apparatus, characterized by comprising:
    a reference object determination module, configured to acquire an environment image and determine a reference object body and a reference object shadow in the environment image;
    an angle computation module, configured to determine a light source altitude angle and a shadow offset direction angle according to a positional relationship between the reference object body and the reference object shadow in the environment image; and
    a display module, configured to illuminate a virtual object with scene light according to the light source altitude angle and the shadow offset direction angle to obtain a target image, and to display the target image.
  9. An AR head-mounted device, characterized by comprising a memory and a processor, wherein a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps of the image display method according to any one of claims 1 to 7 are implemented.
  10. A storage medium, characterized in that computer-executable instructions are stored in the storage medium, and when the computer-executable instructions are loaded and executed by a processor, the steps of the image display method according to any one of claims 1 to 7 are implemented.
PCT/CN2022/084579 2022-01-28 2022-03-31 Image display method and apparatus, AR head-mounted device, and storage medium WO2023142264A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210108601.9A CN114494659A (zh) 2022-01-28 2022-01-28 Image display method and apparatus, AR head-mounted device, and storage medium
CN202210108601.9 2022-01-28

Publications (1)

Publication Number Publication Date
WO2023142264A1 true WO2023142264A1 (zh) 2023-08-03

Family

ID=81476160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084579 2022-01-28 2022-03-31 Image display method and apparatus, AR head-mounted device, and storage medium WO2023142264A1 (zh)

Country Status (2)

Country Link
CN (1) CN114494659A (zh)
WO (1) WO2023142264A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116974417B * 2023-07-25 2024-03-29 江苏泽景汽车电子股份有限公司 Display control method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062752A (zh) * 2017-12-13 2018-05-22 网易(杭州)网络有限公司 Method, medium, apparatus, and computing device for determining the orientation of a primary light source
CN108154549A (zh) * 2017-12-25 2018-06-12 太平洋未来有限公司 Three-dimensional image processing method
CN108525298A (zh) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN109255841A (zh) * 2018-08-28 2019-01-22 百度在线网络技术(北京)有限公司 AR image presentation method, apparatus, terminal, and storage medium
JP2019212062A (ja) * 2018-06-05 2019-12-12 株式会社セガゲームス Information processing apparatus and program


Also Published As

Publication number Publication date
CN114494659A (zh) 2022-05-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923082

Country of ref document: EP

Kind code of ref document: A1