CN111176452B - Method and apparatus for determining display area, computer system, and readable storage medium

Info

Publication number
CN111176452B
Authority
CN
China
Prior art keywords
information
spatial
brightness
space
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911422580.2A
Other languages
Chinese (zh)
Other versions
CN111176452A
Inventor
邓建
邹成刚
盛兴东
钟将为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911422580.2A
Publication of CN111176452A
Application granted
Publication of CN111176452B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The present disclosure provides a method of determining a display area, comprising: acquiring an environment image within a user visual angle range, and determining brightness information of the environment image; acquiring spatial information within a user visual angle range; dividing the space within the user visual angle range based on the brightness information and the space information of the environment image to obtain at least two space areas; determining whether each of the at least two spatial regions satisfies a predetermined brightness condition, and using the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying a virtual image; and displaying the virtual image in the target space area. The present disclosure also provides an apparatus for determining a display area, a computer system, and a computer-readable storage medium.

Description

Method and apparatus for determining display area, computer system, and readable storage medium
Technical Field
The present disclosure relates to a method of determining a display area, an apparatus for determining a display area, a computer system, and a computer-readable storage medium.
Background
With the development of science and technology, virtual display devices such as AR (augmented reality) devices have gradually entered people's lives. AR technology can superimpose virtual images onto a real environment, so that a user sees the real environment and virtual objects simultaneously in the same picture and space through the AR device. The display effect of a virtual image is affected by the brightness of the ambient light. However, in the related art, the influence of lighting conditions is not considered when determining the display position of a virtual image; as a result, the virtual image is often displayed in an unsuitable spatial area, the display effect of the virtual image is poor, and the user experience is affected.
Disclosure of Invention
One aspect of the present disclosure provides a method of determining a display area, including: acquiring an environment image within a user visual angle range, and determining brightness information of the environment image; acquiring spatial information within the user visual angle range; dividing the space within the user visual angle range based on the brightness information and the space information of the environment image to obtain at least two space areas; determining whether each of the at least two spatial regions satisfies a predetermined brightness condition, and using the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying a virtual image; and displaying the virtual image in the target space region.
Optionally, the method further comprises: acquiring the intensity of ambient light; under the condition that the ambient light intensity meets a preset intensity range, triggering the operation of acquiring an ambient image within the user visual angle range and determining a target space area; or starting a function of determining a display area under the condition that the ambient light intensity meets a preset intensity range, wherein the function of determining the display area is used for determining the target space area.
Optionally, the determining the brightness information of the environment image includes: and determining brightness information of each pixel unit in the environment image.
Optionally, the dividing the space within the user view angle range based on the brightness information and the spatial information of the environment image to obtain at least two spatial regions includes: corresponding the brightness information of the environment image with the space information; and dividing the spatial information based on the brightness information to obtain at least two groups of spatial information, wherein each group of spatial information corresponds to a spatial region.
Optionally, the dividing the space within the user view angle range based on the brightness information and the spatial information of the environment image to obtain at least two spatial regions includes: dividing the environment image into at least two plane areas based on brightness information of the environment image; obtaining at least two spatial regions corresponding to the at least two planar regions based on the spatial information.
Optionally, the determining whether each of the at least two spatial regions satisfies a predetermined brightness condition comprises: determining whether overall brightness information of the space region is smaller than a preset brightness threshold value, wherein the overall brightness comprises a brightness mean value obtained based on the brightness information of the space region; or determining whether the luminance information within a certain proportion of the spatial region is less than a predetermined luminance threshold.
Optionally, the method further comprises: and under the condition that the size of the virtual image does not match the size of the target space region, scaling the virtual image so as to enable the virtual image to be matched with the target space region.
Another aspect of the present disclosure provides an apparatus for determining a display area, including: the image acquisition module is used for acquiring an environment image within a user visual angle range and determining the brightness information of the environment image; the space acquisition module is used for acquiring space information within the user visual angle range; the dividing module is used for dividing the space within the user visual angle range based on the brightness information of the environment image and the space information to obtain at least two space areas; a determining module, configured to determine whether each of the at least two spatial regions satisfies a predetermined brightness condition, and use the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying a virtual image; and the display module is used for displaying the virtual image in the target space area.
Another aspect of the present disclosure provides an electronic device including:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, brightness information may be determined from the acquired image, the space within the user's viewing angle range may be divided into a plurality of spatial regions with different brightness conditions according to the brightness information and the spatial information, one spatial region suitable for displaying the virtual image may be selected from the plurality of spatial regions as a target spatial region according to the display requirement of the virtual image, and the virtual image may then be displayed in the target spatial region. In this way, the influence of lighting conditions is taken into account when determining the display position of the virtual image, the virtual image can be displayed in a spatial region with a suitable brightness condition, and a better display effect and an improved user experience are obtained.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a method of determining a display area according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of determining a display area according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a diagram of an environment image within a user perspective, in accordance with an embodiment of the present disclosure;
FIG. 4 schematically shows a schematic diagram of spatial partitioning according to an embodiment of the present disclosure;
fig. 5A and 5B schematically illustrate a schematic diagram of image partitioning according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a block diagram of an apparatus for determining a display area according to an embodiment of the present disclosure; and
FIG. 7 schematically illustrates a block diagram of an electronic device suitable for implementing a method of determining a display area according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
An embodiment of the present disclosure provides a method of determining a display area, the method including: and acquiring an environment image within the visual angle range of the user, and determining the brightness information of the environment image. And acquiring spatial information within the visual angle range of the user. And dividing the space within the user visual angle range based on the brightness information and the space information of the environment image to obtain at least two space areas. Determining whether each of the at least two spatial regions satisfies a predetermined luminance condition, and regarding the spatial region satisfying the predetermined luminance condition as a target spatial region for displaying a virtual image. And displaying the virtual image in the target space area.
Fig. 1 schematically illustrates an application scenario of a method of determining a display area according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the method for determining a display area according to the embodiment of the present disclosure may be applied to, for example, AR glasses 100, and the AR glasses 100 may superimpose a virtual image on a real environment within a viewing angle range of a user, where the user can see not only the real environment in front of the user through the glasses but also the virtual image superimposed on the real environment.
According to the method for determining the display area, the space within the user's viewing angle range can be divided according to the ambient brightness to obtain a spatial area that satisfies a brightness condition, and the virtual image is displayed in that spatial area.
Fig. 2 schematically illustrates a flow chart of a method of determining a display area according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S250.
In operation S210, an environment image within a user's viewing angle range is acquired, and brightness information of the environment image is determined.
For example, an image of the environment in front of the user may be acquired in real time by a camera on the AR device. The camera may be an RGB camera, for example.
According to an embodiment of the present disclosure, determining luminance information of an ambient image includes: luminance information of each pixel unit in the environment image is determined.
For example, the gray scale information may be used as the brightness information, and determining the brightness information of each pixel unit in the environment image may refer to determining the gray scale value of each pixel point in the environment image.
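As a rough illustration of the step above, the following Python sketch treats the gray scale value of each pixel as its brightness information. The use of OpenCV and the function name are assumptions made for illustration only; the embodiment does not prescribe any particular library.

import cv2
import numpy as np

def pixel_brightness(environment_image: np.ndarray) -> np.ndarray:
    """Return a per-pixel brightness map (0-255) for an RGB environment image.

    The gray scale value of each pixel is used as its brightness information.
    """
    return cv2.cvtColor(environment_image, cv2.COLOR_RGB2GRAY)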
Fig. 3 schematically shows a schematic diagram of an environment image within a user viewing angle range according to an embodiment of the present disclosure.
As shown in fig. 3, for example, the environment image within the user's viewing angle range shows a wall surface 301 and a window 302. The brightness of each pixel point is determined according to the gray scale value of the image; the brightness of the window 302 area is higher, and the brightness of the wall surface 301 area is lower.
In operation S220, spatial information within the user's viewing angle range is acquired.
For example, the spatial information in the user view angle range may include coordinate information, depth information, angle information, and the like of each object in the user view angle range, and may reflect the position of each object in front of the user and the distance between each object and the user. For example, referring to fig. 3, the user has a wall 301 and a window 302 within the viewing angle range, and the spatial information may include the coordinate positions of the wall 301 and the window 302 and the distances of the wall 301 and the window 302 from the AR device 303. The spatial information may be acquired by means of a laser sensor or the like on the AR device.
The AR device can acquire the spatial information and the image information of the space where the user is located in real time, and construct a three-dimensional map using a SLAM algorithm based on the acquired information.
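As an illustration only, the spatial information described above might be represented by a simple record per sampled point; the field names below are hypothetical and not part of the embodiment.

from dataclasses import dataclass

@dataclass
class SpatialSample:
    """One sample of spatial information within the user's viewing angle range."""
    x: float      # coordinate information in the device frame
    y: float
    z: float
    depth: float  # distance between the object and the AR device
    angle: float  # angle of the object relative to the device axis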
In operation S230, a space within a user viewing angle range is divided based on the luminance information and the spatial information of the environment image, resulting in at least two spatial regions.
Fig. 4 schematically shows a schematic diagram of spatial partitioning according to an embodiment of the present disclosure.
As shown in fig. 4, since the wall surface 301 and the window 302 have different brightness, the space within the user viewing angle range can be divided into two space regions according to the wall surface 301 and the window 302, the space a corresponding to the wall surface 301 is used as one space region, and the space B corresponding to the window 302 is used as the other space region.
In operation S240, it is determined whether each of the at least two spatial regions satisfies a predetermined luminance condition, and the spatial region satisfying the predetermined luminance condition is used as a target spatial region for displaying the virtual image.
The predetermined brightness condition may be used to select a spatial region whose brightness condition satisfies the display requirement of the virtual image.
For example, under bright light conditions, if a virtual image is displayed in an area with high ambient light intensity, the ambient background light behind the virtual image is too strong, which may cause the virtual image to be displayed unclearly. Thus, the predetermined brightness condition may be used to select a darker spatial region of the at least two spatial regions; for example, the predetermined brightness condition may refer to a spatial region whose overall brightness is smaller than a preset brightness threshold, or to the spatial region having the lowest overall brightness among the at least two spatial regions. In this way, the virtual image can be displayed in a darker spatial area, reducing the influence of strong light on the virtual image and achieving a better display effect.
In operation S250, a virtual image is displayed in a target spatial region.
After the target space area satisfying the brightness condition is selected, the virtual image can be displayed in the target space area to obtain a better display effect.
According to the embodiments of the present disclosure, brightness information may be determined from the acquired image, the space within the user's viewing angle range may be divided into a plurality of spatial regions with different brightness conditions according to the brightness information and the spatial information, one spatial region suitable for displaying the virtual image may be selected from the plurality of spatial regions as a target spatial region according to the display requirement of the virtual image, and the virtual image may then be displayed in the target spatial region. In this way, the influence of lighting conditions is taken into account when determining the display position of the virtual image, the virtual image can be displayed in a spatial region with a suitable brightness condition, and a better display effect and an improved user experience are obtained.
According to the embodiments of the present disclosure, the spatial regions may be divided in either of the following two ways:
(1) according to an embodiment of the present disclosure, dividing the space within the user view angle range based on the luminance information and the spatial information of the environment image, and obtaining at least two spatial regions may include: dividing the environment image into at least two plane areas based on the brightness information of the environment image; and obtaining at least two spatial regions corresponding to the at least two planar regions based on the spatial information.
For example, the planar area may be regular in shape, such as a square. When dividing the image into plane areas, a regular area with small differences in brightness value may be used as a plane area. For example, at least two brightness ranges may be preset so that the brightness values of most of the pixels located in the same plane region belong to the same brightness range, where "most of the pixels" may refer to, for example, more than 70% or more than 80% of the pixels in the plane region.
Fig. 5A and 5B schematically illustrate diagrams of image partitioning according to an embodiment of the present disclosure.
As shown in fig. 5A, a luminance threshold for division may be preset, and two plane regions 511 and 512 may be obtained by dividing the image 510 according to the luminance threshold, so that the luminance value of, for example, more than 70% of the pixels in the plane region 511 is smaller than the luminance threshold, and the luminance value of, for example, more than 70% of the pixels in the plane region 512 is greater than the luminance threshold.
As shown in fig. 5B, similarly, three plane areas 521, 522, and 523 can be obtained by dividing the image 510 according to the brightness threshold, so that the brightness values of more than 70% of the pixels in the plane areas 521 and 523 are greater than the brightness threshold, and the brightness values of more than 70% of the pixels in the plane area 522 are less than the brightness threshold.
The brightness threshold may be preset, or may also be determined according to the overall brightness of a specific image, for example, an average brightness value of the entire image may be used as the brightness threshold, or may be determined in other ways.
After obtaining the plurality of plane areas, the spatial information may be combined to obtain a spatial area corresponding to each plane area.
In this way, the image can be divided according to the brightness information to obtain a plurality of plane areas, and the plane areas are then mapped to spatial areas by combining the spatial information.
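A simplified Python sketch of this first division manner is given below. It assumes the brightness threshold defaults to the mean brightness of the image and that contiguous bright and dark areas are found by connected-component labeling; both choices are illustrative assumptions, and the mapping from plane areas to spatial areas (using the spatial information) is omitted.

import numpy as np
from scipy import ndimage

def divide_into_plane_areas(brightness: np.ndarray, threshold=None):
    """Divide a per-pixel brightness map into dark and bright plane areas.

    If no threshold is given, the mean brightness of the whole image is used,
    which is one of the options described above.
    """
    if threshold is None:
        threshold = float(brightness.mean())
    dark_mask = brightness < threshold
    # Label contiguous dark and bright areas separately; each label is one plane area.
    dark_labels, n_dark = ndimage.label(dark_mask)
    bright_labels, n_bright = ndimage.label(~dark_mask)
    return dark_labels, n_dark, bright_labels, n_bright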
(2) According to an embodiment of the present disclosure, dividing the space within the user view angle range based on the luminance information and the spatial information of the environment image, and obtaining at least two spatial regions may include: and corresponding the brightness information of the environment image with the spatial information, and dividing the spatial information based on the brightness information to obtain at least two groups of spatial information, wherein each group of spatial information corresponds to a spatial region.
Specifically, when performing the spatial division, a three-dimensional model may be constructed based on the spatial information, and then the luminance information of the environment image may be associated with the spatial information, for example, the luminance information of the environment image may be mapped into the three-dimensional model, so as to obtain the three-dimensional model with the luminance information.
The spatial information is divided according to the luminance information of each region to obtain at least two sets of spatial information; for example, a regular region with small luminance differences in the three-dimensional model may be used as a spatial region. For example, at least two brightness ranges may be preset so that the brightness values of most of the pixels in the same divided spatial region belong to the same brightness range, where "most of the pixels" may refer to, for example, more than 70% or more than 80% of the pixels.
In this way, the luminance information may be mapped into geometric space, and the geometric space is then divided into a plurality of spatial regions according to the luminance of each region in the geometric space.
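As an illustrative sketch of this second manner, the snippet below assumes the brightness information has already been mapped onto a point cloud (one brightness value per three-dimensional point) and groups the points by preset brightness ranges; this simple bucketing stands in for whatever region division an actual implementation uses.

import numpy as np

def group_spatial_info_by_brightness(points: np.ndarray,
                                     brightness: np.ndarray,
                                     ranges=((0, 128), (128, 256))):
    """Divide spatial information into groups, one group per preset brightness range.

    `points` is an (N, 3) array of 3D coordinates and `brightness` holds the
    per-point brightness values mapped from the environment image.
    """
    groups = []
    for low, high in ranges:
        mask = (brightness >= low) & (brightness < high)
        groups.append(points[mask])
    return groups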
According to the embodiments of the present disclosure, based on the two space division manners above, a plurality of spatial areas can be quickly obtained by dividing according to brightness conditions, with the brightness within most of each spatial area being consistent. The divided spatial areas make it possible to distinguish which areas within the user's viewing angle range are brighter and which are darker, so that a spatial area with a suitable brightness condition can be selected according to the requirements of the virtual image.
According to an embodiment of the present disclosure, a target spatial region satisfying a predetermined luminance condition may be determined in two ways:
(1) Determining whether the overall brightness information of the spatial region is smaller than a preset brightness threshold, wherein the overall brightness comprises a brightness mean value obtained based on the brightness information of the spatial region.
For example, the luminance average value of the luminance values of the pixel points included in each spatial region may be calculated, and whether the luminance average value of each spatial region is smaller than a predetermined luminance threshold value is determined, where the predetermined luminance threshold value may be the same as or different from the luminance threshold value for dividing. One spatial region with the luminance mean value smaller than the predetermined luminance threshold value may be used as the target spatial region, and if there are a plurality of spatial regions with the luminance mean values smaller than the predetermined luminance threshold value, one spatial region with the smallest luminance mean value may be used as the target spatial region.
(2) Determining whether the brightness information within a certain proportion range of the spatial region is smaller than a preset brightness threshold.
For example, the certain proportion range may be 70% or 80%, if the luminance value of 70% or more of the pixels included in a certain spatial region is smaller than the predetermined luminance threshold, the spatial region may be regarded as the target spatial region, and if the number of spatial regions satisfying the condition is plural, one spatial region having the smallest luminance mean value may be regarded as the target spatial region.
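The two selection criteria above can be sketched as follows, assuming each spatial region is represented by the array of brightness values of the pixels it contains; the 70% proportion and the tie-breaking rule (smallest brightness mean) follow the description above, while the function name and signature are hypothetical.

import numpy as np

def select_target_region(regions, brightness_threshold, use_proportion=False, proportion=0.7):
    """Return the index of the spatial region suited for display, or None.

    Each entry of `regions` is the array of brightness values of one region.
    """
    candidates = []
    for i, region in enumerate(regions):
        if use_proportion:
            # Criterion (2): a given proportion of the region is darker than the threshold.
            satisfied = np.mean(region < brightness_threshold) >= proportion
        else:
            # Criterion (1): the brightness mean of the region is below the threshold.
            satisfied = region.mean() < brightness_threshold
        if satisfied:
            candidates.append(i)
    if not candidates:
        return None
    # If several regions qualify, pick the one with the smallest brightness mean.
    return min(candidates, key=lambda i: regions[i].mean())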
Based on the above, a spatial region having a smaller overall brightness may be selected from the plurality of spatial regions as the target spatial region. The virtual image is displayed in a darker space area in the current visual angle, so that the influence of strong environmental light on the display effect of the virtual image is at least partially avoided, and the virtual image can have a clearer display effect.
According to the embodiments of the present disclosure, the method of determining the display area may further include acquiring the ambient light intensity before the spatial division is performed; for example, the ambient light intensity may be acquired once every predetermined period (for example, every 30 seconds). An optical sensor may be disposed on the AR device, and the ambient light intensity may be obtained through the optical sensor.
Under the condition that the ambient light intensity meets a predetermined intensity range, the operations of acquiring the environment image within the user's viewing angle range and determining the target spatial area are triggered.
For example, if the ambient light intensity is greater than a preset light intensity threshold, indicating that the current ambient light is strong, the display effect of the virtual image will be significantly affected by the ambient light. In this case, the operations of space division and target spatial area selection described above may be triggered, so that the virtual image is displayed in a better display area under bright light conditions. In this way, space division and target area selection are triggered when the ambient light intensity increases, avoiding unclear display of the virtual image caused by the increased ambient light.
Alternatively, in the case where the ambient light intensity satisfies the predetermined intensity range, the function of determining the display area is activated, and the function of determining the display area is used to determine the target spatial area.
For example, if the ambient light intensity is greater than the preset light intensity threshold, it indicates that the current ambient light is strong and the user may be in a bright environment such as outdoors, where the display effect of the virtual image may be continuously affected by the ambient light; in this case, the function of determining the display area may be activated. After this function is activated, the operations of space division and target spatial area selection can be performed continuously, so as to avoid unclear display of the virtual image when the user stays in a bright environment for a long time.
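Both trigger variants can be expressed as a small polling loop, sketched below; the sensor callback, the 30-second polling period, and the numeric threshold are assumptions for illustration only.

import time

LIGHT_INTENSITY_THRESHOLD = 500.0  # assumed value, in sensor units
POLL_PERIOD_SECONDS = 30           # assumed polling period

def ambient_light_loop(read_ambient_light, determine_display_area):
    """Periodically read the ambient light intensity and trigger display-area determination.

    `read_ambient_light` and `determine_display_area` are hypothetical callbacks
    supplied by the AR device software.
    """
    while True:
        if read_ambient_light() > LIGHT_INTENSITY_THRESHOLD:
            # Strong ambient light: divide the space and select a target spatial area.
            determine_display_area()
        time.sleep(POLL_PERIOD_SECONDS)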
According to an embodiment of the present disclosure, the method of determining a display area may further include: in case the size of the virtual image does not match the size of the target spatial region, the virtual image is scaled to fit the virtual image to the target spatial region.
For example, the virtual image may be adjusted according to the size and depth of the target spatial region so that the virtual image fits the target spatial region; on this basis, when the virtual image moves from another region to the target spatial region, the virtual image can fit the size of the target spatial region, so that the virtual image has a more realistic display effect.
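A minimal sketch of the scaling step is given below, assuming the virtual image and the target spatial region are each described by a width and a height in the same units; uniform scaling is used so the virtual image keeps its aspect ratio, which is one reasonable reading of "matching" the region.

def fit_scale(image_size, region_size):
    """Return a uniform scale factor that fits the virtual image into the target region.

    `image_size` and `region_size` are (width, height) pairs in the same units.
    """
    img_w, img_h = image_size
    region_w, region_h = region_size
    return min(region_w / img_w, region_h / img_h)

For example, a virtual image of size 2 x 1 placed in a 1 x 1 target region would be scaled by a factor of 0.5.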
According to the embodiment of the disclosure, a better display effect can be achieved by adjusting the color of the virtual image. For example, when the user is in an outdoor bright environment, the color of the virtual image may be adjusted to a color that is greatly different from the color of the environment, such as red, so that the virtual image can be clearly displayed in a bright condition.
Another aspect of the embodiments of the present disclosure also provides an apparatus for determining a display area.
Fig. 6 schematically shows a block diagram of an apparatus for determining a display area according to an embodiment of the present disclosure.
As shown in fig. 6, the apparatus 600 includes an image acquisition module 610, a space acquisition module 620, a division module 630, a determination module 640, and a display module 650.
The image obtaining module 610 is configured to obtain an environment image within a user viewing angle range, and determine brightness information of the environment image.
The space obtaining module 620 is configured to obtain space information within a user viewing angle range.
The dividing module 630 is configured to divide the space within the user view range based on the brightness information and the space information of the environment image to obtain at least two space regions.
The determining module 640 is configured to determine whether each of the at least two spatial regions satisfies a predetermined brightness condition, and use the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying the virtual image.
The display module 650 is used for displaying the virtual image in the target space region.
According to an embodiment of the present disclosure, the apparatus may further include an ambient light brightness module, the ambient light brightness module being configured to: acquiring the intensity of ambient light; under the condition that the ambient light intensity meets a preset intensity range, triggering the operation of acquiring an ambient image within a user visual angle range and determining a target space area; or in the case that the ambient light intensity satisfies the predetermined intensity range, a function of determining the display area is activated, and the function of determining the display area is used for determining the target spatial area.
According to an embodiment of the present disclosure, determining luminance information of an ambient image includes: luminance information of each pixel unit in the environment image is determined.
According to an embodiment of the present disclosure, the dividing module is further configured to correspond luminance information of the environment image to the spatial information; and dividing the spatial information based on the brightness information to obtain at least two groups of spatial information, wherein each group of spatial information corresponds to a spatial region.
According to an embodiment of the disclosure, the partitioning module is further configured to: dividing the environment image into at least two plane areas based on the brightness information of the environment image; at least two spatial regions corresponding to the at least two planar regions are obtained based on the spatial information.
According to an embodiment of the disclosure, the determining module is further configured to: determining whether the overall brightness information of the space region is smaller than a preset brightness threshold value, wherein the overall brightness comprises a brightness mean value obtained based on the brightness information of the space region; or determining whether the luminance information within a certain proportion of the spatial region is less than a predetermined luminance threshold.
According to an embodiment of the disclosure, the display module is further configured to: in case the size of the virtual image does not match the size of the target spatial region, the virtual image is scaled to fit the virtual image to the target spatial region.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the image acquisition module 610, the space acquisition module 620, the division module 630, the determination module 640, the display module 650, and the ambient light level module may be combined in one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the image acquisition module 610, the space acquisition module 620, the division module 630, the determination module 640, the display module 650, and the ambient light level module may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging circuits, or in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the image acquisition module 610, the space acquisition module 620, the dividing module 630, the determining module 640, the display module 650, and the ambient light level module may be at least partially implemented as a computer program module that, when executed, may perform corresponding functions.
It should be noted that, a device portion for determining a display area in the embodiment of the present disclosure corresponds to a method portion for determining a display area in the embodiment of the present disclosure, and the description of the device portion for determining a display area specifically refers to the method portion for determining a display area, and is not repeated herein.
Fig. 7 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The computer system illustrated in FIG. 7 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 7, the electronic device 700 includes a processor 710, a computer-readable storage medium 720, a photographing apparatus 730, and a spatial information acquiring apparatus 740. The electronic device 700 may perform a method according to an embodiment of the present disclosure.
In particular, processor 710 may comprise, for example, a general purpose microprocessor, an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 710 may also include on-board memory for caching purposes. Processor 710 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 720, for example, may be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 720 may include a computer program 721, which computer program 721 may include code/computer-executable instructions that, when executed by the processor 710, cause the processor 710 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The computer program 721 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in computer program 721 may include one or more program modules, including, for example, module 721A, module 721B, and so on. It should be noted that the division and number of modules are not fixed, and those skilled in the art may use suitable program modules or program module combinations according to actual situations, so that when these program modules are executed by the processor 710, the processor 710 may execute the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, the processor 710 may interact with the photographing device 730 and the spatial information acquiring device 740 to perform a method according to an embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the image acquisition module 610, the space acquisition module 620, the division module 630, the determination module 640, the display module 650, and the ambient light level module may be implemented as a computer program module described with reference to fig. 7, which, when executed by the processor 710, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or couplings of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or couplings are not expressly recited in the present disclosure. In particular, various combinations and/or couplings of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or couplings are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (7)

1. A method of determining a display area, comprising:
acquiring an environment image within a user visual angle range, and determining brightness information of each pixel unit in the environment image;
acquiring spatial information within the user visual angle range;
dividing the space within the user visual angle range based on the brightness information and the space information of the environment image to obtain at least two space areas;
determining whether each of the at least two spatial regions satisfies a predetermined brightness condition, and using the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying a virtual image;
displaying the virtual image in the target space area to enable the virtual image to have a clear display effect;
wherein the dividing the space within the user view range based on the brightness information and the spatial information of the environment image to obtain at least two spatial regions comprises:
dividing the environment image into at least two plane areas based on brightness information of the environment image;
obtaining at least two spatial regions corresponding to the at least two planar regions based on the spatial information;
wherein the determining whether each of the at least two spatial regions satisfies a predetermined brightness condition comprises:
determining whether overall brightness information of the space region is smaller than a preset brightness threshold value, wherein the overall brightness comprises a brightness mean value obtained based on the brightness information of the space region; or
determining whether the luminance information within a certain proportion of the spatial region is less than a predetermined luminance threshold.
2. The method of claim 1, further comprising:
acquiring the intensity of ambient light;
under the condition that the ambient light intensity meets a preset intensity range, triggering the operation of acquiring an ambient image within the user visual angle range and determining a target space area; or
starting a function of determining a display area under the condition that the ambient light intensity meets a preset intensity range, wherein the function of determining the display area is used for determining the target space area.
3. The method of claim 1, wherein the dividing the space within the user perspective range based on the brightness information and the spatial information of the environment image to obtain at least two spatial regions comprises:
corresponding the brightness information of the environment image with the space information;
and dividing the spatial information based on the brightness information to obtain at least two groups of spatial information, wherein each group of spatial information corresponds to a spatial region.
4. The method of claim 1, further comprising:
and under the condition that the size of the virtual image does not match the size of the target space region, scaling the virtual image so as to enable the virtual image to be matched with the target space region.
5. An apparatus for determining a display area, comprising:
the image acquisition module is used for acquiring an environment image within a user visual angle range and determining the brightness information of each pixel unit in the environment image;
the space acquisition module is used for acquiring space information within the user visual angle range;
the dividing module is used for dividing the space within the user visual angle range based on the brightness information of the environment image and the space information to obtain at least two space areas;
a determining module, configured to determine whether each of the at least two spatial regions satisfies a predetermined brightness condition, and use the spatial region satisfying the predetermined brightness condition as a target spatial region for displaying a virtual image;
the display module is used for displaying the virtual image in the target space area;
the dividing module is further used for dividing the environment image into at least two plane areas based on the brightness information of the environment image; obtaining at least two spatial regions corresponding to the at least two planar regions based on the spatial information;
the determining module is further configured to determine whether overall brightness information of the spatial region is smaller than a predetermined brightness threshold, where the overall brightness includes a brightness mean obtained based on the brightness information of the spatial region; or determining whether the luminance information within a certain proportion of the spatial region is less than a predetermined luminance threshold.
6. An electronic device, comprising:
one or more processors;
a computer-readable storage medium for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
7. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 4.
CN201911422580.2A 2019-12-30 2019-12-30 Method and apparatus for determining display area, computer system, and readable storage medium Active CN111176452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422580.2A CN111176452B (en) 2019-12-30 2019-12-30 Method and apparatus for determining display area, computer system, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911422580.2A CN111176452B (en) 2019-12-30 2019-12-30 Method and apparatus for determining display area, computer system, and readable storage medium

Publications (2)

Publication Number Publication Date
CN111176452A CN111176452A (en) 2020-05-19
CN111176452B true CN111176452B (en) 2022-03-25

Family

ID=70652466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422580.2A Active CN111176452B (en) 2019-12-30 2019-12-30 Method and apparatus for determining display area, computer system, and readable storage medium

Country Status (1)

Country Link
CN (1) CN111176452B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662295A (en) * 2012-05-18 2012-09-12 海信集团有限公司 Method and device for adjusting projection display screen size of projector
CN103761763A (en) * 2013-12-18 2014-04-30 微软公司 Method for constructing reinforced reality environment by utilizing pre-calculation
CN104216517A (en) * 2014-08-25 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN106020760A (en) * 2016-05-31 2016-10-12 珠海市魅族科技有限公司 Multi-display-brightness data display method and device
CN106201254A (en) * 2016-06-28 2016-12-07 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN106842570A (en) * 2017-01-18 2017-06-13 上海乐蜗信息科技有限公司 A kind of wear-type mixed reality device and control method
CN107852488A (en) * 2015-05-22 2018-03-27 三星电子株式会社 System and method for showing virtual image by HMD device
CN108027519A (en) * 2015-04-30 2018-05-11 索尼公司 Display device
CN110021071A (en) * 2018-12-25 2019-07-16 阿里巴巴集团控股有限公司 Rendering method, device and equipment in a kind of application of augmented reality

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6459380B2 (en) * 2014-10-20 2019-01-30 セイコーエプソン株式会社 Head-mounted display device, head-mounted display device control method, and computer program

Also Published As

Publication number Publication date
CN111176452A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
KR102340934B1 (en) Method and device to display background image
US10083540B2 (en) Virtual light in augmented reality
US20150262412A1 (en) Augmented reality lighting with dynamic geometry
US10950039B2 (en) Image processing apparatus
US20200066030A1 (en) Digital 3d model rendering based on actual lighting conditions in a real environment
US11893701B2 (en) Method for simulating natural perception in virtual and augmented reality scenes
US20120155744A1 (en) Image generation method
US20140139519A1 (en) Method for augmenting reality
US20170169613A1 (en) Displaying an object with modified render parameters
KR20160072547A (en) 3d rendering method and apparatus
Maurus et al. Realistic heatmap visualization for interactive analysis of 3D gaze data
KR101652594B1 (en) Apparatus and method for providingaugmented reality contentents
US20200258312A1 (en) Image processing method and device
US11562545B2 (en) Method and device for providing augmented reality, and computer program
JP2019527355A (en) Computer system and method for improved gloss rendering in digital images
CN116194958A (en) Selective coloring of thermal imaging
US10088678B1 (en) Holographic illustration of weather
CN108475434B (en) Method and system for determining characteristics of radiation source in scene based on shadow analysis
US20190066366A1 (en) Methods and Apparatus for Decorating User Interface Elements with Environmental Lighting
US11461978B2 (en) Ambient light based mixed reality object rendering
CN111176452B (en) Method and apparatus for determining display area, computer system, and readable storage medium
CN115511870A (en) Object detection method and device, electronic equipment and storage medium
US20210258560A1 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
JP2020504836A (en) System and method for correction of reflection on a display device
CN116569091A (en) Dynamic quality proxy complex camera image blending

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant