CN114564108A - Image display method, device and storage medium - Google Patents

Image display method, device and storage medium

Info

Publication number
CN114564108A
CN114564108A
Authority
CN
China
Prior art keywords
image
target
radius
display screen
superposed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210204883.2A
Other languages
Chinese (zh)
Inventor
杜琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202210204883.2A
Publication of CN114564108A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The present disclosure relates to an image display method, an image display apparatus, and a storage medium in the field of AR technology. The method includes: acquiring an image to be occluded corresponding to a target virtual image; acquiring the pupil radius of a user and the viewing distance between the user's eyes and a first display screen; determining an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded; superimposing the image to be superimposed and the image to be occluded to obtain a target occlusion image; and displaying the target virtual image at a first target position of a second display screen, and displaying the target occlusion image at a second target position in the first display screen corresponding to the first target position. In this way, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.

Description

Image display method, device and storage medium
Technical Field
The present disclosure relates to the field of AR technologies, and in particular, to a method and an apparatus for displaying an image, and a storage medium.
Background
AR devices fuse the virtual world and the real world by displaying a target virtual image superimposed on the real world, thereby achieving an augmented-reality effect. By the principle of optical path superposition, the target virtual image is generally incident on the user's retina together with external ambient light. However, the target virtual image cannot block the incident ambient light, so the target virtual image finally seen by the user presents a semitransparent ghosting effect.
At present, a transparent LCD (Liquid Crystal Display) screen is usually added behind the virtual-image display screen, and the region covered by the target virtual image is shielded by a target occlusion image. However, when the projection distance of the target virtual image changes, the size of the edge halo of the target virtual image seen by the user does not stay consistent, which degrades the user's visual experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, apparatus, and storage medium for image presentation.
According to a first aspect of the embodiments of the present disclosure, there is provided an image display method applied to an image display device, where the image display device includes a first display screen and a second display screen that are correspondingly disposed, the first display screen being disposed on a side close to the back of the second display screen. The method includes: acquiring an image to be occluded corresponding to a target virtual image; acquiring the pupil radius of a user and the viewing distance between the user's eyes and the first display screen; determining an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded; superimposing the image to be superimposed and the image to be occluded to obtain a target occlusion image; and displaying the target virtual image at a first target position of the second display screen, and displaying the target occlusion image at a second target position in the first display screen corresponding to the first target position.
Optionally, the determining an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded includes: determining a first radius of a first circle of confusion of the image to be superimposed according to the pupil radius and the viewing distance; acquiring a second radius of a second circle of confusion in the image to be occluded; and determining the image to be superimposed according to the first radius, the second radius and the image to be occluded.
Optionally, the determining the image to be superimposed according to the first radius, the second radius and the image to be occluded includes: determining a difference between the first radius and the second radius; taking the difference as the width of the image to be superimposed; and determining the image to be superimposed according to the first radius, the width and the image to be occluded.
Optionally, the acquiring a second radius of a second circle of confusion in the image to be occluded includes: acquiring a target projection distance of the target virtual image; and determining the second radius of the second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
Optionally, the method further includes: acquiring the illumination intensity of external ambient light; and adjusting the gray value of the target occlusion image according to the illumination intensity.
Optionally, the adjusting the gray value of the target occlusion image according to the illumination intensity includes: determining, according to the illumination intensity and through a preset gray correspondence, a target gray value corresponding to the illumination intensity from a plurality of gray values, where the preset gray correspondence includes a correspondence between at least one illumination intensity value and a gray value; and adjusting the gray value of the target occlusion image according to the target gray value.
Optionally, the method further includes: acquiring the irradiation direction of external ambient light; and moving the image to be superimposed by a preset distance along the irradiation direction. The superimposing the image to be superimposed and the image to be occluded to obtain the target occlusion image includes: superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, the method further includes: acquiring current time information and geographic position information; determining a target moving direction according to the time information and the geographic position information; and moving the image to be superimposed by a preset distance along the target moving direction. The superimposing the image to be superimposed and the image to be occluded to obtain the target occlusion image includes: superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, the method further includes: adjusting the brightness value of the first display screen according to the illumination intensity.
According to a second aspect of the embodiments of the present disclosure, there is provided an image display apparatus applied to an image display device, where the image display device includes a first display screen and a second display screen that are correspondingly disposed, the first display screen being disposed on a side close to the back of the second display screen. The apparatus includes: a first acquisition module configured to acquire an image to be occluded corresponding to a target virtual image; a second acquisition module configured to acquire the pupil radius of a user and the viewing distance between the user's eyes and the first display screen; an image determining module configured to determine an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded; an image superimposing module configured to superimpose the image to be superimposed and the image to be occluded to obtain a target occlusion image; and an image display module configured to display the target virtual image at a first target position of the second display screen and display the target occlusion image at a second target position in the first display screen corresponding to the first target position.
Optionally, the image determining module includes: a first determining submodule configured to determine a first radius of a first circle of confusion of the image to be superimposed according to the pupil radius and the viewing distance; an obtaining submodule configured to obtain a second radius of a second circle of confusion in the image to be occluded; and a second determining submodule configured to determine the image to be superimposed according to the first radius, the second radius and the image to be occluded.
Optionally, the second determining submodule is configured to determine a difference between the first radius and the second radius, take the difference as the width of the image to be superimposed, and determine the image to be superimposed according to the first radius, the width and the image to be occluded.
Optionally, the obtaining submodule is configured to obtain a target projection distance of the target virtual image, and determine the second radius of the second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
Optionally, the apparatus further includes: a third acquisition module configured to acquire the illumination intensity of external ambient light; and an adjusting module configured to adjust the gray value of the target occlusion image according to the illumination intensity.
Optionally, the adjusting module is configured to determine, according to the illumination intensity and through a preset gray correspondence, a target gray value corresponding to the illumination intensity from a plurality of gray values, where the preset gray correspondence includes a correspondence between at least one illumination intensity value and a gray value, and to adjust the gray value of the target occlusion image according to the target gray value.
Optionally, the apparatus further includes: a fourth acquisition module configured to acquire the irradiation direction of external ambient light; and a first moving module configured to move the image to be superimposed by a preset distance along the irradiation direction; the image superimposing module is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, the apparatus further includes: a fifth acquisition module configured to acquire current time information and geographic position information; a direction determining module configured to determine a target moving direction according to the time information and the geographic position information; and a second moving module configured to move the image to be superimposed by a preset distance along the target moving direction; the image superimposing module is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, the adjusting module is configured to adjust the brightness value of the first display screen according to the illumination intensity.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for image presentation, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the method of image presentation provided by the first aspect of the present disclosure upon invoking executable instructions stored on the memory.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of image presentation provided by the first aspect of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects. The image display device provided by the present disclosure includes a first display screen and a second display screen that are correspondingly disposed, the first display screen being disposed on a side close to the back of the second display screen. First, an image to be occluded corresponding to a target virtual image is acquired, and the pupil radius of a user and the viewing distance between the user's eyes and the first display screen are acquired. Second, an image to be superimposed is determined according to the pupil radius, the viewing distance and the image to be occluded, and the image to be superimposed and the image to be occluded are superimposed to obtain a target occlusion image. Finally, the target virtual image is displayed at a first target position of the second display screen, and the target occlusion image is displayed at a second target position in the first display screen corresponding to the first target position. Through this technical solution, the image to be superimposed can be generated according to the user's pupil radius, the viewing distance and the image to be occluded, so that the image to be superimposed and the image to be occluded are superimposed to obtain the target occlusion image, and the target occlusion image and the target virtual image are displayed on the first display screen and the second display screen, respectively. Therefore, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a diagram illustrating an application scenario in accordance with an exemplary embodiment;
FIG. 2 is a block diagram illustrating one principle of optical path incidence according to an exemplary embodiment;
FIG. 3a is a diagram illustrating another application scenario in accordance with an illustrative embodiment;
FIG. 3b is a diagram illustrating another application scenario in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating a method of image presentation in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating determination of a second target position, according to an exemplary embodiment;
FIG. 6 is a flow diagram illustrating another method of image presentation in accordance with an exemplary embodiment;
FIG. 7a is a diagram illustrating another application scenario in accordance with an illustrative embodiment;
FIG. 7b is a diagram illustrating another application scenario in accordance with an illustrative embodiment;
FIG. 8 is a flow diagram illustrating another method of image presentation in accordance with an exemplary embodiment;
FIG. 9 is a flow diagram illustrating another method of image presentation in accordance with an exemplary embodiment;
FIG. 10 is a flow diagram illustrating another method of image presentation in accordance with an exemplary embodiment;
FIG. 11 is a flow diagram illustrating another method of image presentation in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating an apparatus for image presentation in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating another apparatus for image presentation in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating another apparatus for image presentation in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating another apparatus for image presentation in accordance with an exemplary embodiment;
FIG. 16 is a block diagram illustrating another apparatus for image presentation in accordance with an exemplary embodiment;
FIG. 17 is a block diagram illustrating an apparatus for image presentation according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed in compliance with the data protection laws and policies of the relevant jurisdiction and with the authorization of the owner of the corresponding device.
In the description that follows, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
Before describing the image display method, apparatus, and storage medium provided by the present disclosure, an application scenario relevant to the various embodiments is first described. The method is applied to scenes in which a target virtual image is displayed. By the principle of optical path superposition, the target virtual image is generally incident on the user's retina together with external ambient light; however, the target virtual image cannot block the incident ambient light, so the target virtual image finally seen by the user presents a semitransparent ghosting effect. For example, as shown in fig. 1, the flower (i.e., the target virtual image) seen by the user appears semitransparent and ghost-like under the irradiation of external ambient light.
At present, a transparent LCD screen is usually added behind the virtual-image display screen, and the region covered by the target virtual image is shielded by a target occlusion image, so as to avoid the ghosting caused by incident ambient light. In practice, however, the target virtual image may be projected at different depths in different application scenes. For example, in an office scene, the target virtual image is usually projected at a depth of about 0.5 m, so that the user can quickly switch between the target virtual image and real objects such as a computer screen or a book. In a driving scene, the target virtual image is typically projected onto the real road more than 10 m away, so that the user can quickly switch between the real road and the target virtual image. In scenes such as travel and shopping, the target virtual image is usually projected on or near the surface of the relevant object, with projection distances ranging from tens of centimeters to tens of meters. When the projection distance of the target virtual image changes, the radius of the circle of confusion of the target occlusion image changes as well. For example, in fig. 2, point A is the projection point of the target virtual image, point B is the display point of the target occlusion image, point A′ is the imaging point of the target virtual image, and point B′ is the imaging point of the target occlusion image. As the projection distance of the target virtual image (the distance between points A and B in the figure) grows, the distance between the imaging point of the target virtual image (A′) and the imaging point of the target occlusion image (B′) grows, so the radius of the circle of confusion of the target occlusion image (d in the figure) grows. The size of the edge halo seen by the user therefore cannot stay consistent across target virtual images at different projection distances, which degrades the visual experience; the inconsistency is especially noticeable in strong outdoor light. For example, as shown in fig. 3a, the flower (i.e., the target virtual image) is projected in an indoor scene: the projection distance is short, the radius of the circle of confusion of the target occlusion image is small, and the halo at the edge of the flower seen by the user is narrow. As shown in fig. 3b, when the flower is projected in an outdoor scene, the projection distance is long, the radius of the circle of confusion is large, and the edge halo is wide. Comparing fig. 3a and 3b, when the user views target virtual images at different projection distances, the edge halo effects are inconsistent, which affects the visual experience.
In addition, some technical solutions shield the area covered by the target virtual image with a multi-layer LCD screen or a micro-lens array, but such solutions are complex to implement, costly, and hard to make light and thin, which hinders daily use.
To solve the above problems, the present disclosure provides an image display method, an image display device, and a storage medium, which generate an image to be superimposed according to the user's pupil radius, the viewing distance and the image to be occluded, superimpose the image to be superimposed and the image to be occluded to obtain a target occlusion image, and display the target occlusion image and the target virtual image on a first display screen and a second display screen, respectively. Therefore, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.
The present disclosure is described below with reference to specific examples.
Fig. 4 is a flowchart of an image display method according to an exemplary embodiment. The method is applied to an image display device that includes a first display screen and a second display screen correspondingly disposed, the first display screen being disposed on a side close to the back of the second display screen. The image display device may be, for example, an optical see-through near-eye display device such as AR glasses; the first display screen may be, for example, a transparent LCD screen or a PEDOT transparent film, and the second display screen may be a half-mirror. When the user uses the image display device, external ambient light passes through the second display screen and then the first display screen before entering the user's retinas. The back of the second display screen is the side opposite to the front side on which the target virtual image is displayed. As shown in fig. 4, the method includes the following steps:
in step 101, an image to be occluded corresponding to a target virtual image is acquired.
The image to be occluded is obtained from the target virtual image and has the same shape as the target virtual image. For example, as shown in fig. 5, X is the target virtual image obtained according to a preset projection distance, and the images to be occluded for the left and right eyes (m and n in the figure) on the first display screen (Y in the figure) can be obtained from the view-cone directions (the directions of the user's sight lines) running from the pupils of the user's two eyes to the target virtual image. The view-cone directions can be obtained from the pupil positions of the user's two eyes, which may be preset positions or acquired in real time through eye tracking. A minimal sketch of this projection is given below.
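The patent text itself gives no implementation for this projection. As a minimal illustrative sketch (the pinhole model, the coordinate frame, and all numeric values here are assumptions, not from the source), the occluded region can be found by intersecting the ray from each pupil through each vertex of the target virtual image with the plane of the first display screen:

```python
import numpy as np

def project_to_display(pupil: np.ndarray, vertices: np.ndarray, z_display: float) -> np.ndarray:
    """Intersect the rays pupil -> vertex with the display plane z = z_display
    (pinhole model of the view cone; call once per eye)."""
    t = (z_display - pupil[2]) / (vertices[:, 2] - pupil[2])  # ray parameter per vertex
    return pupil + t[:, None] * (vertices - pupil)            # intersections on the screen plane

# Hypothetical example: a 0.4 m square virtual image 2 m ahead, screen 2 cm from the eye.
pupil_left = np.array([-0.03, 0.0, 0.0])
quad = np.array([[-0.2, -0.2, 2.0], [0.2, -0.2, 2.0],
                 [0.2, 0.2, 2.0], [-0.2, 0.2, 2.0]])
print(project_to_display(pupil_left, quad, z_display=0.02))
```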
In step 102, a pupil radius of a user and a viewing distance between the user's eye and the first display screen are obtained.
For example, the pupil radius of the user may be obtained in real time by an eye tracking method, or may be a preset pupil radius. Likewise, the viewing distance between the eyes of the user and the first display screen may be acquired in real time by an eye tracking method, or may be a predetermined distance.
In step 103, an image to be superimposed is determined according to the pupil radius, the viewing distance and the image to be occluded.
In this step, an image to be superimposed may be obtained according to the pupil radius, the viewing distance, and the image to be occluded, which are obtained in the above steps, and the shape of the image to be superimposed is the same as the shape of the image to be occluded.
As shown in fig. 6, the determining the image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded in step 103 may include the following steps:
in step 1031, a first radius of a first circle of confusion of the image to be superimposed is determined according to the pupil radius and the viewing distance.
As shown in fig. 2, O represents the position of the user's pupil, Y represents the position of the first display screen, D represents the user's pupil radius, and the viewing distance is the distance between the user's eyes and the first display screen (Ub in the figure).
For example, the first radius of the first circle of confusion of the image to be superimposed may be obtained from the pupil radius and the viewing distance. The original formula image is not reproduced in this text; from the variable definitions below and the similar-triangle geometry of fig. 2, the relation can be reconstructed as:

d1 = (D × Va) / Ub

where d1 is the first radius of the first circle of confusion, Va is the image distance of the target virtual image (the distance from the pupil to the retina), D is the pupil radius, and Ub is the viewing distance.
As shown in fig. 2, Z indicates the position of the user's retina, and Va is the distance from the pupil to the retina of the user, i.e., the image distance of the above-mentioned target virtual image.
In step 1032, a second radius of a second circle of confusion in the image to be occluded is obtained.
The second radius of the second circle of confusion in the image to be occluded reflects the degree of blurring of the image to be occluded: the larger the second radius, the more blurred the image to be occluded.
Specifically, in step 1032, the obtaining of the second radius of the second circle of confusion in the image to be occluded may include the following steps:
and S1, acquiring the target projection distance of the target virtual object.
As shown in fig. 2, the target projection distance is the distance between the position of the user's pupil (O in the figure) and the position of the target virtual image (X in the figure), i.e., Ua in the figure.
S2, determining a second radius of a second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
For example, according to the target projection distance, the pupil radius and the viewing distance, the second radius of the second circle of confusion in the image to be occluded can be obtained. As with the first formula, the original formula image is not reproduced; the reconstruction consistent with the geometry of fig. 2 is:

d2 = D × Va × (Ua − Ub) / (Ua × Ub)

where d2 is the second radius of the second circle of confusion, Va is the image distance of the target virtual image, D is the pupil radius, Ub is the viewing distance, and Ua is the target projection distance. As Ua grows, d2 grows toward d1, matching the observation above that a larger projection distance yields a larger circle of confusion.
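For concreteness, here is a small numeric sketch of the two radii and of the width used in step 1033 below; the formulas are the reconstructions given above, and all numeric values are assumed rather than taken from the patent:

```python
def confusion_radii(D: float, Va: float, Ub: float, Ua: float) -> tuple[float, float]:
    """d1: first radius (the target blur, i.e. projection distance -> infinity);
    d2: second radius (blur of the occlusion mask at projection distance Ua)."""
    d1 = D * Va / Ub
    d2 = D * Va * (Ua - Ub) / (Ua * Ub)
    return d1, d2

# Assumed values: 2 mm pupil radius, 17 mm image distance, 2 cm viewing distance, 2 m projection.
d1, d2 = confusion_radii(D=2e-3, Va=17e-3, Ub=0.02, Ua=2.0)
width = d1 - d2  # width of the image to be superimposed (steps S1 and S2 of step 1033)
```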
In step 1033, the image to be superimposed is determined according to the first radius, the second radius and the image to be occluded.
The width of the image to be superimposed can be obtained from the first radius and the second radius, and the shape of the image to be superimposed is determined from the image to be occluded.
Specifically, the step 1033 of determining the image to be superimposed according to the first radius, the second radius, and the image to be occluded may include the following steps:
and S1, determining the difference value between the first radius and the second radius, and taking the difference value as the width of the image to be superposed.
S2, determining the image to be superimposed according to the first radius, the width and the image to be occluded.
The width may be used as the image width of the image to be superimposed, and the first radius as the radius of the first circle of confusion of the image to be superimposed; the shape of the image to be superimposed is determined from the shape of the image to be occluded. That is to say, the image to be superimposed obtained in the above steps is an edge contour band surrounding the image to be occluded, which compensates for the inconsistent edge halos of target virtual images at different projection distances.
Illustratively, the image to be superimposed may be obtained from the first radius, the width and the image to be occluded by a graphics algorithm, such as a Ray-Tracing algorithm, or a processed image to be superimposed may be obtained by gradient blurring. The image to be superimposed may also be obtained through pre-stored edge-image templates: the first radius, the width and the image to be occluded are matched against the data of each template (the radius of the circle of confusion in each edge image, the image width, and the corresponding image to be occluded), and the successfully matched edge-image template is taken as the image to be superimposed. A minimal sketch of the gradient-blur variant follows.
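This sketch uses a Euclidean distance transform, which is an implementation choice assumed here, not something the patent prescribes:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gradient_ring(mask: np.ndarray, width_px: float) -> np.ndarray:
    """Build the image to be superimposed as an alpha ramp around `mask`
    (a boolean array marking the image to be occluded): 1 at the mask edge,
    fading linearly to 0 at `width_px` pixels outside it."""
    dist = distance_transform_edt(~mask)             # per background pixel: distance to the mask
    ring = np.clip(1.0 - dist / width_px, 0.0, 1.0)
    ring[mask] = 0.0                                 # the ring lies strictly outside the mask
    return ring
```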
In step 104, the image to be superimposed and the image to be occluded are superimposed to obtain the target occlusion image.
In this step, the image to be superimposed and the image to be occluded obtained in the above steps may be superimposed to obtain the target occlusion image, so that the edge halo of the target virtual image finally seen by the user remains consistent. The colors of the target occlusion images corresponding to different target virtual images may be unified into a preset color; for example, the color of the target occlusion image may be black.
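Continuing the sketch above (the single-channel alpha representation, rendered in the preset color on the transparent LCD, is an assumption of this sketch), the superposition of step 104 reduces to a per-pixel maximum of the opaque core and the gradient ring:

```python
import numpy as np

def compose_occlusion(mask: np.ndarray, ring: np.ndarray) -> np.ndarray:
    """Target occlusion image as an opacity map: opaque core plus gradient edge,
    displayed in a unified preset color (e.g. black) on the first display screen."""
    return np.maximum(mask.astype(float), ring)
```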
In step 105, the target virtual image is displayed at a first target position of the second display screen, and the target occlusion image is displayed at a second target position corresponding to the first target position in the first display screen.
Illustratively, fig. 7a shows the scene finally seen by the user after fig. 3a is processed by the above method steps, and fig. 7b shows the scene finally seen by the user after fig. 3b is processed by the above method steps. As can be seen from fig. 7a and 7b, with the above method the edge halo of the target virtual image seen by the user is consistent across different projection distances, which improves the user's visual experience.
In this step, the target virtual image is displayed at the first target position of the second display screen; the target virtual image can be projected onto the second display screen by a projection device. The target occlusion image is displayed at the second target position in the first display screen corresponding to the first target position. As shown in fig. 5, the second target position of the target occlusion image on the first display screen can be obtained from the projection distance and the position of the target virtual image; that is, the positions of m and n in fig. 5 are the second target positions of the target occlusion image on the first display screen. As can be seen from the figure, the target virtual image and the target occlusion image lie in the same view-cone direction.
By adopting the method, the image to be superimposed can be generated according to the user's pupil radius, the viewing distance and the image to be occluded, so that the image to be superimposed and the image to be occluded are superimposed to obtain the target occlusion image, and the target occlusion image and the target virtual image are displayed on the first display screen and the second display screen, respectively. Therefore, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.
The method further considers that when the illumination intensity of external ambient light is too strong, the edge halo of the target virtual image seen by the user is not obvious, which affects the user experience. Thus, as shown in fig. 8, the method may further include the following steps:
in step 106, the illumination intensity of the external ambient light is obtained.
In step 107, the gray-level value of the target occlusion image is adjusted according to the illumination intensity.
For example, a target gray value corresponding to the illumination intensity may be determined from a plurality of gray values through a preset gray correspondence, where the preset gray correspondence includes a correspondence between at least one illumination intensity value and a gray value. The gray value of the target occlusion image can then be adjusted according to the target gray value.
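A minimal sketch of the lookup; the table entries are purely hypothetical, since the patent only requires that some preset illumination-to-gray correspondence exists:

```python
import bisect

# Hypothetical preset correspondence: ambient illuminance (lux) -> gray value.
GRAY_TABLE = [(100, 40), (1_000, 80), (10_000, 160), (50_000, 230)]

def target_gray(lux: float) -> int:
    """Pick the gray value of the first table entry whose illuminance bound covers `lux`."""
    keys = [k for k, _ in GRAY_TABLE]
    i = min(bisect.bisect_left(keys, lux), len(GRAY_TABLE) - 1)
    return GRAY_TABLE[i][1]
```

The same lookup pattern applies to the screen-brightness adjustment of step 113 described below.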
In this way, the gray value of the target occlusion image can be adjusted for external ambient light of different illumination intensities, so that the user can clearly see the halo around the contour of the target virtual image, improving the viewing experience.
In addition, to further improve the user's visual experience, the position of the image to be superimposed can be shifted according to the irradiation direction of the external ambient light to form a shadow effect.
In one possible implementation, the irradiation direction of the external ambient light may be obtained by direct acquisition. As shown in fig. 9, the method may further include the following steps:
in step 108, the irradiation direction of the external ambient light is acquired.
In step 109, the image to be superimposed is moved by a preset distance along the illumination direction.
The image to be superimposed may be moved by a preset distance along the irradiation direction to form a shadow effect of the edge halo of the target virtual image in the irradiation direction. The preset distance may be set in advance, i.e., the user can adjust it according to personal habits. A sketch of the shift follows.
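A minimal sketch of the shift; the screen-space direction convention and the bilinear interpolation are assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def move_along_light(ring: np.ndarray, direction_deg: float, dist_px: float) -> np.ndarray:
    """Translate the image to be superimposed by `dist_px` pixels along the
    illumination direction (0 degrees = +x axis of the image, counter-clockwise)."""
    dx = dist_px * np.cos(np.radians(direction_deg))
    dy = dist_px * np.sin(np.radians(direction_deg))
    return nd_shift(ring, shift=(dy, dx), order=1, mode="constant", cval=0.0)
```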
Correspondingly, the superimposing the image to be superimposed and the image to be occluded in step 104 to obtain the target occlusion image may include: superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
In another possible implementation, the irradiation direction of the external ambient light may be derived from the time and the geographic location. As shown in fig. 10, the method may further include the following steps:
in step 110, current time information and geographic location information are obtained.
In step 111, a target moving direction is determined according to the time information and the geographical position information.
The irradiation direction of the current external ambient light, i.e., the target moving direction, can be predicted from the current time information and the geographic position information.
In step 112, the image to be superimposed is moved by a preset distance along the target moving direction.
The image to be superimposed may be moved by a preset distance along the target moving direction to form a shadow effect of the edge halo of the target virtual image in the target moving direction.
Correspondingly, the superimposing the image to be superimposed and the image to be occluded in step 104 to obtain the target occlusion image includes: superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
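The patent does not say how the direction is predicted. As a deliberately coarse sketch (a production system would use a proper solar-position model; everything below is an assumption), the solar azimuth can be approximated from UTC time and longitude, and the result fed to the shift of the previous sketch after projection into screen space with the head pose:

```python
from datetime import datetime, timezone

def sun_azimuth_estimate(now_utc: datetime, longitude_deg: float) -> float:
    """Crude heuristic: map local solar time linearly from east (06:00) through
    south (12:00) to west (18:00); northern hemisphere assumed.
    Returns degrees clockwise from north."""
    solar_hour = (now_utc.hour + now_utc.minute / 60 + longitude_deg / 15) % 24
    return (90.0 + (solar_hour - 6.0) * 15.0) % 360.0

print(sun_azimuth_estimate(datetime.now(timezone.utc), longitude_deg=116.4))  # Beijing
```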
In this way, the position of the image to be superimposed can be shifted according to different irradiation directions of the external environment, forming a shadow of the target virtual image in the irradiation direction, so that the halo around the target virtual image finally seen by the user looks more realistic and the visual experience is improved.
In addition, it is considered that when the illumination intensity of the external ambient light is too strong, the user may be unable to look directly at the external environment. Thus, as shown in fig. 11, the method may further include the following step:
in step 113, the brightness value of the first display screen is adjusted according to the illumination intensity.
In this step, a target brightness value corresponding to the illumination intensity may be determined from a plurality of brightness values through a preset brightness correspondence, where the preset brightness correspondence includes a correspondence between at least one illumination intensity value and a brightness value. The brightness value of the first display screen can then be adjusted according to the target brightness value.
In this way, the brightness value of the first display screen can be adjusted for external ambient light of different illumination intensities, so that the first display screen acts like sunglasses when the illumination is too strong, improving the user experience.
By adopting the method, the image to be superimposed can be generated according to the user's pupil radius, the viewing distance and the image to be occluded, so that the image to be superimposed and the image to be occluded are superimposed to obtain the target occlusion image, and the target occlusion image and the target virtual image are displayed on the first display screen and the second display screen, respectively. Therefore, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.
Fig. 12 is a block diagram of an image display apparatus according to an exemplary embodiment. The apparatus is applied to an image display device that includes a first display screen and a second display screen correspondingly disposed, the first display screen being disposed on a side close to the back of the second display screen. As shown in fig. 12, the apparatus 200 includes:
a first obtaining module 201, configured to obtain an image to be occluded corresponding to a target virtual image;
a second obtaining module 202 configured to obtain a pupil radius of a user and a viewing distance between an eye of the user and the first display screen;
an image determining module 203 configured to determine an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded;
an image superimposing module 204 configured to superimpose the image to be superimposed and the image to be occluded to obtain the target occlusion image;
an image presentation module 205 configured to present the target virtual image at a first target position of the second display screen and present the target occlusion image at a second target position corresponding to the first target position in the first display screen.
Alternatively, as shown in fig. 13, the image determining module 203 includes:
a first determining sub-module 2031 configured to determine a first radius of a first circle of confusion of the image to be superimposed according to the pupil radius and the viewing distance;
an obtaining sub-module 2032 configured to obtain a second radius of a second circle of confusion in the image to be occluded;
a second determining submodule 2033 configured to determine the image to be superimposed according to the first radius, the second radius and the image to be occluded.
Optionally, the second determining submodule 2033 is configured to determine a difference between the first radius and the second radius, take the difference as the width of the image to be superimposed, and determine the image to be superimposed according to the first radius, the width and the image to be occluded.
Optionally, the obtaining sub-module 2032 is configured to obtain a target projection distance of the target virtual image, and determine the second radius of the second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
Optionally, as shown in fig. 14, the apparatus 200 further includes:
a third obtaining module 206 configured to obtain the illumination intensity of the external ambient light;
and the adjusting module 207 is configured to adjust the gray value of the target occlusion image according to the illumination intensity.
Optionally, the adjusting module 207 is configured to determine, according to the illumination intensity and through a preset gray correspondence, a target gray value corresponding to the illumination intensity from a plurality of gray values, where the preset gray correspondence includes a correspondence between at least one illumination intensity value and a gray value, and to adjust the gray value of the target occlusion image according to the target gray value.
Optionally, as shown in fig. 15, the apparatus 200 further includes:
a fourth obtaining module 208 configured to obtain an irradiation direction of the external ambient light;
a first moving module 209 configured to move the image to be superimposed by a preset distance along the illumination direction;
the image superimposing module 204 is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, as shown in fig. 16, the apparatus 200 further includes:
a fifth obtaining module 210 configured to obtain current time information and geographical location information;
a direction determining module 211 configured to determine a target moving direction according to the time information and the geographic position information;
a second moving module 212 configured to move the image to be superimposed by a preset distance along the target moving direction;
the image superimposing module 204 is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
Optionally, the adjusting module 207 is configured to adjust the brightness value of the first display screen according to the illumination intensity.
By adopting the apparatus, the image to be superimposed can be generated according to the user's pupil radius, the viewing distance and the image to be occluded, so that the image to be superimposed and the image to be occluded are superimposed to obtain the target occlusion image, and the target occlusion image and the target virtual image are displayed on the first display screen and the second display screen, respectively. Therefore, target occlusion images with a consistent effect can be superimposed behind target virtual images at different projection distances, so that the edge halo of the target virtual image finally seen by the user remains consistent, giving the user a better visual experience.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of image presentation provided by the present disclosure.
Fig. 17 is a block diagram illustrating an apparatus 300 for image presentation according to an example embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 17, the apparatus 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an interface for input/output (I/O) 312, a sensor component 314, and a communication component 316.
The processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the image presentation methods described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the apparatus 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 306 provide power to the various components of device 300. The power components 306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 300 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 314 includes one or more sensors for providing various aspects of state assessment for device 300. For example, sensor assembly 314 may detect an open/closed state of device 300, the relative positioning of components, such as a display and keypad of device 300, the change in position of device 300 or a component of device 300, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and the change in temperature of device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The device 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described image presentation methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions, executable by the processor 320 of the apparatus 300 to perform the method of image presentation described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the method of image presentation described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. An image display method, applied to an image display device, wherein the image display device comprises a first display screen and a second display screen arranged correspondingly, the first display screen being arranged on a side close to the back of the second display screen, and the method comprises:
acquiring an image to be occluded corresponding to a target virtual image;
acquiring a pupil radius of a user and a viewing distance between an eye of the user and the first display screen;
determining an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded;
superimposing the image to be superimposed and the image to be occluded to obtain a target occlusion image; and
displaying the target virtual image at a first target position of the second display screen, and displaying the target occlusion image at a second target position in the first display screen corresponding to the first target position.
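For illustration only (not part of the claim language): the sketch below shows one way the superimposition of claim 1 could be realized, assuming the occlusion content is a grayscale mask with values in [0, 1] and the image to be superimposed is a soft ring hugging the mask boundary. The function name, the SciPy distance transform, and the linear falloff are all our assumptions, not the patent's implementation.

```python
import numpy as np
from scipy import ndimage

def target_occlusion_image(occlusion_mask: np.ndarray, ring_width_px: int) -> np.ndarray:
    """Blend an opaque occlusion mask with a soft surrounding ring
    (a stand-in for the 'image to be superimposed') of the given width."""
    # Pixel distance from each background pixel to the nearest mask pixel.
    background = occlusion_mask < 0.5
    dist = ndimage.distance_transform_edt(background)
    # Linear falloff over the ring width stands in for the blur halo.
    ring = np.clip(1.0 - dist / max(ring_width_px, 1), 0.0, 1.0)
    # Superimpose: keep the stronger of mask and ring at each pixel.
    return np.maximum(occlusion_mask, ring)
```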
2. The method of claim 1, wherein the determining an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded comprises:
determining a first radius of a first circle of confusion of the image to be superimposed according to the pupil radius and the viewing distance;
acquiring a second radius of a second circle of confusion in the image to be occluded; and
determining the image to be superimposed according to the first radius, the second radius and the image to be occluded.
3. The method of claim 2, wherein the determining the image to be superimposed according to the first radius, the second radius and the image to be occluded comprises:
determining a difference between the first radius and the second radius;
taking the difference as a width of the image to be superimposed; and
determining the image to be superimposed according to the first radius, the width and the image to be occluded.
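As a worked instance of claim 3's arithmetic (the function name and the clamp at zero are our additions):

```python
def overlay_ring_width(first_radius: float, second_radius: float) -> float:
    """Width of the image to be superimposed: the first circle-of-confusion
    radius minus the second, clamped at zero for the degenerate case."""
    return max(first_radius - second_radius, 0.0)
```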
4. The method of claim 2, wherein the acquiring a second radius of a second circle of confusion in the image to be occluded comprises:
acquiring a target projection distance of the target virtual image; and
determining the second radius of the second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
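The claims do not state the formula relating these quantities. One plausible similar-triangles reading, offered purely as an assumption: with a pupil of radius p held at viewing distance v from the occlusion screen while the eye focuses on the virtual image at projection distance L, the blur circle of a screen pixel, measured in the screen plane, has radius roughly p * |L - v| / L.

```python
def coc_radius_on_screen(pupil_radius_mm: float,
                         viewing_distance_mm: float,
                         projection_distance_mm: float) -> float:
    """Similar-triangles estimate of the circle-of-confusion radius of the
    occlusion screen when the eye focuses at the virtual image; this
    formula is our assumption, not quoted from the patent."""
    p, v, L = pupil_radius_mm, viewing_distance_mm, projection_distance_mm
    return p * abs(L - v) / L
```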
5. The method of claim 1, further comprising:
acquiring an illumination intensity of external ambient light; and
adjusting a gray value of the target occlusion image according to the illumination intensity.
6. The method of claim 5, wherein the adjusting a gray value of the target occlusion image according to the illumination intensity comprises:
determining, from a plurality of gray values and through a preset gray-value correspondence, a target gray value corresponding to the illumination intensity, wherein the preset gray-value correspondence comprises a correspondence between at least one illumination intensity value and a gray value; and
adjusting the gray value of the target occlusion image according to the target gray value.
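A minimal sketch of claim 6's lookup, assuming illuminance is measured in lux and the preset correspondence is a small threshold table; every value below is illustrative, not taken from the patent.

```python
# Hypothetical preset correspondence: (illuminance threshold in lux, gray value).
GRAY_BY_LUX = [(50, 40), (200, 90), (1000, 160), (10000, 230)]

def target_gray_value(lux: float) -> int:
    """Return the gray value paired with the first illuminance threshold
    that covers `lux`, falling back to the last entry above the table."""
    for threshold, gray in GRAY_BY_LUX:
        if lux <= threshold:
            return gray
    return GRAY_BY_LUX[-1][1]
```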
7. The method of claim 1, further comprising:
acquiring an irradiation direction of external ambient light; and
moving the image to be superimposed by a preset distance along the irradiation direction;
wherein the superimposing the image to be superimposed and the image to be occluded to obtain a target occlusion image comprises:
superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
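A sketch of the shift in claim 7, assuming the irradiation direction arrives as an angle in image coordinates and the preset distance in pixels; the SciPy interpolation call and the angle convention are our choices.

```python
import numpy as np
from scipy import ndimage

def shift_overlay(overlay: np.ndarray, direction_deg: float,
                  preset_distance_px: float) -> np.ndarray:
    """Translate the image to be superimposed by a preset distance along
    the given direction (0 deg = +x axis, measured counterclockwise)."""
    dy = preset_distance_px * np.sin(np.deg2rad(direction_deg))
    dx = preset_distance_px * np.cos(np.deg2rad(direction_deg))
    # Bilinear interpolation; pixels shifted in from outside become 0.
    return ndimage.shift(overlay, shift=(dy, dx), order=1,
                         mode="constant", cval=0.0)
```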
8. The method of claim 1, further comprising:
acquiring current time information and geographic location information;
determining a target moving direction according to the time information and the geographic location information; and
moving the image to be superimposed by a preset distance along the target moving direction;
wherein the superimposing the image to be superimposed and the image to be occluded to obtain a target occlusion image comprises:
superimposing the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
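Claim 8 leaves the mapping from time and location to a direction unspecified. The deliberately crude placeholder below treats the dominant light source as the sun, due south at local solar noon and sweeping about 15 degrees per hour; a real implementation would use a proper solar-ephemeris model, and every detail here is an assumption.

```python
from datetime import datetime, timezone

def sun_azimuth_deg(utc: datetime, longitude_deg: float) -> float:
    """Crude solar azimuth from UTC time and longitude: local solar time
    advances 15 degrees of longitude per hour, and the azimuth passes
    180 (due south) at solar noon. Ignores latitude, season, and the
    equation of time; illustration only."""
    solar_hours = (utc.hour + utc.minute / 60.0 + longitude_deg / 15.0) % 24.0
    return (180.0 + 15.0 * (solar_hours - 12.0)) % 360.0

# Example: 06:00 UTC at longitude 116.4 E gives a mid-afternoon azimuth.
print(sun_azimuth_deg(datetime(2022, 3, 3, 6, 0, tzinfo=timezone.utc), 116.4))
```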
9. The method of claim 5, further comprising:
adjusting a brightness value of the first display screen according to the illumination intensity.
10. An image presentation apparatus, applied to an image display device, wherein the image display device comprises a first display screen and a second display screen arranged correspondingly, the first display screen being arranged on a side close to the back of the second display screen, and the apparatus comprises:
a first acquisition module configured to acquire an image to be occluded corresponding to a target virtual image;
a second acquisition module configured to acquire a pupil radius of a user and a viewing distance between an eye of the user and the first display screen;
an image determination module configured to determine an image to be superimposed according to the pupil radius, the viewing distance and the image to be occluded;
an image superimposition module configured to superimpose the image to be superimposed and the image to be occluded to obtain a target occlusion image; and
an image presentation module configured to present the target virtual image at a first target position of the second display screen and present the target occlusion image at a second target position in the first display screen corresponding to the first target position.
11. The apparatus of claim 10, wherein the image determination module comprises:
a first determination submodule configured to determine a first radius of a first circle of confusion of the image to be superimposed according to the pupil radius and the viewing distance;
an acquisition submodule configured to acquire a second radius of a second circle of confusion in the image to be occluded; and
a second determination submodule configured to determine the image to be superimposed according to the first radius, the second radius and the image to be occluded.
12. The apparatus of claim 11, wherein the second determination submodule is configured to: determine a difference between the first radius and the second radius; take the difference as a width of the image to be superimposed; and determine the image to be superimposed according to the first radius, the width and the image to be occluded.
13. The apparatus of claim 11, wherein the acquisition submodule is configured to: acquire a target projection distance of the target virtual image; and determine the second radius of the second circle of confusion in the image to be occluded according to the target projection distance, the pupil radius and the viewing distance.
14. The apparatus of claim 10, further comprising:
a third acquisition module configured to acquire an illumination intensity of external ambient light; and
an adjustment module configured to adjust a gray value of the target occlusion image according to the illumination intensity.
15. The apparatus of claim 14, wherein the adjustment module is configured to: determine, from a plurality of gray values and through a preset gray-value correspondence, a target gray value corresponding to the illumination intensity, wherein the preset gray-value correspondence comprises a correspondence between at least one illumination intensity value and a gray value; and adjust the gray value of the target occlusion image according to the target gray value.
16. The apparatus of claim 10, further comprising:
a fourth acquisition module configured to acquire an irradiation direction of external ambient light; and
a first moving module configured to move the image to be superimposed by a preset distance along the irradiation direction;
wherein the image superimposition module is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
17. The apparatus of claim 10, further comprising:
a fifth acquisition module configured to acquire current time information and geographic location information;
a direction determination module configured to determine a target moving direction according to the time information and the geographic location information; and
a second moving module configured to move the image to be superimposed by a preset distance along the target moving direction;
wherein the image superimposition module is configured to superimpose the moved image to be superimposed and the image to be occluded to obtain the target occlusion image.
18. The apparatus of claim 14, wherein the adjustment module is further configured to adjust a brightness value of the first display screen according to the illumination intensity.
19. An apparatus for image presentation, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of the method of any one of claims 1 to 9 when executing the executable instructions stored in the memory.
20. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 9.
CN202210204883.2A 2022-03-03 2022-03-03 Image display method, device and storage medium Pending CN114564108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210204883.2A CN114564108A (en) 2022-03-03 2022-03-03 Image display method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114564108A (en) 2022-05-31

Family

ID=81716920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210204883.2A Pending CN114564108A (en) 2022-03-03 2022-03-03 Image display method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114564108A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016115874A1 (en) * 2015-01-21 2016-07-28 成都理想境界科技有限公司 Binocular ar head-mounted device capable of automatically adjusting depth of field and depth of field adjusting method
US20170301145A1 (en) * 2016-04-19 2017-10-19 Adobe Systems Incorporated Image Compensation for an Occluding Direct-View Augmented Reality System
CN107376349A (en) * 2016-05-10 2017-11-24 迪士尼企业公司 The virtual image being blocked is shown
US20180210208A1 (en) * 2017-01-25 2018-07-26 Samsung Electronics Co., Ltd. Head-mounted apparatus, and method thereof for generating 3d image information
CN108182730A (en) * 2018-01-12 2018-06-19 北京小米移动软件有限公司 Actual situation object synthetic method and device
CN111862866A (en) * 2020-07-09 2020-10-30 北京市商汤科技开发有限公司 Image display method, device, equipment and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周雅; 马晋涛; 刘宪鹏; 王红; 常军: "Research on an optical see-through augmented reality display system with an addressable light-shielding mechanism", Chinese Journal of Scientific Instrument, no. 06, pages 1134-1138 *
张枝: "Depth-of-field simulation in virtual systems based on image post-processing", Journal of Changzhou Institute of Technology, no. 06, pages 34-37 *
陈秉塬; 丁斌; 沈阳阳: "A GPU-based depth-of-field rendering method with partial occlusion", Science Technology and Engineering, no. 35, pages 234-237 *

Similar Documents

Publication Publication Date Title
CN108182730B (en) Virtual and real object synthesis method and device
CN108986199B (en) Virtual model processing method and device, electronic equipment and storage medium
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
US10026381B2 (en) Method and device for adjusting and displaying image
CN113539192B (en) Ambient light detection method and device, electronic equipment and storage medium
CN107230428B (en) Curved screen display method and device and terminal
CN108810422B (en) Light supplementing method and device for shooting environment and computer readable storage medium
CN111970456A (en) Shooting control method, device, equipment and storage medium
CN107797662B (en) Viewing angle control method and device and electronic equipment
CN112365863A (en) Brightness adjusting method and device, display equipment and storage medium
KR20180132095A (en) Method and apparatus for gamut mapping
CN110992268B (en) Background setting method, device, terminal and storage medium
CN109934168B (en) Face image mapping method and device
CN109544698B (en) Image display method and device and electronic equipment
CN114564108A (en) Image display method, device and storage medium
CN115100253A (en) Image comparison method, device, electronic equipment and storage medium
CN112331158B (en) Terminal display adjusting method, device, equipment and storage medium
EP3848894A1 (en) Method and device for segmenting image, and storage medium
CN114390189A (en) Image processing method, device, storage medium and mobile terminal
CN114070998B (en) Moon shooting method and device, electronic equipment and medium
CN113315903B (en) Image acquisition method and device, electronic equipment and storage medium
CN111835977B (en) Image sensor, image generation method and device, electronic device, and storage medium
KR20150009744A (en) Method for providing user interface of mobile terminal using transparent flexible display
CN109413232B (en) Screen display method and device
CN107918514B (en) Display method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination