CN113132642A - Image display method and device and electronic equipment

Info

Publication number: CN113132642A (granted as CN113132642B)
Application number: CN202110463669.4A
Authority: CN (China)
Inventor: 吴文龙
Applicant and current assignee: Vivo Mobile Communication Co Ltd
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The application discloses an image display method and device and electronic equipment, belonging to the technical field of terminals. The method comprises the following steps: acquiring user eye parameters and position parameters of the user relative to a terminal device, wherein the user eye parameters include at least one of the user's type of ametropia and the user's diopter number; collecting target scene information; determining display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters, wherein the display parameters include a target display size; and displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size.

Description

Image display method and device and electronic equipment
Technical Field
The application belongs to the technical field of terminals, and particularly relates to an image display method and device and electronic equipment.
Background
Ametropia (refractive error) means that, when the eye is not accommodating, parallel rays of light refracted by the eye cannot form a clear image on the retina; the image instead forms in front of or behind the retina. Ametropia includes hyperopia, myopia, astigmatism, and the like. The number of people with ametropia is estimated to be large, as most people have some degree of refractive error, and many of them cannot see distant scenes clearly, which brings great inconvenience to daily life.
Currently, wearing glasses is one of the simplest and safest ways to address ametropia, but accidents happen in daily life: the glasses may be damaged unexpectedly and become unwearable, and a replacement pair may not be obtainable at short notice; or the glasses may simply have been misplaced. For people with ametropia, whether they are without their glasses for a short time or a long time, they cannot see distant scenes clearly.
Disclosure of Invention
Embodiments of the present application aim to provide an image display method and device and electronic equipment, which can solve the prior-art problem that people with ametropia cannot see distant scenes clearly when they are without their glasses.
In a first aspect, an embodiment of the present application provides an image display method, where the method includes:
acquiring user eye parameters and position parameters of a user relative to a terminal device, wherein the user eye parameters comprise: at least one of a type of ametropia of the user and a diopter number of the user;
collecting target scene information;
determining display parameters of the target scene information on the terminal equipment based on the user eye parameters and the position parameters, wherein the display parameters include a target display size;
and displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of partial image content or all image content of the target image is the same as the target display size.
In a second aspect, an embodiment of the present application provides an apparatus for displaying an image, the apparatus including:
an obtaining module, configured to obtain an eye parameter of a user and a position parameter of the user relative to a terminal device, where the eye parameter of the user includes: at least one of a type of ametropia of the user and a diopter number of the user;
the acquisition module is used for acquiring target scene information;
a parameter module, configured to determine display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters, where the display parameters include a target display size;
and the display module is used for displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of partial image content or all image content of the target image is the same as the target display size.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the present application, the user eye parameters and the position parameters of the user relative to the terminal device can be obtained, where the user eye parameters include: at least one of the user's type of ametropia and the user's diopter number. Based on the user eye parameters, it is determined whether the user's vision is impaired and, if it is, to what extent. Based on the position parameters of the user relative to the terminal device, the positional relationship between the user and the terminal device during use is determined. Target scene information is collected; that is, the scene information the user wants to view is acquired through the terminal device. Display parameters of the target scene information on the terminal device are then determined based on the user eye parameters and the position parameters, the display parameters including a target display size; in other words, the user's vision impairment and the positional relationship between the user and the terminal device serve as the basis for calculating the target display size. A target image is then displayed according to the display parameters, where the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size. In this way, the embodiment of the present application presents a distant scene, via the target image, on the nearby terminal device, and determines the display size of the target image based on the user's eye parameters and the user's position relative to the terminal device, so that the target image is displayed clearly in front of the user at an appropriate size. In particular, most people with ametropia can see a distant scene clearly even without wearing glasses.
Drawings
FIG. 1 is a flow chart of the steps of an image display method provided by an embodiment of the present application;
FIG. 2 is a first schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an eye image provided by an embodiment of the present application;
FIG. 4 is a second schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 5 is a first schematic diagram of an application of the image display method in an embodiment of the present application;
FIG. 6 is a second schematic diagram of an application of the image display method in an embodiment of the present application;
FIG. 7 is a block diagram of an image display apparatus provided by an embodiment of the present application;
FIG. 8 is a first schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 9 is a second schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are generally used herein in a generic sense and do not limit the number of the objects they qualify; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The following describes in detail the method for displaying an image according to the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, a method for displaying an image according to an embodiment of the present application includes:
step 101: and acquiring the eye parameters of the user and the position parameters of the user relative to the terminal equipment.
In this step, the user eye parameters can represent whether the user's vision is impaired and, if so, how. Specifically, the user eye parameters include: at least one of the user's type of ametropia and the user's diopter number. Ametropia types include myopia, astigmatism, hyperopia, presbyopia and the like. Users with unimpaired vision can be treated as having a special type of ametropia (one with no refractive error). The user may enter his or her own eye parameters into the terminal device in advance; of course, the pre-entered eye parameters need not be the user's true ametropia type and/or diopter number, and may be any ametropia type and/or diopter number.
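As an illustration of how such pre-entered parameters might be represented in software, the following is a minimal Python sketch; the class and field names are assumptions made for this illustration and are not part of the disclosed embodiment:

```python
from dataclasses import dataclass
from enum import Enum

class AmetropiaType(Enum):
    NONE = 0         # unimpaired vision, treated as a special refractive type
    MYOPIA = 1
    HYPEROPIA = 2
    ASTIGMATISM = 3
    PRESBYOPIA = 4

@dataclass
class UserEyeParameters:
    ametropia_type: AmetropiaType  # type of refractive error entered by the user
    diopter_number: float          # diopter number entered by the user, e.g. 500

# Entered once in advance; the values need not match the user's true prescription.
params = UserEyeParameters(AmetropiaType.MYOPIA, 500)
```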
Step 102: collecting target scene information.
In this step, the terminal device is a device with an image capture function, such as a mobile phone or a tablet computer. The image acquisition unit with which the terminal device collects the target scene information may be a camera, but is not limited thereto. Specifically, the terminal device may collect target scene information in a target direction, where the target direction depends on the attitude of the terminal device and the position of the image acquisition unit on the terminal device. Taking a mobile phone as the terminal device for explanation, the image acquisition unit may be a rear camera on the phone, and the target direction is the direction the back of the phone faces, i.e., the shooting direction of the rear camera. The user can choose the target direction as needed so that the terminal device collects the scene information the user requires. As shown in FIG. 2, a user who wants to view the scene in front of him only needs to point the rear camera of the phone forward; the scene information in front of him is then collected upon a first input to the terminal device.
Step 103: determining display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters.
In this step, the display parameters include a target display size, which indicates the size at which the target scene information is displayed on the display screen of the terminal device. It can be understood that the terminal device has a preset display size, determined by the field of view of the terminal device: when the scenes within the field of view are displayed at the preset display size, all of them fit on the display screen of the terminal device. The target display size may be larger than, smaller than, or equal to the preset display size. Here, so that users with different vision impairments, at different distances from the terminal device, can all see the imaging of the target scene information clearly, the user eye parameters and the position parameters of the user relative to the terminal device serve as the basis for calculating the target display size. Preferably, the display parameters further include at least one of a target resolution value and a target brightness value. The target resolution value indicates the resolution at which the target scene information is displayed on the display screen of the terminal device; the target brightness value indicates the brightness at which it is displayed.
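As a small illustration of the structure these display parameters might take, a sketch is given below; the names are assumptions for illustration, and the optional fields reflect that resolution and brightness are preferred but not required:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayParameters:
    # Target display size, expressed here as a multiple of the preset display
    # size determined by the device's field of view (1.0 == preset size).
    target_display_size: float
    target_resolution: Optional[int] = None    # optional target resolution value
    target_brightness: Optional[float] = None  # optional target brightness value
```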
Step 104: displaying the target image according to the display parameters.
In this step, the target image is an image corresponding to the target scene information. As can be seen from FIG. 2, the scenery in the target scene information includes a car, a house and a tree, and the target image displayed on the display screen of the terminal device includes images of the car, the house and the tree, so that the distant scene is displayed on the terminal device. After collecting the target scene information, the terminal device generates and displays an image based on the collected target scene information and the calculated display parameters. The target image may be displayed on the terminal device in a picture format or a video format. Preferably, the terminal device collects target scene information once every preset time length, and generates and displays a target image based on the newly collected target scene information and the calculated display parameters, where the preset time length is short, for example 10 milliseconds or 100 milliseconds. In this way, the terminal device can display the target image corresponding to the target scene information in real time.
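A minimal sketch of this periodic capture-and-display loop is shown below; `camera`, `screen` and `render_target_image` are hypothetical stand-ins for the device's actual capture and rendering interfaces:

```python
import time

REFRESH_INTERVAL_S = 0.1  # preset time length, e.g. 100 milliseconds

def run_display_loop(camera, screen, render_target_image, display_params):
    """Re-collect the target scene information at a short fixed interval so that
    the displayed target image tracks the real scene in near real time."""
    while screen.is_active():
        scene = camera.capture()                            # collect target scene information
        image = render_target_image(scene, display_params)  # apply the display parameters
        screen.show(image)                                  # display the target image
        time.sleep(REFRESH_INTERVAL_S)
```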
The display size of part or all of the image content of the target image is the same as the target display size. That is, either the entire image content of the target image is displayed at the target display size, or a portion of the image content is displayed at the target display size while the display size of the remaining image content differs from it. Optionally, when the terminal device displays the target image, it applies all of the parameters included in the display parameters. For example, when the display parameters include a target display size, a target resolution value and a target brightness value, the display size of the target image is adjusted to the target display size, the display resolution to the target resolution value, and the display brightness to the target brightness value.
In the embodiment of the application, the user eye parameters and the position parameters of the user relative to the terminal device can be obtained, where the user eye parameters include: at least one of the user's type of ametropia and the user's diopter number. Based on the user eye parameters, it is determined whether the user's vision is impaired and, if it is, to what extent. Based on the position parameters of the user relative to the terminal device, the positional relationship between the user and the terminal device during use is determined. Target scene information is collected; that is, the scene information the user wants to view is acquired through the terminal device. Display parameters of the target scene information on the terminal device are then determined based on the user eye parameters and the position parameters, the display parameters including a target display size; in other words, the user's vision impairment and the positional relationship between the user and the terminal device serve as the basis for calculating the target display size. A target image is then displayed according to the display parameters, where the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size. In this way, the embodiment of the present application presents a distant scene, via the target image, on the nearby terminal device, and determines the display size of the target image based on the user's eye parameters and the user's position relative to the terminal device, so that the target image is displayed clearly in front of the user at an appropriate size. In particular, most people with ametropia can see a distant scene clearly even without wearing glasses.
Optionally, the position parameters include the distance of the user relative to the terminal device, and step 103 (determining display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters) may include:
and calculating to obtain the target display size according to the eye parameters of the user and the distance between the user and the terminal equipment.
It should be noted that, in most cases, the farther an object is from the user's eyes, the harder it is for the user to see it clearly. Likewise, the higher the user's diopter number, the harder it is to see the object clearly. Different ametropia types also affect users differently; people with one type may see an object more clearly than people with another, for example, at a close viewing distance (e.g. 30 cm) nearsighted people see more clearly than farsighted people with the same diopter number. Therefore, the ametropia type, the diopter number, and the distance between the user's eyes and the terminal device all affect whether the user can see the content of the target image clearly, and some or all of the three can serve as the basis for calculating the target display size.
Preferably, the target display size is larger than the preset display size of the terminal device, where the preset display size is determined by the field of view of the terminal device: when the scenes within the field of view are displayed at the preset display size, all of them fit on the display screen. That is, for a user with impaired vision, displaying the target image at the preset display size does not satisfy the user's need to see the target scene clearly, so a target display size larger than the preset display size needs to be calculated. Here, the target display size may be a target value times the preset display size, where the target value is greater than 1. The target value can be calculated based on the user eye parameters and the distance between the user and the terminal device, and the target display size is then the target value times the preset display size. For example, if the target value is 1.1 and the preset display size is A, the target display size is 1.1A.
The process of calculating the target value from the user eye parameters and the distance between the user and the terminal device is described in detail below. Specifically, a first target formula may be used: Ψ = f(α) + k1·d, where Ψ denotes the target value, α denotes the user eye parameters, f denotes a preset mapping function, k1 denotes a scaling factor greater than 0, and d denotes the distance between the user and the terminal device. In the case where the user eye parameters include both the ametropia type and the diopter number: if the ametropia type is unchanged, the higher the diopter number, the larger f(α); if the diopter number is unchanged, different ametropia types give different values of f(α).
It should be understood that the process of calculating the target value is not limited to the above manner; for example, the target value may also be calculated from preset weights corresponding to each parameter. For instance, set the weight of a first parameter to A, the weights of a second parameter to B1, B2, ..., and the weight of a third parameter to C. The first parameter is the distance between the user's eyes and the terminal device; the second parameter is the ametropia type entered in advance by the user, with different ametropia types corresponding to B1, B2, ... respectively; and the third parameter is the diopter number entered in advance by the user. The target value may then be obtained according to a second target formula: M = S×A + B + N×C, where M denotes the target value, S denotes the distance between the user's eyes and the terminal device, A denotes the weight preset for the first parameter, B denotes the weight set for the second parameter (the value of B differs for different ametropia types), N denotes the diopter number entered in advance by the user, and C denotes the weight preset for the third parameter.
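To make the two formulas concrete, here is a small sketch that computes the target value both ways and scales the preset display size by it. All numeric constants (the per-type factors and weights, k1, A, C) and the unit of d are illustrative assumptions; the embodiment does not specify their values:

```python
# Illustrative constants; assumptions chosen only to make the example run.
F_TYPE_FACTOR = {"myopia": 1.0, "hyperopia": 1.4, "astigmatism": 1.2, "presbyopia": 1.3}
B_TYPE_WEIGHT = {"myopia": 0.0, "hyperopia": 0.2, "astigmatism": 0.1, "presbyopia": 0.15}

def f(ametropia_type: str, diopter: float) -> float:
    """Preset mapping f(alpha): for a fixed type it grows with the diopter
    number; for a fixed diopter number it differs across ametropia types."""
    return F_TYPE_FACTOR[ametropia_type] * 0.001 * diopter

def target_value_formula1(ametropia_type, diopter, d, k1=0.002):
    # First target formula: psi = f(alpha) + k1 * d   (d in millimetres, assumed)
    return f(ametropia_type, diopter) + k1 * d

def target_value_formula2(ametropia_type, diopter, d, A=0.002, C=0.001):
    # Second target formula: M = S * A + B + N * C
    return d * A + B_TYPE_WEIGHT[ametropia_type] + diopter * C

# The target display size is the target value times the preset display size:
preset_size = 1.0
psi = target_value_formula1("myopia", 500, d=300)   # -> 0.5 + 0.6 = 1.1
target_display_size = psi * preset_size             # 1.1 times the preset size
```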
In the embodiment of the application, at least one of the distance between the user and the terminal device, the ametropia type and the diopter number is used as a factor for influencing whether the user can see the target image clearly. Based on the method, the target display size with a reasonable numerical value can be obtained through calculation, and the requirement that different people can see the target image clearly is met.
Optionally, the position parameters include the distance of the user relative to the terminal device, and before step 103 (determining the display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters), the method further includes:
an eye image of a user is acquired.
In this step, the eye image of the user may be acquired by using another image acquisition unit in the terminal device. For example, the terminal device is provided with a front camera, and an eye image of the user is acquired through the front camera.
determining a focus area of the user's line of sight on the display screen of the terminal device according to the position of the pupil in the eye image.
In this step, the focus area is the part of the display screen the user is looking at; specifically, it may be understood as a preset range centered on a focal position in the display screen of the terminal device, where the focal position is the intersection of the user's line of sight with the display screen when the user gazes at it. It will be appreciated that, when the user's position relative to the terminal device is fixed and the user looks at different locations or areas of the display screen, the position of the pupil within the eye changes accordingly. Assume that the pupil is at the very center of the eye when the user looks at the very center of the display screen. When the user looks at the top-right corner of the display screen, the pupil shifts toward the top-right of the eye; when the user looks at the bottom-left corner, the pupil shifts toward the bottom-left of the eye. On this basis, the focus area of the user's line of sight on the display screen can be determined from the position of the pupil within the eye.
Specifically, the eye and the display screen may be divided into the same number of regions according to the same region division rule, so that each region of the eye corresponds to the region of the display screen at the same relative position. As shown in FIG. 2 and FIG. 3, the eye and the display screen may each be divided in the form of a three-by-three (nine-square) grid, each containing nine regions. As shown in FIG. 2, when the user's eye 21 gazes at the first area 22 of the display screen, the pupil 31 is located in the second area 33 of the eye 32 (see FIG. 3). With continued reference to FIG. 2, when the user's eye 21 gazes at the third area 23 of the display screen, the pupil 31 is located in the fourth area 34 of the eye 32 (FIG. 3). On this basis, when the eye image of the user shows the pupil 31 in the fourth region 34 of the eye 32, the focus area is the third area 23 in FIG. 2; when the pupil 31 is in the second region 33 of the eye 32, the focus area is the first area 22 in FIG. 2. The division into nine areas is used here as an example, but the division of the eye and the display screen is not limited thereto.
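A minimal sketch of this grid-based gaze mapping is given below. It assumes the pupil's range of travel (the "target range" discussed next) has already been calibrated as a rectangle in the eye image; the function name and rectangle representation are illustrative:

```python
GRID = 3  # both the eye region and the display screen use a 3x3 (nine-square) grid

def pupil_to_focus_cell(pupil_x: float, pupil_y: float,
                        range_x: float, range_y: float,
                        range_w: float, range_h: float) -> tuple[int, int]:
    """Map the pupil position, expressed inside its calibrated range of travel,
    to the grid cell of the display screen that the user is gazing at.
    (range_x, range_y, range_w, range_h) is the target range: the rectangle the
    pupil moves within while the user looks across the whole screen."""
    col = int((pupil_x - range_x) / range_w * GRID)
    row = int((pupil_y - range_y) / range_h * GRID)
    # Clamp to the grid, then use the cell directly: per the scheme above, a
    # pupil cell corresponds to the screen cell at the same relative position.
    col = max(0, min(col, GRID - 1))
    row = max(0, min(row, GRID - 1))
    return row, col
```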
It will be appreciated that the position of the pupil in the eye varies within a target range during the user's viewing of all positions on the display screen. Here, it is only necessary to divide the target range into regions. Specifically, a plurality of eye images of the user during daily use of the terminal device need to be collected in advance to determine the target range. The plurality of eye images are eye images shot when a user views each edge position of the display screen.
The distance between the user and the terminal device is inversely proportional to the size of the target range: the closer the user is to the terminal device, the larger the target range; the farther the user is from the terminal device, the smaller the target range. Preferably, multiple eye images of the user using the terminal device at different distances are collected in advance, so that the target range at each distance value is determined. When determining the focus area of the user's line of sight on the display screen from the position of the pupil in the eye image, the corresponding target range is first selected based on the user's distance from the terminal device; the eye image is then divided into regions within that target range, the position of the pupil within the target range is determined, and finally the focus area of the user's line of sight on the display screen is determined from the pupil position.
Accordingly, step 103 (determining display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters) may include:
and calculating to obtain the target display size according to the eye parameters of the user and the distance between the user and the terminal equipment.
In this step, the process of calculating the target display size is the same as the process of calculating the target display size in the above embodiment, and is not described herein again.
calculating the local display size according to the area of the focus area, the area of the display screen, and the target display size.
In this step, the target display size is larger than the preset display size of the terminal device, and the local display size is smaller than the preset display size, where the preset display size is determined by the field of view of the terminal device: when the scenes within the field of view are displayed at the preset display size, all of them fit on the display screen of the terminal device.
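The embodiment states only the inputs of this calculation (the area of the focus area, the area of the display screen, and the target display size), not the formula itself. The sketch below is therefore one plausible reading, under the assumption that the non-focus content is shrunk just enough to give back the screen area consumed by enlarging the focus content:

```python
def local_display_size(focus_area: float, screen_area: float,
                       target_size: float, preset_size: float = 1.0) -> float:
    """One plausible local-size rule (an assumption, not the stated formula):
    shrink the second image content so the total displayed area still fits
    the screen after the focus content is enlarged to the target size."""
    rest_area = screen_area - focus_area
    if rest_area <= 0:
        return 0.0
    # Screen area taken by the focus content once enlarged to the target size:
    enlarged_focus = focus_area * (target_size / preset_size) ** 2
    # Area budget left for the remaining (second) image content:
    budget = max(screen_area - enlarged_focus, 0.0)
    scale = (budget / rest_area) ** 0.5
    # The local display size must stay below the preset display size:
    return min(scale * preset_size, preset_size)
```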
Accordingly, step 104 (displaying the target image according to the display parameters) may include:
displaying, according to the display parameters, the first image content at the target display size and the second image content at the local display size;
the first image content comprises image content of the target image located in the focus area, the second image content comprises image content of the target image except the first image content, and part or all of the second image content is displayed in the display screen.
It should be noted that, after the first image content is enlarged, reducing the second image content allows as much of the second image content as possible to remain visible on the display screen. As shown in FIG. 2 and FIG. 4, when the user looks at the tree in the target image, the display size of the tree in the display area is increased while the display sizes of the house and the car are decreased.
FIG. 5 is a schematic diagram of a practical application of the image display method provided in the embodiment of the present application, described with a mobile phone as the terminal device; the flow includes:
step 501: and receiving the target function triggered by the user and the input configuration information. The target function is a function of the method for realizing the image display on the mobile phone, and the configuration information comprises the following steps: the type of refractive error and the refractive power. For example, if the user is at 500 degrees myopia, the configuration information is entered as: myopia, 500.
Step 502: controlling the rear camera of the mobile phone to collect target scene information in the shooting direction.
Step 503: detecting the eye distance and the eye image. The eye distance is the distance between the user's eyes and the front camera of the mobile phone, and the eye image is obtained by the front camera capturing the user's face. Based on the eye image, the focus area of the user on the phone's display screen, i.e., the area the user is attending to, is determined. Based on the eye distance and the configuration information entered by the user, the target display size and the local display size are determined, where the target display size is larger than the preset display size of the terminal device and the local display size is smaller than the preset display size.
Step 504: adjusting the display size of the first image content of the target image to the target display size, and the display size of the second image content to the local display size. The first image content includes the image content of the target image within the focus area, and the second image content includes the image content of the target image other than the first image content.
Step 505: displaying the processed target image.
In the embodiment of the application, the user's focus of attention on the target image is determined from the eye image captured while the user views the terminal device, and the display size of the content at that focus of attention is adjusted to the target display size, so the target image can be zoomed more precisely. Further, adjusting the display size of the content the user is not attending to down to the local display size avoids losing other image content when the display size of part of the target image is increased.
Optionally, after displaying the first image content with the target display size and the second image content with the local display size according to the display parameter, the method may further include:
in the case that the distance from the focus area to the target-side edge of the display screen is smaller than a preset threshold, adjusting the shooting angle of a first image acquisition unit on the terminal device to shift toward the target side, or switching the image acquisition unit that collects image information on the terminal device from the first image acquisition unit to a second image acquisition unit, where the first image acquisition unit is the image acquisition unit that collects the target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
In this step, the preset threshold is small, and its specific value can be set according to user requirements. The case where the distance from the focus area to the target-side edge is smaller than the preset threshold can be taken to mean that the scene information the user wants to view is not presented in the target image. At this time, the terminal device can be adjusted to increase the scene information contained in the target image, or to collect the scene information the user requires directly. Taking a mobile phone as the terminal device for explanation, the first image acquisition unit that collects the target scene information may be an ordinary (standard) camera with a smaller field angle, while the second image acquisition unit may be a wide-angle camera with a large field angle that can capture a wider scene than the ordinary camera. Of course, other conditions may also be preset under which the scene information the user wants to view is considered not to be presented in the target image, for example the case where the user's line of sight is not on the display screen at all, i.e., the user is looking outside the display screen. In that case the same operation is performed: the shooting angle of the first image acquisition unit on the terminal device is adjusted, or the image acquisition unit collecting image information is switched from the first image acquisition unit to the second image acquisition unit.
For example, when the distance from the focus area to the upper edge of the phone is smaller than the preset threshold, the shooting angle of the first image acquisition unit is adjusted to shift upward, so that the scene information above the scene currently corresponding to the target image can be included in the target image. FIG. 6 is a schematic diagram of a practical application of the embodiment of the present application, including steps 601 to 607, where steps 601 to 603 are the same as steps 501 to 503 in FIG. 5, and steps 606 to 607 are the same as steps 504 to 505 in FIG. 5, so they are not repeated here. Step 604: judging whether the visible area needs to be expanded; if so, executing step 605, otherwise executing step 606. Here, the visible area is the area captured by the rear camera. When the front camera detects that the user's line of sight extends beyond the edge of the phone screen, it is judged that the user wants to further adjust the range of the visible area. Step 605: adjusting the angle of the rear camera, or automatically switching the working camera (for example, to a wide-angle lens), to capture scene information over a wider range.
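Putting the edge check of step 604 and the adjustment of step 605 together, a minimal sketch follows; the `device` interface (steering a camera, switching to a wide-angle camera) is a hypothetical stand-in for the phone's actual camera API:

```python
EDGE_THRESHOLD_PX = 20  # preset threshold; small, settable per user requirements

def maybe_expand_visible_area(focus, screen, device) -> None:
    """If the focus area comes within the threshold of a screen edge (or the
    gaze has left the screen entirely), widen the captured view: steer the
    current camera toward that side, or switch to a wider-field camera."""
    distances = {
        "left":   focus.left - screen.left,
        "right":  screen.right - focus.right,
        "top":    focus.top - screen.top,
        "bottom": screen.bottom - focus.bottom,
    }
    side, distance = min(distances.items(), key=lambda kv: kv[1])
    if distance < EDGE_THRESHOLD_PX or not device.gaze_on_screen():
        if device.can_steer_camera():
            device.steer_camera_toward(side)      # shift the shooting angle
        else:
            device.switch_to_wide_angle_camera()  # larger field of view
```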
In the embodiment of the application, the user can increase the visual range without moving the terminal equipment, and more scene information is displayed in the target image.
It should be noted that, in the method for displaying an image provided in the embodiment of the present application, the execution subject may be an apparatus for displaying an image, or a control module in the apparatus for displaying an image, which is used for executing the method for displaying an image. In the embodiment of the present application, a method for performing image display by an image display apparatus is taken as an example, and the image display apparatus provided in the embodiment of the present application is described.
As shown in fig. 7, an embodiment of the present application further provides an apparatus for displaying an image, including:
an obtaining module 71, configured to obtain an eye parameter of the user and a position parameter of the user relative to the terminal device, where the eye parameter of the user includes: at least one of a type of ametropia of the user and a diopter number of the user;
an acquisition module 72 for acquiring target scene information;
a parameter module 73, configured to determine, based on the user eye parameter and the position parameter, a display parameter of the target scene information on the terminal device, where the display parameter includes a target display size;
and a display module 74, configured to display a target image according to the display parameter, where the target image is an image corresponding to the target scene information, and a display size of a partial image content or an entire image content of the target image is the same as the target display size.
Optionally, the position parameter at least includes a distance between the user and the terminal device, and the parameter module 73 is specifically configured to calculate the target display size according to the eye parameter of the user and the distance between the user and the terminal device.
Optionally, the location parameter includes a distance of the user relative to the terminal device, and the apparatus further includes:
the focus module is used for acquiring an eye image of a user; determining a focus area of the sight of the user on a display screen of the terminal equipment according to the position of the pupil in the eye image;
the parameter module 73 is specifically configured to calculate a target display size according to the eye parameter of the user and a distance between the user and the terminal device; calculating to obtain a local display size according to the area of the focus area, the area of the display screen and the target display size;
a display module 74, configured to display the first image content with a target display size and display the second image content with a local display size according to the display parameter;
the first image content comprises image content of the target image located in the focus area, the second image content comprises image content of the target image except the first image content, and part or all of the second image content is displayed in the display screen.
Optionally, the apparatus further comprises:
an adjusting module, configured to, in the case that the distance from the focus area to the target-side edge of the display screen is smaller than a preset threshold, adjust the shooting angle of a first image acquisition unit on the terminal device to shift toward the target side, or switch the image acquisition unit that collects image information on the terminal device from the first image acquisition unit to a second image acquisition unit, where the first image acquisition unit is the image acquisition unit that collects the target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
In the embodiment of the application, the user eye parameters and the position parameters of the user relative to the terminal device can be obtained, where the user eye parameters include: at least one of the user's type of ametropia and the user's diopter number. Based on the user eye parameters, it is determined whether the user's vision is impaired and, if it is, to what extent. Based on the position parameters of the user relative to the terminal device, the positional relationship between the user and the terminal device during use is determined. Target scene information is collected; that is, the scene information the user wants to view is acquired through the terminal device. Display parameters of the target scene information on the terminal device are then determined based on the user eye parameters and the position parameters, the display parameters including a target display size; in other words, the user's vision impairment and the positional relationship between the user and the terminal device serve as the basis for calculating the target display size. A target image is then displayed according to the display parameters, where the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size. In this way, the embodiment of the present application presents a distant scene, via the target image, on the nearby terminal device, and determines the display size of the target image based on the user's eye parameters and the user's position relative to the terminal device, so that the target image is displayed clearly in front of the user at an appropriate size. In particular, most people with ametropia can see a distant scene clearly even without wearing glasses.
The device for displaying an image in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 6, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the above-mentioned method for displaying an image, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so that charging, discharging, and power consumption management functions are handled through the power management system. The electronic device structure shown in FIG. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not repeated here.
The processor 910 is configured to obtain an eye parameter of a user and a position parameter of the user relative to the electronic device 900, where the eye parameter of the user includes: at least one of a type of ametropia of the user and a diopter number of the user;
a sensor 905 for collecting target scene information;
the processor 910 is further configured to determine display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters, where the display parameters include a target display size;
a display unit 906, configured to display a target image according to the display parameter, where the target image is an image corresponding to the target scene information, and a display size of a partial image content or an entire image content of the target image is the same as the target display size.
In the embodiment of the application, a distant scene is presented in a near terminal device through a target image. And determining the display size of the target image based on the eye parameters of the user and the position parameters of the user relative to the terminal equipment, so that the target image is clearly displayed in front of the user in a proper size. Especially for most people with ametropia, the far scene can be seen clearly even if the glasses are not worn.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071, also referred to as a touch screen, and other input devices 9072. The touch panel 9071 may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 can be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communication. It should be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above method for displaying an image, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above method for displaying an image, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of image display, the method comprising:
acquiring user eye parameters and position parameters of a user relative to a terminal device, wherein the user eye parameters comprise: at least one of a type of ametropia of the user and a diopter number of the user;
collecting target scene information;
determining display parameters of the target scene information on the terminal equipment based on the user eye parameters and the position parameters, wherein the display parameters include a target display size;
and displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of partial image content or all image content of the target image is the same as the target display size.
2. The method for displaying images according to claim 1, wherein the position parameter includes a distance of the user relative to the terminal device, and the determining the display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters includes:
and calculating to obtain the target display size according to the user eye parameters and the distance between the user and the terminal equipment.
3. The method of displaying an image according to claim 1, wherein the position parameter comprises a distance of the user relative to the terminal device, and wherein before determining the display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters, the method further comprises:
acquiring an eye image of the user;
determining a focus area of the sight of the user on a display screen of the terminal equipment according to the position of the pupil in the eye image;
the determining the display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters comprises:
calculating to obtain the target display size according to the user eye parameters and the distance between the user and the terminal equipment;
calculating to obtain a local display size according to the area of the focus area, the area of the display screen and the target display size;
the displaying the target image according to the display parameters comprises:
displaying, according to the display parameters, first image content at the target display size and second image content at the local display size;
wherein the first image content comprises image content of the target image within the focus area, the second image content comprises image content of the target image other than the first image content, and part or all of the second image content is displayed on the display screen.
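Claim 3 names the inputs to the local display size (the focus-area area, the screen area, and the target size) but not the formula. Below is one plausible reading, sketched in Python: content outside the focus area is shrunk by the fraction of the screen left over once the focus area is accounted for, with a floor so peripheral content never vanishes. Both the formula and the floor value are assumptions.

```python
def compute_local_display_size(target_size_pt: float,
                               focus_area_px2: float,
                               screen_area_px2: float,
                               min_scale: float = 0.25) -> float:
    """Illustrative local size for content outside the focus area.

    The larger the share of the screen the (enlarged) focus area
    occupies, the smaller the remaining content is drawn, down to an
    arbitrary `min_scale` floor so it stays visible.
    """
    if screen_area_px2 <= 0:
        raise ValueError("screen area must be positive")
    leftover = 1.0 - (focus_area_px2 / screen_area_px2)
    return target_size_pt * max(min_scale, leftover)

# Example: the focus area covers 40% of the screen, so out-of-focus
# content is drawn at 60% of the 18 pt target size, i.e. 10.8 pt.
print(round(compute_local_display_size(18.0, 40_000.0, 100_000.0), 2))  # -> 10.8
```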
4. The image display method according to claim 3, wherein, after displaying the first image content at the target display size and the second image content at the local display size according to the display parameters, the method further comprises:
in a case that the distance between the focus area and a target-side edge of the display screen is smaller than a preset threshold, adjusting the shooting angle of a first image acquisition unit on the terminal device toward the target side, or switching the image acquisition unit used for acquiring image information on the terminal device from the first image acquisition unit to a second image acquisition unit, wherein the first image acquisition unit is the image acquisition unit that acquires the target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
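A sketch of the claim-4 logic follows. The rectangle convention, the camera objects, and their `can_pan`/`pan` methods are all hypothetical; the claim only requires that, when the focus area nears a screen edge, the device either turns the active camera toward that side or switches to a camera with a larger field of view.

```python
def react_to_focus_near_edge(focus_rect, screen_w, screen_h,
                             threshold_px, main_cam, wide_cam):
    """Pan toward the edge the focus area is approaching, or switch
    to the wider camera (illustrative; the camera API is hypothetical).

    `focus_rect` is (x0, y0, x1, y1) in screen pixels.
    Returns the image acquisition unit to use from now on.
    """
    x0, y0, x1, y1 = focus_rect
    gaps = {
        "left": x0,
        "right": screen_w - x1,
        "top": y0,
        "bottom": screen_h - y1,
    }
    side, gap = min(gaps.items(), key=lambda kv: kv[1])
    if gap >= threshold_px:
        return main_cam               # focus is comfortably on-screen

    if main_cam.can_pan():            # hypothetical capability query
        main_cam.pan(toward=side)     # adjust the shooting angle
        return main_cam
    return wide_cam                   # larger field of view, per claim 4
```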
5. An apparatus for image display, comprising:
an obtaining module, configured to obtain user eye parameters and position parameters of a user relative to a terminal device, wherein the user eye parameters comprise: at least one of a type of ametropia of the user and a diopter number of the user;
an acquisition module, configured to collect target scene information;
a parameter module, configured to determine display parameters of the target scene information on the terminal device based on the user eye parameters and the position parameters, wherein the display parameters comprise a target display size;
a display module, configured to display a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size.
6. The apparatus according to claim 5, wherein the position parameter at least comprises a distance of the user relative to the terminal device, and the parameter module is specifically configured to calculate the target display size according to the user eye parameters and the distance of the user relative to the terminal device.
7. The apparatus according to claim 5, wherein the position parameter comprises a distance of the user relative to the terminal device, and the apparatus further comprises:
a focus module, configured to acquire an eye image of the user, and determine a focus area of the user's line of sight on a display screen of the terminal device according to the position of the pupil in the eye image;
wherein the parameter module is specifically configured to calculate the target display size according to the user eye parameters and the distance of the user relative to the terminal device, and calculate a local display size according to the area of the focus area, the area of the display screen, and the target display size;
the display module is specifically configured to display, according to the display parameters, first image content at the target display size and second image content at the local display size;
and the first image content comprises image content of the target image within the focus area, the second image content comprises image content of the target image other than the first image content, and part or all of the second image content is displayed on the display screen.
8. The apparatus according to claim 7, further comprising:
an adjusting module, configured to: in a case that the distance between the focus area and a target-side edge of the display screen is smaller than a preset threshold, adjust the shooting angle of a first image acquisition unit on the terminal device toward the target side, or switch the image acquisition unit used for acquiring image information on the terminal device from the first image acquisition unit to a second image acquisition unit, wherein the first image acquisition unit is the image acquisition unit that acquires the target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image display method according to any one of claims 1 to 4.
10. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image display method according to any one of claims 1 to 4.
CN202110463669.4A 2021-04-26 2021-04-26 Image display method and device and electronic equipment Active CN113132642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463669.4A CN113132642B (en) 2021-04-26 2021-04-26 Image display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113132642A (en) 2021-07-16
CN113132642B (en) 2023-09-26

Family

ID=76780464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110463669.4A Active CN113132642B (en) 2021-04-26 2021-04-26 Image display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113132642B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102610184A (en) * 2012-03-20 2012-07-25 华为技术有限公司 Method and device for adjusting display state
CN103760980A (en) * 2014-01-21 2014-04-30 Tcl集团股份有限公司 Display method, system and device for conducting dynamic adjustment according to positions of two eyes
US20140267284A1 (en) * 2013-03-14 2014-09-18 Broadcom Corporation Vision corrective display
CN104699250A (en) * 2015-03-31 2015-06-10 小米科技有限责任公司 Display control method, display control device and electronic equipment
WO2016094928A1 (en) * 2014-12-18 2016-06-23 Halgo Pty Limited Replicating effects of optical lenses
WO2017026942A1 (en) * 2015-08-11 2017-02-16 Chai Wei Kuo Andrew Apparatus for display adjustment and method thereof
CN107942514A (en) * 2017-11-15 2018-04-20 青岛海信电器股份有限公司 A kind of image distortion correction method and device of virtual reality device
WO2020219711A1 (en) * 2019-04-23 2020-10-29 Evolution Optiks Limited Light field display and vibrating light field shaping layer and vision testing and/or correction device
WO2020219446A1 (en) * 2019-04-23 2020-10-29 Evolution Optiks Limited Digital display device comprising a complementary light field display or display portion, and vision correction system and method using same
WO2021004138A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Screen display method, terminal device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253623A (en) * 2021-11-19 2022-03-29 惠州Tcl移动通信有限公司 Screen amplification processing method and device based on mobile terminal, terminal and medium
CN114253623B (en) * 2021-11-19 2024-01-19 惠州Tcl移动通信有限公司 Screen amplification processing method and device based on mobile terminal, terminal and medium

Also Published As

Publication number Publication date
CN113132642B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
US10129520B2 (en) Apparatus and method for a dynamic “region of interest” in a display system
CN107272904B (en) Image display method and electronic equipment
US9961257B2 (en) Imaging to facilitate object gaze
CN109343700B (en) Eye movement control calibration data acquisition method and device
JP6333801B2 (en) Display control device, display control program, and display control method
CN106843474B (en) Mobile terminal display processing method and system
WO2015043275A1 (en) Imaging for local scaling
US20180246320A1 (en) Lens position adjustment in a wearable device
CN111886564A (en) Information processing apparatus, information processing method, and program
CN106325510A (en) Information processing method and electronic equipment
CN109246463A (en) Method and apparatus for showing barrage
US20190064528A1 (en) Information processing device, information processing method, and program
CN107422844A (en) A kind of information processing method and electronic equipment
CN113132642B (en) Image display method and device and electronic equipment
US20200242847A1 (en) Information processing device and information processing method
KR101331055B1 (en) Visual aid system based on the analysis of visual attention and visual aiding method for using the analysis of visual attention
CN114895790A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN112702533B (en) Sight line correction method and sight line correction device
CN111612780B (en) Human eye vision recognition method, device and computer storage medium
CN111913561A (en) Display method and device based on eye state, display equipment and storage medium
CN112533071B (en) Image processing method and device and electronic equipment
CN112925497A (en) Content display method and device, electronic equipment and readable storage medium
CN113824832B (en) Prompting method, prompting device, electronic equipment and storage medium
CN115834858A (en) Display method and device, head-mounted display equipment and storage medium
CN115877573A (en) Display method, head-mounted display device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant