CN113132642B - Image display method and device and electronic equipment - Google Patents

Image display method and device and electronic equipment

Info

Publication number
CN113132642B
CN113132642B (application CN202110463669.4A)
Authority
CN
China
Prior art keywords
target
image
user
display
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110463669.4A
Other languages
Chinese (zh)
Other versions
CN113132642A (en)
Inventor
吴文龙
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110463669.4A
Publication of CN113132642A
Application granted
Publication of CN113132642B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses an image display method and apparatus, and an electronic device, belonging to the technical field of terminals. The method comprises the following steps: acquiring eye parameters of a user and position parameters of the user relative to a terminal device, wherein the user's eye parameters comprise at least one of the user's type of refractive error and the user's refractive power; collecting target scene information; determining display parameters of the target scene information on the terminal device based on the user's eye parameters and the position parameters, wherein the display parameters comprise a target display size; and displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and part or all of the image content of the target image is displayed at the target display size.

Description

Image display method and device and electronic equipment
Technical Field
The present application belongs to the technical field of terminals, and in particular relates to an image display method and apparatus, and an electronic device.
Background
Refractive error (ametropia) is the condition in which, with accommodation relaxed, parallel rays of light passing through the eye's refractive system fail to form a clear image on the retina, instead coming to focus in front of or behind it. It includes hyperopia, myopia, astigmatism, and the like. It is estimated that the number of people with refractive errors is large; most people have some degree of refractive error, and many of them cannot see distant scenery clearly, which brings great inconvenience to daily life.
At present, wearing glasses is one of the simplest and safest ways to deal with refractive error, but accidents often happen in daily life. For example, the glasses may be damaged unexpectedly and become unwearable, and a suitable, satisfactory replacement pair may not be available on short notice; or, upon getting up, the wearer may simply not remember where the glasses were placed. Whether separated from their glasses briefly or for a long time, people with refractive errors cannot see distant scenery clearly without them.
Disclosure of Invention
The embodiments of the present application aim to provide an image display method and apparatus, and an electronic device, which can solve the problem in the prior art that people with refractive errors cannot see distant scenery clearly when they are without their glasses.
In a first aspect, an embodiment of the present application provides a method for displaying an image, including:
acquiring eye parameters of a user and position parameters of the user relative to terminal equipment, wherein the eye parameters of the user comprise: at least one of a type of refractive error of the user and a refractive power of the user;
collecting target scene information;
determining display parameters of the target scene information on the terminal equipment based on the eye parameters of the user and the position parameters, wherein the display parameters comprise target display sizes;
And displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size.
In a second aspect, an embodiment of the present application provides an apparatus for displaying an image, including:
an obtaining module, configured to obtain eye parameters of a user and position parameters of the user relative to a terminal device, wherein the user's eye parameters comprise at least one of the user's type of refractive error and the user's refractive power;
a collecting module, configured to collect target scene information;
a parameter module, configured to determine display parameters of the target scene information on the terminal device based on the user's eye parameters and the position parameters, wherein the display parameters comprise a target display size;
and a display module, configured to display a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and part or all of the image content of the target image is displayed at the target display size.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the present application, the user's eye parameters and the user's position parameters relative to the terminal device can be obtained, the eye parameters comprising at least one of the user's type of refractive error and the user's refractive power. Whether the user's vision is impaired, and to what degree if so, is determined from the eye parameters; the positional relationship between the user and the terminal device during use is determined from the position parameters. Target scene information, i.e. the scene the user wants to observe through the terminal device, is then collected. Display parameters of the target scene information on the terminal device, including a target display size, are determined from the eye parameters and the position parameters, so that the user's visual impairment and the user-device positional relationship serve as the basis for calculating the target display size. Finally, a target image corresponding to the target scene information is displayed according to the display parameters, with part or all of its image content shown at the target display size. In this way, the embodiments of the present application present a distant scene on the nearby terminal device through the target image. Because the display size of the target image is determined from the user's eye parameters and the user's position relative to the terminal device, the target image is presented clearly in front of the user at a suitable size. In particular, most people with refractive errors can see distant scenery clearly even without wearing glasses.
Drawings
FIG. 1 is a flow chart of steps of a method for displaying an image provided by an embodiment of the present application;
fig. 2 is one of application scenario display schematic diagrams provided in the embodiment of the present application;
FIG. 3 is a schematic view of an eye image presentation according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of an application of the method for displaying images according to the embodiment of the present application;
FIG. 6 is a second application diagram of the method for displaying images according to the embodiment of the application;
fig. 7 is a block diagram of an apparatus for displaying an image according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 9 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the specification and claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Moreover, objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The method for displaying the image provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, a method for displaying an image according to an embodiment of the present application includes:
step 101: and acquiring eye parameters of the user and position parameters of the user relative to the terminal equipment.
In this step, the user's eye parameters can indicate whether the user's vision is impaired and, if so, to what degree. Specifically, the user's eye parameters comprise at least one of the user's type of refractive error and the user's refractive power. The types of refractive error include myopia, astigmatism, hyperopia, presbyopia, and the like; users with unimpaired vision can be treated as a special case of refractive error. Here, the user may input the eye parameters into the terminal device in advance. Of course, the eye parameters entered in advance need not be the user's actual type of refractive error and/or refractive power; any type of refractive error and/or any refractive power may be entered.
Step 102: target scene information is collected.
In this step, the terminal device is a device with an image capture function, for example a mobile phone or a tablet computer. The image capture unit used by the terminal device to collect the target scene information may be, but is not limited to, a camera. Specifically, the terminal device may collect target scene information in a target direction, where the target direction is associated with the pose of the terminal device and the position of the image capture unit on the terminal device. Taking a mobile phone as an example, the image capture unit may be a rear camera on the phone, and the target direction is the direction the back of the phone faces, i.e. the shooting direction of the rear camera. The user can choose the target direction as needed, so that the terminal device collects the scene information the user wants. As shown in fig. 2, to view the scene in front, the user only needs to point the rear camera of the phone at it and perform a first input to the terminal device, and the scene information in front of the user is collected.
Step 103: and determining the display parameters of the target scene information in the terminal equipment based on the eye parameters and the position parameters of the user.
In this step, the display parameters include a target display size, which indicates the size at which the target scene information is displayed on the terminal device's display screen. It should be understood that the terminal device has a preset display size determined by its field of view: when a scene within the field of view is displayed at the preset display size, it can be displayed entirely on the display screen. The target display size may be greater than, less than, or equal to the preset display size. Here, so that users with different visual impairments and at different distances from the terminal device can all see the imaging of the target scene information clearly, the user's eye parameters and the user's position parameters relative to the terminal device are used as the basis for calculating the target display size. Preferably, the display parameters further include at least one of a target resolution value and a target brightness value, which indicate respectively the resolution and the brightness at which the target scene information is displayed on the terminal device's display screen.
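As a minimal sketch, the display parameters described above could be grouped into a single structure. The class and field names below are illustrative assumptions; the patent does not specify a data layout.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical container for the display parameters: the target display size
# is mandatory, while resolution and brightness are optional per the text.
@dataclass
class DisplayParams:
    target_display_size: float                           # scale factor relative to the preset display size
    target_resolution: Optional[Tuple[int, int]] = None  # e.g. (width, height) in pixels
    target_brightness: Optional[float] = None            # brightness value, units device-dependent

# Example: enlarge by 10% over the preset size, with explicit resolution/brightness.
params = DisplayParams(target_display_size=1.1,
                       target_resolution=(1920, 1080),
                       target_brightness=400.0)
```

The optional fields mirror the "at least one of" phrasing: a device may adjust only the size, or also tune resolution and brightness.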
Step 104: and displaying the target image according to the display parameters.
In this step, the target image is an image corresponding to the target scene information. As can be seen from fig. 2, the scene in the target scene information includes a car, a house, and a tree, and the target image displayed on the terminal device's display screen accordingly includes images of the car, the house, and the tree, so that the distant scene is presented on the terminal device. After collecting the target scene information, the terminal device generates an image based on the collected information and the calculated display parameters, and displays it. The target image may be displayed on the terminal device in a picture format or a video format. Preferably, the terminal device collects target scene information once every preset time period and generates and displays a target image based on the most recently collected information and the calculated display parameters, the preset time period being short, for example 10 ms or 100 ms. In this way, the terminal device can display the target image corresponding to the target scene information in real time.
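The periodic re-capture described above amounts to a simple refresh loop. The sketch below is illustrative only: `capture_scene` and `render` are placeholder names standing in for the device's camera and display pipeline, which the patent does not specify.

```python
import time

def capture_scene():
    # Placeholder: on a real device this would read a frame from the rear camera.
    return "frame"

def render(frame, params):
    # Placeholder: generate and display the target image from the latest frame
    # using the calculated display parameters.
    return f"displayed {frame} with {params}"

def refresh_loop(params, period_s=0.1, max_frames=3):
    # Re-acquire the scene every `period_s` seconds (e.g. 100 ms) so the
    # displayed target image tracks the real scene in near real time.
    outputs = []
    for _ in range(max_frames):
        frame = capture_scene()
        outputs.append(render(frame, params))
        time.sleep(period_s)
    return outputs
```

With a short period such as 10-100 ms, the displayed image effectively behaves like a live video feed of the distant scene.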
The display size of part or all of the image content of the target image is the same as the target display size. That is, either the entire image content of the target image is displayed at the target display size, or part of the image content is displayed at the target display size while the remaining content is displayed at a different size. Optionally, when the terminal device displays the target image, it applies all of the parameters included in the display parameters. For example, when the display parameters include a target display size, a target resolution value, and a target brightness value, the display size of the target image is adjusted to the target display size, the display resolution to the target resolution value, and the display brightness to the target brightness value.
In the embodiments of the present application, the user's eye parameters and the user's position parameters relative to the terminal device can be obtained, the eye parameters comprising at least one of the user's type of refractive error and the user's refractive power. Whether the user's vision is impaired, and to what degree if so, is determined from the eye parameters; the positional relationship between the user and the terminal device during use is determined from the position parameters. Target scene information, i.e. the scene the user wants to observe through the terminal device, is then collected. Display parameters of the target scene information on the terminal device, including a target display size, are determined from the eye parameters and the position parameters, so that the user's visual impairment and the user-device positional relationship serve as the basis for calculating the target display size. Finally, a target image corresponding to the target scene information is displayed according to the display parameters, with part or all of its image content shown at the target display size. In this way, the embodiments of the present application present a distant scene on the nearby terminal device through the target image. Because the display size of the target image is determined from the user's eye parameters and the user's position relative to the terminal device, the target image is presented clearly in front of the user at a suitable size. In particular, most people with refractive errors can see distant scenery clearly even without wearing glasses.
Optionally, the location parameter includes a distance of the user relative to the terminal device, step 103 above: based on the eye parameters and the position parameters of the user, determining the display parameters of the target scene information in the terminal device may include:
and calculating to obtain the target display size according to the eye parameters of the user and the distance between the user and the terminal equipment.
It should be noted that, in most cases, the farther an object is from the user's eyes, the harder it is for the user to see it clearly. Likewise, the greater the user's refractive power, the harder it is to see an object clearly. Of course, different types of refractive error affect the user differently: with the same refractive power, people with some types of refractive error see objects more clearly than others. For example, at the same refractive power, a myopic user can see a nearby object (e.g. at 30 cm) more clearly than a hyperopic user can. Therefore, the type of refractive error, the refractive power, and the distance between the user's eyes and the terminal device all affect whether the user can clearly see the first image content in the target image, and some or all of these three can serve as the basis for calculating the target display size.
Preferably, the target display size is larger than the preset display size of the terminal device, where the preset display size is determined by the field of view of the terminal device: when a scene within the field of view is displayed at the preset display size, it can be displayed entirely on the display screen. That is, for a visually impaired user, displaying the target image at the preset display size does not satisfy the user's need to see the target scene clearly, so a target display size larger than the preset display size must be calculated. Here, the target display size may be a target-value multiple of the preset display size, where the target value is greater than 1. The target value is calculated from the user's eye parameters and the user's distance from the terminal device, and the target display size is then obtained by multiplying the preset display size by the target value. For example, if the target value is 1.1 and the preset display size is A, the target display size is 1.1A.
The process of calculating the target value from the user's eye parameters and the user's distance from the terminal device is described in detail below. Specifically, the target value may be calculated by the first target formula: ψ = f(α) + k1·d. In the first target formula, ψ denotes the target value, α denotes the user's eye parameters, f denotes a preset mapping function, k1 denotes a scaling coefficient greater than 0, and d denotes the distance between the user and the terminal device. When the user's eye parameters include both the type of refractive error and the refractive power, the higher the refractive power, the larger f(α) is; at the same refractive power, different types of refractive error yield different values of f(α).
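The first target formula ψ = f(α) + k1·d can be sketched as follows. The text only constrains f to grow with refractive power and to differ by error type, so the per-type slopes in `F_TABLE`, the value of `k1`, and the linear form of `f` are all illustrative assumptions.

```python
# Assumed per-type slopes for the preset mapping f; values are illustrative.
F_TABLE = {
    "myopia": 0.0015,
    "hyperopia": 0.0020,
}

def f(error_type, refractive_power):
    # Monotonically increasing in refractive power, with a type-dependent slope,
    # as the text requires. The base value 1.0 keeps psi above 1.
    return 1.0 + F_TABLE[error_type] * refractive_power

def target_multiple(error_type, refractive_power, distance_cm, k1=0.002):
    # First target formula: psi = f(alpha) + k1 * d
    return f(error_type, refractive_power) + k1 * distance_cm

def target_display_size(preset_size, psi):
    # The target display size is the target-value multiple of the preset size.
    return psi * preset_size
```

For a 500-degree myopic user 30 cm from the device, this sketch yields ψ = 1.75 + 0.06 = 1.81, i.e. the target image is drawn at 1.81 times the preset display size.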
It should be understood that the process of calculating the target value is not limited to the above manner; for example, the target value may also be calculated from preset weights corresponding to each parameter. Suppose the weight preset for the first parameter is A, the weights for the second parameter include B1, B2, and so on, and the weight for the third parameter is C. The first parameter is the distance between the user's eyes and the terminal device; the second parameter is the type of refractive error input by the user in advance, with different types corresponding to B1, B2, and so on; and the third parameter is the refractive power input by the user in advance. The target value may then be obtained from the second target formula: M = S×A + B + N×C, where M denotes the target value, S denotes the distance between the user's eyes and the terminal device, A denotes the weight preset for the first parameter, B denotes the weight preset for the second parameter (its value differing for different types of refractive error), N denotes the refractive power input in advance by the user, and C denotes the weight preset for the third parameter.
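The second target formula M = S×A + B + N×C can likewise be sketched. All numeric weights below are illustrative assumptions; the text only specifies the formula's shape and that B varies by type of refractive error.

```python
# Assumed weights; the patent does not give concrete values.
A = 0.002                      # weight for the eye-to-device distance S
B_BY_TYPE = {                  # per-type offsets B1, B2, ... for the second parameter
    "myopia": 0.4,
    "hyperopia": 0.5,
    "astigmatism": 0.45,
}
C = 0.0015                     # weight for the refractive power N

def target_value(distance_cm, error_type, refractive_power):
    # Second target formula: M = S*A + B + N*C
    return distance_cm * A + B_BY_TYPE[error_type] + refractive_power * C
```

With these assumed weights, a 500-degree myopic user 30 cm away gets M = 0.06 + 0.4 + 0.75 = 1.21.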
In the embodiments of the present application, at least one of the distance between the user and the terminal device, the type of refractive error, and the refractive power is treated as a factor affecting whether the user can see the target image clearly. On this basis, a target display size with a reasonable value can be calculated, meeting the needs of different people to see the target image clearly.
Optionally, the position parameters include the distance of the user relative to the terminal device, and before step 103 (determining the display parameters of the target scene information on the terminal device based on the user's eye parameters and the position parameters), the method further comprises:
an eye image of a user is acquired.
In this step, the user's eye image may be captured by another image capture unit of the terminal device. For example, the terminal device is provided with a front camera, and the user's eye image is captured through the front camera.
And determining a focus area of the user sight on a display screen of the terminal equipment according to the position of the pupil in the eye image.
In this step, the focus area is the area the user is attending to on the display screen; specifically, it can be understood as a preset range centered on a focal position on the terminal device's display screen, where the focal position is the intersection of the user's line of sight with the display screen when the user views it. It should be understood that, once the user's position relative to the terminal device is fixed, the position of the user's pupil within the eye changes according to where on the display screen the user is looking. If the user looks at the midpoint of the display screen, the pupil is located at the midpoint of the eye; when the user looks toward the upper right corner of the display screen, the pupil shifts toward the upper right corner of the eye, and when the user looks toward the lower left corner, the pupil shifts toward the lower left corner. On this basis, the focus area of the user's line of sight on the display screen can be determined from the position of the pupil within the eye.
Specifically, the eye and the display screen may be divided into the same number of regions using the same division rule, with each region of the eye corresponding to one region of the display screen in the same relative position. As shown in fig. 2 and fig. 3, the eye and the display screen may each be divided into a three-by-three grid of nine distinct regions. As shown in fig. 2, when the user's eye 21 looks at the first region 22 on the display screen, the pupil 31 is located in the second region 33 of the eye 32, as shown in fig. 3. Continuing with fig. 2, when the user's eye 21 looks at the third region 23 on the display screen, the pupil 31 is located in the fourth region 34 of the eye 32, as shown in fig. 3. On this basis, when it is determined from the user's eye image that the pupil 31 is located in the fourth region 34 of the eye 32, the focus area is the third region 23 in fig. 2; when the pupil 31 is located in the second region 33 of the eye 32, the focus area is the first region 22 in fig. 2. The division into nine regions is used here as an example and is not limiting.
It should be understood that, as the user's gaze sweeps across all positions on the display screen, the position of the pupil within the eye varies within a target range, and it is this target range that needs to be divided into regions. Specifically, multiple eye images of the user during daily use of the terminal device need to be collected in advance to determine the target range; these are eye images captured while the user looks at positions along the edges of the display screen.
The distance between the user and the terminal device is inversely related to the size of the target range: the closer the user is to the terminal device, the larger the target range; the farther the user is from the terminal device, the smaller the target range. Preferably, multiple eye images of the user using the terminal device at different distances are collected in advance, so as to determine the target range at each distance value. When determining the focus area of the user's line of sight on the display screen from the position of the pupil in the eye image, the corresponding target range is first determined from the user's current distance to the terminal device, the eye image is then divided into regions within that target range, the position of the pupil within the target range is determined, and finally the focus area on the display screen is determined from the pupil's position.
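The grid mapping described above (figs. 2-3) can be sketched as follows. The three-by-three grid, the coordinate convention, and the assumption that no horizontal mirroring is applied are illustrative; a real implementation would also need the pupil-detection step, which is outside this sketch.

```python
def grid_cell(x, y, width, height, n=3):
    # Return the (row, col) of the n*n grid cell containing point (x, y)
    # inside a region of the given width and height.
    col = min(int(x / width * n), n - 1)
    row = min(int(y / height * n), n - 1)
    return row, col

def focus_region(pupil_xy, eye_range_wh, n=3):
    # The pupil's cell within the eye's target range maps to the cell in the
    # same relative position on the display screen, as in figs. 2-3.
    # (A real implementation may need to mirror horizontally, since the front
    # camera sees the eye from the opposite side.)
    x, y = pupil_xy
    w, h = eye_range_wh
    return grid_cell(x, y, w, h, n)
```

A pupil near the corner of the target range thus selects the corresponding corner cell of the screen as the focus area.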
Correspondingly, step 103 above: based on the eye parameters and the position parameters of the user, determining the display parameters of the target scene information in the terminal device may include:
and calculating to obtain the target display size according to the eye parameters of the user and the distance between the user and the terminal equipment.
In this step, the process of calculating the target display size is the same as the process of calculating the target display size in the above embodiment, and will not be described again.
And calculating to obtain the local display size according to the area of the focus area, the area of the display screen and the target display size.
In this step, the target display size is larger than the preset display size of the terminal device, and the local display size is smaller than the preset display size. The preset display size is determined by the field of view of the terminal device: when a scene within the field of view is displayed at the preset display size, it can be displayed entirely on the display screen.
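The text gives only the constraints on the local display size (below the preset size, computed from the focus area, the screen area, and the target size), not a formula. One plausible sketch, under the assumption that the shrunken content should just fill the screen area left over after the focus region is magnified:

```python
def local_display_size(focus_area, screen_area, target_size, preset_size):
    # Hypothetical derivation: scale the non-focus content so it fits in the
    # screen area remaining after the magnified focus region is drawn.
    magnification = target_size / preset_size        # > 1 for impaired users
    focus_drawn = focus_area * magnification         # screen area used by focus content
    remaining_screen = max(screen_area - focus_drawn, 0.0)
    remaining_content = screen_area - focus_area     # content area outside the focus
    if remaining_content <= 0:
        return preset_size
    return preset_size * remaining_screen / remaining_content
```

For a nine-cell screen (focus area 1, screen area 9) and a target size 1.1 times the preset size, this yields a local size of 0.9875 times the preset size, slightly below it as required.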
Step 104 described above: displaying the target image according to the display parameters may include:
displaying the first image content by adopting a target display size and displaying the second image content by adopting a local display size according to the display parameters;
wherein the first image content includes image content of the target image located in the focus area, the second image content includes image content of the target image other than the first image content, and part or all of the second image content is displayed in the display screen.
It should be noted that by reducing the second image content after the first image content is enlarged, as much of the second image content as possible can still be displayed in the display screen. As shown in fig. 2 and fig. 4, when the user views the tree in the target image, the display size of the tree in the display area is increased, while the display sizes of the house and the vehicle are reduced.
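The per-region scaling can be sketched with a plain nearest-neighbour rescale over an image stored as a 2D list — a toy stand-in for real image data; the representation and the scale factors are illustrative only:

```python
def scale_nearest(img, factor):
    """Nearest-neighbour rescale of an image stored as a 2D list."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    # Each output pixel samples the nearest source pixel.
    return [[img[r * h // nh][c * w // nw] for c in range(nw)]
            for r in range(nh)]

# Enlarge the focus crop (first image content) and shrink the rest
# (second image content); compositing back onto the screen is omitted.
focus_crop = [[1, 2], [3, 4]]
enlarged = scale_nearest(focus_crop, 2.0)   # e.g. the tree, shown larger
rest = [[5, 6], [7, 8]]
shrunk = scale_nearest(rest, 0.5)           # e.g. houses and vehicles, smaller
```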
Fig. 5 is a schematic diagram of practical application of an image display method according to an embodiment of the present application, where a terminal device is used as a mobile phone to illustrate the method, including:
step 501: and receiving the target function triggered by the user and the input configuration information. The target function is a function of the method for realizing the image display on the mobile phone, and the configuration information comprises: type of ametropia and refractive power. For example, the user is near-sighted 500 degrees, the configuration information entered is: myopia, 500.
Step 502: and controlling a rear camera of the mobile phone to acquire target scene information in the shooting direction.
Step 503: human eye distance and eye image are detected. The eye distance comprises the distance between the eyes and the front camera of the mobile phone, and the eye image is a human eye image obtained by shooting a human face image by the front camera. And determining a focus area of the user on the mobile phone display screen, namely an area focused by the user, based on the eye image. And determining the target display size and the local display size based on the human eye distance and the configuration information input by the user. The target display size is larger than the preset display size of the terminal equipment, and the local display size is smaller than the preset display size.
Step 504: the display size of the first image content of the target image is adjusted to the target display size, and the display size of the second image content is adjusted to the partial display size. Wherein the first image content comprises image content of the target image that is located within the focal region and the second image content comprises image content of the target image other than the first image content.
Step 505: and displaying the processed target image.
In the embodiment of the application, the focus of attention of the user on the target image is determined based on the eye image when the user views the terminal equipment, and the display size of the scenery of the focus of attention on the target image is adjusted to be the target display size, so that the target image can be scaled more accurately. Further, the display size of the non-focused scenery is adjusted to be the local display size, so that the situation that other image contents are lost due to the fact that the display size of part of image contents in the target image is increased can be avoided.
Optionally, after displaying the first image content with the target display size and displaying the second image content with the partial display size according to the display parameter, the method may further include:
under the condition that the distance value of the focal area from the edge of the target side of the display screen is smaller than a preset threshold value, the shooting angle of a first image acquisition unit on the terminal equipment is adjusted to be deviated to the target side or the image acquisition unit for acquiring image information on the terminal equipment is adjusted to be a second image acquisition unit by the first image acquisition unit, wherein the first image acquisition unit is an image acquisition unit for acquiring target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
In this step, the preset threshold is small, and its specific value can be set according to the user's requirements. The situation that the distance value of the focal area from the target side edge of the display screen is smaller than the preset threshold can be regarded as scene information which the user wants to view not being presented in the target image. At this time, the terminal device may be adjusted to increase the scene information contained in the target image, or to directly collect the scene information required by the user. Taking the terminal device as a mobile phone for explanation, the first image acquisition unit for acquiring the target scene information may be a common or standard camera, whose field of view is smaller. The second image acquisition unit may be a wide-angle camera, whose field of view is larger, so that compared with a common camera a scene of a larger range can be shot. Of course, other situations can also be preset and regarded as scene information which the user wants to view not being presented in the target image, for example, the situation that the user's sight does not fall on the display screen, that is, when the user looks away from the display screen. In this case, the above-described operation is also performed, i.e., the shooting angle of the first image acquisition unit on the terminal device is adjusted, or the image acquisition unit that acquires image information on the terminal device is switched from the first image acquisition unit to the second image acquisition unit.
For example, when the distance value of the focal area from the upper edge of the mobile phone is smaller than the preset threshold, the shooting angle of the first image acquisition unit is adjusted to shift upwards, so that the scene information above that corresponding to the current target image can be included in the target image. Fig. 6 is a schematic diagram of a practical application of an embodiment of the present application, comprising steps 601 to 607, wherein steps 601 to 603 are the same as steps 501 to 503 in fig. 5, and steps 606 to 607 are the same as steps 504 to 505 in fig. 5, and are not described again. Step 604: determining whether to expand the visible area; if yes, step 605 is executed, and if not, step 606 is executed. Here, the visible area is the area shot by the rear camera. If the front camera detects that the user's line of sight extends beyond the edge of the mobile phone screen, it is judged that the user wants to further enlarge the visible area. Step 605: adjusting the angle of the rear camera, or automatically switching the working camera (for example, to a wide-angle lens), so that scene information of a larger range is captured.
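The edge test of step 604 and the adjustment choice of step 605 can be sketched as follows; the returned labels stand in for a real camera control API, which the embodiment does not specify:

```python
def camera_action(focus_rect, screen_w, screen_h, threshold_px):
    """Decide how to extend the visible area when the focus region
    sits within threshold_px of a screen edge.

    focus_rect is (x, y, width, height) in screen pixels. Instead of
    tilting, an implementation could equally switch from the standard
    camera to a wide-angle one with a larger field of view.
    """
    x, y, w, h = focus_rect
    if y < threshold_px:                        # near the upper edge
        return ("tilt", "up")
    if screen_h - (y + h) < threshold_px:       # near the lower edge
        return ("tilt", "down")
    if x < threshold_px:                        # near the left edge
        return ("tilt", "left")
    if screen_w - (x + w) < threshold_px:       # near the right edge
        return ("tilt", "right")
    return ("keep", None)                       # no expansion needed
```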
In the embodiment of the application, the user can enlarge the visible range without moving the terminal device, so that more scene information is displayed in the target image.
It should be noted that, in the method for displaying an image provided in the embodiment of the present application, the execution subject may be an image display device, or a control module for executing the method for displaying an image in the image display device. In the embodiment of the present application, a method for executing image display by an image display device is taken as an example, and the image display device provided by the embodiment of the present application is described.
As shown in fig. 7, an embodiment of the present application further provides an apparatus for displaying an image, where the apparatus includes:
the obtaining module 71 is configured to obtain a user eye parameter and a user position parameter relative to the terminal device, where the user eye parameter includes: at least one of a type of refractive error of the user and a refractive power of the user;
an acquisition module 72 for acquiring target scene information;
a parameter module 73, configured to determine a display parameter of the target scene information on the terminal device based on the eye parameter and the position parameter of the user, where the display parameter includes a target display size;
the display module 74 is configured to display a target image according to the display parameter, where the target image is an image corresponding to the target scene information, and a display size of a part of image content or all of image content of the target image is the same as the target display size.
Optionally, the location parameter includes at least a distance between the user and the terminal device, and the parameter module 73 is specifically configured to calculate the target display size according to the eye parameter of the user and the distance between the user and the terminal device.
Optionally, the location parameter includes a distance of the user relative to the terminal device, and the apparatus further includes:
the focal module is used for acquiring an eye image of a user; determining a focus area of the user sight on a display screen of the terminal equipment according to the position of the pupil in the eye image;
the parameter module 73 is specifically configured to calculate a target display size according to the eye parameter of the user and the distance between the user and the terminal device; calculating to obtain a local display size according to the area of the focus area, the area of the display screen and the target display size;
the display module 74 is specifically configured to display, according to the display parameter, the first image content with the target display size, and display the second image content with the local display size;
wherein the first image content includes image content of the target image located in the focus area, the second image content includes image content of the target image other than the first image content, and part or all of the second image content is displayed in the display screen.
Optionally, the apparatus further comprises:
the adjusting module is used for adjusting the shooting angle of the first image acquisition unit on the terminal equipment to shift to the target side or adjusting the image acquisition unit for acquiring image information on the terminal equipment to be a second image acquisition unit under the condition that the distance value of the focal area from the target side edge of the display screen is smaller than a preset threshold value, wherein the first image acquisition unit is an image acquisition unit for acquiring target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
In the embodiment of the application, the eye parameters of the user and the position parameters of the user relative to the terminal equipment can be obtained, wherein the eye parameters of the user comprise: at least one of a type of refractive error of the user and a refractive power of the user. Whether the user's vision is impaired or not and the degree of vision impairment in the case of vision impairment are determined based on the user's ocular parameters. Based on the position parameters of the user relative to the terminal equipment, the position relation between the user and the terminal equipment when the user uses the terminal equipment is determined. Target scene information is collected. And acquiring target scene information which the user wants to know through the terminal equipment. And determining display parameters of the target scene information in the terminal equipment based on the eye parameters and the position parameters of the user, wherein the display parameters comprise target display sizes. And taking the vision impaired condition of the user and the position relation between the user and the terminal equipment as the basis for calculating the target display size. And displaying the target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of part or all of the image content of the target image is the same as the target display size. The embodiment of the application presents the far scenery in the near terminal equipment through the target image. And determining the display size of the target image based on the eye parameters of the user and the position parameters of the user relative to the terminal equipment, so that the target image is clearly displayed in front of the user in a proper size. Especially for most people with ametropia, the far scene can be seen clearly even without wearing glasses.
The image display device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The image display device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image display device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 6, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in the embodiment of the present application, which includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and capable of running on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the method embodiment of image display, and the same technical effects are achieved, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 900 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 910 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 910 is configured to obtain a user eye parameter and a user position parameter relative to the electronic device 900, where the user eye parameter includes: at least one of a type of refractive error of the user and a refractive power of the user;
a sensor 905 for acquiring target scene information;
the processor 910 is further configured to determine, based on the eye parameter and the position parameter of the user, a display parameter of the target scene information in the terminal device, where the display parameter includes a target display size;
and a display unit 906, configured to display a target image according to the display parameter, where the target image is an image corresponding to the target scene information, and a display size of a part of or all of image contents of the target image is the same as the target display size.
In the embodiment of the application, a distant scenery is presented in a near terminal device through a target image. And determining the display size of the target image based on the eye parameters of the user and the position parameters of the user relative to the terminal equipment, so that the target image is clearly displayed in front of the user in a proper size. Especially for most people with ametropia, the far scene can be seen clearly even without wearing glasses.
It should be appreciated that in an embodiment of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, and the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen. The touch panel 9071 may include two parts, a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 909 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 910 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned image display method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the method embodiment of image display are realized, the same technical effects can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (6)

1. A method of displaying an image, the method comprising:
acquiring eye parameters of a user and position parameters of the user relative to terminal equipment, wherein the eye parameters of the user comprise: at least one of a type of refractive error of the user and a refractive power of the user;
collecting target scenery information; the target scenery information comprises data acquired by a rear camera on the terminal equipment;
determining display parameters of the target scene information on the terminal equipment based on the eye parameters of the user and the position parameters, wherein the display parameters comprise target display sizes;
displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of partial image content of the target image is the same as the target display size;
wherein the location parameter comprises a distance of the user relative to the terminal device, the method further comprising, prior to determining the display parameter of the target scene information at the terminal device based on the user eye parameter and the location parameter:
Acquiring an eye image of the user;
determining a focus area of the user sight on a display screen of the terminal equipment according to the position of the pupil in the eye image;
the determining, based on the user eye parameter and the position parameter, a display parameter of the target scene information in the terminal device includes:
according to the eye parameters of the user and the distance between the user and the terminal equipment, calculating to obtain the target display size;
calculating to obtain a local display size according to the area of the focus area, the area of the display screen and the target display size;
the displaying the target image according to the display parameters comprises the following steps:
displaying a first image content by adopting the target display size and displaying a second image content by adopting the local display size according to the display parameters;
wherein the first image content comprises image content of the target image, which is positioned in the focus area, and the second image content comprises image content of the target image except the first image content, and part or all of the second image content is displayed in the display screen; the target display size is larger than a preset display size, and the local display size is smaller than the preset display size; the preset display size is determined by the field of view of the terminal device.
2. The method of image display according to claim 1, wherein after displaying a first image content with the target display size and a second image content with the partial display size according to the display parameter, the method further comprises:
under the condition that the distance value of the focal area from the edge of the target side of the display screen is smaller than a preset threshold value, the shooting angle of a first image acquisition unit on the terminal equipment is adjusted to be deviated to the target side or the image acquisition unit for acquiring image information on the terminal equipment is adjusted to be a second image acquisition unit by the first image acquisition unit, wherein the first image acquisition unit is an image acquisition unit for acquiring the target scene information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
3. An apparatus for displaying an image, the apparatus comprising:
the device comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring eye parameters of a user and position parameters of the user relative to terminal equipment, and the eye parameters of the user comprise: at least one of a type of refractive error of the user and a refractive power of the user;
The acquisition module is used for acquiring target scenery information; the target scenery information comprises data acquired by a rear camera on the terminal equipment;
the parameter module is used for determining the display parameters of the target scenery information on the terminal equipment based on the eye parameters of the user and the position parameters, wherein the display parameters comprise target display sizes;
the display module is used for displaying a target image according to the display parameters, wherein the target image is an image corresponding to the target scene information, and the display size of part of image content of the target image is the same as the target display size;
the location parameter comprises a distance of the user relative to the terminal device, the apparatus further comprising:
the focal module is used for acquiring an eye image of the user; determining a focus area of the user sight on a display screen of the terminal equipment according to the position of the pupil in the eye image;
the parameter module is specifically configured to calculate the target display size according to the eye parameter of the user and the distance between the user and the terminal device; calculating to obtain a local display size according to the area of the focus area, the area of the display screen and the target display size;
The display module is specifically configured to display, according to the display parameter, a first image content with the target display size, and display a second image content with the local display size;
wherein the first image content comprises image content of the target image, which is positioned in the focus area, and the second image content comprises image content of the target image except the first image content, and part or all of the second image content is displayed in the display screen; the target display size is larger than a preset display size, and the local display size is smaller than the preset display size; the preset display size is determined by the field of view of the terminal device.
4. An apparatus for displaying an image according to claim 3, wherein the apparatus further comprises:
the adjusting module is used for adjusting the shooting angle of the first image acquisition unit on the terminal equipment to shift to the target side or adjusting the image acquisition unit for acquiring image information on the terminal equipment to a second image acquisition unit from the first image acquisition unit under the condition that the distance value of the focal area from the target side edge of the display screen is smaller than a preset threshold value, wherein the first image acquisition unit is the image acquisition unit for acquiring the target scenery information, and the field of view of the second image acquisition unit is larger than that of the first image acquisition unit.
5. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, performs the steps of the method of displaying an image as claimed in any one of claims 1-2.
6. A readable storage medium, characterized in that it stores thereon a program or instructions, which when executed by a processor, implement the steps of the method of displaying an image according to any of claims 1-2.
CN202110463669.4A 2021-04-26 2021-04-26 Image display method and device and electronic equipment Active CN113132642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463669.4A CN113132642B (en) 2021-04-26 2021-04-26 Image display method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113132642A CN113132642A (en) 2021-07-16
CN113132642B true CN113132642B (en) 2023-09-26

Family

ID=76780464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110463669.4A Active CN113132642B (en) 2021-04-26 2021-04-26 Image display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113132642B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253623B (en) * 2021-11-19 2024-01-19 惠州Tcl移动通信有限公司 Screen amplification processing method and device based on mobile terminal, terminal and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102610184A (en) * 2012-03-20 2012-07-25 华为技术有限公司 Method and device for adjusting display state
CN103760980A (en) * 2014-01-21 2014-04-30 Tcl集团股份有限公司 Display method, system and device for conducting dynamic adjustment according to positions of two eyes
CN104699250A (en) * 2015-03-31 2015-06-10 小米科技有限责任公司 Display control method, display control device and electronic equipment
WO2016094928A1 (en) * 2014-12-18 2016-06-23 Halgo Pty Limited Replicating effects of optical lenses
WO2017026942A1 (en) * 2015-08-11 2017-02-16 Chai Wei Kuo Andrew Apparatus for display adjustment and method thereof
CN107942514A (en) * 2017-11-15 2018-04-20 青岛海信电器股份有限公司 Image distortion correction method and device for a virtual reality device
WO2020219446A1 (en) * 2019-04-23 2020-10-29 Evolution Optiks Limited Digital display device comprising a complementary light field display or display portion, and vision correction system and method using same
WO2020219711A1 (en) * 2019-04-23 2020-10-29 Evolution Optiks Limited Light field display and vibrating light field shaping layer and vision testing and/or correction device
WO2021004138A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Screen display method, terminal device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9406253B2 (en) * 2013-03-14 2016-08-02 Broadcom Corporation Vision corrective display


Also Published As

Publication number Publication date
CN113132642A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US10271722B2 (en) Imaging to facilitate object observation
CN109633907B (en) Method for automatically adjusting brightness of monocular AR (augmented reality) glasses and storage medium
CN107272904B (en) Image display method and electronic equipment
US9961257B2 (en) Imaging to facilitate object gaze
US10642028B2 (en) Lens position adjustment in a wearable device
CN106843474B (en) Mobile terminal display processing method and system
CN107277375B (en) Self-photographing method and mobile terminal
WO2015043275A1 (en) Imaging for local scaling
CN111886564A (en) Information processing apparatus, information processing method, and program
CN110706283B (en) Calibration method and device for sight tracking, mobile terminal and storage medium
CN113132642B (en) Image display method and device and electronic equipment
CN107065190B (en) Method and device for displaying information on VR equipment and VR equipment
CN112702533B (en) Sight line correction method and sight line correction device
CN113495616A (en) Terminal display control method, terminal, and computer-readable storage medium
CN111857461A (en) Image display method and device, electronic equipment and readable storage medium
WO2023083279A1 (en) Photographing method and apparatus
CN114895790A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN114816065A (en) Screen backlight adjusting method, virtual reality device and readable storage medium
CN112528107A (en) Content data display method and device and server
CN111198611A (en) Method for determining sight line landing point, terminal and computer readable storage medium
CN112533071B (en) Image processing method and device and electronic equipment
CN112925497A (en) Content display method and device, electronic equipment and readable storage medium
US20230244307A1 (en) Visual assistance
CN115834858A (en) Display method and device, head-mounted display equipment and storage medium
CN117130572A (en) Display method, display device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant