CN109313797B - Image display method and terminal - Google Patents
- Publication number
- CN109313797B (application CN201680086444.9A / CN201680086444A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
An image display method and a terminal are provided. The image display method comprises the following steps: determining a sensitivity level, where the sensitivity level represents a user's ability to perceive the image-quality fineness of an image displayed on a terminal screen; selecting a target super-resolution algorithm corresponding to the sensitivity level, where a higher sensitivity level corresponds to an algorithm that produces finer image quality; and processing an input original image with the target super-resolution algorithm to obtain a target image, and displaying the target image on the terminal screen. Because the screen display effect is adjusted according to the user's sensitivity to the image-quality fineness of the displayed image, the method balances the user's visual experience against the terminal's power consumption, and can therefore reduce that power consumption to a certain extent.
Description
Technical Field
The present invention relates to the field of image display technologies, and in particular, to an image display method and a terminal.
Background
As demand on terminals for high-quality, high-definition image information has grown, terminal screen resolutions have risen, and 2K (2560 × 1440) and even 4K (3840 × 2160) screens have appeared. To drive such high-resolution screens, a terminal can store high-resolution images, process them, and output the result to the display, but this increases the terminal's power consumption.
Therefore, to reduce terminal power consumption while still supporting high-resolution output, Super Resolution (SR) technology is currently the main approach: a High Resolution (HR) image or video is generated from a Low Resolution (LR) one. All content stored on the terminal can then be low resolution (720p, 1080p, and so on) and is enlarged to 2K or 4K by a super-resolution algorithm before display, which both reduces power consumption and preserves the high-resolution display. In practice, however, there are many super-resolution algorithms, and they differ in the image quality they produce, the computation they require, and the terminal power they consume. Choosing a super-resolution algorithm that guarantees high-resolution display, meets the user's requirement for image-quality fineness, and still reduces terminal power consumption is therefore a pressing problem in the industry.
Disclosure of Invention
The embodiments of the present invention provide an image display method and a terminal that adjust the screen display effect according to the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen, balancing the user's visual experience against the terminal's power consumption and thereby reducing that power consumption to a certain extent.
A first aspect of the embodiments of the present invention discloses an image display method, comprising: determining a sensitivity level, where the sensitivity level represents a user's ability to perceive the image-quality fineness of an image displayed on a terminal screen; selecting a target super-resolution algorithm corresponding to the sensitivity level, where a higher sensitivity level corresponds to an algorithm that produces finer image quality; and processing an input original image with the target super-resolution algorithm to obtain a target image, and displaying the target image on the terminal screen.
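The selection step of the first aspect can be sketched as a small dispatch from a sensitivity value to an algorithm tier, followed by upscaling. This is a hypothetical illustration: the tier names, thresholds, and the toy nearest-neighbour upscaler are not specified by the patent, which only requires that a higher sensitivity selects an algorithm producing finer image quality.

```python
def select_sr_algorithm(sensitivity):
    """Map a sensitivity value in [0, 1] to a super-resolution tier.

    Thresholds and tier names are illustrative; the patent only requires
    that higher sensitivity selects a finer (and typically costlier) tier.
    """
    if sensitivity >= 0.8:
        return "neural-network"    # finest quality, highest power cost
    if sensitivity >= 0.4:
        return "bicubic"           # middle tier
    return "nearest-neighbour"     # cheapest tier

def upscale_nearest(image, factor):
    """Minimal nearest-neighbour upscale of a 2-D list of pixel values,
    standing in for the cheapest algorithm tier."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]  # widen each row
        out.extend([wide] * factor)                     # then repeat it
    return out

print(select_sr_algorithm(0.9))        # -> neural-network
print(upscale_nearest([[1, 2]], 2))    # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```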
In the embodiment, the screen display effect is adjusted according to the sensitivity of the user to the fineness of the image quality of the terminal screen display image, so that the balance between the visual experience of the user and the power consumption of the terminal can be realized, and the power consumption of the terminal can be reduced to a certain extent.
As an alternative embodiment, the determining of the sensitivity level comprises: acquiring the distance between the user's eyes and the camera, the relative stability between the camera and the user's eyes, and the difference between the ambient light intensity around the terminal and the brightness of the terminal screen; determining, from the distance, a first sensitivity value of the user to the image-quality fineness of the image displayed on the terminal screen, determining a second sensitivity value from the relative stability, and determining a third sensitivity value from the difference; and taking the weighted sum of the first, second, and third sensitivity values, each multiplied by its corresponding weight coefficient, as the target sensitivity value.
In this embodiment, the sensitivity of the user to the fineness of the image quality of the image displayed in the terminal screen can be determined by collecting the state information of the user or the state information of the environment where the terminal is located.
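The weighted combination above can be sketched in a few lines. The weight values are hypothetical — the patent leaves the weight coefficients unspecified:

```python
def target_sensitivity(s1, s2, s3, w1=0.4, w2=0.3, w3=0.3):
    """Weighted sum of the three per-factor sensitivity values.

    s1: derived from eye-to-camera distance,
    s2: derived from camera/eye relative stability,
    s3: derived from the ambient-light vs. screen-brightness difference.
    The weights (0.4, 0.3, 0.3) are illustrative placeholders.
    """
    return w1 * s1 + w2 * s2 + w3 * s3
```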
As an optional implementation, acquiring the relative stability between the camera and the user's eyes comprises: acquiring the distance between the user's eyes and the camera and/or the angle between the user's eyes and the camera; calculating a first variation of that distance within a first preset time and/or a second variation of that angle within a second preset time; and determining the relative stability from the first variation and/or the second variation, where the larger the first variation and/or the second variation, the lower the relative stability.
In this embodiment, the relative stability of the camera and the user's eyes may be determined by the amount of change in the distance and/or angle between the user's eyes and the camera.
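A minimal sketch of this variation-based stability measure, assuming the variations are taken as the spread of sampled values over the preset windows; the 1/(1 + variation) mapping is an illustrative choice, since the patent only requires that stability decrease as variation grows:

```python
def relative_stability(distances, angles):
    """Stability of the camera relative to the user's eyes.

    `distances` and `angles` are samples collected over the first and
    second preset time windows. Larger swings mean lower stability.
    """
    first_variation = max(distances) - min(distances)  # distance change
    second_variation = max(angles) - min(angles)       # angle change
    return 1.0 / (1.0 + first_variation + second_variation)
```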
As an optional implementation, acquiring the relative stability between the camera and the user's eyes comprises: acquiring the acceleration of the terminal; calculating a third variation of the acceleration within a third preset time; and determining the relative stability from the third variation, where the larger the third variation, the lower the relative stability.
In this embodiment, the relative stability of the camera and the eyes of the user can be determined by the amount of change in the acceleration of the terminal.
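The accelerometer-based variant can be sketched the same way; again the 1/(1 + variation) mapping is an assumption, not the patent's formula:

```python
def stability_from_acceleration(accel_samples):
    """Relative stability inferred from accelerometer magnitude samples
    taken over the third preset time window. A larger swing means the
    terminal (and hence its camera) is moving more relative to the
    user's eyes, so stability is lower."""
    third_variation = max(accel_samples) - min(accel_samples)
    return 1.0 / (1.0 + third_variation)
```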
As an alternative embodiment, before the determining of the sensitivity level, the method further comprises: detecting whether a virtual reality application in the terminal is in a started state; if it is, determining that the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen is the highest; if it is not, performing the determining of the sensitivity level.
This has the beneficial effect that the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen can be fixed at the maximum simply by detecting whether a virtual reality application in the terminal is in a started state.
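The branch above can be sketched as a short guard; the `MAX_SENSITIVITY` constant and function names are illustrative, not taken from the patent:

```python
MAX_SENSITIVITY = 1.0  # assumed top of the sensitivity scale

def sensitivity_with_vr_check(vr_app_started, determine_sensitivity):
    """With a VR app active the eyes are very close to the screen, so
    sensitivity is forced to the maximum; otherwise fall back to the
    normal determination (passed in as a callable)."""
    if vr_app_started:
        return MAX_SENSITIVITY
    return determine_sensitivity()

print(sensitivity_with_vr_check(True, lambda: 0.3))   # -> 1.0
print(sensitivity_with_vr_check(False, lambda: 0.3))  # -> 0.3
```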
As an optional implementation, after the detecting of whether the virtual reality application in the terminal is in a started state and before the determining of the sensitivity level, the method further comprises: if the virtual reality application is not in a started state, acquiring a target image within the field of view of the camera; and identifying whether the target image contains a face image. The determining of the sensitivity level then comprises: if the target image does not contain a face image, determining that the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen is the lowest; and if it does contain a face image, determining the sensitivity of the user corresponding to the face image to the image-quality fineness of the image displayed on the terminal screen.
This embodiment has the beneficial effect of detecting whether a potential user is within the field of view of the terminal's front camera, that is, within a range from which the terminal screen can be viewed.
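A hypothetical sketch of this face-gated branch; the `MIN_SENSITIVITY` constant, the list-of-faces representation, and the per-user callable are assumptions for illustration:

```python
MIN_SENSITIVITY = 0.0  # assumed bottom of the sensitivity scale

def sensitivities_with_face_check(face_images, per_user_sensitivity):
    """No face detected -> nobody can be watching, so use the lowest
    sensitivity (cheapest algorithm). Otherwise evaluate the sensitivity
    of each user corresponding to a detected face image."""
    if not face_images:
        return [MIN_SENSITIVITY]
    return [per_user_sensitivity(face) for face in face_images]
```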
As an optional implementation, after the identifying of whether the target image contains a face image and before the determining of the sensitivity of the user corresponding to the face image, the method further comprises: if the target image contains face images, acquiring, for each face image, the angle between the head orientation of the corresponding user and the terminal screen; and counting the number N of users whose angle exceeds a preset angle threshold. The determining of the sensitivity then comprises: if N is zero, determining that the users' sensitivity to the image-quality fineness of the image displayed on the terminal screen is the lowest; and if N is greater than zero, determining the sensitivity of the users corresponding to the face images whose angle exceeds the preset angle threshold.
This embodiment has the advantage that whether a user may be watching the terminal screen is determined from the angle between the user's head orientation and the terminal screen.
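Counting the users whose head orientation clears the threshold can be sketched as follows; the 60-degree default is an illustrative placeholder, since the patent does not fix the preset angle threshold:

```python
def count_facing_users(head_angles_deg, threshold_deg=60.0):
    """Return N, the number of users whose head-to-screen angle exceeds
    the preset threshold, i.e. who are plausibly facing the screen.
    The 60-degree default is illustrative, not from the patent."""
    return sum(1 for angle in head_angles_deg if angle > threshold_deg)
```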
As an optional implementation, after the identifying of whether the target image contains a face image and before the determining of the sensitivity of the user corresponding to the face image, the method further comprises: if the target image contains face images, acquiring the gaze direction of the user corresponding to each face image; and counting the number M of users whose gaze direction does not deviate from the display area of the terminal screen. The determining of the sensitivity then comprises: if M is zero, determining that the users' sensitivity to the image-quality fineness of the image displayed on the terminal screen is the lowest; and if M is greater than zero, determining the sensitivity of the users corresponding to the face images whose gaze direction does not deviate from the display area of the terminal screen.
This embodiment has the advantage that whether a user may be looking at the terminal screen is determined from the position of the user's eyes relative to the screen and the user's gaze direction.
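The gaze-direction count can be sketched as a point-in-rectangle test; representing each user's estimated gaze as an (x, y) point in screen coordinates is an illustrative simplification of real gaze tracking:

```python
def count_gazing_users(gaze_points, screen_width, screen_height):
    """Return M, the number of users whose estimated gaze point lands
    inside the screen's display area. `gaze_points` are (x, y) pairs in
    screen coordinates; this stand-in ignores real 3-D gaze geometry."""
    return sum(1 for (x, y) in gaze_points
               if 0 <= x <= screen_width and 0 <= y <= screen_height)
```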
As an optional implementation, before the selecting of the target super-resolution algorithm corresponding to the sensitivity level, the method further comprises: acquiring the maximum sensitivity value among the sensitivities of multiple users to the image-quality fineness of the image displayed on the terminal screen. The selecting then comprises: selecting the target super-resolution algorithm corresponding to that maximum sensitivity value.
This has the advantage that, when multiple users are watching the terminal screen, the target super-resolution algorithm is chosen for the user with the highest sensitivity to the image-quality fineness of the displayed image.
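The multi-viewer rule reduces to a one-liner: serve the most demanding viewer. The selection callable is passed in, since the patent does not fix the sensitivity-to-algorithm mapping:

```python
def algorithm_for_viewers(sensitivities, select_algorithm):
    """With several viewers, pick the algorithm for the maximum
    sensitivity value so the most demanding viewer is satisfied."""
    return select_algorithm(max(sensitivities))
```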
A second aspect of the embodiments of the present invention discloses a terminal, which includes: the image processing device comprises a processor and a memory, wherein the memory stores executable program codes, an original image and a target image obtained by processing the original image; the processor is configured to perform the following operations: determining a sensitivity degree, wherein the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen; selecting a target super-resolution algorithm corresponding to the sensitivity degree, wherein the higher the sensitivity degree is, the better the image quality fineness of the image obtained by the target super-resolution algorithm corresponding to the sensitivity degree is; and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image to the terminal screen.
As an alternative embodiment, the processor determining the sensitivity level comprises: acquiring the distance between the user's eyes and the camera, the relative stability between the camera and the user's eyes, and the difference between the ambient light intensity around the terminal and the brightness of the terminal screen; determining, from the distance, a first sensitivity value of the user to the image-quality fineness of the image displayed on the terminal screen, determining a second sensitivity value from the relative stability, and determining a third sensitivity value from the difference; and taking the weighted sum of the first, second, and third sensitivity values, each multiplied by its corresponding weight coefficient, as the target sensitivity value.
As an optional implementation, the processor obtains the relative stability of the camera and the user's eyes, and is specifically configured to perform the following operations: acquiring the distance between the eyes of the user and the camera and/or acquiring the angle between the eyes of the user and the camera; calculating a first variation of a distance between the eyes of the user and the camera within a first preset time and/or calculating a second variation of an angle between the eyes of the user and the camera within a second preset time; and determining the relative stability of the camera and the eyes of the user according to the first variation and/or the second variation, wherein the larger the first variation and/or the larger the second variation, the lower the relative stability.
As an optional implementation, the processor obtains the relative stability of the camera and the user's eyes, and is specifically configured to perform the following operations: acquiring the acceleration of the terminal; calculating a third variation of the acceleration within a third preset time; and determining the relative stability of the camera and the eyes of the user according to the third variation, wherein the larger the third variation is, the lower the relative stability is.
As an alternative embodiment, before the determining of the sensitivity level, the processor is further configured to perform the following operations: detecting whether a virtual reality application in the terminal is in a started state; if it is, determining that the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen is the highest; if it is not, performing the determining of the sensitivity level.
As an optional implementation manner, after the detecting whether the virtual reality application in the terminal is in the startup state and before the determining a sensitivity level, the processor is further configured to: if the virtual reality application program is not in a starting state, acquiring a target image in a field range of the camera; identifying whether the target image contains a face image or not; the processor determining a sensitivity level comprises: if the image does not contain the face image, determining that the sensitivity of a user to the image quality fineness of the image displayed in the terminal screen is the lowest; and if the terminal screen contains the face image, determining the sensitivity degree of a user corresponding to the face image to the image quality fineness of the display image in the terminal screen.
As an optional implementation manner, after the identifying whether the target image includes a face image, and before the determining the sensitivity of the user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen, the processor is further configured to perform the following operations: if the target image contains the face image, acquiring an included angle between the head orientation of the user corresponding to each face image contained in the target image and the terminal screen; calculating the number N of users with the included angles larger than a preset angle threshold; the processor determines the sensitivity degree of a user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen, and the sensitivity degree comprises the following steps: if the number N of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest; and if the number N of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images with the included angles larger than the preset angle threshold value to the image quality fineness of the display images in the terminal screen.
As an optional implementation manner, after the identifying whether the target image includes a face image, and before the determining the sensitivity of the user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen, the processor is further configured to perform the following operations: if the target image contains the face image, the sight line direction of the user corresponding to each face image contained in the target image is obtained; calculating the number M of users of which the sight directions do not deviate from the display area of the terminal screen; the processor determines the sensitivity degree of a user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen, and the sensitivity degree comprises the following steps: if the number M of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest; and if the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen.
As an optional implementation, before the selecting the target super-resolution algorithm corresponding to the sensitivity level, the processor is further configured to: acquiring the maximum sensitivity value in the sensitivity of a plurality of users to the image quality fineness of the display image in the terminal screen; the processor selecting a target super-resolution algorithm corresponding to the sensitivity level comprises: and selecting a target super-resolution algorithm corresponding to the maximum sensitivity value.
A third aspect of the embodiments of the present invention discloses a terminal, including: the first determining unit is used for determining a sensitivity degree, and the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen; the selecting unit is used for selecting the target super-resolution algorithm corresponding to the sensitivity degree, and the higher the sensitivity degree is, the better the image quality fineness obtained by the target super-resolution algorithm corresponding to the sensitivity degree is; the image processing unit is used for processing the input original image by using the target super-resolution algorithm to obtain a target image; and the display unit is used for displaying the target image to the terminal screen.
As an optional implementation, the first determining unit includes: a first acquisition unit, configured to acquire the distance between the user's eyes and the camera; a second acquisition unit, configured to acquire the relative stability between the camera and the user's eyes; a third acquisition unit, configured to acquire the difference between the ambient light intensity around the terminal and the brightness of the terminal screen; and a first determining subunit, configured to determine, from the distance, a first sensitivity value of the user to the image-quality fineness of the image displayed on the terminal screen, to determine a second sensitivity value from the relative stability, and to determine a third sensitivity value from the difference. The first determining subunit is further configured to take the weighted sum of the first, second, and third sensitivity values, each multiplied by its corresponding weight coefficient, as the target sensitivity value.
As an optional implementation, the second obtaining unit includes: the second acquisition subunit is used for acquiring the distance between the eyes of the user and the camera and/or acquiring the angle between the eyes of the user and the camera; the first calculating unit is used for calculating a first variation of the distance between the eyes of the user and the camera within a first preset time and/or calculating a second variation of the angle between the eyes of the user and the camera within a second preset time; and the second determining subunit is used for determining the relative stability of the camera and the eyes of the user according to the first variation and/or the second variation, and the greater the first variation and/or the greater the second variation, the lower the relative stability.
As an optional implementation, the second obtaining unit includes: the second acquisition subunit, configured to acquire the acceleration of the terminal; the first calculating unit, configured to calculate a third variation of the acceleration within a third preset time; and the second determining unit, configured to determine the relative stability between the camera and the user's eyes from the third variation, where the larger the third variation, the lower the relative stability.
As an optional implementation, the terminal further includes a detection unit, configured to detect whether a virtual reality application in the terminal is in a started state. The first determining unit is further configured to determine that the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen is the highest when the virtual reality application is in a started state, and to perform the determining of the sensitivity level when it is not.
As an optional implementation manner, the terminal further includes: the fourth acquisition unit is used for acquiring a target image in the field range of the camera when the virtual reality application program is not in a starting state; the face recognition unit is used for recognizing whether the target image contains a face image or not; the first determining unit is further used for determining that the sensitivity of a user to the fineness of the image quality of the image displayed in the terminal screen is the lowest when the image does not contain the face image; and when the image contains the face image, determining the sensitivity degree of a user corresponding to the face image to the image quality fineness of the display image in the terminal screen.
As an optional implementation manner, the terminal further includes: a fifth obtaining unit, configured to obtain, when a face image is included, an included angle between a head orientation of a user corresponding to each face image included in the target image and the terminal screen; the second calculation unit is used for calculating the number N of users with the included angles larger than a preset angle threshold; the first determining unit is further configured to determine that the user has the lowest sensitivity to the fineness of the image quality of the image displayed in the terminal screen when the number N of users is zero; and when the number N of the users is greater than zero, determining the sensitivity degree of the users corresponding to the face images with the included angles greater than the preset angle threshold value to the image quality fineness of the display images in the terminal screen.
As an optional implementation manner, the terminal further includes: a sixth acquiring unit, configured to acquire, when a face image is included, a gaze direction of a user corresponding to each face image included in the target image; a third calculating unit, configured to calculate the number M of users whose gaze directions do not deviate from a display area of the terminal screen; the first determining unit is further configured to determine that the user has the lowest sensitivity to the fineness of the image quality of the image displayed in the terminal screen when the number M of users is zero; and when the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen.
As an optional implementation manner, the terminal further includes: a seventh obtaining unit, configured to obtain a maximum sensitivity value among sensitivity degrees of multiple users to image quality fineness of a display image in the terminal screen; the selection unit is specifically configured to select a target super-resolution algorithm corresponding to the maximum sensitivity value.
A fourth aspect of the embodiments of the present invention discloses a computer storage medium storing computer software instructions, the instructions comprising a program designed to execute the method of the first aspect.
It should be understood that the technical solutions of the second to fourth aspects of the embodiments of the present invention are consistent with that of the first aspect and achieve similar beneficial effects, and are therefore not described again.
Compared with the prior art, the solutions of the embodiments of the present invention determine the user's sensitivity to the image-quality fineness of the image displayed on the terminal screen and select the target super-resolution algorithm corresponding to that sensitivity, where a higher sensitivity corresponds to an algorithm producing finer image quality; the input original image is then processed with the target super-resolution algorithm to obtain a target image, which is displayed on the terminal screen. The screen display effect can thus be adjusted according to the user's sensitivity, balancing the user's visual experience against the terminal's power consumption and reducing that power consumption to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is an exemplary implementation scenario disclosed in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image displaying method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another image display method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another image displaying method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another image displaying method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another terminal disclosed in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, or apparatus.
The embodiment of the invention discloses an image display method and a terminal, which can adjust the screen display effect according to the user's sensitivity to the image quality fineness of the image displayed on the terminal screen, and can balance the user's visual experience against the terminal's power consumption, thereby reducing the power consumption of the terminal to a certain extent. Referring to fig. 1, fig. 1 exemplarily shows a usage scenario of the present invention. When the user focuses on the terminal screen, as shown in the left diagram, that is, when the user's sensitivity to the image quality fineness of the displayed image is high, a super-resolution algorithm yielding relatively high image quality fineness is selected to generate the displayed image. Conversely, when the user's degree of attention is lower, as shown in the right diagram, that is, when the user's sensitivity to the image quality fineness of the displayed image is low, a super-resolution algorithm yielding relatively low image quality fineness is selected to generate the displayed image. The details are described below.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image display method according to an embodiment of the present invention. The image display method shown in fig. 2 may include the following steps:
101: determining a sensitivity degree, wherein the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen;
in the embodiment of the present invention, the terminal may include devices such as a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or a Mobile Internet Device (MID); such devices will not be enumerated again in the following description. The terminal may comprise at least one processor and may operate under the control of the at least one processor.
As an alternative implementation manner, determining the sensitivity in step 101, where the sensitivity represents the user's ability to perceive the image quality fineness of the image displayed on the terminal screen, may include the following operations:
acquiring the distance between eyes of a user and a camera, acquiring the relative stability between the camera and the eyes of the user, and acquiring the difference value between the ambient light intensity value of the terminal and the brightness value of the terminal screen;
determining a first sensitivity value of a user to the image quality fineness of the display image in the terminal screen according to the distance, determining a second sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the relative stability, and determining a third sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the difference;
and taking the sum of the first sensitivity value, the second sensitivity value and the product of the third sensitivity value and the corresponding weight coefficient as a target sensitivity value.
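The weighted combination of the three sensitivity values described above can be sketched as follows; the weight values and the 0–1 sensitivity scale are illustrative assumptions, not values specified by the embodiment:

```python
def target_sensitivity(s1, s2, s3, weights=(0.4, 0.3, 0.3)):
    """Combine the distance-based (s1), stability-based (s2), and
    light-based (s3) sensitivity values into one target sensitivity
    value by multiplying each by its weight coefficient and summing.
    The weights here are hypothetical and would be calibrated."""
    w1, w2, w3 = weights
    return s1 * w1 + s2 * w2 + s3 * w3
```

For example, three equal mid-scale sensitivity values with weights summing to one yield a mid-scale target value.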
The distance between the user's eyes and the camera may be obtained in several ways: by an infrared sensor, a depth camera, or a distance sensor in the terminal; by dual front-facing cameras, calculated from the angles between the eyes and the two cameras together with the distance between the two cameras; or estimated from a face image captured by a single front-facing camera, either from the interpupillary distance or from the size of the face image. The first sensitivity value of the user with respect to the image quality fineness of the displayed image may then be determined from this distance, for example by setting different distance ranges, each corresponding to one sensitivity value, or by establishing a functional relation between distance and sensitivity value through test calibration, where the larger the distance between the user's eyes and the camera, the lower the first sensitivity value.
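A minimal sketch of the range-based mapping just described; the distance ranges and sensitivity values are hypothetical calibration data, not values from the embodiment:

```python
def first_sensitivity_from_distance(distance_cm):
    """Map the eye-to-camera distance to a first sensitivity value
    via a table of (upper bound, value) ranges: the farther the eyes
    are from the camera, the lower the sensitivity value.
    All ranges and values below are illustrative assumptions."""
    ranges = [(20, 1.0), (35, 0.8), (50, 0.5), (80, 0.3)]
    for upper_cm, value in ranges:
        if distance_cm <= upper_cm:
            return value
    return 0.1  # beyond all calibrated ranges: lowest sensitivity
```

In a real terminal these ranges would be replaced by the function of distance established through test calibration.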
Optionally, obtaining the relative stability of the camera and the user's eyes may include the following:
acquiring the distance between the eyes of the user and the camera and/or acquiring the angle between the eyes of the user and the camera;
calculating a first variable quantity of a distance between the eyes of the user and the camera within first preset time and/or calculating a second variable quantity of an angle between the eyes of the user and the camera within second preset time;
and determining the relative stability of the camera and the eyes of the user according to the first variation and/or the second variation, wherein the relative stability is lower when the first variation is larger and/or the second variation is larger.
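The three steps above can be sketched as a single scoring function; the scale factors that normalize distance and angle variation are hypothetical values, not taken from the embodiment:

```python
def relative_stability(dist_start, dist_end, angle_start, angle_end,
                       dist_scale=10.0, angle_scale=15.0):
    """Score relative stability in [0, 1] from the first variation
    (change in eye-to-camera distance, cm, over the first preset time)
    and second variation (change in eye-to-camera angle, degrees, over
    the second preset time). Larger variation means lower stability.
    The two scale factors are illustrative assumptions."""
    first_variation = abs(dist_end - dist_start)
    second_variation = abs(angle_end - angle_start)
    penalty = first_variation / dist_scale + second_variation / angle_scale
    return 1.0 - min(1.0, penalty)
```

A motionless user scores 1.0; large movements drive the score toward 0.0.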
Wherein, because the spatial position between the camera and the terminal screen on the terminal is fixed, determining the relative stability of the camera and the eyes of the user can also represent the relative stability of the eyes of the user and the terminal screen.
In the embodiment of the present invention, when the user watches the terminal screen, the user mainly faces the terminal screen; therefore, it is the relative stability between the front camera and the user's eyes that needs to be obtained.
The angle between the eyes of the user and the camera can be calculated through the deviation of an image area where the eyes of the user are located in an image acquired by the front camera relative to the central point of the whole image, wherein the angle between the eyes of the user and the camera is specifically the angle between a straight line from the eyes to the camera and the central axis of the camera (the central axis is perpendicular to the terminal screen), and specifically represents the angle in the horizontal direction with the central axis of the camera and the angle in the vertical direction with the central axis of the camera.
The first preset time and the second preset time can be determined by experiments or determined by empirical values.
The relative stability of the camera and the user's eyes may thus be determined from the variation of the distance between the eyes and the camera, from the variation of the angle between the eyes and the camera, or from both variations at the same time; any of these modes may be adopted, and no unique limitation is imposed. The second sensitivity value of the user with respect to the image quality fineness of the displayed image may then be determined from this relative stability, for example by setting different stability ranges, each corresponding to one sensitivity value, or by establishing a functional relation between relative stability and sensitivity value through test calibration, where the lower the relative stability, the lower the second sensitivity value.
Optionally, obtaining the relative stability of the camera and the user's eyes may further include:
acquiring the acceleration of the terminal;
calculating a third variation of the acceleration within a third preset time;
and determining the relative stability of the camera and the eyes of the user according to the third variation, wherein the larger the third variation is, the lower the relative stability is.
The acceleration of the terminal can be determined from the variation of the values obtained by position-related sensors in the terminal, such as an acceleration sensor, a gravity sensor, or a gyroscope, for example by differentiating the sensor values with respect to time. Under uniform acceleration, where the direction of the acceleration is fixed and its value is constant, the relative stability between the camera and the user's eyes can be maintained; if the direction or the value of the acceleration changes greatly, the relative stability decreases accordingly. The relative stability between the camera and the user's eyes can also be determined from the sum of the accelerations of the three components of the terminal in the X, Y and Z directions, obtained by the acceleration sensor and the gyroscope.
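The acceleration-based check above can be sketched as follows; the variation threshold and the use of the acceleration magnitude spread as the "third variation" are illustrative assumptions:

```python
import math

def stable_from_acceleration(samples, threshold=2.0):
    """Decide whether the camera and the user's eyes stay relatively
    stable from accelerometer samples (ax, ay, az) collected over the
    third preset time: compute the magnitude of each sample and treat
    the spread of magnitudes as the third variation. A spread above
    the (hypothetical) threshold indicates low stability."""
    magnitudes = [math.sqrt(ax * ax + ay * ay + az * az)
                  for ax, ay, az in samples]
    third_variation = max(magnitudes) - min(magnitudes)
    return third_variation <= threshold  # True: stable enough
```

A terminal resting in the hand reports only gravity and passes; shaking produces a large magnitude spread and fails.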
The ambient light intensity value of the terminal and the brightness value of the terminal screen can be obtained through photosensitive components in the terminal, such as a photosensitive chip, a camera, or a light sensor. If the ambient light intensity value is larger than the brightness value of the terminal screen, the difference between the two is obtained; the larger this difference, the lower the user's sensitivity to the image quality fineness of the image displayed on the terminal screen. The third sensitivity value may be determined from this difference, for example by setting different difference ranges, each corresponding to one sensitivity value, or by establishing a functional relation between the light intensity difference and the sensitivity value through test calibration, where the larger the difference, the lower the third sensitivity value.
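The difference-range mapping just described might look like the following sketch; the lux ranges and sensitivity values are hypothetical calibration data:

```python
def third_sensitivity_from_light(ambient_lux, screen_brightness_lux):
    """Map the excess of ambient light intensity over terminal screen
    brightness to a third sensitivity value: the larger the difference,
    the lower the value. All cut-offs below are illustrative
    assumptions to be fixed by calibration."""
    diff = max(0.0, ambient_lux - screen_brightness_lux)
    if diff < 100:
        return 1.0   # screen easily readable: full sensitivity
    if diff < 500:
        return 0.6   # moderate washout
    return 0.2       # strong ambient light: detail hard to perceive
```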
As an optional implementation manner, a weight may be determined for each manner of obtaining the sensitivity level, and the last sensitivity level value is a result of adding the sensitivity level values obtained by the respective manners after multiplying the sensitivity level values by the corresponding weights.
102: selecting a target super-resolution algorithm corresponding to the sensitivity degree, wherein the higher the sensitivity degree is, the better the image quality fineness of the image obtained by the target super-resolution algorithm corresponding to the sensitivity degree is;
the higher the sensitivity, the higher the user's attention to the details of the image displayed on the terminal screen, and the better the image quality fineness required of the image obtained by the target super-resolution algorithm.

The better the target super-resolution algorithm, the more excellent the fineness of the image quality it produces; that is, the finer the details of the output display image and the higher its definition.
For example, the super-resolution algorithms commonly used at present are generally classified into interpolation-based algorithms and learning-based algorithms. Among the interpolation-based SR methods, the simplest interpolate from the spatial average and convolution of image pixel values, such as bilinear interpolation, bicubic interpolation, and polyphase interpolation. These interpolation algorithms need only a few multiply-add operations, occupy a small resource area (about 100,000 logic gates), and have correspondingly low power consumption. However, their effect is poor, and phenomena such as edge jaggies and image blurring easily occur, so the display effect of an image obtained with such a super-resolution algorithm is poor.
Among the interpolation-based SR methods there are also content-adaptive methods, such as NEDI (New Edge-Directed Interpolation), SAI (Soft-decision Adaptive Interpolation), and double interpolation. By adapting the interpolation to the image content, these may improve the jaggedness of image edges and the image definition to some extent compared with the simple interpolation algorithms mentioned above, but they are prone to side effects (such as black-and-white edges and ringing), and the details are difficult to improve further. Their computation is relatively complex, the resource area is larger (about 500,000 logic gates), and the corresponding power consumption is higher; the display effect of an image obtained with such a super-resolution algorithm is better than that of the first class.
Learning-based SR algorithms typically use a training set of images to generate a learning model, which is used to create more high-frequency information for the input low-resolution image. Such an algorithm can therefore recover more high-frequency detail, gives better image sharpness, and can approach the effect of a true high-definition image. However, it needs a large storage database (a dictionary database or a weight database corresponding to images, etc.) that is accessed frequently, its resource area is large compared with the first two classes (about 1.5 to 2 million logic gates), and its power consumption is also large; nevertheless, the display effect of an image obtained with this super-resolution algorithm is better than that of the second class.
From the above analysis, it can be seen that the fineness of the image quality obtained by the SR algorithm based on the learning class is better than that of the SR algorithm based on the content adaptation in the interpolation, and the fineness of the image quality obtained by the SR algorithm based on the content adaptation in the interpolation is better than that of the SR algorithm based on the interpolation based on the spatial average value and convolution of the image pixel values.
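The three-tier ordering above suggests a simple selection rule; the sensitivity cut-off values below are illustrative assumptions, and the tier labels merely name the three classes discussed in the text:

```python
def select_sr_algorithm(sensitivity):
    """Pick a super-resolution algorithm class from a sensitivity
    value on a 0-1 scale: higher sensitivity selects the class with
    finer image quality (and higher power cost). The thresholds are
    hypothetical, not values specified by the embodiment."""
    if sensitivity >= 0.7:
        return "learning-based SR"               # finest quality, most power
    if sensitivity >= 0.4:
        return "content-adaptive interpolation"  # NEDI/SAI class
    return "simple interpolation"                # bilinear/bicubic class
```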
It should be noted that the above description is only an example showing the difference between the super-resolution algorithms, and the super-resolution algorithm used in the embodiment of the present invention is not limited to the SR algorithm based on the learning class or the SR algorithm based on the interpolation.
103: and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image on a terminal screen.
In the embodiment of the invention, after the input original image is processed by the target super-resolution algorithm, the processed target image is displayed on the terminal screen, the screen display effect can be adjusted according to the sensitivity of a user to the image quality fineness of the terminal screen display image, and the balance between the visual experience of the user and the power consumption of the terminal can be realized.
Therefore, by using the image display method described in fig. 2, the screen display effect can be adjusted according to the sensitivity of the user to the fineness of the image quality of the terminal screen display image, and the balance between the user visual experience and the terminal power consumption can be achieved, so that the terminal power consumption can be reduced to a certain extent.
Referring to fig. 3, fig. 3 is a schematic flow chart of another image display method according to an embodiment of the invention. In the image display method described in fig. 3, the following steps may be included:
200: detecting whether a virtual reality application program in a terminal is in a starting state;
201: if the virtual reality application is in the starting state, determining that the user's sensitivity to the image quality fineness of the image displayed on the terminal screen is the highest, and executing steps 210 to 211;
if the terminal is running a virtual reality application, the distance between the terminal and the user is small, and the relative position of the terminal and the head is maintained even when the user moves the head irregularly during the virtual reality experience; therefore, the sensitivity is determined to be the highest in the virtual reality mode.
202: if the virtual reality application is not in the starting state, acquiring a target image within the field of view of the camera;
the camera is specifically a front camera in the terminal.
203: identifying whether the target image contains a face image or not;
in the embodiment of the invention, a face recognition mode can be adopted to judge whether the target image contains the face image, namely whether a potential user is in the range of the viewable terminal screen is detected. Most of the current terminals have front cameras, so that face recognition can be carried out according to images shot by the front cameras.
As an alternative embodiment, the face recognition technology used may include, but is not limited to, the following:
and the template reference method is that one or more templates of the standard human face are designed in advance, then the matching degree between the picture shot by the front camera and the template of the standard human face is calculated, and if the matching degree exceeds a preset threshold value, the human face is determined to be recognized.
The face rule method is that a corresponding rule is generated by using the structural distribution characteristics of a face, and a picture shot by a front camera is matched with the rule to perform face recognition.
The sample learning method is to learn the face image sample set and the non-face image sample set and train a classifier by adopting an artificial neural network method, and input the picture shot by the front camera into the classifier to obtain the classification result of whether the picture contains the face.
The skin color model method is to detect whether the picture shot by the front camera contains the human face according to the rule that the skin color of the face is relatively concentrated in the color space.
And the characteristic sub-face method is to regard all face image sets as a face image sub-space, and judge whether the pictures shot by the front camera contain the face or not based on the distance between the pictures shot by the front camera and the projection of the face image sub-space.
Since the face recognition technology is a mature technology, it is not described in detail here.
204: if the face image is not included, determining that the sensitivity of the user to the image quality fineness of the display image in the terminal screen is the lowest, and executing the step 210 to the step 211;
in the embodiment of the invention, if the target image captured by the front camera does not contain a face image, no user is watching the terminal screen. To save terminal power consumption, the user's sensitivity to the image quality fineness of the displayed image is determined to be the lowest, and the target super-resolution algorithm corresponding to the lowest sensitivity is selected; the image quality fineness obtained with this algorithm is the lowest.
205: if the target image contains the face image, acquiring an included angle between the head orientation of the user corresponding to each face image contained in the target image and a terminal screen;
in the embodiment of the invention, the user can see the content displayed on the terminal screen only when the angle between the user's head orientation and the terminal screen is within a suitable range. The angle between the head orientation and the terminal screen can be obtained with a statistical classifier (such as a support vector machine, an artificial neural network, or a hidden Markov model): a series of classifiers corresponding to different head orientations is trained on known images, the head image currently captured by the front camera of the terminal is classified by the trained classifiers, and the head orientation angle corresponding to the classifier with the highest confidence value is taken as the angle between the user's head orientation and the terminal screen.
206: calculating the number N of users with included angles larger than a preset angle threshold;
in the embodiment of the present invention, when the angle between the head direction of the user and the terminal screen is greater than a certain angle threshold, the user can see the content displayed on the terminal screen, for example, when the head of the user is directly opposite to the terminal screen, the angle is 90 degrees, and the angle threshold can be determined through experiments.
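Counting the users whose head-to-screen angle exceeds the threshold can be sketched as follows; the default threshold is a hypothetical value that, per the text, would be fixed by experiment:

```python
def count_facing_users(head_angles_deg, angle_threshold=60.0):
    """Count users whose head-orientation angle with the terminal
    screen exceeds the preset threshold (degrees). 90 degrees means
    the head directly faces the screen; the 60-degree default is an
    illustrative assumption."""
    return sum(1 for angle in head_angles_deg if angle > angle_threshold)
```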
207: if the number N of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest, and executing the step 210 to the step 211;
if the number N of users is zero, no user is watching the content displayed on the terminal screen. To save terminal power consumption, the user's sensitivity to the image quality fineness of the displayed image is determined to be the lowest, and the target super-resolution algorithm corresponding to the lowest sensitivity is selected; the image quality fineness of the image obtained with this algorithm is the lowest.
208: if the number N of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images with the included angles larger than the preset angle threshold value to the image quality fineness of the display images in the terminal screen;
the method for determining the sensitivity of the user corresponding to the face image with the included angle greater than the preset angle threshold to the image quality fineness of the display image in the terminal screen may refer to the description of step 101 in method embodiment 1, and the embodiment of the present invention will not be repeated.
209: acquiring the maximum sensitivity value in the sensitivity degrees of a plurality of users to the image quality fineness of the display image in the terminal screen;
210: selecting a target super-resolution algorithm corresponding to the maximum sensitivity value;
in the embodiment of the invention, if the number N of users is greater than zero, there are users who can see the display content of the terminal screen, so the sensitivity, to the image quality fineness of the displayed image, of the users corresponding to the face images whose head orientations form an angle with the terminal screen greater than the preset angle threshold can be obtained.
As an alternative implementation, the maximum sensitivity value in the sensitivity determined in step 208 may be used as a basis for selecting the target super-resolution algorithm, so as to meet the requirement of all users on the fineness of the image quality of the terminal screen display image.
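The multi-user rule above reduces to taking the maximum; the fallback value for the no-viewer case is an illustrative assumption matching the "lowest sensitivity" branch:

```python
def driving_sensitivity(user_sensitivities):
    """Return the sensitivity value that drives algorithm selection:
    the maximum over all users who can see the screen, so every
    viewer's fineness requirement is met. With no viewers, fall back
    to the lowest sensitivity (0.0 on the assumed 0-1 scale)."""
    if not user_sensitivities:
        return 0.0
    return max(user_sensitivities)
```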
211: and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image on a terminal screen.
Therefore, with the image display method described in fig. 3, when the virtual reality application in the terminal is not running, the target image captured by the front camera of the terminal is analyzed to determine the sensitivity, to the image quality fineness of the displayed image, of each user who can see the terminal screen, and the target super-resolution algorithm corresponding to the maximum sensitivity value is selected to adjust the screen display effect. A balance can thus be struck between the user's visual experience and the terminal's power consumption, reducing the power consumption of the terminal to a certain extent.
Referring to fig. 4, fig. 4 is a schematic flow chart illustrating another image display method according to an embodiment of the present invention. In the image display method described in fig. 4, the following steps may be included:
300: detecting whether a virtual reality application program in a terminal is in a starting state;
301: if the virtual reality application is in the starting state, determining that the user's sensitivity to the image quality fineness of the image displayed on the terminal screen is the highest, and executing steps 310 to 311;
302: if the virtual reality application is not in the starting state, acquiring a target image within the field of view of the camera;
the camera is specifically a front camera in the terminal.
303: identifying whether the target image contains a face image or not;
in the embodiment of the invention, a face recognition mode can be adopted to judge whether the target image contains the face image, namely whether a potential user is in the range of the viewable terminal screen is detected. Most of the current terminals have front cameras, so that face recognition can be carried out according to images shot by the front cameras.
As an alternative implementation, for the face recognition techniques, refer to the implementation of step 203, which is not described here again.
304: if the face image is not included, determining that the sensitivity of the user to the image quality fineness of the display image in the terminal screen is the lowest, and executing the step 310 to the step 311;
in the embodiment of the invention, if the target image captured by the front camera does not contain a face image, no user is watching the terminal screen. To save terminal power consumption, the user's sensitivity to the image quality fineness of the displayed image is determined to be the lowest, and the target super-resolution algorithm corresponding to the lowest sensitivity is selected; the image quality fineness obtained with this algorithm is the lowest.
305: if the target image contains the face image, the sight line direction of the user corresponding to each face image contained in the target image is obtained;
in the embodiment of the invention, if the sight direction of the user deviates from the display area of the terminal screen, the user cannot see the content displayed on the terminal screen.
As an alternative embodiment, whether the user's gaze deviates from the display area of the terminal screen may be determined as follows: acquire the horizontal distance between the pupil center and the eye center, which characterizes horizontal gaze deflection, and the vertical distance between the pupil center and the eye center, which characterizes vertical (pitch) gaze deflection. If the horizontal distance exceeds a horizontal distance threshold or the vertical distance exceeds a vertical distance threshold, it is determined that the user's gaze direction deviates from the display area of the terminal screen.
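The threshold test just described is a two-way comparison; the threshold values below are hypothetical and would be fixed by calibration:

```python
def gaze_off_screen(horizontal_dist, vertical_dist,
                    h_threshold=5.0, v_threshold=4.0):
    """Return True when the user's gaze deviates from the screen
    display area: the horizontal or vertical pupil-centre-to-eye-centre
    distance (in pixels) exceeds its threshold. Both default
    thresholds are illustrative assumptions."""
    return horizontal_dist > h_threshold or vertical_dist > v_threshold
```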
Specifically, the eye region is determined by image processing of the captured face image; the segmented eye window is binarized again with a partitioned dynamic threshold method; the center position of the eye is determined with a rectangular-frame matching method; the center position of the pupil is determined with a template matching method or a peak-finding algorithm; and the horizontal and vertical distances between the pupil center and the eye center are then calculated.
As an alternative embodiment, whether the user's gaze deviates from the display area of the terminal screen may also be determined as follows: establish a three-dimensional coordinate system; first find the spatial position of the eyes relative to the front camera from the angles between the eyes and the front camera and the distance between them; then calculate the approximate field of view covered by the gaze from the horizontal and vertical distances between the pupil center and the eye center; and finally calculate whether the mapped region of this field of view on the plane of the screen overlaps the screen area. If an overlapping area exists, the user is watching the terminal screen; otherwise, the user is not.
The angle between the eyes and the front camera may be obtained as follows: establish a reference coordinate system with the position of the front camera as the origin, the plane of the terminal screen as the XY plane, and the center line of the front camera's field of view as the Z axis; the target point observed from the camera along the Z-axis line of sight is then the center point of the image captured by the front camera. Human-eye recognition is performed on the image captured by the front camera to locate the eye region, and the distance between the eye position and the image center point is computed, i.e., the horizontal and vertical offsets (ΔX, ΔY) of the eyes relative to the image center point. The angle by which the eye position deviates from the center line of the front camera's field of view can then be calculated by the following formula:
α<sub>x</sub> = ΔX · α<sub>L</sub> / L,  α<sub>y</sub> = ΔY · α<sub>H</sub> / H

where α<sub>x</sub> and α<sub>y</sub> respectively denote the horizontal and vertical angles by which the eyes deviate from the field-of-view center line, α<sub>L</sub> and α<sub>H</sub> respectively denote the maximum horizontal and vertical angles of the field of view of the terminal's front camera, and L and H denote the horizontal and vertical pixel counts of the image captured by the terminal's front camera.
The distance between the eyes and the front camera can be estimated from the size of the eyes in the image captured by the front camera, or approximated by the distance between the face and the front camera.
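Putting the offset-to-angle relation above into code, assuming the linear pixel-to-angle mapping implied by the symbol definitions (function and parameter names are illustrative):

```python
def eye_angles(dx, dy, fov_h_deg, fov_v_deg, width_px, height_px):
    """Estimate the horizontal and vertical angles (alpha_x, alpha_y)
    by which the eye position deviates from the front camera's
    field-of-view center line.

    dx, dy:                pixel offsets of the eyes from image center
    fov_h_deg, fov_v_deg:  maximum horizontal/vertical field-of-view angles
    width_px, height_px:   horizontal/vertical pixels of the captured image
    """
    alpha_x = dx * fov_h_deg / width_px   # degrees per pixel, horizontally
    alpha_y = dy * fov_v_deg / height_px  # degrees per pixel, vertically
    return alpha_x, alpha_y
```

For a 1280x720 image and a 60°x45° field of view, an eye 320 pixels right of center maps to a 15° horizontal deviation.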
306: calculating the number M of users whose sight directions do not deviate from the display area of the terminal screen;
307: if the number M of users is zero, determining that the sensitivity of the user to the image quality fineness of the display image in the terminal screen is the lowest, and executing steps 310 to 311;
If the number M of users is zero, no user is watching the content displayed on the terminal screen. In order to save terminal power consumption, the user's sensitivity to the image quality fineness of the image displayed in the terminal screen is determined to be the lowest, and the target super-resolution algorithm corresponding to the lowest sensitivity is selected; the image quality fineness obtained with this target super-resolution algorithm is the lowest.
308: if the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen;
The sensitivity of a user, corresponding to a face image whose sight line direction does not deviate from the display area of the terminal screen, to the image quality fineness of the display image in the terminal screen may be determined as described in step 101 of method embodiment 1, and is not repeated here.
309: acquiring the maximum sensitivity value in the sensitivity degrees of a plurality of users to the image quality fineness of the display image in the terminal screen;
310: selecting a target super-resolution algorithm corresponding to the maximum sensitivity value;
In the embodiment of the present invention, if the number M of users is greater than zero, there are users who can see the display content of the terminal screen. Therefore, for every face image in the target image whose sight direction does not deviate from the display area of the terminal screen, the sensitivity of the corresponding user to the image quality fineness of the display image in the terminal screen can be obtained.
As an alternative implementation, the maximum sensitivity value among the sensitivities determined in step 308 may be used as the basis for selecting the target super-resolution algorithm, so that the image quality fineness of the terminal screen's display image meets the requirements of all users.
311: and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image on a terminal screen.
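Steps 306 to 311 can be sketched as follows; the sensitivity scale, tier thresholds, and algorithm names are placeholders, since the patent does not fix a concrete mapping from sensitivity values to super-resolution algorithms:

```python
# Illustrative tiers: higher sensitivity -> finer (and more power-hungry)
# super-resolution algorithm. Names and thresholds are assumptions.
ALGORITHM_BY_SENSITIVITY = [
    (0.0, "nearest_upscale"),       # lowest sensitivity: cheapest
    (0.4, "bicubic_upscale"),
    (0.7, "cnn_super_resolution"),  # highest sensitivity: finest quality
]

def select_algorithm(sensitivities):
    """Select the target super-resolution algorithm from the sensitivities
    of all users whose gaze does not deviate from the screen.

    If the list is empty (M == 0), sensitivity is treated as the lowest
    to save terminal power consumption.
    """
    level = max(sensitivities) if sensitivities else 0.0
    chosen = ALGORITHM_BY_SENSITIVITY[0][1]
    for threshold, name in ALGORITHM_BY_SENSITIVITY:
        if level >= threshold:
            chosen = name
    return chosen
```

Using the maximum over all attentive users guarantees the most sensitive viewer's requirement is met.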
Therefore, with the image display method described in fig. 4, when the virtual reality application program in the terminal is not in a started state, the target image captured by the terminal's front camera is analyzed to determine, for each user who can see the content of the terminal screen, the sensitivity to the image quality fineness of the displayed image; the target super-resolution algorithm corresponding to the maximum sensitivity value is then selected to adjust the screen display effect. In this way, user visual experience is balanced against terminal power consumption, and the power consumption of the terminal can be reduced to a certain extent.
Referring to fig. 5, fig. 5 is a schematic flow chart of another image display method according to an embodiment of the disclosure. In the image display method described in fig. 5, the following steps may be included:
400: detecting whether a virtual reality application program in a terminal is in a starting state;
401: if the virtual reality application program is in the started state, determining that the sensitivity of a user to the image quality fineness of the image displayed in the terminal screen is the highest, and executing steps 413 to 414;
402: if the camera is not in the starting state, acquiring a target image in the field range of the camera;
the camera is specifically a front camera in the terminal.
403: identifying whether the target image contains a face image or not;
404: if the face image is not included, determining that the sensitivity of the user to the image quality fineness of the display image in the terminal screen is the lowest, and executing the steps 413 to 414;
405: if the target image contains the face image, acquiring an included angle between the head orientation of the user corresponding to each face image contained in the target image and a terminal screen;
406: calculating the number N of users with included angles larger than a preset angle threshold;
407: if the number N of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest, and executing the steps 413-414;
408: if the number N of the users is larger than zero, determining the sight line direction of the user corresponding to the face image with the included angle larger than the preset angle threshold;
409: calculating the number M of users whose sight directions do not deviate from the display area of the terminal screen;
410: if the number M of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest, and executing the steps 413-414;
411: if the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen;
In the embodiment of the present invention, when the included angle between the user's head orientation and the terminal screen is greater than the preset angle threshold, the user may be able to see the content displayed in the terminal screen. However, because a user may glance at the screen sideways, or face the screen while looking in another direction, head orientation alone cannot confirm whether the user is watching the screen. Combining the user's head orientation with the user's line of sight determines more accurately whether the user is watching the screen.
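The two-stage filter described above, head orientation first and then gaze, might be sketched as follows (the data layout and the 45° threshold are assumptions for illustration):

```python
def attentive_users(users, angle_threshold_deg=45.0):
    """Count users who can actually see the screen.

    A user passes the first stage if the angle between head orientation
    and the terminal screen exceeds the threshold, and the second stage
    if the gaze stays on the display area.
    Returns (N, M) as in steps 406 and 409.
    """
    facing = [u for u in users if u["head_angle_deg"] > angle_threshold_deg]
    watching = [u for u in facing if u["gaze_on_screen"]]
    return len(facing), len(watching)
```

Gaze is only evaluated for users who already pass the head-orientation check, matching the order of steps 405 to 409.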
In the embodiment of the present invention, a specific implementation manner of determining the sensitivity of the user corresponding to the face image whose sight line direction does not deviate from the display area of the terminal screen to the fineness of the image quality of the display image in the terminal screen may refer to the description in step 101 in embodiment 1, and the embodiment of the present invention will not be repeated.
412: acquiring the maximum sensitivity value in the sensitivity degrees of a plurality of users to the image quality fineness of the display image in the terminal screen;
413: selecting a target super-resolution algorithm corresponding to the maximum sensitivity value;
414: and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image on a terminal screen.
Therefore, with the image display method described in fig. 5, when the virtual reality application program in the terminal is not in a started state, the target image captured by the terminal's front camera is analyzed to determine, for each user who can see the content of the terminal screen, the sensitivity to the image quality fineness of the displayed image; the target super-resolution algorithm corresponding to the maximum sensitivity value is then selected to adjust the screen display effect. In this way, user visual experience is balanced against terminal power consumption, and the power consumption of the terminal can be reduced to a certain extent.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 6, the terminal includes: at least one processor 501, memory 502, camera 503, sensor 504, input-output module 505, and user interface 506; in some embodiments of the invention, these components may be connected by a bus or other means.
The camera 503 may include a front camera or a rear camera and may be used for capturing images. The camera 503 is composed of a lens, an image sensor, a digital signal processing chip, a photosensitive element, and the like; the image sensor includes a photosensitive pixel array and a filter disposed on the photosensitive pixel array. The image sensor may be a Charge Coupled Device (CCD) image sensor or a Complementary Metal Oxide Semiconductor (CMOS) sensor.
The sensors 504 may include light sensors, motion sensors, and other sensors. Specifically, the light sensors may include an ambient light sensor and a proximity sensor, where the ambient light sensor can acquire the light intensity of the environment in which the terminal is located. As one type of motion sensor, the accelerometer can detect the magnitude of acceleration in various directions (generally along three axes) and can be used in applications that recognize terminal posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and vibration-related functions (such as a pedometer or tap detection). A gravity sensor, another type of motion sensor, can also be used to detect the acceleration of the terminal, for example, to sense the gravity produced when the terminal is shaken; the gravity sensed by the gravity sensor is converted by the processor 501 in the terminal into an acceleration value with a direction. Other sensors that can be configured in the terminal, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
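Claims 4 and 13 use the variation of these acceleration readings over a time window to estimate the relative stability between the camera and the user's eyes; one minimal mapping consistent with "the larger the variation, the lower the relative stability" (the exact formula is not given in the patent):

```python
def relative_stability(accel_samples):
    """Map the variation of acceleration readings within a preset time
    window to a relative-stability score in (0, 1].

    A simple inverse-of-range measure: zero variation yields 1.0
    (perfectly stable); larger variation yields a lower score.
    """
    variation = max(accel_samples) - min(accel_samples)
    return 1.0 / (1.0 + variation)
```

Any monotonically decreasing function of the variation would satisfy the claim language equally well.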
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
The memory 502 is used for storing program codes, original images and target images obtained by processing the original images, and the memory 502 transmits the stored program codes to the CPU. Memory 502 may include Volatile Memory (Volatile Memory), such as Random Access Memory (RAM); the Memory 502 may also include a Non-Volatile Memory (Non-Volatile Memory), such as a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, HDD), or a Solid-State Drive (SSD); the memory 502 may also comprise a combination of memories of the kind described above. The memory 502 is connected to the processor 501 via a bus.
The input/output module 505 is mainly used for implementing an interactive function between the terminal and a user/external environment, and mainly includes an audio input/output module, a key input module, a display, and the like. In a specific implementation, the input/output module 505 may further include: cameras, touch screens, sensors, and the like. Wherein the input output module 505 communicates with the processor 501 via the user interface 506.
In the embodiment of the present invention, the processor 501 calls the program code stored in the memory 502 to perform the following operations:
determining a sensitivity degree, wherein the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen;
selecting a target super-resolution algorithm corresponding to the sensitivity degree, wherein the higher the sensitivity degree is, the better the image quality fineness of the image obtained by the target super-resolution algorithm corresponding to the sensitivity degree is;
and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image on a terminal screen.
With the terminal described in fig. 6, the screen display effect can be adjusted according to the user's sensitivity to the image quality fineness of the terminal screen's display image, balancing user visual experience against terminal power consumption, so that the power consumption of the terminal can be reduced to a certain extent.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another terminal disclosed in the embodiment of the present invention. As shown in fig. 7, the terminal may include:
a first determining unit 601, configured to determine a sensitivity level, where the sensitivity level is used to represent a perception capability of a user on fineness of image quality of a display image in a terminal screen;
a selecting unit 602, configured to select a target super-resolution algorithm corresponding to the sensitivity degree, where the higher the sensitivity degree is, the better the image quality fineness obtained by the target super-resolution algorithm corresponding to the sensitivity degree is;
an image processing unit 603, configured to process the input original image using the target super-resolution algorithm to obtain a target image;
a display unit 604 for displaying the above target image on a terminal screen.
In an embodiment of the invention, the terminal is presented in the form of functional units. As used herein, a "unit" may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, and/or other components that provide the described functionality. The terminal may take the form shown in fig. 6. For example, the first determining unit 601 may be implemented by the processor 501, the camera 503, and the sensor 504 in fig. 6; the selecting unit 602 and the image processing unit 603 may be implemented by the processor 501 in fig. 6; and the display unit 604 may be implemented by the input/output module 505 in fig. 6.
It should be noted that the functions of the functional units in the terminal described in the foregoing embodiments may be implemented according to the method in the method embodiments shown in fig. 2, fig. 3, fig. 4, and fig. 5, and are not described herein again.
By operating the unit, the screen display effect can be adjusted according to the sensitivity of the user to the image quality fineness of the terminal screen display image, and the balance between the visual experience of the user and the power consumption of the terminal can be realized, so that the power consumption of the terminal can be reduced to a certain extent.
In summary, by implementing the embodiment of the present invention, the screen display effect can be adjusted according to the sensitivity of the user to the image quality fineness of the terminal screen display image, and the balance between the user visual experience and the terminal power consumption can be performed, so that the terminal power consumption can be reduced to a certain extent.
It should be noted that the terminals are only divided according to the functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
In addition, it is understood by those skilled in the art that all or part of the steps in the above method embodiments may be implemented by related hardware, and the corresponding program may be stored in a computer readable storage medium, where the above mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the embodiment of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (18)
1. An image display method, comprising:
determining a sensitivity degree, wherein the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen;
selecting a target super-resolution algorithm corresponding to the sensitivity degree, wherein the higher the sensitivity degree is, the better the image quality fineness of the image obtained by the target super-resolution algorithm corresponding to the sensitivity degree is; and the better the image quality fineness is, the higher the definition is;
and processing the input original image by using the target super-resolution algorithm to obtain a target image, and displaying the target image to the terminal screen.
2. The method of claim 1, wherein said determining a sensitivity level comprises:
acquiring the distance between eyes of a user and a camera, acquiring the relative stability between the camera and the eyes of the user, and acquiring the difference value between the ambient light intensity value of the terminal and the brightness value of the terminal screen;
determining a first sensitivity value of a user to the image quality fineness of the display image in the terminal screen according to the distance, determining a second sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the relative stability, and determining a third sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the difference value;
and taking the sum of the first sensitivity value, the second sensitivity value and the product of the third sensitivity value and the corresponding weight coefficient as a target sensitivity value.
3. The method of claim 2, wherein obtaining the relative stability of the camera and the user's eye comprises:
acquiring the distance between the eyes of the user and the camera and/or acquiring the angle between the eyes of the user and the camera;
calculating a first variation of a distance between the eyes of the user and the camera within a first preset time and/or calculating a second variation of an angle between the eyes of the user and the camera within a second preset time;
and determining the relative stability of the camera and the eyes of the user according to the first variation and/or the second variation, wherein the larger the first variation and/or the larger the second variation, the lower the relative stability.
4. The method of claim 2, wherein obtaining the relative stability of the camera and the user's eye comprises:
acquiring the acceleration of the terminal;
calculating a third variation of the acceleration within a third preset time;
and determining the relative stability of the camera and the eyes of the user according to the third variation, wherein the larger the third variation is, the lower the relative stability is.
5. The method of any one of claims 1 to 4, wherein prior to said determining a sensitivity level, said method further comprises:
detecting whether a virtual reality application program in the terminal is in a starting state or not;
if the virtual reality application program is in the started state, determining that the sensitivity of a user to the image quality fineness of the display image in the terminal screen is the highest;
if not, the determination of a sensitivity level is performed.
6. The method according to claim 5, wherein after said detecting whether a virtual reality application in said terminal is in a startup state and before said determining a sensitivity level, said method further comprises:
if the virtual reality application program is not in a starting state, acquiring a target image in a field range of a camera;
identifying whether the target image contains a face image or not;
the determining a sensitivity level includes:
if the image does not contain the face image, determining that the sensitivity of a user to the image quality fineness of the image displayed in the terminal screen is the lowest;
and if the terminal screen contains the face image, determining the sensitivity degree of a user corresponding to the face image to the image quality fineness of the display image in the terminal screen.
7. The method of claim 6, after the identifying whether the target image includes a face image, and before the determining a degree of sensitivity of a user corresponding to the face image to fineness of image quality of an image displayed in the terminal screen, the method further comprises:
if the target image contains the face image, acquiring an included angle between the head orientation of the user corresponding to each face image contained in the target image and the terminal screen;
calculating the number N of users with the included angles larger than a preset angle threshold;
the determining the sensitivity degree of the user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen comprises:
if the number N of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest;
and if the number N of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images with the included angles larger than the preset angle threshold value to the image quality fineness of the display images in the terminal screen.
8. The method of claim 6, after the identifying whether the target image includes a face image, and before the determining a degree of sensitivity of a user corresponding to the face image to fineness of image quality of an image displayed in the terminal screen, the method further comprises:
if the target image contains the face image, the sight line direction of the user corresponding to each face image contained in the target image is obtained;
calculating the number M of users of which the sight directions do not deviate from the display area of the terminal screen;
the determining the sensitivity degree of the user corresponding to the face image to the fineness of the image quality of the image displayed in the terminal screen comprises:
if the number M of the users is zero, determining that the sensitivity of the users to the image quality fineness of the display image in the terminal screen is the lowest;
and if the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen.
9. The method of claim 7 or 8, wherein prior to said selecting a target super resolution algorithm corresponding to said sensitivity level, said method further comprises:
acquiring the maximum sensitivity value in the sensitivity of a plurality of users to the image quality fineness of the display image in the terminal screen;
the selecting the target super-resolution algorithm corresponding to the sensitivity degree comprises:
and selecting a target super-resolution algorithm corresponding to the maximum sensitivity value.
10. A terminal, comprising:
the first determining unit is used for determining a sensitivity degree, and the sensitivity degree is used for representing the perception capability of a user on the fineness of the image quality of a display image in a terminal screen;
the selecting unit is used for selecting the target super-resolution algorithm corresponding to the sensitivity degree, wherein the higher the sensitivity degree is, the better the image quality fineness obtained by the target super-resolution algorithm corresponding to the sensitivity degree is; and the better the image quality fineness is, the higher the definition is;
the image processing unit is used for processing the input original image by using the target super-resolution algorithm to obtain a target image;
and the display unit is used for displaying the target image to the terminal screen.
11. The terminal according to claim 10, wherein the first determining unit comprises:
the first acquisition unit is used for acquiring the distance between the eyes of the user and the camera;
the second acquisition unit is used for acquiring the relative stability of the camera and the eyes of the user;
the third acquisition unit is used for acquiring a difference value between the ambient light intensity value of the terminal and the brightness value of the terminal screen;
the first determining subunit is used for determining a first sensitivity value of a user to the image quality fineness of the display image in the terminal screen according to the distance, determining a second sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the relative stability, and determining a third sensitivity value of the user to the image quality fineness of the display image in the terminal screen according to the difference;
the first determining subunit is further configured to use a sum of the first sensitivity value, the second sensitivity value, and a product of the third sensitivity value and a corresponding weight coefficient as a target sensitivity value.
12. The terminal of claim 11, wherein the second obtaining unit comprises:
the second acquisition subunit is used for acquiring the distance between the eyes of the user and the camera and/or acquiring the angle between the eyes of the user and the camera;
the first calculating unit is used for calculating a first variation of the distance between the eyes of the user and the camera within a first preset time and/or calculating a second variation of the angle between the eyes of the user and the camera within a second preset time;
and the second determining unit is used for determining the relative stability of the camera and the eyes of the user according to the first variation and/or the second variation, and the larger the first variation and/or the second variation, the lower the relative stability.
13. The terminal of claim 11, wherein the second obtaining unit comprises:
the second acquisition subunit is used for acquiring the acceleration of the terminal;
the first calculating unit is used for calculating a third variable quantity of the acceleration within a third preset time;
and the second determining unit is used for determining the relative stability of the camera and the eyes of the user according to the third variation, and the larger the third variation is, the lower the relative stability is.
14. The terminal according to any of claims 10 to 13, characterized in that the terminal further comprises:
the detection unit is used for detecting whether a virtual reality application program in the terminal is in a starting state or not;
the first determining unit is further used for determining that the sensitivity of a user to the image quality fineness of the image displayed in the terminal screen is the highest when the virtual reality application program is in the started state, and for performing the determining of a sensitivity degree when it is not in the started state.
15. The terminal of claim 14, wherein the terminal further comprises:
the fourth acquisition unit is used for acquiring a target image in the field range of the camera when the virtual reality application program is not in a starting state;
the face recognition unit is used for recognizing whether the target image contains a face image or not;
the first determining unit is further used for determining that the sensitivity of a user to the fineness of the image quality of the image displayed in the terminal screen is the lowest when the image does not contain the face image; and when the image contains the face image, determining the sensitivity degree of a user corresponding to the face image to the image quality fineness of the display image in the terminal screen.
16. The terminal of claim 15, wherein the terminal further comprises:
a fifth obtaining unit, configured to obtain, when a face image is included, an included angle between a head orientation of a user corresponding to each face image included in the target image and the terminal screen;
the second calculation unit is used for calculating the number N of users with the included angles larger than a preset angle threshold;
the first determining unit is further configured to determine that the user has the lowest sensitivity to the fineness of the image quality of the image displayed in the terminal screen when the number N of users is zero; and when the number N of the users is greater than zero, determining the sensitivity degree of the users corresponding to the face images with the included angles greater than the preset angle threshold value to the image quality fineness of the display images in the terminal screen.
17. The terminal of claim 15, wherein the terminal further comprises:
a sixth acquiring unit, configured to acquire, when a face image is included, a gaze direction of a user corresponding to each face image included in the target image;
a third calculating unit, configured to calculate the number M of users whose gaze directions do not deviate from a display area of the terminal screen;
the first determining unit is further configured to determine that the user has the lowest sensitivity to the fineness of the image quality of the image displayed in the terminal screen when the number M of users is zero; and when the number M of the users is larger than zero, determining the sensitivity degree of the users corresponding to the face images of which the sight directions do not deviate from the display area of the terminal screen to the image quality fineness of the display images in the terminal screen.
18. The terminal according to any of claims 16 or 17, wherein the terminal further comprises:
a seventh obtaining unit, configured to obtain the maximum sensitivity value among the sensitivities of the multiple users to the image-quality fineness of the image displayed on the terminal screen;
the selecting unit is specifically configured to select the target super-resolution algorithm corresponding to the maximum sensitivity value.
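The selection step of this claim reduces to taking the maximum over the per-user sensitivities and looking up the corresponding algorithm. The sensitivity-to-algorithm mapping and the algorithm names below are illustrative assumptions; the patent specifies only that the most sensitive viewer determines which super-resolution algorithm is applied.

```python
# Illustrative mapping from sensitivity level to a super-resolution
# algorithm of increasing cost/quality (names are not from the patent).
ALGORITHM_BY_SENSITIVITY = {
    0: "nearest-neighbour upscale",        # lowest sensitivity: cheapest
    1: "bicubic interpolation",
    2: "learning-based super-resolution",  # highest sensitivity: finest
}

def select_target_algorithm(per_user_sensitivities):
    """Pick the algorithm corresponding to the maximum sensitivity
    value among all users, so no viewer perceives degraded quality."""
    max_sensitivity = max(per_user_sensitivities)
    return ALGORITHM_BY_SENSITIVITY[max_sensitivity]
```

Keying the choice on the maximum rather than, say, the mean is the design point of the claim: reconstruction quality is driven by the single most demanding viewer.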
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/103223 WO2018076172A1 (en) | 2016-10-25 | 2016-10-25 | Image display method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109313797A CN109313797A (en) | 2019-02-05 |
CN109313797B true CN109313797B (en) | 2022-04-05 |
Family
ID=62024222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680086444.9A Active CN109313797B (en) | 2016-10-25 | 2016-10-25 | Image display method and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109313797B (en) |
WO (1) | WO2018076172A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126568B (en) * | 2019-12-09 | 2023-08-08 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111246116B (en) * | 2020-03-20 | 2022-03-11 | 谌春亮 | Method for intelligent framing display on screen and mobile terminal |
CN112188289B (en) * | 2020-09-04 | 2023-03-14 | 青岛海尔科技有限公司 | Method, device and equipment for controlling television |
CN112379803A (en) * | 2020-11-12 | 2021-02-19 | 深圳市沃特沃德股份有限公司 | Screen brightness adjusting method and device, computer equipment and storage medium |
CN112732497B (en) * | 2020-12-29 | 2023-02-10 | 深圳微步信息股份有限公司 | Terminal device and detection method based on terminal device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102119530A (en) * | 2008-08-22 | 2011-07-06 | 索尼公司 | Image display device, control method and computer program |
CN104021036A (en) * | 2013-03-01 | 2014-09-03 | 联想(北京)有限公司 | Electronic equipment and electronic equipment state switching method |
CN105492998A (en) * | 2013-08-23 | 2016-04-13 | 三星电子株式会社 | Mode switching method and apparatus of terminal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722875B (en) * | 2012-05-29 | 2014-08-13 | 杭州电子科技大学 | Visual-attention-based variable quality ultra-resolution image reconstruction method |
CN103544687A (en) * | 2012-07-11 | 2014-01-29 | 刘书 | Efficient method of image super-resolution reconstruction |
CN103514580B (en) * | 2013-09-26 | 2016-06-08 | 香港应用科技研究院有限公司 | For obtaining the method and system of the super-resolution image that visual experience optimizes |
CN105653032B (en) * | 2015-12-29 | 2019-02-19 | 小米科技有限责任公司 | Display adjusting method and device |
2016
- 2016-10-25 CN CN201680086444.9A patent/CN109313797B/en active Active
- 2016-10-25 WO PCT/CN2016/103223 patent/WO2018076172A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018076172A1 (en) | 2018-05-03 |
CN109313797A (en) | 2019-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109313797B (en) | Image display method and terminal | |
CN108805047B (en) | Living body detection method and device, electronic equipment and computer readable medium | |
WO2019137038A1 (en) | Method for determining point of gaze, contrast adjustment method and device, virtual reality apparatus, and storage medium | |
US20170323465A1 (en) | Image processing apparatus, image processing method, and storage medium | |
KR102383129B1 (en) | Method for correcting image based on category and recognition rate of objects included image and electronic device for the same | |
US10885720B2 (en) | Virtual display method, device, electronic apparatus and computer readable storage medium | |
CN110858316A (en) | Classifying time series image data | |
CN111880711B (en) | Display control method, display control device, electronic equipment and storage medium | |
CN111919222A (en) | Apparatus and method for recognizing object in image | |
EP4093015A1 (en) | Photographing method and apparatus, storage medium, and electronic device | |
CN111028276A (en) | Image alignment method and device, storage medium and electronic equipment | |
US11682183B2 (en) | Augmented reality system and anchor display method thereof | |
US10162997B2 (en) | Electronic device, computer readable storage medium and face image display method | |
CN108574803B (en) | Image selection method and device, storage medium and electronic equipment | |
JP5949389B2 (en) | Detection apparatus, detection program, and detection method | |
US9323981B2 (en) | Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored | |
CN109785439A (en) | Human face sketch image generating method and Related product | |
CN114390186B (en) | Video shooting method and electronic equipment | |
WO2024055531A1 (en) | Illuminometer value identification method, electronic device, and storage medium | |
WO2013187282A1 (en) | Image pick-up image display device, image pick-up image display method, and storage medium | |
JP2019102941A (en) | Image processing apparatus and control method of the same | |
CN111507139A (en) | Image effect generation method and device and electronic equipment | |
US9811161B2 (en) | Improving readability of content displayed on a screen | |
KR102605451B1 (en) | Electronic device and method for providing multiple services respectively corresponding to multiple external objects included in image | |
CN107087114B (en) | Shooting method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||