CN115576422A - Method and device for adjusting screen display effect - Google Patents


Publication number
CN115576422A
CN115576422A
Authority
CN
China
Prior art keywords
screen
image
virtual plane
plane
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211273090.2A
Other languages
Chinese (zh)
Inventor
王思琦
汤锴
谢礼刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN202211273090.2A priority Critical patent/CN115576422A/en
Publication of CN115576422A publication Critical patent/CN115576422A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure disclose a method and a device for adjusting the screen display effect. A specific implementation of the method comprises the following steps: acquiring the position of the human eyes and the sight-line direction; rotating the plane P1 where the screen is located to a virtual plane P3 perpendicular to the sight-line direction by a three-dimensional rotation method according to the position of the human eyes, and calculating a rotation matrix M1; moving the virtual plane P3 to a virtual plane P2 which is perpendicular to the sight-line direction and passes through a vertex of the screen, and calculating a mapping formula M2 from the virtual plane P2 to the plane P1 where the screen is located by a perspective transformation method; projecting the display boundary of the original image to be displayed on the screen onto the plane P1 through the rotation matrix M1 and the mapping formula M2, and determining the display range E of the picture on the screen; putting the pixel points within the display range E into one-to-one correspondence with the pixel points of the original image to obtain a deformed image; and rendering the deformed image on the screen. This embodiment applies rotation and perspective processing to the image, which solves the near-large, far-small distortion seen when the human eyes view the screen at an angle.

Description

Method and device for adjusting screen display effect
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for adjusting screen display effect.
Background
At present, products such as mobile phones, televisions, computers and tablets are ever more closely connected with people's lives, and people use these devices for both entertainment and work. When the screen of a mobile device is not perpendicular to a person's line of sight but sits at an angle to it, the displayed image appears deformed, which degrades the viewing experience. In this case, obtaining the best visual effect requires manually adjusting the angle of the device or using an additional accessory such as a bracket.
Most existing methods merely tilt and rotate the picture within a certain range using a gravity sensor, and cannot solve the problem that the visual effect appears large near the viewer and small far away.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for adjusting screen display effect.
In a first aspect, an embodiment of the present disclosure provides a method for adjusting a screen display effect, including: acquiring the position and sight direction of human eyes; if the included angle between the sight line direction and the screen is not in a preset vertical angle range and is larger than a minimum included angle threshold value, rotating a plane P1 where the screen is located to a virtual plane P3 perpendicular to the sight line direction through a three-dimensional rotation method according to the position of the human eyes, and calculating a rotation matrix M1; moving the virtual plane P3 to a virtual plane P2 which is perpendicular to the sight line direction and passes through the vertex of the screen, and calculating a mapping formula M2 from the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method; projecting the display boundary of the original image to be displayed on the screen to a plane P1 where the screen is located through the rotation matrix M1 and the mapping formula M2, and determining the display range E of the picture on the screen; corresponding the pixel points in the display range E to the pixel points of the original image one by one to obtain a deformed image; rendering the warped image onto the screen.
In some embodiments, the acquiring the position and the sight line direction of the human eyes comprises: acquiring a face image to perform face recognition to obtain identity information of at least one user; determining a target user with the highest priority according to the identity information of the at least one user and preset priority information; acquiring the positions of the two eyes of the target user, and determining the positions of the human eyes according to the centers of the positions of the two eyes; and determining a connecting line of the human eye position and the center of the screen as a sight line direction.
In some embodiments, the determining, according to the identity information of the at least one user and preset priority information, a target user with a highest priority includes: and if the child mode is set and the child user is detected, determining the child user as a target user.
In some embodiments, the method further comprises: measuring the distance between the position of the eyes of the child user and the center of the screen; and if the distance is smaller than a preset value, reducing the original image.
In some embodiments, the putting of the pixel points within the display range E into one-to-one correspondence with the pixel points of the original image to obtain a deformed image includes: if the factor by which the display range E is reduced relative to the display boundary exceeds a preset threshold, adjusting the virtual plane P3 and the virtual plane P2 so that they are no longer perpendicular to the line of sight but inclined to it by a predetermined angle; updating the rotation matrix M1 and the mapping formula M2 according to the adjusted virtual planes P3 and P2, and re-determining the display range E of the picture on the screen; and if the reduction factor of the updated display range E relative to the display boundary does not exceed the preset threshold, putting the pixel points within the updated display range E into one-to-one correspondence with the pixel points of the original image to obtain an updated deformed image.
In some embodiments, the method further comprises: if the reduction factor of the updated display range E relative to the display boundary still exceeds the preset threshold, determining the original image as the deformed image.
In some embodiments, the method further comprises: in response to detecting the screen angle adjustment, acquiring the positions and the sight directions of human eyes again; recalculating a rotation matrix M1 and a mapping formula M2 according to the updated included angle between the sight line direction and the screen, and re-determining the display range E of the picture on the screen; corresponding the pixel points in the updated display range E to the pixel points of the image to be displayed one by one to obtain an updated deformation image; and rendering the updated deformation image to a screen.
In a second aspect, an embodiment of the present disclosure provides an apparatus for adjusting a screen display effect, including: an acquisition unit configured to acquire a position of a human eye and a sight line direction; the rotating unit is configured to rotate a plane P1 where the screen is located to a virtual plane P3 perpendicular to the sight line direction through a three-dimensional rotating method according to the position of the human eyes if an included angle between the sight line direction and the screen is not within a preset vertical angle range and is larger than a minimum included angle threshold value, and calculate a rotation matrix M1; a mapping unit configured to move the virtual plane P3 to a virtual plane P2 perpendicular to the viewing direction and passing through a vertex of the screen, and calculate a mapping formula M2 of the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method; the projection unit is configured to project a display boundary of an original image to be displayed on the screen onto a plane P1 where the screen is located through the rotation matrix M1 and the mapping formula M2, and determine a display range E of a picture on the screen; the deformation unit is configured to correspond the pixel points in the display range E to the pixel points of the original image one by one to obtain a deformed image; a rendering unit configured to render the deformed image onto the screen.
In some embodiments, the obtaining unit is further configured to: acquiring a face image for face recognition to obtain identity information of at least one user; determining a target user with the highest priority according to the identity information of the at least one user and preset priority information; acquiring the positions of the two eyes of the target user, and determining the positions of the human eyes according to the centers of the positions of the two eyes; and determining a connecting line of the human eye position and the center of the screen as a sight line direction.
In some embodiments, the obtaining unit is further configured to: and if the child mode is set and the child user is detected, determining the child user as a target user.
In some embodiments, the apparatus further comprises a scaling unit configured to: measuring the distance between the position of the eyes of the child user and the center of the screen; and if the distance is smaller than a preset value, reducing the original image.
In some embodiments, the deformation unit is further configured to: if the factor by which the display range E is reduced relative to the display boundary exceeds a preset threshold, adjust the virtual plane P3 and the virtual plane P2 so that they are no longer perpendicular to the line of sight but inclined to it by a predetermined angle; update the rotation matrix M1 and the mapping formula M2 according to the adjusted virtual planes P3 and P2, and re-determine the display range E of the picture on the screen; and if the reduction factor of the updated display range E relative to the display boundary does not exceed the preset threshold, put the pixel points within the updated display range E into one-to-one correspondence with the pixel points of the original image to obtain an updated deformed image.
In some embodiments, the deformation unit is further configured to: if the reduction factor of the updated display range E relative to the display boundary still exceeds the preset threshold, determine the original image as the deformed image.
In some embodiments, the apparatus further comprises an update unit configured to: in response to detecting the screen angle adjustment, acquiring the positions and the sight directions of human eyes again; recalculating a rotation matrix M1 and a mapping formula M2 according to the updated included angle between the sight line direction and the screen, and re-determining the display range E of the picture on the screen; corresponding the pixel points in the updated display range E to the pixel points of the image to be displayed one by one to obtain an updated deformation image; rendering the updated deformation image to a screen.
In a third aspect, an embodiment of the present disclosure provides an electronic device for adjusting a screen display effect, including: one or more processors; storage means having one or more computer programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method of any one of the first aspects.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any of the first aspects.
With the method and the device for adjusting the screen display effect, when a person's line of sight is not perpendicular to the screen, the picture is rotated through the rotation matrix and the perspective mapping so that the picture displayed on the screen always faces the direction of the human eyes and appears rectangular to the viewer; this solves the near-large, far-small distortion of the picture and improves the user's visual experience.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIGS. 1a and 1b show screen pictures viewed from a vertical and a non-vertical viewing angle, respectively;
FIG. 2 is a flow diagram of one embodiment of a method of adjusting the screen display effect according to the present disclosure;
FIGS. 3a-3d are schematic diagrams of calculating the rotation matrix M1 in the method of adjusting the screen display effect according to the present disclosure;
FIG. 4 is a schematic diagram of calculating the mapping formula M2 in the method of adjusting the screen display effect according to the present disclosure;
FIGS. 5a and 5b are schematic diagrams of determining the display range E of the picture on the screen in the method of adjusting the screen display effect according to the present disclosure;
FIG. 6 is a diagram illustrating a transformation result of the method of adjusting the screen display effect according to the present disclosure;
FIG. 7 is a schematic diagram of one embodiment of an apparatus for adjusting the screen display effect according to the present disclosure;
FIG. 8 is a schematic structural diagram of a computer system suitable for implementing the electronic device of embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1a and 1b show the contrast between the picture viewed from a vertical viewing angle and from a non-vertical viewing angle. It should be noted that the method for adjusting the screen display effect provided by the embodiments of the present disclosure is executed by a terminal device having a screen, and correspondingly, the apparatus for adjusting the screen display effect is disposed in that terminal device. The terminal device may be, but is not limited to, a smartphone, a tablet computer, an e-book reader, and the like.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of adjusting screen display effects according to the present disclosure is shown. The method for adjusting the screen display effect comprises the following steps:
step 201, acquiring the position and the sight line direction of human eyes.
In the present embodiment, the executing subject of the method for adjusting the screen display effect (for example, the mobile phone shown in figs. 1a and 1b) may acquire the positions of both of the user's eyes through a camera. The position of the human eyes is determined as the center between the positions of the two eyes, and the line connecting that position with the center of the screen of the terminal device is determined as the sight-line direction. The width w and the height h of the screen can also be obtained, so that the coordinates of the screen's vertices are known once a coordinate system is established.
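As an illustration of step 201 (not part of the patent text), the eye-center position and sight-line direction can be computed from already-detected eye coordinates as follows; the function name, and the assumption that eye positions are given in the screen's coordinate system, are ours, and the camera-based eye detection itself is outside this sketch:

```python
import numpy as np

def eye_midpoint_and_gaze(left_eye, right_eye, screen_center=(0.0, 0.0, 0.0)):
    """Return the human-eye position (midpoint of the two eyes) and a unit
    sight-line vector pointing from the eyes toward the screen center."""
    c = (np.asarray(left_eye, dtype=float) + np.asarray(right_eye, dtype=float)) / 2.0
    gaze = np.asarray(screen_center, dtype=float) - c
    return c, gaze / np.linalg.norm(gaze)
```

For eyes at (-3, 0, 40) and (3, 0, 40), the midpoint is (0, 0, 40) and the gaze vector points along -z toward the screen center.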
Step 202, if the included angle between the view direction and the screen is not within the predetermined vertical angle range and is greater than the minimum included angle threshold, rotating the plane P1 where the screen is located to the virtual plane P3 perpendicular to the view direction by a three-dimensional rotation method according to the position of the human eye, and calculating a rotation matrix M1.
In this embodiment, the vertical angle range may be the interval [85, 95] degrees; it need not be strictly limited to a 90-degree vertical. The minimum included angle threshold may be set to a small value, for example 5 degrees, because if the angle is too small, even the adjusted screen display would not be visible to the user.
If the included angle is within the predetermined vertical angle range or less than the minimum included angle threshold, no subsequent steps are performed.
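The gating logic of step 202 can be sketched as follows (a hypothetical helper of ours; the [85, 95]-degree interval and the 5-degree minimum are the example values from the text above):

```python
def should_adjust(angle_deg, vertical_range=(85.0, 95.0), min_angle_deg=5.0):
    """Adjust the picture only when the sight-screen angle is neither within
    the near-vertical range nor at or below the minimum included angle."""
    lo, hi = vertical_range
    if lo <= angle_deg <= hi:        # already (nearly) perpendicular
        return False
    if angle_deg <= min_angle_deg:   # too oblique for the adjustment to help
        return False
    return True
```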
The position and orientation of the physical screen need to be transformed into a three-dimensional coordinate system by means of a three-dimensional transformation: a suitable spatial coordinate system is established, the rotation matrix M1 is calculated, and the plane P1 where the screen is located is rotated by the three-dimensional rotation method to the virtual plane P3 perpendicular to the line of sight.
A coordinate system 1 is established by taking the center of the screen as the origin, the directions parallel to the two sides of the screen as the x and y axes, and the direction perpendicular to the screen as the z axis. The positions of the left eye, the right eye and the center of the line connecting the two eyes are then converted into this coordinate system.
The rotation angle is calculated as follows, taking fig. 3a-3d as an example:
As shown in fig. 3a, the left eye has coordinates (x_l, y_l, z_l) and the right eye (x_r, y_r, z_r), so the center of both eyes is

(x_c, y_c, z_c) = ((x_l + x_r)/2, (y_l + y_r)/2, (z_l + z_r)/2)
As shown in fig. 3b, rotate around the y-axis of the self-coordinate system by an angle γ in the positive direction:
Figure BDA0003895472350000062
Figure BDA0003895472350000063
As shown in fig. 3c, the plane is then rotated in the positive direction about the x axis of its own coordinate system by an angle β satisfying

tan β = y_c / √(x_c² + z_c²)
After the plane is rotated by the angle γ about the positive y axis of its own coordinate system and by the angle β about the positive x axis of its own coordinate system, the relative position of the plane and the eye-center point is equivalent to that obtained by keeping the plane fixed and rotating the eye-center point by γ in the reverse direction about the fixed y axis and then by β in the reverse direction about the fixed x axis. The eye coordinates are therefore rotated instead, which makes it convenient to calculate the remaining rotation angle about the z axis.
As shown in fig. 3 d:
The two-eye vector (from the left eye to the right eye) before rotation is

(x′, y′, z′) = (x_r − x_l, y_r − y_l, z_r − z_l)

and the two-eye vector after the reverse rotations above is

(x″, y″, z″) = R_x(−β) R_y(−γ) (x′, y′, z′)ᵀ
The projection of this vector onto the xy plane is (x″, y″, 0), and the angle between it and the positive x axis is the rotation angle α about the z axis of the coordinate system:

cos α = x″ / √(x″² + y″²)

If y″ > 0,

α = arccos(x″ / √(x″² + y″²))

If y″ < 0,

α = −arccos(x″ / √(x″² + y″²))
From the above description, the screen plane is rotated about the y axis of its own coordinate system by γ, then about the x axis of its own coordinate system by β, and finally about the z axis of its own coordinate system by α. Composing these intrinsic rotations gives the rotation matrix

M1 = R_y(γ) · R_x(β) · R_z(α)

where R_y, R_x and R_z are the standard 3×3 rotation matrices about the respective axes.
Knowing the width w and the height h of the screen, the four vertices A, B, C and D shown in fig. 3a, located at (±w/2, ±h/2, 0), are transformed into the rotated coordinate system to obtain the virtual plane P3:

A′ = M1 · A, B′ = M1 · B, C′ = M1 · C, D′ = M1 · D
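The rotation step can be sketched in Python with NumPy. This is illustrative only: the sign conventions, the use of arctan2, and the function names are our assumptions, since the patent's figures fix them only implicitly. The sketch builds M1 so that the rotated screen normal points at the eye center:

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_M1(left_eye, right_eye):
    """Build M1 from intrinsic rotations y(gamma) -> x(beta) -> z(alpha),
    so that M1 @ (0, 0, 1) points toward the eye-center position."""
    l = np.asarray(left_eye, dtype=float)
    r = np.asarray(right_eye, dtype=float)
    xc, yc, zc = (l + r) / 2.0
    gamma = np.arctan2(xc, zc)                # yaw about the screen's y axis
    beta = -np.arctan2(yc, np.hypot(xc, zc))  # pitch about the (new) x axis
    # roll: angle of the interocular vector expressed in the yawed/pitched frame
    v = (Ry(gamma) @ Rx(beta)).T @ (r - l)
    alpha = np.arctan2(v[1], v[0])
    return Ry(gamma) @ Rx(beta) @ Rz(alpha)

def rotated_vertices(M1, w, h):
    """Screen corners at (+/- w/2, +/- h/2, 0) mapped onto the virtual plane P3."""
    corners = np.array([[-w/2,  h/2, 0], [ w/2,  h/2, 0],
                        [ w/2, -h/2, 0], [-w/2, -h/2, 0]], dtype=float)
    return corners @ M1.T
```

When the eyes face the screen head-on, M1 reduces to the identity and the vertices are unchanged.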
step 203, moving the virtual plane P3 to a virtual plane P2 perpendicular to the viewing direction and passing through the vertex of the screen, and calculating a mapping formula M2 from the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method.
In this embodiment, if there is an angle between the observer and the screen, the image the observer sees appears large at the near side and small at the far side, and its projection onto the virtual plane P2 is a trapezoid or an irregular quadrilateral. The near-large, far-small distortion of the image displayed on the screen therefore needs to be corrected by converting that trapezoid or quadrilateral into a rectangle through a perspective transformation method. The perspective transformation described below is merely illustrative; any perspective transformation method in the related art may be used.
First, for ease of calculation, assume that a plane perpendicular to the line of sight passes through one vertex of the screen, and establish a coordinate system 2 with this vertex as the origin, as shown in fig. 4.
Assuming that point E is the projection source, an arbitrary quadrilateral q00 q01 q11 q10 on the virtual plane P2 (with q00 at the origin O) can be projected as the quadrilateral r00 r01 r11 r10 on the plane P1 where the screen is located (with r00 also at the origin O).
Coordinates of r points:
Figure BDA0003895472350000085
trilateral angulation is known:
Figure BDA0003895472350000086
Figure BDA0003895472350000091
eq length:
Figure BDA0003895472350000092
eq10 length:
Figure BDA0003895472350000093
q10 point coordinates:
Figure BDA0003895472350000094
in the same way, the coordinates of q01 and q11 can be obtained. The detailed calculation process is prior art and thus will not be described in detail.
As shown in fig. 4, a qr conversion formula, i.e., mapping formula M2, can be obtained from the perspective mapping:
Figure BDA0003895472350000095
Figure BDA0003895472350000096
wherein a is 0 ,a 1 To make it possible to
Figure BDA0003895472350000097
A constant coefficient of success. x is a radical of a fluorine atom 1 ,y 1 To make an equation
Figure BDA0003895472350000098
The coefficients are true, where q is a point in a virtual plane perpendicular to the line of sight. x is the number of 0 ,y 0 To make it possible to
Figure BDA0003895472350000099
The coefficients for which r is a point in the screen plane hold.
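A minimal NumPy sketch of computing such a perspective mapping from the four point correspondences (the standard direct-linear solution with the bottom-right element fixed to 1; this is our illustration, not the patent's own derivation):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 8 linear equations for the 3x3 perspective map H (H[2,2] = 1)
    sending each src point (x, y) to its dst point (u, v)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a 2-D point through H with the perspective divide."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

For example, the unit square can be mapped onto a trapezoid, which is exactly the rectangle-to-trapezoid conversion the method relies on.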
Step 204, projecting the display boundary of the original image to be displayed on the screen to the plane P1 of the screen through the rotation matrix M1 and the mapping formula M2, and determining the display range E of the picture on the screen.
In this embodiment, the screen is projected onto the virtual plane so that the viewable image is as large as possible. Since the rotated plane P3 is parallel to the plane P2, the rotated plane coordinates (in the coordinate system 1 whose origin is the screen center) are first translated in the x and y directions into the coordinate system 2 established at the screen vertex. The rotated screen is then projected onto the plane P2 (a translation vector along the sight-line direction is added to the rotated screen coordinates) to obtain the coordinates of its projection on the virtual plane P2. This projected rectangle is reduced until all of its points lie within the plane, as shown in fig. 5a. The four vertices of the rectangle are then mapped onto the screen plane P1 by the qr conversion formula to obtain the final picture display range E, as shown in fig. 5b.
And step 205, corresponding the pixel points in the display range E to the pixel points of the original image one by one to obtain a deformed image.
In this embodiment, the quadrilateral-rectangle mapping formulas above, together with blurring (interpolation) processing, put the pixel points within E into one-to-one correspondence with the pixel points of the full-screen original image to obtain the deformed image. That is, the original image is a rectangle and the deformed image is a trapezoid, but viewed from the oblique angle the user sees a rectangle.
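The pixel correspondence of step 205 can be sketched as an inverse-mapped warp. Nearest-neighbour sampling here stands in for the interpolation the text leaves unspecified, and `H_inv` is assumed to be the inverse of the composite image-to-screen mapping:

```python
import numpy as np

def warp_into_range(src, H_inv, out_shape):
    """Fill each destination pixel from the source via the inverse map
    (nearest neighbour); pixels that land outside the source stay black."""
    out = np.zeros(out_shape, dtype=src.dtype)
    hs, ws = src.shape[:2]
    for yo in range(out_shape[0]):
        for xo in range(out_shape[1]):
            x, y, w = H_inv @ np.array([xo, yo, 1.0])
            xs, ys = int(round(x / w)), int(round(y / w))
            if 0 <= xs < ws and 0 <= ys < hs:
                out[yo, xo] = src[ys, xs]
    return out
```

With the identity mapping the output reproduces the source; with the trapezoidal mapping M2, pixels outside the display range E remain black, matching fig. 6.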
Step 206, rendering the deformation image on a screen.
In the present embodiment, the deformed image is finally rendered on the screen. The final effect is shown schematically in fig. 6: a user at the vertical viewing angle sees a trapezoidal image, while a user at the non-vertical viewing angle sees a rectangular image.
In some optional implementations of the present embodiment, the acquiring the position and the gaze direction of the human eye includes: acquiring a face image to perform face recognition to obtain identity information of at least one user; determining a target user with the highest priority according to the identity information of the at least one user and preset priority information; acquiring the positions of the two eyes of the target user, and determining the positions of the human eyes according to the centers of the positions of the two eyes; and determining a connecting line of the human eye position and the center of the screen as a sight line direction.
When the terminal device is started for the first time, it pops up a prompt asking whether to enable the automatic picture-adjustment function. If it is not enabled at that time, the function can be enabled manually later. A single-user mode, a multi-user mode and a child mode may also be set. In the single-user mode, the picture is adjusted automatically only if the currently configured user is recognized. In the multi-user mode, several faces are enrolled on the terminal device in advance and assigned priorities; when multiple users are present, the user with the highest priority is taken as the main viewing angle through facial recognition, and when none of the recognized faces belongs to the faces enrolled on the device, no automatic adjustment is performed. In this way, the display effect can be adjusted in a targeted manner.
In some optional implementations of this embodiment, determining the target user with the highest priority according to the identity information of the at least one user and the preset priority information includes: if the child mode is set and a child user is detected, determining the child user as the target user. In the child mode, children's faces are enrolled in advance and given the highest priority, so that when a child's line of sight is not perpendicular to the screen, the adjustment of the picture helps prevent squinting.
In some optional implementations of this embodiment, the method further includes: measuring the distance between the eye position of the child user and the center of the screen; and if the distance is smaller than a preset value, reducing the original image. When the child is too close to the screen, the picture is reduced to help prevent myopia.
In some optional implementations of this embodiment, the corresponding of the pixel points in the display range E one to one with the pixel points of the original image to obtain a deformed image includes: if the reduction multiple of the display range E relative to the display boundary exceeds a preset threshold, adjusting the virtual plane P3 and the virtual plane P2 so that they are no longer perpendicular to the line of sight but inclined at a predetermined angle; updating the rotation matrix M1 and the mapping formula M2 according to the adjusted virtual plane P3 and the adjusted virtual plane P2, and re-determining the display range E of the picture on the screen; and if the reduction multiple of the updated display range E relative to the display boundary does not exceed the preset threshold, corresponding the pixel points in the updated display range E one to one with the pixel points of the original image to obtain an updated deformed image.
The picture to be adjusted is smaller than the original image. If the plane P1 on which the screen lies is rotated all the way to the virtual plane P3 perpendicular to the viewing direction, an excessive rotation angle may make the observable picture after rotation too small for the observer to see the transformed picture clearly. A threshold (a rotation-angle threshold or a reduction-multiple threshold) may therefore be set: if the rotation angle exceeds the rotation-angle threshold but does not differ much from it (for example, by less than 5°), processing is performed at the rotation-angle threshold; otherwise, no processing is performed. The rotation angle may also be constrained by setting a multiple threshold. For example, if the picture to be adjusted would be 3 times smaller than the original image, the virtual plane P3 and the virtual plane P2 may be adjusted so that they are inclined at a predetermined angle to the line of sight, for example 85 degrees, instead of perpendicular; the resulting deformed image may then be only 1.3 times smaller, a difference imperceptible to the naked eye, thereby improving the user experience. The predetermined threshold may be set from empirical statistics; for example, the average angle by which a large number of users manually readjust the screen after the phone has automatically adjusted it may be measured, and the reduction multiple computed from that average angle used as the multiple threshold.
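The multiple-threshold fallback above can be sketched as follows. The function name is an assumption; the 85-degree value follows the example in the text, and the threshold comparison mirrors the rule stated in the preceding paragraphs.

```python
def choose_plane_angle(shrink_multiple, multiple_threshold, tilted_angle=85.0):
    """When the deformed picture would be shrunk by more than the preset
    multiple threshold, tilt the virtual planes P3 and P2 to tilted_angle
    instead of keeping them perpendicular (90 degrees) to the line of
    sight, so the final image is shrunk less."""
    if shrink_multiple > multiple_threshold:
        return tilted_angle  # incline the planes by a predetermined angle
    return 90.0              # keep the planes perpendicular to the gaze
```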
In some optional implementations of this embodiment, the method further includes: if the reduction multiple of the updated display range E relative to the display boundary exceeds the preset threshold, determining the original image as the deformed image. If the picture would be too small after adjustment, the observer cannot see the transformed picture clearly; in that case the original image is displayed to the user without deformation processing, avoiding a poor user experience.
In some optional implementations of this embodiment, the method further includes: in response to detecting a screen angle adjustment, acquiring the human eye position and the gaze direction again; recalculating the rotation matrix M1 and the mapping formula M2 according to the updated included angle between the gaze direction and the screen, and re-determining the display range E of the picture on the screen; corresponding the pixel points in the updated display range E one to one with the pixel points of the image to be displayed to obtain an updated deformed image; and rendering the updated deformed image onto the screen. The screen angle adjustment may be detected by a gravity sensor or the like, which triggers re-execution of the method described in flow 200.
With further reference to fig. 7, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for adjusting a screen display effect, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 7, the apparatus 700 for adjusting screen display effect of the present embodiment includes: an acquisition unit 701, a rotation unit 702, a mapping unit 703, a projection unit 704, a deformation unit 705, and a rendering unit 706. Wherein, the acquiring unit 701 is configured to acquire a human eye position and a sight line direction; a rotation unit 702, configured to, if an included angle between the gaze direction and a screen is not within a predetermined vertical angle range and is greater than a minimum included angle threshold, rotate a plane P1 on which the screen is located to a virtual plane P3 perpendicular to the gaze direction by a three-dimensional rotation method according to the position of the human eye, and calculate a rotation matrix M1; a mapping unit 703 configured to move the virtual plane P3 to a virtual plane P2 perpendicular to the viewing direction and passing through a vertex of the screen, and calculate a mapping formula M2 of the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method; a projection unit 704 configured to project a display boundary of an original image to be displayed on the screen onto a plane P1 on which the screen is located through the rotation matrix M1 and the mapping formula M2, and determine a display range E of a picture on the screen; a deformation unit 705 configured to correspond pixel points in the display range E to pixel points of the original image one to one, so as to obtain a deformed image; a rendering unit 706 configured to render the warped image onto the screen.
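The projection step performed by the projection unit 704 can be illustrated with a generic perspective mapping in homogeneous coordinates. This is a sketch of the standard technique, not the patented mapping formula M2 itself; the 3x3 matrix values used below are illustrative.

```python
def apply_mapping(M, point):
    """Apply a 3x3 perspective mapping (such as the mapping formula M2)
    to a 2-D point using homogeneous coordinates; M is a nested list."""
    x, y = point
    xh = M[0][0] * x + M[0][1] * y + M[0][2]
    yh = M[1][0] * x + M[1][1] * y + M[1][2]
    w = M[2][0] * x + M[2][1] * y + M[2][2]
    return (xh / w, yh / w)

def project_boundary(M, corners):
    """Project the display-boundary corners through the mapping to
    obtain the display range E on the screen plane P1."""
    return [apply_mapping(M, c) for c in corners]
```

The display range E is then the polygon spanned by the projected corners, inside which the deformation unit 705 establishes the one-to-one pixel correspondence.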
In this embodiment, the specific processing of the acquiring unit 701, the rotating unit 702, the mapping unit 703, the projecting unit 704, the deforming unit 705 and the rendering unit 706 of the apparatus 700 for adjusting the screen display effect may refer to steps 201 to 206 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the obtaining unit 701 is further configured to: acquiring a face image to perform face recognition to obtain identity information of at least one user; determining a target user with the highest priority according to the identity information of the at least one user and preset priority information; acquiring the positions of the two eyes of the target user, and determining the positions of the human eyes according to the centers of the positions of the two eyes; and determining a connecting line of the human eye position and the center of the screen as a sight line direction.
In some optional implementations of this embodiment, the obtaining unit 701 is further configured to: and if the child mode is set and the child user is detected, determining the child user as a target user.
In some optional implementations of this embodiment, the apparatus 700 further comprises a reduction unit (not shown in the drawings) configured to: measuring the distance between the position of the eyes of the child user and the center of the screen; and if the distance is smaller than a preset value, reducing the original image.
In some optional implementations of this embodiment, the deformation unit 705 is further configured to: if the reduction multiple of the display range E relative to the display boundary exceeds a preset threshold, adjust the virtual plane P3 and the virtual plane P2 so that they are no longer perpendicular to the line of sight but inclined at a predetermined angle; update the rotation matrix M1 and the mapping formula M2 according to the adjusted virtual plane P3 and the adjusted virtual plane P2, and re-determine the display range E of the picture on the screen; and if the reduction multiple of the updated display range E relative to the display boundary does not exceed the preset threshold, correspond the pixel points in the updated display range E one to one with the pixel points of the original image to obtain an updated deformed image.
In some optional implementations of this embodiment, the deformation unit 705 is further configured to: and if the updated display range E is smaller than the display boundary by a multiple exceeding a preset threshold value, determining the original image as a deformed image.
In some optional implementations of the present embodiment, the apparatus 700 further comprises an updating unit (not shown in the drawings) configured to: in response to detecting the screen angle adjustment, acquiring the eye position and the sight line direction again; recalculating the rotation matrix M1 and the mapping formula M2 according to the updated included angle between the sight line direction and the screen, and re-determining the display range E of the picture on the screen; corresponding the pixel points in the updated display range E to the pixel points of the image to be displayed one by one to obtain an updated deformed image; rendering the updated deformation image to a screen.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
An electronic device for adjusting screen display effects, comprising: one or more processors; a storage device having one or more computer programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of flow 200.
A computer readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, performs the method of flow 200.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806 such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the method of adjusting the screen display effect. For example, in some embodiments, the method of adjusting the screen display effect may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method of adjusting the screen display effect described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of adjusting the screen display effect in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a server of a distributed system or a server incorporating a blockchain. The server may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (10)

1. A method of adjusting screen display effects, comprising:
acquiring the positions and sight directions of human eyes;
if the included angle between the sight line direction and the screen is not within a preset vertical angle range and is larger than a minimum included angle threshold value, rotating a plane P1 where the screen is located to a virtual plane P3 which is perpendicular to the sight line direction through a three-dimensional rotation method according to the eye position, and calculating a rotation matrix M1;
moving the virtual plane P3 to a virtual plane P2 which is perpendicular to the sight line direction and passes through the vertex of the screen, and calculating a mapping formula M2 from the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method;
projecting the display boundary of the original image to be displayed on the screen to a plane P1 where the screen is located through the rotation matrix M1 and the mapping formula M2, and determining the display range E of the picture on the screen;
corresponding the pixel points in the display range E to the pixel points of the original image one by one to obtain a deformed image;
rendering the warped image onto the screen.
2. The method of claim 1, wherein said acquiring a human eye position and a gaze direction comprises:
acquiring a face image to perform face recognition to obtain identity information of at least one user;
determining a target user with the highest priority according to the identity information of the at least one user and preset priority information;
acquiring the positions of the two eyes of the target user, and determining the positions of the human eyes according to the centers of the positions of the two eyes;
and determining a connecting line of the human eye position and the center of the screen as a sight line direction.
3. The method according to claim 2, wherein the determining a target user with a highest priority according to the identity information of the at least one user and preset priority information comprises:
and if the child mode is set and the child user is detected, determining the child user as a target user.
4. The method of claim 3, wherein the method further comprises:
measuring the distance between the position of the eyes of the child user and the center of the screen;
and if the distance is smaller than a preset value, reducing the original image.
5. The method according to claim 1, wherein the one-to-one correspondence of the pixel points in the display range E with the pixel points of the original image to obtain a deformed image comprises:
if the reduction multiple of the display range E relative to the display boundary exceeds a preset threshold, adjusting the virtual plane P3 and the virtual plane P2 so that they are no longer perpendicular to the line of sight but inclined at a predetermined angle;
updating the rotation matrix M1 and the mapping formula M2 according to the adjusted virtual plane P3 and the adjusted virtual plane P2, and re-determining the display range E of the picture on the screen;
and if the reduction multiple of the updated display range E to the display boundary is not more than the preset threshold, corresponding the pixel points in the updated display range E to the pixel points of the original image one by one to obtain an updated deformation image.
6. The method of claim 5, wherein the method further comprises:
and if the updated display range E is smaller than the display boundary by a multiple exceeding a preset threshold value, determining the original image as a deformed image.
7. The method according to any one of claims 1-6, wherein the method further comprises:
in response to detecting the screen angle adjustment, acquiring the eye position and the sight line direction again;
recalculating the rotation matrix M1 and the mapping formula M2 according to the updated included angle between the sight line direction and the screen, and re-determining the display range E of the picture on the screen;
corresponding the pixel points in the updated display range E to the pixel points of the image to be displayed one by one to obtain an updated deformed image;
rendering the updated deformation image to a screen.
8. An apparatus for adjusting screen display effects, comprising:
an acquisition unit configured to acquire a position of a human eye and a sight line direction;
the rotating unit is configured to rotate a plane P1 where the screen is located to a virtual plane P3 perpendicular to the sight line direction through a three-dimensional rotating method according to the position of the human eyes if an included angle between the sight line direction and the screen is not within a preset vertical angle range and is larger than a minimum included angle threshold value, and calculate a rotating matrix M1;
a mapping unit configured to move the virtual plane P3 to a virtual plane P2 perpendicular to the viewing direction and passing through a vertex of the screen, and calculate a mapping formula M2 of the virtual plane P2 to a plane P1 where the screen is located by a perspective transformation method;
the projection unit is configured to project a display boundary of an original image to be displayed on the screen onto a plane P1 where the screen is located through the rotation matrix M1 and the mapping formula M2, and determine a display range E of a picture on the screen;
the deformation unit is configured to correspond pixel points in the display range E to pixel points of the original image one by one to obtain a deformed image;
a rendering unit configured to render the deformed image onto the screen.
9. An electronic device for adjusting screen display effects, comprising:
one or more processors;
a storage device having one or more computer programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202211273090.2A 2022-10-18 2022-10-18 Method and device for adjusting screen display effect Pending CN115576422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211273090.2A CN115576422A (en) 2022-10-18 2022-10-18 Method and device for adjusting screen display effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211273090.2A CN115576422A (en) 2022-10-18 2022-10-18 Method and device for adjusting screen display effect

Publications (1)

Publication Number Publication Date
CN115576422A true CN115576422A (en) 2023-01-06

Family

ID=84585815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211273090.2A Pending CN115576422A (en) 2022-10-18 2022-10-18 Method and device for adjusting screen display effect

Country Status (1)

Country Link
CN (1) CN115576422A (en)

Similar Documents

Publication Publication Date Title
CN106502427B (en) Virtual reality system and scene presenting method thereof
JP4181039B2 (en) Portable virtual reality system
WO2016110199A1 (en) Expression migration method, electronic device and system
CN114223195A (en) System and method for video communication using virtual camera
CN111971713A (en) 3D face capture and modification using image and time tracking neural networks
CN109741289B (en) Image fusion method and VR equipment
US20190325564A1 (en) Image blurring methods and apparatuses, storage media, and electronic devices
US20190130536A1 (en) Image blurring methods and apparatuses, storage media, and electronic devices
JP7101269B2 (en) Pose correction
CN110706283B (en) Calibration method and device for sight tracking, mobile terminal and storage medium
US11354875B2 (en) Video blending method, apparatus, electronic device and readable storage medium
CN111275801A (en) Three-dimensional picture rendering method and device
CN113380269B (en) Video image generation method, apparatus, device, medium, and computer program product
WO2017173583A1 (en) Terminal display anti-shake method and apparatus
CN115576422A (en) Method and device for adjusting screen display effect
CN107452045B (en) Space point mapping method based on virtual reality application anti-distortion grid
CN116563740A (en) Control method and device based on augmented reality, electronic equipment and storage medium
CN112528707A (en) Image processing method, device, equipment and storage medium
CN113810755B (en) Panoramic video preview method and device, electronic equipment and storage medium
WO2022057576A1 (en) Facial image display method and apparatus, and electronic device and storage medium
CN114727077A (en) Projection method, apparatus, device and storage medium
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN112473138A (en) Game display control method and device, readable storage medium and electronic equipment
US20240112303A1 (en) Context-Based Selection of Perspective Correction Operations
CN113961746B (en) Video generation method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination