CN115373548A - Display content adjusting method and system - Google Patents


Info

Publication number
CN115373548A
CN115373548A (application CN202211023209.0A)
Authority
CN
China
Prior art keywords
viewing
user
display screen
target
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211023209.0A
Other languages
Chinese (zh)
Inventor
陈洋 (Chen Yang)
郑阳阳 (Zheng Yangyang)
陈玮 (Chen Wei)
吴昌恒 (Wu Changheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hansang Nanjing Technology Co ltd
Original Assignee
Hansang Nanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hansang Nanjing Technology Co ltd filed Critical Hansang Nanjing Technology Co ltd
Priority to CN202211023209.0A priority Critical patent/CN115373548A/en
Publication of CN115373548A publication Critical patent/CN115373548A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the present specification provides a display content adjustment method and system. The method includes: determining a relative relationship between a user and a display screen, where the relative relationship includes at least one of a viewing distance between the user and the display screen, the position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture on the display screen; and adjusting the target viewing picture based on the relative relationship.

Description

Display content adjusting method and system
Technical Field
The present disclosure relates to the field of display technologies, and in particular, to a method and a system for adjusting display content.
Background
For equipment that must be operated visually, users view the screen at different line-of-sight angles because of differences in where the equipment is installed and in users' heights and sitting postures, leading to an inconsistent user experience.
Therefore, it is desirable to provide a method and a system for adjusting display content that adapt the displayed content to the user's viewing conditions, so that the user achieves a good viewing effect from different positions and viewing angles, thereby improving the user experience.
Disclosure of Invention
One embodiment of the present specification provides a display content adjustment method. The method comprises the following steps: determining a relative relationship between a user and a display screen, where the relative relationship includes at least one of a viewing distance between the user and the display screen, the position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture on the display screen; and adjusting the target viewing picture based on the relative relationship.
One embodiment of the present specification provides a display content adjustment system. The system includes: a determining module configured to determine a relative relationship between a user and a display screen, where the relative relationship includes at least one of a viewing distance between the user and the display screen, the position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture on the display screen; and an adjusting module configured to adjust the target viewing picture based on the relative relationship.
One embodiment of the present specification provides a display content adjusting apparatus comprising at least one processor and at least one memory. The at least one memory stores computer instructions; the at least one processor executes at least a portion of the computer instructions to implement the display content adjustment method.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, it executes the display content adjustment method.
Drawings
The present description is further explained by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in them, like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a display content adjustment system according to some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a display content adjustment system according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a display content adjustment method according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram illustrating warping a target viewing screen based on warping parameters according to some embodiments of the present description;
FIG. 5 is an exemplary flow diagram illustrating scaling of a target viewing screen based on a scaling parameter according to some embodiments of the present description;
FIG. 6a is an exemplary diagram illustrating non-deformation of a target viewing screen based on deformation parameters, according to some embodiments of the present description;
FIG. 6b is an exemplary diagram illustrating warping of a target viewing screen based on warping parameters, according to some embodiments of the present description;
FIG. 6c is yet another exemplary diagram illustrating warping a target viewing screen based on warping parameters, according to some embodiments of the present description;
FIG. 7a is an exemplary diagram illustrating a zoom out of a target viewing screen based on a zoom parameter, according to some embodiments of the present description;
FIG. 7b is an exemplary diagram illustrating non-zooming of a target viewing screen based on zoom parameters according to some embodiments herein;
FIG. 7c is an exemplary illustration of a target viewing screen being enlarged based on a zoom parameter, as shown in some embodiments according to the description;
FIG. 8 is an exemplary diagram illustrating adjusting primary viewing content to a primary display position according to some embodiments of the present description;
FIG. 9a is an exemplary schematic diagram of a standard viewing angle shown in accordance with some embodiments herein;
FIG. 9b is an exemplary schematic diagram of a horizontally offset viewing angle, shown in accordance with some embodiments herein;
fig. 9c is an exemplary schematic diagram of a vertically offset viewing angle, shown in accordance with some embodiments herein.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations need not be performed in the exact order shown. Rather, steps may be processed in reverse order or simultaneously, other operations may be added to the processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a display content adjustment system according to some embodiments of the present description.
In some embodiments, the display content adjustment system may be used to adjust the target viewing picture based on the relative relationship between the user and the display screen. As shown in fig. 1, an application scenario 100 of the display content adjustment system may include a processor 110, a display screen 120, and a user 130.
The processor 110 may be configured to perform one or more of the functions disclosed in the embodiments of the present description. For example, the processor 110 may determine the relative relationship between the user 130 and the display screen 120, or adjust the target viewing picture based on that relationship. In some embodiments, the processor 110 may receive images captured or recorded by a camera and/or information sensed by a sensor (e.g., an infrared sensor, a distance sensor, etc.), which it uses to determine the relative relationship between the user 130 and the display screen 120. In some embodiments, the processor 110 may send the adjusted display content to the display screen 120. In some embodiments, the processor 110 may be part of the terminal (e.g., a computer or a mobile phone) to which the display screen belongs; in others, it may be independent of that terminal and connected to the display screen 120 by wire and/or wirelessly. For example, the processor 110 may be remote and located on a server.
The display screen 120 may be used to display a target viewing picture 121 for the user. In some embodiments, the target viewing picture 121 on the display screen 120 may be adjusted according to the relative relationship between the display screen 120 and the user. For example only, when the user 130 is directly in front of the display screen 120 and looks level at its midpoint, the content displayed is the target viewing picture 1211, which is undeformed. When the user 130 must look down at the display screen 120, the content displayed is the target viewing picture 1212, a deformed version of picture 1211. Deforming the target viewing picture 121 ensures that what the user sees when looking down does not differ greatly from what is seen when looking level.
In some embodiments, the display screen 120 may be part of a terminal (e.g., a computer, a television, a tablet, an embedded device, etc.). The terminal may also be a projection-type device, in which case the display screen refers to its projection screen. The terminal may be equipped with monitoring devices such as cameras and sensors. In some embodiments, the terminal may be a combination of at least two of the above devices, and the display screen may refer to the display screens of those devices. The processor 110 may adjust a target viewing picture on each of those display screens based on the relative relationship between each screen and the user, thereby implementing a display matrix, i.e., at least two display screens arranged as a matrix. The shape of the display screen may vary: besides the rectangle shown in fig. 1, it may be a square, a circle, an irregular polygon, etc. This specification mainly takes a rectangle as an example.
A user is a person in front of the display screen who needs to view the content of the display screen 120. The user may be a single person or several persons. For more about the user, see the related description of fig. 3.
FIG. 2 is an exemplary block diagram of a display content adjustment system shown in accordance with some embodiments of the present description. In some embodiments, the display content adjustment system 200 may include a determination module 210 and an adjustment module 220.
In some embodiments, the determining module 210 may be configured to determine a relative relationship between the user and the display screen, where the relative relationship includes at least one of a viewing distance between the user and the display screen, the position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture on the display screen.
In some embodiments, the adjustment module 220 may be configured to adjust the target viewing screen based on the relative relationship.
In some embodiments, the user may be a primary user of a plurality of candidate users, and the determining module 210 may be further configured to determine the primary user from the plurality of candidate users.
In some embodiments, the determining module 210 may be configured to determine the relative relationship based on a first image of the user captured by a camera on the display screen and a second image captured by a global camera in a space in which the display screen is located.
In some embodiments, the adjustment module 220 may be further configured to determine a deformation parameter of the target viewing picture based on the viewing angle and a standard viewing angle, and to deform the target viewing picture based on the deformation parameter.
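Although the patent leaves the deformation function itself unspecified (later embodiments delegate it to a model), the idea can be pictured as a keystone-style pre-warp of the picture's corners. The linear corner-inset mapping and the `tilt_ratio` parameter below are illustrative assumptions, not the patented method:

```python
def keystone_corners(width, height, tilt_ratio):
    """Corner coordinates (clockwise from top-left) of a simple
    vertical keystone pre-warp of a width x height picture.

    A positive tilt_ratio narrows the top edge (compensating a
    look-down viewing angle), a negative value narrows the bottom
    edge, and zero leaves the picture rectangular (fig. 6a case).
    """
    inset = abs(tilt_ratio) * width / 2.0
    top_inset = inset if tilt_ratio > 0 else 0.0
    bottom_inset = inset if tilt_ratio < 0 else 0.0
    return [
        (top_inset, 0.0),                       # top-left
        (width - top_inset, 0.0),               # top-right
        (width - bottom_inset, float(height)),  # bottom-right
        (bottom_inset, float(height)),          # bottom-left
    ]
```

In practice these four corners could be fed to a perspective-transform routine (e.g. an image library's warp function) to render the deformed picture.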
In some embodiments, the adjusting module 220 may process the viewing angle and the standard viewing angle through a deformation parameter determination model to determine the deformation parameter, where the deformation parameter determination model is a machine learning model that takes the first image and the second image as input and outputs the deformation parameter.
In some embodiments, the adjustment module 220 may also take the viewing angle determined when the user looks straight ahead as the standard viewing angle.
In some embodiments, the adjustment module 220 may be further configured to determine a zoom parameter of the target viewing picture based on the viewing distance, and to scale the target viewing picture based on the zoom parameter.
In some embodiments, the adjusting module 220 may be further configured to determine a zoom-out or zoom-in parameter based on the difference between the viewing distance and a preset distance, where the parameter is positively correlated with the absolute value of the difference.
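As a minimal sketch of that positive correlation, the scale factor below deviates from 1 in proportion to the distance difference. The linear form, the `gain` constant, and the lower clamp are assumptions made for illustration; the patent only states the correlation, not the mapping:

```python
def zoom_parameter(viewing_distance, preset_distance, gain=0.25):
    """Scale factor for the target viewing picture: greater than 1
    (zoom in) when the user is farther than the preset distance,
    less than 1 (zoom out) when closer; the deviation from 1 grows
    with the absolute difference. Clamped so the factor stays positive."""
    diff = viewing_distance - preset_distance
    return max(0.1, 1.0 + gain * diff)
```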
In some embodiments, the adjustment module 220 may be further configured to determine the scaling parameter through a scaling parameter determination model, which is a machine learning model that takes the first image and the second image as input and outputs the scaling parameter.
In some embodiments, the adjustment module 220 may also be configured to determine the main viewing content from the target viewing picture, determine a main display position on the display screen based on the position of the reference point, and move the main viewing content to the main display position.
In some embodiments, the adjusting module 220 may be further configured to obtain a first sequence and a second sequence for the user, where the first sequence contains the historical control parameters of the remote controller at multiple time points and the second sequence contains the historical display content of the display screen at those time points, and to determine the main viewing content based on the two sequences.
In some embodiments, the adjustment module 220 may be configured to process the first sequence and the second sequence through a content prediction model to determine the main viewing content.
It should be understood that the display content adjustment system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments the system and its modules may be implemented in hardware, software, or a combination of software and hardware.
It should be noted that the above description of the display content adjustment system and its modules is only for convenience of description and does not limit this specification to the scope of the illustrated embodiments. Those skilled in the art will appreciate that, after understanding the principle of the system, modules may be combined arbitrarily or connected to other modules as a subsystem without departing from this principle. In some embodiments, the determining module and the adjusting module disclosed in fig. 2 may be different modules in one system, or one module may implement the functions of two or more of the modules described above. The modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
FIG. 3 is an exemplary flow diagram of a display content adjustment method, shown in some embodiments herein. In some embodiments, the process 300 may be performed by the processor 110. As shown in fig. 3, the process 300 includes the following steps:
at step 310, the relative relationship between the user and the display screen is determined. In some embodiments, step 310 may be performed by determination module 210.
Different users may be in different positions in front of the display screen. For example, they may be at different distances from it or face it from different directions.
In some embodiments, there may be multiple candidate users in front of the display screen. A candidate user is any person in front of the display screen. The user may be the primary user among the candidate users: a person in front of the display screen who needs to view its content. The primary user may be one or more persons.
In some embodiments, the determining module 210 may determine a primary user from the candidate users. For example, the determining module 210 may identify, by recognizing images captured by a camera (which may be located on the display screen so as to capture the candidates in front of it), which candidate is viewing the display screen, and determine that person as the primary user. As another example, the determining module 210 may determine the candidate closest to the display screen, as detected by an infrared sensor, to be the primary user. It should be noted that the embodiments of this specification do not limit how the primary user is determined; for instance, the determining module 210 may also recognize an image captured by a camera and determine the person operating the display screen as the primary user.
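The closest-candidate strategy mentioned above can be expressed in one line. Representing each candidate as a dict with a `distance` key is an illustrative assumption; any record type carrying the sensed distance would do:

```python
def choose_primary_user(candidates):
    """From the candidates detected in front of the screen, pick the
    one closest to it (the infrared-distance strategy described in
    the text). Each candidate is a dict with a 'distance' key, an
    assumed representation for this sketch."""
    return min(candidates, key=lambda c: c["distance"])
```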
The relative relationship refers to the spatial relationship between the user and the display screen, such as a distance, an angle, or an orientation. In some embodiments, the relative relationship may include the viewing distance between the user and the display screen.
The viewing distance refers to the distance between the user and the display screen while the user is viewing it. In some embodiments, the viewing distance may be the vertical distance between the user and the display screen, i.e., the perpendicular distance from a point of the user (e.g., the first preset point) to the plane in which the display screen lies. In some embodiments, the viewing distance may instead be the straight-line distance between a point of the user (e.g., the first preset point) and a point on the display screen (e.g., the reference point or the second preset point), such as the straight-line distance from the user's eyes to the center of the display screen, or to the point on the display screen closest to the user.
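The two distance notions can be sketched in a few lines. The coordinate convention (positions as (x, y, z) tuples with the screen lying in the plane z = 0) is an assumption made only for illustration:

```python
import math

# Illustrative geometry: positions are (x, y, z) coordinates in metres,
# and the display screen is assumed to lie in the plane z = 0.

def vertical_viewing_distance(eye):
    """Perpendicular distance from the eye to the screen plane."""
    return abs(eye[2])

def straight_line_viewing_distance(eye, screen_point):
    """Euclidean distance from the eye to a chosen screen point
    (e.g. the screen centre or the reference point)."""
    return math.dist(eye, screen_point)
```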
The first preset point refers to a part of the user set in advance, such as the user's head or the user's eyes.
In some embodiments, the first preset point is associated with an eye position of the user. For example, the first preset point is a center position of an eye (left eye or right eye) of the user, a center position of a line between both eyes, or the like. In some embodiments, the determining module 210 may determine the first preset point by identifying an image of the user captured by the camera to determine the eye position of the user.
The second preset point refers to a position on the display screen set in advance, for example, the center of the display screen or the position where the main viewing content is played. In some embodiments, the second preset point may include the reference point: a point on the display screen whose distance from the first preset point satisfies a preset condition. The preset condition may be that the distance is the smallest, that the distance is less than a threshold, that the angle to the horizontal plane is the smallest, etc. For example, the reference point may be the point on the display screen closest to the user's first preset point.
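For a rectangular screen, the smallest-distance preset condition reduces to clamping the eye's coordinates to the screen's extent. The setup below (screen in the z = 0 plane, spanning `screen_min`..`screen_max` in x and y) is an illustrative assumption:

```python
def reference_point(eye, screen_min, screen_max):
    """Closest point on a rectangular screen to the user's eye, i.e.
    one way the smallest-distance preset condition could be met.
    The screen is assumed to lie in the z = 0 plane, spanning
    screen_min..screen_max in x and y (illustrative geometry)."""
    x = min(max(eye[0], screen_min[0]), screen_max[0])
    y = min(max(eye[1], screen_min[1]), screen_max[1])
    return (x, y, 0.0)
```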
As shown in fig. 6a, the first preset point of the user is the user's eye o1. The eye o1 and point q1 on the display screen (e.g., the center of the display screen) lie on the same horizontal plane, and the user looks level at the display. The vertical distance between the user and the display screen is D1, and the straight-line distance between the eye o1 and point q1 is also D1.
As shown in fig. 6b, the user looks down at the display screen. The vertical distance between the user and the display screen is still D1, and the straight-line distance between the user's eye o2 and point q2 on the display screen is D2.
As shown in fig. 6c, the user looks up at the display screen. The vertical distance between the user and the display screen is still D1, and the straight-line distance between the user's eye o3 and point q3 on the display screen is D3.
The straight-line distances between the user's first preset point and points on the display screen in figs. 6a, 6b and 6c are only examples; the straight-line distance may also be measured between other parts of the user and other positions on the display screen.
In some embodiments, the relative relationship may include the position of the reference point on the display screen whose distance from the user's first preset point satisfies the preset condition. The reference point and the preset condition are described above.
In some embodiments, the relative relationship may include the viewing angle at which the user views the target viewing picture on the display screen.
The target viewing picture is the picture in the area of the display screen that the user views. In some embodiments, it may be all or only part of the picture on the display screen. When it is a partial picture, it may be located in different regions of the display screen, for example the central region or the lower-right corner region.
The viewing angle refers to the angle of the user's line of sight when viewing the target viewing picture. It can be represented in various ways. For example, the angle between the display screen and the line connecting the user's eyes to the target viewing picture (e.g., its center point) may be used as the viewing angle; the farther this angle deviates from 90°, the worse the viewing angle.
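That representation, i.e. the angle between the sight line and the screen plane, is simple trigonometry once positions are known. The screen-in-the-z = 0-plane convention below is again an illustrative assumption:

```python
import math

def viewing_angle_deg(eye, target_center):
    """Angle, in degrees, between the sight line (eye to the centre
    of the target viewing picture) and the screen plane, which is
    assumed to be z = 0; 90 means the user faces the picture head-on."""
    v = [t - e for e, t in zip(eye, target_center)]
    norm = math.sqrt(sum(c * c for c in v))
    # For a plane with normal along z: sin(angle to plane) = |v_z| / |v|.
    return math.degrees(math.asin(abs(v[2]) / norm))
```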
The determination module 210 may determine the relative relationship between the user and the display screen in a variety of ways. For example, it may determine the viewing distance through a distance measurement method such as millimeter-wave radar ranging: the display screen contains a built-in radar system that emits an electromagnetic wave signal; the signal is reflected when it is blocked by an object on its path, and by capturing the reflected signal the radar system determines the distance to the object, and thus the viewing distance between the user and the display screen. As another example, the determining module 210 may locate the user's first preset point through image recognition and then determine the position of the reference point from the shape and position of the display screen, which may be stored in the processor's memory in advance. As yet another example, the determining module 210 may determine the viewing angle at which the user views the target viewing picture through line-of-sight detection: a camera built into or external to the display screen acquires images of the user and the display screen, and the acquired images are analyzed with image recognition techniques to determine the viewing angle.
In some embodiments, the relative relationship may be determined from a first image and a second image captured by cameras. The first image is an image of the user taken by a camera on the display screen. The second image is an image of the display screen and the user taken by a global camera in the space where the display screen is located. In some embodiments, the first image and the second image are analyzed by image recognition techniques to determine the relative relationship between the user and the display screen. For example, image recognition on the first image determines the viewing angle at which the user views the target viewing picture. As another example, image recognition on the second image converts the distance between the user and the display screen in the image into a real distance, yielding the viewing distance; it can likewise determine the position of the reference point on the display screen whose distance from the user's first preset point satisfies the preset condition.
Because the cameras capture both the user and the global scene, multi-angle images are obtained, and the relative relationship between the user and the display screen can be determined from both the user's viewing angle and the user's distance from the screen. This makes image acquisition convenient and efficient and the determined relative relationship more accurate.
In some embodiments, when there are multiple users or multiple primary users, the determination module 210 may determine the relative relationship between each user (or each primary user) and the display screen, and then fuse these relative relationships into a final relative relationship. The fusion may be averaging, weighted averaging, or the like. The weights may be determined based on the importance of each user. For example, a primary user may be weighted more heavily than an ordinary user, and a primary user's weight may decrease as his or her distance from the display screen increases.
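The weighted fusion described above can be sketched as follows. This is an illustrative sketch, not taken from the patent: the function name and the specific weighting scheme (primary users count more, with weight decaying as distance grows) are assumptions chosen to match the qualitative rules in the text.

```python
def fuse_viewing_distances(users):
    """Fuse per-user viewing distances into one distance via a weighted average.

    users: list of (distance_m, is_primary) tuples.
    """
    weighted_sum, weight_total = 0.0, 0.0
    for distance, is_primary in users:
        # Assumed weighting: a primary user weighs more than an ordinary
        # user, and a primary user's weight shrinks with distance.
        weight = (2.0 / (1.0 + distance)) if is_primary else 1.0
        weighted_sum += weight * distance
        weight_total += weight
    return weighted_sum / weight_total

# Two primary users at 2 m and 4 m plus one ordinary user at 3 m:
fused = fuse_viewing_distances([(2.0, True), (4.0, True), (3.0, False)])
```

The fused distance lands between the individual distances, pulled toward the nearer primary user because of the distance-decaying weight.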
Step 320: adjust the target viewing picture based on the relative relationship. In some embodiments, step 320 may be performed by the adjustment module 220.
In some embodiments, the adjustment module 220 may automatically deform, zoom, or reposition the target viewing picture of the display screen based on the relative relationship between the user and the display screen. For example, when the viewing distance between the user and the display screen is relatively long or short, the adjustment module 220 may automatically enlarge or reduce the target viewing picture. As another example, when the offset between the user's viewing angle and the standard viewing angle is large, the adjustment module 220 may automatically adjust the deformation parameter of the target viewing picture. In some embodiments, the adjustment module 220 may also adjust other components of the terminal that cooperate with the display screen. For example, when the viewing distance between the user and the display screen is long, the adjustment module 220 may automatically turn up the volume of the sound generating unit (such as a speaker). Through the enlarged target viewing picture and the increased volume, the user can enjoy a good audio-visual experience even at a long viewing distance from the display screen.
In some embodiments, the adjustment module 220 may determine a deformation parameter of the target viewing picture based on the viewing angle and the standard viewing angle, and deform the target viewing picture based on the deformation parameter. For more on the standard viewing angle and on adjusting the target viewing picture based on the deformation parameter, see fig. 4 and its related description.
In some embodiments, the adjustment module 220 may determine a scaling parameter for the target viewing screen based on the viewing distance, and scale the target viewing screen based on the scaling parameter. See fig. 5 and its associated description for more on adjusting the target viewing screen based on the zoom parameter.
In some embodiments, the adjustment module 220 may determine the main viewing content from the target viewing screen, determine a main display position of the display screen based on the position of the reference point, and adjust the main viewing content to the main display position. See fig. 8 and its associated description for more about adjusting the target viewing screen based on the display position.
In some embodiments of the present description, by taking into account the relative relationship between the user and the display screen, namely the viewing distance and the viewing angle, the target viewing picture is intelligently adjusted (deformed, zoomed, and moved), so that the user obtains a good viewing effect at different positions and viewing angles, improving the user experience.
Fig. 4 is an exemplary flow diagram illustrating warping a target viewing screen based on warping parameters according to some embodiments of the present description. As shown in fig. 4, the process 400 includes the following steps. In some embodiments, the flow 400 may be performed by the adjustment module 220.
Step 410: determine deformation parameters of the target viewing picture based on the viewing angle and the standard viewing angle.
The standard viewing angle may refer to the viewing angle at which the user views the target viewing picture head-on.
In some embodiments, when the line between the user's eyes and the target viewing picture is perpendicular to the display screen, the viewing angle is the standard viewing angle; when that line is not perpendicular to the display screen, the viewing angle is offset relative to the standard viewing angle. As shown in fig. 6a, the first preset point of the user is the user's eye o1; the line between o1 and point q1 of the target viewing picture is perpendicular to the display screen, so the user's viewing angle o1q1 is the standard viewing angle. As shown in fig. 6b, the line between the user's eye o2 and point q2 of the target viewing picture is not perpendicular to the display screen; the user's viewing angle o2q2 is not the standard viewing angle, which is instead o2p2. As shown in fig. 6c, the line between the user's eye o3 and point q3 of the target viewing picture is not perpendicular to the display screen; the user's viewing angle o3q3 is not the standard viewing angle, which is instead o3p3.
As previously mentioned, the user's viewing angle may be offset from the standard viewing angle. In some embodiments, this offset may be represented by an offset viewing angle. For example, the offset viewing angle may be represented by the angle between the viewing angle and the standard viewing angle, where the greater the absolute value of the offset viewing angle, the more severe the offset. As shown in fig. 6b, the offset viewing angle is Φ; as shown in fig. 6c, it is Ψ. If the standard viewing angle is taken as the positive x-direction, Φ is positive and Ψ is negative. As another example, the offset viewing angle may be represented by the angle between the viewing angle and the display screen, or the like.
In some embodiments, the offset viewing angle may include a horizontal offset viewing angle, a vertical offset viewing angle, and the like. The horizontal offset viewing angle may refer to the angle by which the user is offset to the left or right of the central position in front of the target viewing picture. The adjustment module 220 may preset the sign convention for the horizontal offset viewing angle; for example, an offset to the left of the target viewing picture is a positive angle and an offset to the right is a negative angle. The vertical offset viewing angle may refer to the angle by which the user's viewing angle is above or below the central position in front of the target viewing picture, so that the user must look down or up to view the target viewing picture. The adjustment module 220 may likewise preset the sign convention for the vertical offset viewing angle; for example, a high viewing angle is positive and a low viewing angle is negative. The magnitude of the offset viewing angle may be represented by an angle: the larger the absolute value of the offset viewing angle, the farther the user deviates from the central position in front of the target viewing picture.
In some embodiments, the adjustment module 220 may determine the magnitude of the offset viewing angle from the lines connecting the user's first preset point to endpoints of the target viewing picture. For example, if the distances from the first preset point to the two endpoints on the horizontal line (relative to the user) of the target viewing picture are unequal, a left-right offset exists and the horizontal offset viewing angle is not 0; if the distances from the first preset point to the two endpoints on the vertical line (relative to the user) are unequal, a vertical offset exists and the vertical offset viewing angle is not 0.
Fig. 9a is an exemplary schematic diagram of the standard viewing angle, shown in accordance with some embodiments of the present description. As shown in fig. 9a, the user's viewing angle is the standard viewing angle, and the lines connecting the user's first preset point to the 4 endpoints m1, m2, m3, and m4 of the target viewing picture are of equal length; that is, d1, d2, d3, and d4 are equal. The user's viewing angle is not offset from the standard viewing angle, and the horizontal and vertical offset viewing angles are both 0°.
Fig. 9b is an exemplary schematic diagram of a horizontal offset viewing angle, shown in accordance with some embodiments of the present description. As shown in fig. 9b, the user's viewing angle deviates from the standard viewing angle: the user is offset to the right of the target viewing picture without any vertical offset. The lines from the user's first preset point to the two endpoints on the horizontal line (relative to the user) of the target viewing picture are of unequal length, while the lines to the two endpoints on the vertical line are of equal length. That is, d1 equals d3, d2 equals d4, d1 does not equal d2, d3 does not equal d4, and the difference between d1 and d2 is the same as the difference between d3 and d4. The user's vertical offset viewing angle is 0° and the horizontal offset viewing angle is not 0°. The more the user's viewing angle shifts to the left or right of the target viewing picture, the larger the difference between d1 and d2 (and between d3 and d4). The adjustment module 220 may determine the horizontal offset viewing angle from the difference between d1 and d2 (or d3 and d4) and the distance between the corresponding two endpoints. For example, the horizontal offset viewing angle can be calculated by equation (1):
α=arcsin((d1-d2)/m1m2) (1)
wherein α is the horizontal offset viewing angle; d1 and d2 are the distances from the user's first preset point to the endpoints m1 and m2 of the target viewing picture, respectively; and m1m2 is the distance between the endpoints m1 and m2 of the target viewing picture.
Fig. 9c is an exemplary schematic diagram of a vertical offset viewing angle, shown in accordance with some embodiments of the present description. As shown in fig. 9c, the user is offset below the target viewing picture without any horizontal offset. The lines from the user's first preset point to the two endpoints on the vertical line (relative to the user) of the target viewing picture are of unequal length, while the lines to the two endpoints on the horizontal line are of equal length. That is, d1 equals d2, d3 equals d4, d1 does not equal d3, d2 does not equal d4, and the difference between d1 and d3 is the same as the difference between d2 and d4. The user's horizontal offset viewing angle is 0° and the vertical offset viewing angle is not 0°. The more the user's viewing angle shifts above or below the target viewing picture, the larger the difference between d1 and d3 (and between d2 and d4). The adjustment module 220 may determine the vertical offset viewing angle from the difference between d1 and d3 (or d2 and d4) and the distance between the corresponding two endpoints. The vertical offset viewing angle can be calculated by equation (2):
β=arcsin((d1-d3)/m1m3) (2)
wherein β is the vertical offset viewing angle; d1 and d3 are the distances from the user's first preset point to the endpoints m1 and m3 of the target viewing picture, respectively; and m1m3 is the distance between the endpoints m1 and m3 of the target viewing picture.
When the user is offset both to the left or right of the target viewing picture and above or below it, neither the horizontal nor the vertical offset viewing angle is 0°, and d1, d2, d3, and d4 all differ. The adjustment module 220 may calculate the horizontal and vertical offset viewing angles with equations (1) and (2), respectively. For example, the adjustment module 220 may determine through equations (1) and (2) that the user's horizontal offset viewing angle is -15° and the vertical offset viewing angle is -30°.
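Equations (1) and (2) can be sketched directly in code. This is a minimal illustration of the formulas above; the function name and the sample distances are assumptions, and d1..d3 denote the endpoint distances defined for figs. 9a-9c.

```python
import math

def offset_angles(d1, d2, d3, m1m2, m1m3):
    """Horizontal and vertical offset viewing angles, in degrees.

    d1, d2, d3: distances from the user's first preset point to
    endpoints m1, m2, m3; m1m2 and m1m3 are the endpoint spacings.
    """
    alpha = math.degrees(math.asin((d1 - d2) / m1m2))  # equation (1)
    beta = math.degrees(math.asin((d1 - d3) / m1m3))   # equation (2)
    return alpha, beta

# Equal distances (the fig. 9a case) give zero offset in both directions;
# d1 > d2 with equal vertical distances gives a positive horizontal
# offset of asin(0.2 / 0.8) ≈ 14.5° and zero vertical offset.
centered = offset_angles(2.0, 2.0, 2.0, 0.8, 0.5)
shifted = offset_angles(2.1, 1.9, 2.1, 0.8, 0.5)
```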
In some embodiments, when the target viewing picture has a shape other than a rectangle or a square (e.g., a circle or an irregular polygon), the adjustment module 220 may determine four endpoints of the target viewing picture according to a preset rule and then determine the magnitude of the offset viewing angle. The preset rule specifies how the endpoints are determined. For example, the preset rule may be: two endpoints are where the horizontal line (relative to the user) through the target viewing picture intersects its edge; the other two are where the vertical line (relative to the user) intersects its edge; and the lines connecting the four endpoints divide the target viewing picture into 4 parts of roughly equal size. Based on the four endpoints determined by this rule and the user's first preset point, the adjustment module 220 may determine the magnitude of the offset viewing angle, for example, the horizontal and vertical offset viewing angles through equations (1) and (2).
The deformation parameter may be data describing how the target viewing picture is deformed. In some embodiments, the deformation parameter is a parameter that deforms the content of the target viewing picture, where the deformation includes twisting and rotating. When the user's viewing angle is the same as the standard viewing angle, the deformation parameter of the target viewing picture may be 0; when it is not, the deformation parameter is non-zero. The larger the offset between the viewing angle and the standard viewing angle, the larger the corresponding deformation parameter; the smaller the offset, the smaller the deformation parameter.
After the target viewing picture is deformed based on the deformation parameter and displayed, the picture the user sees appears undeformed, equivalent to viewing the target viewing picture from the standard viewing angle.
In some embodiments, the adjustment module 220 may determine the deformation parameter of the target viewing picture in a variety of ways based on the viewing angle and the standard viewing angle. For example, the adjustment module 220 may build a database from historical data and/or networked data, containing deformation parameters corresponding to different user viewing angles, zoom parameters corresponding to different viewing distances between the user and the display screen, and so on. The adjustment module 220 may then determine the deformation parameter of the target viewing picture by matching the user's current viewing angle against the database. As another example, a correspondence between the offset viewing angle and the deformation parameter may be constructed, and the deformation parameter determined from that correspondence.
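The database-matching approach above can be sketched as a simple lookup table. This is a hypothetical illustration: the angle ranges and deformation-parameter values below are invented, and the patent does not fix a concrete table, only that larger offsets map to larger deformation parameters.

```python
# (max absolute offset angle in degrees, deformation parameter); invented values.
DEFORMATION_TABLE = [
    (5.0, 0.0),    # near the standard viewing angle: no deformation
    (15.0, 0.2),
    (30.0, 0.5),
    (60.0, 0.8),
]

def lookup_deformation(offset_deg):
    """Return the deformation parameter for a given offset viewing angle."""
    for max_angle, param in DEFORMATION_TABLE:
        if abs(offset_deg) <= max_angle:
            return param
    return 1.0  # offsets beyond the table: maximum deformation
```

The table is ordered by increasing angle, so the first matching row gives the parameter, and the parameter grows monotonically with the absolute offset, mirroring the rule stated above.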
In some embodiments, the adjustment module 220 may process the viewing angle and the standard viewing angle through a deformation parameter determination model to determine the deformation parameter.
The deformation parameter determination model may be a machine learning model. In some embodiments, the type of deformation parameter determination model may include a neural network model, a deep neural network model, and the like, and the selection of the model type may be contingent on the circumstances.
In some embodiments, the input to the deformation parameter determination model may include a viewing perspective, a standard viewing perspective, and the like. The output of the deformation parameter determination model may comprise the deformation parameters.
In some embodiments, the deformation parameter determination model may be obtained by training on multiple sets of first training samples and their labels. The first training samples comprise multiple sets of training data, each set including a sample viewing angle and a sample standard viewing angle, and the label of each set is the sample deformation parameter of the sample target viewing picture. Training samples may be obtained from historical data.
In some embodiments, the first training samples may be selected by screening historical data. For example, from sessions in which historical users watched a historical target viewing picture, historical data in which the user's actions meet certain requirements are selected as first training samples, e.g., data in which the user's head moved few times. The movement of the user's head can be determined through image recognition on camera images.
In some embodiments, the input to the deformation parameter determination model may be a first image and a second image. The output of the deformation parameter determination model may be the deformation parameters. Further description of the first image and the second image may be found in relation to fig. 3.
In some embodiments, the deformation parameter determination model comprises a first feature extraction layer, a second feature extraction layer, and a prediction layer. The first feature extraction layer takes the first image and the second image as input and outputs a standard-viewing-angle feature vector; the second feature extraction layer takes the first image as input and outputs a viewing-angle feature vector; the prediction layer takes both feature vectors as input and outputs the deformation parameter. The first and second feature extraction layers may be models such as CNNs, and the prediction layer may be a DNN or the like. In this case, each set of training data of the deformation parameter determination model is a sample first image and a sample second image.
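The dataflow of this two-branch model can be sketched structurally. This is only a shape of the pipeline: the CNN and DNN layers are replaced with trivial stand-in functions (the patent does not fix an architecture), images are represented as flat lists of numbers, and all function names and arithmetic are invented for illustration.

```python
def first_feature_layer(first_image, second_image):
    # CNN stand-in: would output a standard-viewing-angle feature vector.
    return [sum(first_image) / len(first_image),
            sum(second_image) / len(second_image)]

def second_feature_layer(first_image):
    # CNN stand-in: would output a viewing-angle feature vector.
    return [max(first_image), min(first_image)]

def prediction_layer(standard_vec, viewing_vec):
    # DNN stand-in: fuses both feature vectors into one deformation parameter.
    return sum(standard_vec) - sum(viewing_vec)  # placeholder arithmetic

def deformation_model(first_image, second_image):
    # The two branches are computed independently, then fused.
    return prediction_layer(first_feature_layer(first_image, second_image),
                            second_feature_layer(first_image))
```

The point of the sketch is the wiring: the second image only feeds the standard-view branch, while the first image feeds both branches, matching the layer inputs described above.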
In some embodiments of the present description, determining the deformation parameter with the deformation parameter determination model can improve the accuracy of the determined deformation parameter, and therefore the accuracy of the deformation of the target viewing picture, improving the user experience.
Step 420: deform the target viewing picture based on the deformation parameters.
In some embodiments, the adjustment module 220 may deform the target viewing picture based on the deformation parameter.
Fig. 6a is an exemplary schematic diagram illustrating a target viewing picture that needs no deformation, according to some embodiments of the present description. As shown in fig. 6a, the user's viewing angle is the same as the standard viewing angle, the deformation parameter is 0, and the target viewing pictures in the display screen 610 (e.g., picture 1, picture 2, picture 3) do not need to be deformed.
Fig. 6b is an exemplary diagram illustrating deformation of the target viewing picture based on the deformation parameter, according to some embodiments of the present description. As shown in fig. 6b, the user's viewing angle differs from the standard viewing angle and the user must look up at the target viewing picture. The target viewing picture in the display screen 620 therefore needs to be deformed so that the picture the user sees is clear and undistorted, achieving the effect of viewing it head-on. Compared with the undeformed pictures in the display screen 610, the shapes of picture 1, picture 2, and picture 3 in the deformed display screen 620 have changed.
Fig. 6c is yet another exemplary diagram illustrating deformation of the target viewing picture based on the deformation parameter, according to some embodiments of the present description. As shown in fig. 6c, the user's viewing angle differs from the standard viewing angle and the user must look down at the target viewing picture, so the target viewing picture in the display screen 630 needs to be deformed. Compared with the undeformed pictures in the display screen 610, the shapes of picture 1, picture 2, and picture 3 in the deformed display screen 630 have changed.
In some embodiments of the present description, the target viewing image is deformed based on the deformation parameter, so that it can be ensured that no matter how much the viewing angle of the user deviates from the standard viewing angle, the effect of the user viewing the target viewing image is equivalent to that of the user viewing at the standard viewing angle, and the use experience of the user can be improved.
FIG. 5 is an exemplary flow diagram illustrating scaling of a target viewing screen based on a scaling parameter according to some embodiments of the present description. As shown in fig. 5, the process 500 includes the following steps. In some embodiments, the flow 500 may be performed by the adjustment module 220.
Step 510: determine a zoom parameter of the target viewing picture based on the viewing distance.
The zoom parameter may refer to data describing how the target viewing picture is zoomed. For example, the zoom parameter may be a real number (0.8, 1.2, 2, 3, etc.) whose value represents the zoom factor of the target viewing picture. A zoom parameter of 0.8 means the target viewing picture is reduced to 0.8 times its normal size; a zoom parameter of 2 means it is enlarged to 2 times its normal size; a zoom parameter of 1 means it is at its normal size and needs no zooming.
In some embodiments, the adjustment module 220 may determine the zoom parameter of the target viewing screen based on the size of the viewing distance between the user and the display screen. The zoom parameter is positively correlated with the viewing distance.
The zoom parameter may include a zoom-out parameter, a zoom-in parameter, and the like. The reduction parameters include reduction factor, etc. The magnification parameters include magnification factor and the like.
In some embodiments, the adjustment module 220 may determine the reduction or enlargement parameter based on the difference between the viewing distance and a preset distance; the reduction or enlargement factor is positively correlated with the absolute value of the difference. If the difference between the viewing distance and the preset distance is greater than 0, the zoom parameter is an enlargement parameter, and the larger the difference, the larger the enlargement factor. If the difference is less than 0, the zoom parameter is a reduction parameter, and the larger the absolute value of the difference, the larger the reduction factor.
The preset distance may refer to the optimal viewing distance between the user and the display screen. When the distance between the user and the display screen equals the preset distance, the target viewing picture needs neither reduction nor enlargement.
In some embodiments, the adjustment module 220 may determine the zoom parameter of the target viewing screen based on a database. See above for the database.
In some embodiments of the present description, determining the reduction or enlargement parameter of the target viewing picture from the difference between the viewing distance and the preset distance ensures that, however far the user is from the display screen, the target viewing picture appears at an optimal size, which helps improve the user experience.
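The distance-difference rule above can be sketched as follows. This is an illustrative sketch under stated assumptions: the preset distance of 2 m and the sensitivity constant are invented values, chosen only so that a positive difference yields enlargement (parameter > 1) and a negative difference yields reduction (parameter < 1), as the text describes.

```python
PRESET_DISTANCE_M = 2.0  # assumed optimal viewing distance
SENSITIVITY = 0.4        # assumed zoom change per metre of difference

def zoom_parameter(viewing_distance_m):
    """Zoom parameter from the viewing distance (1.0 = no zooming)."""
    diff = viewing_distance_m - PRESET_DISTANCE_M
    # Positive diff -> enlargement (> 1); negative diff -> reduction (< 1).
    # Clamp so the picture never collapses to nothing.
    return max(0.1, 1.0 + SENSITIVITY * diff)
```

At the preset distance the parameter is exactly 1, a 5 m viewer (as in fig. 7c) gets an enlargement parameter, and a 1 m viewer (as in fig. 7a) gets a reduction parameter.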
In some embodiments, the adjustment module 220 may determine the scaling parameter through a scaling parameter determination model.
The scaling parameter determination model may be a machine learning model. In some embodiments, the type of scaling parameter determination model may include a convolutional neural network model or the like, and the selection of the model type may be contingent on the circumstances.
In some embodiments, the input to the scaling parameter determination model may comprise the first image and the second image, and so on. The output of the scaling parameter determination model may comprise scaling parameters. For example, the first image and the second image are processed based on the zoom parameter determination model, and the like, and the zoom parameter of the target viewing screen is determined. Further description of the first image and the second image may be found in relation to the description of fig. 3.
In some embodiments, the zoom parameter determination model may be obtained by training on multiple sets of second training samples and their labels.
In some embodiments, the second training sample includes a plurality of sets of training data and a label for each set of training data, each set of training data including the sample first image and the sample second image. The labels of each set of training data are sample scaling parameters of the target viewing frame. The second training sample may be obtained based on historical data.
In some embodiments, the training samples may be selected by screening historical data. For example, from sessions in which historical users watched a historical target viewing picture, historical data in which the user's actions meet certain requirements are selected as training samples, e.g., data in which the user's back-and-forth movement is small. The user's back-and-forth movement can be determined by recognizing images captured by the camera.
In some embodiments of the present description, determining the zoom parameter with the zoom parameter determination model can improve the accuracy of the determined zoom parameter, and therefore the accuracy of zooming the target viewing picture, improving the user experience.
Step 520: zoom the target viewing picture based on the zoom parameter.
In some embodiments, the adjustment module 220 may zoom the target viewing picture based on the zoom parameter. Fig. 7b is an exemplary diagram illustrating a target viewing picture that needs no zooming, according to some embodiments of the present description. As shown in fig. 7b, the viewing distance between the user and the display screen 720 is optimal (e.g., 2 meters), and the target viewing pictures in the display screen 720 do not need to be zoomed.
Fig. 7a is an exemplary diagram illustrating reduction of the target viewing picture based on the zoom parameter, according to some embodiments of the present description. As shown in fig. 7a, when the viewing distance between the user and the display screen 710 is short (e.g., 1 meter), the target viewing picture needs to be reduced to keep the picture the user sees at an optimal size. Compared with the unzoomed pictures in the display screen 720, the reduced target viewing pictures in the display screen 710 have changed in size. While reducing the target viewing pictures, the adjustment module 220 may add new target viewing pictures to the display screen; comparing fig. 7a with fig. 7b, the display screen 710 shows 2 additional target viewing pictures.
Fig. 7c is an exemplary diagram illustrating enlargement of the target viewing picture based on the zoom parameter, according to some embodiments of the present description. As shown in fig. 7c, when the viewing distance between the user and the display screen 730 is long (e.g., 5 meters), the target viewing picture needs to be enlarged to keep the picture the user sees at an optimal size; compared with the unzoomed pictures in the display screen 720, the enlarged target viewing pictures in the display screen 730 have changed in size. While enlarging the target viewing pictures, the adjustment module 220 may remove target viewing pictures from the display screen; comparing fig. 7c with fig. 7b, the display screen 730 shows 2 fewer target viewing pictures.
In some embodiments, when the user requirement is a key (button) requirement, the pictures in figs. 7a, 7b, and 7c may be buttons and/or menus, and the adjustment module 220 may enlarge or reduce the buttons and/or menus in the display screen according to the zoom parameter. For more on user requirements, see the related description of fig. 8.
In some embodiments of the present description, the target viewing frame is zoomed based on the viewing distance, so that it can be ensured that the size of the target viewing frame is moderate no matter how far or near the viewing distance of the user is, and the user experience can be improved.
Fig. 8 is an exemplary diagram illustrating adjusting primary viewing content to a primary display position according to some embodiments of the present description.
In some embodiments, the adjustment module 220 may determine the main viewing content from the target viewing screen.
The main viewing content may refer to the content on the display screen that the user is interested in. For example, when a movie and an advertisement are both being played on the display screen and the movie interests the user more than the advertisement, the movie is the main viewing content.
In some embodiments, the main viewing content may be preset specific content; for example, movies may be preset as main viewing content.
In some embodiments, the adjustment module 220 may determine the main viewing content by combining one or more of the currently displayed content, historical play records, remote control operations, and user feedback. For example, when the adjustment module 220 detects that the display screen currently shows a movie and an advertisement, it may determine the movie as the main viewing content. As another example, the adjustment module 220 may determine the main viewing content from several of these factors (e.g., the currently displayed content and the historical play records), where different factors may carry different preset weights, such as 0.3 for the currently displayed content and 0.7 for the historical play records. Illustratively, the currently displayed content is a movie and an advertisement, and users tend to watch movies rather than other content such as advertisements, so the adjustment module 220 may assign an attention score of 70 to the movie and 30 to other content such as advertisements. Analyzing the user's historical play records shows a movie attention frequency of 90% and an advertisement attention frequency of 10%, so the adjustment module 220 may assign the movie an attention score of 90 and the advertisement a score of 10 for the historical play records. The weighted attention score for the movie is then 84 (70 × 0.3 + 90 × 0.7), and for other content such as advertisements 16 (30 × 0.3 + 10 × 0.7). The adjustment module 220 may therefore determine the movie as the main viewing content.
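The worked example above can be reproduced in a few lines. This is a minimal sketch of the weighted-score combination; the weights 0.3 and 0.7 come from the example, while the function name and the integer-weight representation (3 and 7 out of 10, to keep the arithmetic exact) are choices of this sketch.

```python
def weighted_attention(current_score, history_score, w_current=3, w_history=7):
    """Combine attention scores with weights 0.3 and 0.7 (given as 3 and 7 / 10)."""
    return (current_score * w_current + history_score * w_history) / (w_current + w_history)

movie_score = weighted_attention(70, 90)  # -> 84.0
ads_score = weighted_attention(30, 10)    # -> 16.0
main_content = "movie" if movie_score > ads_score else "advertisement"
```

With the example's scores, the movie's weighted attention (84) exceeds the advertisement's (16), so the movie is selected as the main viewing content.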
In some embodiments, the adjustment module 220 may obtain a first sequence and a second sequence of the user and determine the primary viewing content based on the first sequence and the second sequence.
The first sequence may comprise a sequence of historical control parameters of a control device, such as a remote control or a keyboard, at a plurality of time points. The historical control parameters may include the manner in which the user operates the remote control at different time points, etc. The plurality of time points may be a plurality of time points before the current time point, for example, a plurality of time points within 10 minutes, 15 minutes, etc. before the current time point.
The second sequence may include a sequence of the historical display content of the display screen at a plurality of time points. The plurality of time points of the second sequence may correspond one-to-one to the plurality of time points of the first sequence. For example, for a plurality of time points within 10 minutes, 15 minutes, etc. before the current time point, the historical display content of the display screen in the second sequence corresponding to the first, second, third, ..., and nth time points of the first sequence may be a movie, a variety show, an advertisement, a TV series, and the like, respectively.
In some embodiments, the adjustment module 220 may determine the main viewing content based on the first sequence and the second sequence. For example, if the historical display content of the display screen in the second sequence corresponding to the time points in the first sequence at which no operation occurred is a variety show, the adjustment module may determine the variety show as the main viewing content.
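A minimal sketch of this rule, assuming the (hypothetical) convention that a `None` entry in the first sequence marks a time point with no remote-control operation:

```python
from collections import Counter

def main_content_from_sequences(first_seq, second_seq):
    """Pick the content most often displayed while the user issued no
    control operation (None in first_seq), i.e. was presumably watching."""
    idle_contents = [content for op, content in zip(first_seq, second_seq)
                     if op is None]
    if not idle_contents:
        return None
    return Counter(idle_contents).most_common(1)[0][0]

first_seq = [None, "channel_up", None, None]                  # remote operations
second_seq = ["variety", "advertisement", "variety", "variety"]
assert main_content_from_sequences(first_seq, second_seq) == "variety"
```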
Determining the main viewing content by jointly analyzing the historical control parameters of the remote control at the plurality of time points and the historical display content of the display screen at those time points can further improve the accuracy of the determined main viewing content.
In some embodiments, the adjustment module 220 may process the first sequence and the second sequence through a content prediction model to determine the main viewing content.
The content prediction model may refer to a model for predicting the user's main viewing content. In some embodiments, the content prediction model may be a machine learning model. The type of the content prediction model may include a Long Short-Term Memory (LSTM) model, a Recurrent Neural Network (RNN), and the like; the model type may be selected as the case requires.
In some embodiments, the input of the content prediction model may include the first sequence and the second sequence, and the output of the content prediction model may include the user's main viewing content. For example, the first sequence and the second sequence are processed by the content prediction model to determine the content the user is watching (such as a movie).
In some embodiments, the content prediction model may be obtained by training on a plurality of sets of third training samples and their labels. Each set of training data in the third training samples comprises a sample first sequence and a sample second sequence, and the label of each set of training data is the user's sample main viewing content. Training samples may be obtained from historical data.
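The shape of one set of training data can be sketched as follows; the field names are illustrative, not from the original:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrainingSample:
    """One set of training data for the content prediction model."""
    first_sequence: List[Optional[str]]   # historical control operations
    second_sequence: List[str]            # historical display content
    label: str                            # sample main viewing content

sample = TrainingSample(
    first_sequence=[None, "volume_up", None],
    second_sequence=["movie", "movie", "advertisement"],
    label="movie",
)
assert sample.label == "movie"
```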
In some embodiments, the input to the content prediction model further comprises a third sequence.
The third sequence refers to a sequence of the user's actions at a plurality of time points. The user's actions may include looking up at the display screen, looking down at the display screen, using a mobile phone, exercising, and so on. The third sequence may reflect the different needs of the user at different time points. For example, when the user's action is looking up at the display screen, it indicates that the user has a viewing demand at that time point and that the content viewed by the user is located in the upper part of the display screen. For another example, when the user's action is looking down at the display screen, it indicates that the user has a viewing demand at that time point and that the content viewed by the user is located in the lower part of the display screen. For another example, when the user's action is using a mobile phone or exercising, it indicates that the user has no viewing demand at that time point.
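The mapping from a detected user action to a viewing demand described above can be sketched as a small lookup table; the action names and region labels are illustrative, not part of the original method:

```python
# Map a detected user action to (has viewing demand, attended screen region).
ACTION_TO_DEMAND = {
    "look_up_at_screen":   (True,  "upper"),
    "look_down_at_screen": (True,  "lower"),
    "using_phone":         (False, None),
    "exercising":          (False, None),
}

def viewing_demand(action):
    # Unknown actions default to "no viewing demand" in this sketch.
    return ACTION_TO_DEMAND.get(action, (False, None))

assert viewing_demand("look_up_at_screen") == (True, "upper")
assert viewing_demand("exercising") == (False, None)
```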
Processing the first sequence, the second sequence, and/or the third sequence through the content prediction model to determine the main viewing content makes the basis for determining the main viewing content richer, further improving the efficiency and accuracy of the determination.
In some embodiments, the adjustment module 220 may determine a primary display position of the display screen based on the position of the reference point. See fig. 3 and its associated description for more on the location of the reference point.
The primary display position may refer to the user's best viewing area on the display screen. For example, the primary display position may be the position or area of the display screen closest to the first preset point of the user. For another example, the primary display position may be the position or area forming the smallest viewing angle with the first preset point of the user. The adjustment module 220 may determine the primary display position in a variety of ways.
In some embodiments, the adjustment module 220 may determine the primary display position based on the offset viewing angle. Illustratively, if the user's viewing angle is offset horizontally by -15° (the user is offset to the right of the target viewing picture) and vertically by -30° (the user's viewpoint is low and the user looks up at the target viewing picture), the lower right corner region of the display screen may serve as the primary display position (the boundary of the primary display position does not exceed the boundary of the display screen). See fig. 4 and its associated description for more on determining the offset viewing angle.
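The selection of a screen region from the two offset angles can be sketched as below. The sign convention (negative horizontal offset means the user is to the right; negative vertical offset means the user's viewpoint is below the picture) is an assumption taken from the example above:

```python
def main_display_region(h_offset_deg, v_offset_deg):
    """Pick a screen region from the horizontal/vertical offset viewing angles.

    Assumed sign convention for this sketch: negative horizontal offset means
    the user is to the right of the picture; negative vertical offset means
    the user's viewpoint is below the picture (the user looks up).
    """
    horizontal = "center"
    if h_offset_deg < 0:
        horizontal = "right"
    elif h_offset_deg > 0:
        horizontal = "left"
    vertical = "middle"
    if v_offset_deg < 0:
        vertical = "lower"
    elif v_offset_deg > 0:
        vertical = "upper"
    return f"{vertical}-{horizontal}"

# The example from the text: -15° horizontal, -30° vertical -> lower right.
assert main_display_region(-15, -30) == "lower-right"
```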
In some embodiments, the adjustment module 220 may adjust the primary viewing content to the primary display position.
As shown in fig. 8, the adjustment module 220 moves the main viewing content in the display screen 810 to the main display position shown in the display screen 820, obtaining the display screen 830. In the display screen 830, the main viewing content coincides with the main display position, and the user's viewing angle is optimal.
By adjusting the main viewing content to the main display position based on the relative relationship between the user and the display screen, the user can view the desired content in a comfortable posture, visual interference from content other than the main viewing content is avoided, and the viewing experience of the user is improved.
It should be noted that the above description of the flow is for illustration and description only and does not limit the scope of the application of the present specification. Various modifications and alterations to the flow may occur to those skilled in the art, given the benefit of this description. However, such modifications and variations are still within the scope of the present specification.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, though not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the foregoing description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Some embodiments use numerals to describe quantities of components and attributes; it should be understood that such numerals used in the description of the embodiments are modified in some instances by the qualifier "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope in some embodiments of the specification are approximations, in specific embodiments such numerical values are set forth as precisely as practicable.
For each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents are hereby incorporated by reference into this specification, except for application history documents that are inconsistent with or conflict with the contents of this specification, and except for documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It is to be understood that if the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or contrary to those in this specification, the descriptions, definitions, and/or use of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments described herein. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those explicitly described and depicted herein.

Claims (10)

1. A method for adjusting display content, the method comprising:
determining a relative relationship between a user and a display screen, wherein the relative relationship comprises at least one of: a viewing distance between the user and the display screen, a position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture in the display screen;
and adjusting the target viewing picture based on the relative relation.
2. The method of claim 1, wherein the adjusting the target viewing picture based on the relative relationship comprises:
determining a deformation parameter of the target viewing picture based on the viewing angle and a standard viewing angle;
and deforming the target viewing picture based on the deformation parameters.
3. The method of claim 1, wherein the adjusting the target viewing picture based on the relative relationship comprises:
determining a zoom parameter of the target viewing picture based on the viewing distance;
and zooming the target viewing picture based on the zoom parameter.
4. The method of claim 1, wherein the adjusting the target viewing picture based on the relative relationship comprises:
determining main viewing content from the target viewing picture;
determining a primary display position of the display screen based on the position of the reference point;
and adjusting the main viewing content to the main display position.
5. A display content adjustment system, the system comprising:
a determining module configured to determine a relative relationship between a user and a display screen, wherein the relative relationship comprises at least one of: a viewing distance between the user and the display screen, a position of a reference point on the display screen whose distance from a first preset point of the user satisfies a preset condition, and a viewing angle at which the user views a target viewing picture in the display screen;
and the adjusting module is used for adjusting the target watching picture based on the relative relation.
6. The system of claim 5, wherein the adjustment module is further configured to:
determining a deformation parameter of the target viewing picture based on the viewing angle and a standard viewing angle;
and deforming the target viewing picture based on the deformation parameters.
7. The system of claim 5, wherein the adjustment module is further configured to:
determining a zoom parameter of the target viewing picture based on the viewing distance;
and zooming the target viewing picture based on the zoom parameter.
8. The system of claim 5, wherein the adjustment module is further configured to:
determining main viewing content from the target viewing picture;
determining a primary display position of the display screen based on the position of the reference point;
and adjusting the main viewing content to the main display position.
9. An apparatus for adjusting display content, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any of claims 1-4.
10. A computer-readable storage medium, characterized in that the storage medium stores computer instructions which, when executed by a processor, implement the method of any of claims 1 to 4.
CN202211023209.0A 2022-08-25 2022-08-25 Display content adjusting method and system Pending CN115373548A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211023209.0A CN115373548A (en) 2022-08-25 2022-08-25 Display content adjusting method and system


Publications (1)

Publication Number Publication Date
CN115373548A true CN115373548A (en) 2022-11-22

Family

ID=84067766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211023209.0A Pending CN115373548A (en) 2022-08-25 2022-08-25 Display content adjusting method and system

Country Status (1)

Country Link
CN (1) CN115373548A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331168A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Display adjusting method and electronic equipment
CN113268215A (en) * 2021-06-10 2021-08-17 咪咕文化科技有限公司 Screen picture adjusting method, device, equipment and computer readable storage medium
CN114063962A (en) * 2021-11-25 2022-02-18 深圳Tcl数字技术有限公司 Image display method, device, terminal and storage medium
CN114339371A (en) * 2021-12-30 2022-04-12 咪咕音乐有限公司 Video display method, device, equipment and storage medium
CN114816641A (en) * 2022-05-09 2022-07-29 海信视像科技股份有限公司 Display device, multimedia content display method, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination