CN113110908A - Display content adjusting method and device, computer equipment and storage medium - Google Patents

Display content adjusting method and device, computer equipment and storage medium

Info

Publication number
CN113110908A
CN113110908A (application CN202110425841.7A)
Authority
CN
China
Prior art keywords
area
visual
content
display content
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110425841.7A
Other languages
Chinese (zh)
Other versions
CN113110908B (en)
Inventor
许泽臣
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110425841.7A
Publication of CN113110908A
Application granted
Publication of CN113110908B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a display content adjusting method and apparatus, a computer device, and a storage medium. The method displays visual field test content; acquires the user's region of interest with respect to the visual field test content; acquires, according to that region, the projection area of the user's field of view on the display screen where the display content is located; selects a corresponding target visual area within the projection area according to the type of the display content; and adjusts the display content based on the target visual area. A more suitable display range is thereby set for users with visual field impairments, improving the versatility of the display.

Description

Display content adjusting method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of display, in particular to a display content adjusting method and device, computer equipment and a storage medium.
Background
With the development of display technology, display screens have grown larger and larger to improve the visual experience of users with normal eyesight. In real life, however, many users have an impaired visual field, or can watch the screen with only one eye because the other is blind. Their field of view on the screen is usually smaller than that of users with normal vision, so they cannot see the entire display content. As a result, such users may miss the core information of the display content and be unable to perform the related operations.
Disclosure of Invention
The embodiments of the present application provide a display content adjusting method and apparatus, a computer device, and a storage medium, which can set a more suitable display range for users with visual field impairments and improve the versatility of the display.
The embodiment of the application provides a display content adjusting method, which comprises the following steps:
displaying visual field test content;
acquiring a region of interest of a user for the visual field test content;
acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area;
selecting a corresponding target visual area in the projection area according to the type of the display content;
adjusting the display content based on the target viewable area.
Correspondingly, an embodiment of the present application further provides a display content adjusting apparatus, including:
the display unit is used for displaying the visual field test content;
a first acquisition unit configured to acquire a region of interest of a user with respect to the visual field test content;
the second acquisition unit is used for acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area;
the selecting unit is used for selecting a corresponding target visual area in the projection area according to the type of the display content;
and the adjusting unit is used for adjusting the display content based on the target visual area.
Optionally, the visual field test content includes at least two first preset objects, and the first acquiring unit is further configured to:
setting the first preset object to move from four sides of the display screen to a first central position of the display screen, wherein the brightness of the first preset object is unchanged;
acquiring, as first target objects, the first preset objects on which the user performs a related operation;
connecting the first target objects to form a boundary;
and acquiring the region surrounded by the boundary as the attention region.
Optionally, the projection area includes a dynamic visual area, a central visual area, and an edge visual area, and the second obtaining unit is further configured to:
acquiring the attention area as a dynamic visual area, wherein the dynamic visual area comprises a central visual area, an edge visual area and a second central position, and the second central position is the center of the dynamic visual area;
obtaining the central visual area, wherein the central visual area comprises an area which is reduced by a first threshold value from the boundary to the second central position;
and acquiring the area outside the central visible area as the edge visible area.
Optionally, the selecting unit is further configured to:
if the display content is first type display content, selecting the dynamic visual area as the target visual area corresponding to the first type display content;
if the display content is a second type display content, selecting the edge visible area as the target visible area corresponding to the second type display content;
and if the display content is a third type display content, selecting the central visual area as the target visual area corresponding to the third type display content.
Optionally, the first type of display content comprises a shooting game, the second type of display content comprises a racing game, and the third type of display content comprises a multi-player virtual role-playing game.
Optionally, the display content includes dynamic visual content, center visual content and edge visual content, and the adjusting unit is further configured to:
if the target visual area is the dynamic visual area, setting the dynamic visual content to be displayed in the dynamic visual area, and simplifying the content except the dynamic visual content in the display content;
if the target visual area is the edge visual area, setting the edge visual content to be displayed in the edge visual area, and simplifying the content except the edge visual content in the display content;
if the target visual area is the central visual area, the central visual content is set to be displayed in the central visual area, and the content except the central visual content in the display content is simplified.
Optionally, the selecting unit is further configured to:
setting the dynamic visual area as the target visual area;
the adjustment unit is further configured to:
changing the size of the display content in equal proportion to form target display content;
and setting the target display content to be displayed in the target visible area.
Optionally, the selecting unit is further configured to:
if the dynamic visual area is different from the display content in shape, acquiring a surrounding visual area, and setting the surrounding visual area as the target visual area, wherein the surrounding visual area comprises the dynamic visual area, and the surrounding visual area is the same as the display content in shape.
Optionally, the visual field test content includes at least one second preset object, and the first obtaining unit is further configured to:
setting the brightness of the second preset object to a first brightness, wherein the position of the second preset object on the display screen remains unchanged and the first brightness is a brightness invisible to the user;
increasing the brightness of the second preset object to a second brightness, the second brightness being the critical brightness at which the object changes from invisible to visible to the user;
and acquiring the display area at the second brightness as the attention area.
Optionally, the projection area includes a static visual area, and the second obtaining unit is further configured to:
acquiring the attention area as the projection area;
and setting the projection area as a static visual area.
Optionally, the selecting unit is further configured to:
and if the display content is a fourth type display content, selecting the static visual area as the target visual area corresponding to the fourth type display content.
Optionally, the fourth type of display content includes a web interface and/or a picture.
Optionally, the display content includes static visual content, and the adjusting unit is further configured to:
and if the target visual area is the static visual area, setting the static visual content to be displayed in the static visual area, and simplifying the content except the static visual content in the displayed content.
Similarly, an embodiment of the present application further provides a computer device, including:
a memory for storing a computer program;
a processor for executing the steps of any one of the display content adjustment methods.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of any one of the display content adjusting methods.
The embodiments of the present application provide a display content adjusting method and apparatus, a computer device, and a storage medium. After a user with a visual field impairment triggers the display adjustment function of the display device, the device displays visual field test content to the user, and then obtains the different types of projection areas of that user's field of view on the screen according to the user's related operations on the test content. A suitable display range is thereby provided for each type of display content, so that the user can see the core information of the different types of display content, which improves the versatility of the display.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a system diagram of a display content adjusting apparatus according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a display content adjustment method according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a display content adjustment method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a display content adjusting apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a display content adjusting method and device, computer equipment and a storage medium. Specifically, the display content adjusting method according to the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like, and may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, content distribution network service, big data and an artificial intelligence platform.
Referring to fig. 1, fig. 1 is a system schematic diagram of a display content adjusting apparatus according to an embodiment of the present disclosure. The system may include at least one terminal. The terminal displays the visual field test content when the user triggers the display adjustment function; acquires the user's attention area for the visual field test content; acquires, according to the attention area, the projection area of the user's field of view on the display screen where the display content is located; selects a corresponding target visual area in the projection area according to the type of the display content; and finally adjusts the display content based on the target visual area. Users with a visual field impairment can therefore see more of the display content, and the situation in which the display content cannot be viewed normally, or the related operations cannot be performed, because the user misses its core information is avoided.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The present embodiment will be described in terms of a display content adjusting apparatus, which may be specifically integrated in a terminal, and the terminal may include a smart phone, a notebook computer, a tablet computer, a personal computer, and other devices.
The display content adjusting method provided in the embodiment of the present application may be executed by a processor of a terminal, as shown in fig. 2, a specific flow of the display content adjusting method mainly includes steps 201 to 205, which are described in detail as follows:
step 201, displaying the vision field test content.
In the embodiment of the application, if a user wants to adjust the position, size, included elements and the like of the display content according to the range the user can see on the display screen from a given position, the user can perform a corresponding trigger operation to trigger the display adjustment function of the terminal, so that the visual field test content is displayed on the display screen. The trigger operation may be a touch operation and/or a non-touch operation: the touch operation may be a click, long press and/or slide on a control representing the display adjustment function, and the non-touch operation may be, for example, a voice command that controls the terminal to trigger the display adjustment function.
In this embodiment of the application, "the visual field test content" in step 201 is used to test the range the user can see on the display screen where the display content is located. Based on the related operations the user performs on the visual field test content, the terminal can obtain the projection area, on the display screen, of the user's field of view when viewing the screen from a given position. The field of view here refers to the spatial range the user's eyes can see while fixating on any position of the display screen, and the projection area is the cross-section of that spatial range on the display screen.
In one embodiment, to make the test process more engaging, the visual field test content may take various forms, for example a casual mini-game, a picture, a web page and/or a video. The mini-game may be a shooting game, a fruit-cutting game and/or a whack-a-mole game, and the video may be, for example, a comedy video or a game video. Because the field of view with which a user at a given position watches the display screen includes a dynamic field of view, a static field of view, a central field of view and an edge field of view, the projection area of the user's field of view on the display screen may correspondingly include a dynamic visual area, a static visual area, a central visual area and an edge visual area. When the user watches the screen, the spatial range seen while the eyes rotate is the dynamic field of view; the spatial range seen while the eyes remain still is the static field of view; the central field of view is the dynamic field of view contracted from all directions toward its center by a certain amount; and the edge field of view is the part of the dynamic field of view outside the central field of view.
The dynamic visual area, static visual area, central visual area and edge visual area are the partial projection areas on the display screen that correspond, respectively, to the dynamic, static, central and edge fields of view of a user watching the display screen from a given position.
Step 202, acquiring an attention area of the user for the visual field test content.
In the embodiment of the application, because the positions of different first preset objects change the area the user sees on the display screen while the eyes rotate, the visual field test content can be set to include at least two first preset objects. The terminal then determines, from the user's related operations on these objects, which first preset objects the user can see from a given position, obtains the attention area while the user's eyes rotate, and thereby determines the larger area the user sees on the screen, i.e. the dynamic visual area. The "acquiring the attention area of the user for the visual field test content" in step 202 can be realized by the following steps S2021 to S2024:
step S2021: the method comprises the steps of setting a first preset object to move from four sides of a display screen to a first center position of the display screen, and keeping the brightness of the first preset object unchanged.
In the embodiment of the present application, in order to acquire as large an attention area as possible while the user's eyes rotate on the display screen, the initial positions of all first preset objects during the test are set close to the four sides of the display screen, and the objects then move slowly from the four sides toward the center of the display screen while the user operates on them. Because other factors, such as the attributes of the first preset objects, could affect the user's observation and operation, attributes such as the brightness, size and/or shape of all first preset objects are kept unchanged during this process.
In an embodiment of the present application, the first preset object may exist on the display screen for a long time after beginning to appear until the test is finished and disappears, or may disappear after a period of time after appearing.
Step S2022: and acquiring a first preset object of a user executing related operation as a first target object.
In the embodiment of the application, since the range that the user sees on the display screen is limited, the user may not perform the related operation on the first preset object existing outside the viewing range, thereby indicating that the first preset object on which the user performs the related operation is within the range that the user sees on the display screen. The related operations executed by the user on the first preset object are not limited, and may be set according to rules of different visual field test contents, for example, click operations, long-time press operations, re-press operations, and/or sliding operations.
Step S2023: the first target object is connected to form a boundary.
In the embodiment of the application, because the first preset objects are located in all directions of the display screen, and the first preset objects seen by the user when the user rotates eyes are also located in different directions, the first target objects for performing the related operations are also located in different directions, and a boundary can be obtained by connecting the first target objects for performing the related operations for the first time by the user in different directions.
Step S2024: the region enclosed by the boundary is acquired as a region of interest.
In the embodiment of the present application, since the first preset object moves from four sides of the display screen to the center position, the position of the first target object where the user performs the relevant operation for the first time is located at the boundary position of the user attention area, and the area surrounded by the boundary can be considered as the attention area of the user on the display screen.
For example, if the visual field test content is a shooting game, the first preset objects are enemy targets. The enemy targets are set to move gradually from the four sides of the display screen toward the center; the enemy target the user hits first in each direction is acquired as a first target object; the first target objects in the different directions are connected to form a boundary; and the area enclosed by the boundary is the attention area.
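A minimal sketch of how the boundary and attention area of steps S2023 and S2024 might be computed from the positions at which the first target objects were hit; the function names and point representation are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: turning the first target objects' hit positions into a
# boundary polygon and its enclosed attention area. Names and types are
# assumptions for illustration only.
from math import atan2
from typing import List, Tuple

Point = Tuple[float, float]

def boundary_from_targets(first_targets: List[Point]) -> List[Point]:
    """Connect the first target objects by ordering them around their centroid,
    approximating the boundary described in step S2023."""
    cx = sum(x for x, _ in first_targets) / len(first_targets)
    cy = sum(y for _, y in first_targets) / len(first_targets)
    return sorted(first_targets, key=lambda p: atan2(p[1] - cy, p[0] - cx))

def enclosed_area(boundary: List[Point]) -> float:
    """Shoelace formula: the area enclosed by the boundary (step S2024)."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:] + boundary[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0
```

Ordering the hit positions by angle is only one way to form a closed boundary; a convex hull of the same points would serve equally well for this test.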
In some embodiments of the application, when testing the area the user watches on the display screen while the eyes rotate, the visual field test content can also be a video. While the video is played on the terminal, the user's point of gaze on the display screen is monitored in real time to obtain the user's eye-movement track while watching, and the area enclosed by that eye-movement track can be acquired as the attention area.
In one embodiment of the present application, since the area seen on the display screen while the user's eyes do not rotate does not depend on the number of second preset objects, the test can be performed with only one such object. When the user's attention area with non-rotating eyes is tested and the visual field test content is set to include at least one second preset object, the "acquiring the attention area of the user for the visual field test content" in step 202 may be implemented by the following steps S2025 to S2027:
step S2025: and setting the brightness of the second preset object as the first brightness, wherein the position of the second preset object on the display screen is unchanged, and the first brightness comprises the brightness which is invisible to the user.
In the embodiment of the application, because the area occupied by an object differs at different brightness levels, the brightness change of the second preset object can be used to judge the range the user can see on the screen without rotating the eyes, so that the terminal can accurately and automatically acquire the corresponding attention area on the display screen. To avoid position changes affecting the area occupied at a given brightness, factors such as the position, shape and/or size of the second preset object are kept unchanged.
Step S2026: and increasing the brightness of the second preset object to a second brightness, wherein the second brightness is a critical brightness for changing the invisible brightness of the user into the visible brightness of the user.
Step S2027: and acquiring a display area with second brightness as a focus area.
In the embodiment of the application, when testing the range the user can see on the display screen without rotating the eyes, the regions of the display screen other than the second preset object are kept at a brightness invisible to the user, and the initial brightness of the second preset object is also set to a brightness invisible to the user. The brightness of the second preset object is then gradually increased until it reaches a brightness the user can see. Because the user cannot see any region of the display screen other than the second preset object, the display area of the second preset object at the second brightness can be regarded as the user's attention area.
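A minimal sketch of the brightness ramp in steps S2025 to S2027, assuming hypothetical callbacks set_brightness and user_confirms_visible that are not defined in the patent.

```python
# Illustrative brightness-ramp test: raise the second preset object's brightness
# from an invisible level until the user first reports seeing it. The callbacks
# and the step size are assumptions.
def find_second_brightness(set_brightness, user_confirms_visible,
                           first_brightness=0.0, step=0.02, maximum=1.0):
    """Return the critical (second) brightness at which the object becomes visible,
    or None if the user never sees it."""
    brightness = first_brightness
    while brightness <= maximum:
        set_brightness(brightness)          # position, shape and size stay unchanged
        if user_confirms_visible():
            return brightness               # second brightness reached
        brightness += step
    return None
```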
In some embodiments of the application, when testing the area watched on the display screen while the user's eyes do not rotate, the visual field test content can also be a picture. While the picture is displayed on the terminal, the area the user gazes at on the display screen within a preset time is obtained, and that gaze area can be acquired as the attention area. The preset time is short, for example 2 seconds.
Step 203, acquiring a projection area of the visual field of the user on the display screen where the display content is located according to the attention area.
In the embodiment of the application, after the user's attention area for the visual field test content is acquired, the display content the user chooses to watch is displayed on the same display screen. When the user watches the display content, the spatial range the user's eyes can see is taken as the user's field of view, and the attention area is determined as the projection area of that field of view on the display screen. The user's field of view includes a dynamic field of view, a central field of view and an edge field of view, and the projection regions corresponding to each of them are obtained within the attention area according to the relations among the three fields of view.
In this embodiment of the present application, since the field of view of the user viewing the display screen when the user is at a certain position includes a dynamic field of view, a central field of view, and an edge field of view, the projection area of the field of view of the user on the display screen may include a dynamic visual area, a central visual area, and an edge visual area, and the step 203 of "acquiring the projection area of the field of view of the user on the display screen where the content is displayed according to the attention area" may be implemented by the following steps S2031 to S2033:
step S2031: the attention area is obtained as a dynamic visual area, the dynamic visual area comprises a central visual area, an edge visual area and a second central position, and the second central position is the center of the dynamic visual area.
In this embodiment of the application, when the attention area is an area that is seen on the display screen when the eyes of the user rotate, the attention area may be set as a dynamic visual area because the dynamic visual area is a partial projection area on the display screen corresponding to a dynamic field of view for viewing the display screen when the user is at a certain position. The central visual area is a partial projection area corresponding to a central visual field of the display screen viewed by a user at a certain position on the display screen, the edge visual area is a partial projection area corresponding to an edge visual field of the display screen viewed by the user at a certain position on the display screen, and the dynamic visual area comprises the central visual area and the edge visual area as the central visual field and the edge visual field are part of the dynamic visual field.
Step S2032: a central viewable area is obtained that includes an area that narrows by a first threshold from the boundary to a second central location.
In the embodiment of the present application, since the central visual field is a spatial range in which the dynamic visual field is narrowed from all directions to the center to a certain extent, the central visual region may be set to a region in which the first threshold is narrowed from the boundary of the dynamic visual region to the second center position. The setting of the first threshold is not limited, and may be flexibly set according to the actual situation, for example, it may be seventy-five percent, that is, the central visible area occupies twenty-five percent of the dynamic visible area.
Step S2033: and acquiring the area outside the central visible area as an edge visible area.
In the embodiment of the present application, since the edge visual field is a spatial range other than the center visual field in the dynamic visual field, the edge visual region may be set to a region other than the center visual region of the dynamic visual region.
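Under the assumption that the dynamic visual area is stored as a boundary polygon, the central and edge visual areas of steps S2032 and S2033 could be derived roughly as follows; the 0.75 shrink factor mirrors the seventy-five percent example given above for the first threshold.

```python
# Sketch: shrink the dynamic-area boundary toward its center (the second central
# position) by the first threshold; the shrunken polygon is the central visual
# area, and the remainder of the dynamic area is the edge visual area.
from typing import List, Tuple

Point = Tuple[float, float]

def central_visual_area(boundary: List[Point], center: Point,
                        first_threshold: float = 0.75) -> List[Point]:
    cx, cy = center
    scale = 1.0 - first_threshold           # e.g. keep 25% of the distance to the boundary
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in boundary]
```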
In this embodiment of the present application, since the field of view of the user viewing the display screen when the user is at a certain position includes a static field of view, the projection region of the field of view of the user on the display screen includes a static visual region, and the step 203 of "acquiring the projection region of the field of view of the user on the display screen where the content is displayed according to the attention region" may be implemented by the following steps S2034 to S2035:
step S2034: the region of interest is acquired as a projection region.
Step S2035: and setting the projection area as a static visual area.
In this embodiment of the application, when the attention area is an area that is seen on the display screen when the eyes of the user do not rotate, the attention area may be set as a static visual area because the static visual area is a partial projection area of a static field of view of the display screen that is viewed when the user is at a certain position on the display screen.
Step 204, selecting a corresponding target visual area in the projection area according to the type of the display content.
In the embodiment of the present application, in order to let the user view the corresponding parts of the display content with different visual field ranges, the display content may be divided into different regions; for example, it may include dynamic visual content, static visual content, central visual content and edge visual content. Different types of display content contain different core information, and that core information is distributed in different regions, so the corresponding target visual area can be selected according to the region in which the core information of each type of display content is distributed, and the core information can then be displayed in the target visual area.
In this embodiment of the application, if the projection area is an area that the user can see on the display screen by rotating the eyes when the user is at a certain position, the "projection area" in the step 204 may be a dynamic visual area, a central visual area, and/or an edge visual area, and the "selecting a corresponding target visual area according to the type of the display content in the projection area" in the step 204 may be implemented by the following steps S2041 to S2043:
step S2041: and if the display content is the first type display content, selecting the dynamic visual area as a target visual area corresponding to the first type display content.
In the embodiment of the application, the dynamic visual content includes the core information needed for the user to read and/or operate the first type of display content normally, and for a user with normal vision it lies within the dynamic visual area of the display screen. To let a user with a visual field impairment see this core information, the user's dynamic visual area is selected as the target visual area corresponding to the first type of display content, so that the size and/or position of the display content can be adjusted according to the dynamic visual area measured for that user.
Step S2042: and if the display content is the second type display content, selecting the edge visible area as a target visible area corresponding to the second type display content.
In the embodiment of the application, the edge visual content includes the core information needed for the user to read and/or operate the second type of display content normally, and for a user with normal vision it lies within the edge visual area of the display screen. To let a user with a visual field impairment see this core information, the user's edge visual area is selected as the target visual area corresponding to the second type of display content, so that the size and/or position of the display content can be adjusted according to the edge visual area measured for that user.
Step S2043: and if the display content is the third type display content, selecting the central visual area as a target visual area corresponding to the third type display content.
In the embodiment of the application, the central visual content includes the core information needed for the user to read and/or operate the third type of display content normally, and for a user with normal vision it lies within the central visual area of the display screen. To let a user with a visual field impairment see this core information, the user's central visual area is selected as the target visual area corresponding to the third type of display content, so that the size and/or position of the display content can be adjusted according to the central visual area measured for that user.
In one or more embodiments of the present application, the "first type display content" in step S2041 may be a shooting game: because shooting targets can appear at any position on the display screen, a shooting game relies heavily on the information contained in the dynamic visual content, so the dynamic visual area is selected as the target visual area. The "second type display content" in step S2042 may be a racing game: because a racing game requires the player to watch the surroundings of the in-game character at all times, it relies heavily on the information contained in the edge visual content, so the edge visual area is selected as the target visual area. The "third type display content" in step S2043 may be a multiplayer virtual role-playing game: because such a game usually involves close-range combat between multiple virtual characters, clearly observing the information in the close range of the characters is essential, so the game relies heavily on the information contained in the central visual content, and the central visual area is selected as the target visual area.
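The selection logic of steps S2041 to S2043 (and of the static case described further below) amounts to a lookup from content type to target visual area; a sketch with assumed type names follows.

```python
# Illustrative mapping from display-content type to target visual area; the enum
# and the string keys are assumptions, not terms defined by the patent.
from enum import Enum, auto

class VisualArea(Enum):
    DYNAMIC = auto()
    EDGE = auto()
    CENTRAL = auto()
    STATIC = auto()

TARGET_AREA_BY_CONTENT_TYPE = {
    "shooting_game": VisualArea.DYNAMIC,        # first type: targets appear anywhere on screen
    "racing_game": VisualArea.EDGE,             # second type: surroundings of the track matter
    "role_playing_game": VisualArea.CENTRAL,    # third type: close-range combat information
    "web_page_or_picture": VisualArea.STATIC,   # fourth type: viewed without eye rotation
}
```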
In an embodiment of the present application, if the projection area is an area that the user can see on the display screen by rotating the eyes when the user is at a certain position, the "projection area" in the step 204 may be a dynamic visual area, and the "selecting a corresponding target visual area according to the type of the display content in the projection area" in the step 204 may be implemented by the following step S2044:
step S2044: and setting the dynamic visual area as a target visual area.
In the embodiment of the application, since the range of the dynamic visual area is larger than the central visual area, the edge visual area and the static visual area, in order to enable a user to see more display contents, the dynamic visual area can be directly set as a target visual area, and as much display contents as possible are displayed in the dynamic visual area.
In this embodiment, the step S2044 of "setting the dynamic visual area as the target visual area" may include the following steps S20441:
step S20441: if the dynamic visual area and the display content are different in shape, acquiring a surrounding visual area, setting the surrounding visual area as a target visual area, wherein the surrounding visual area comprises the dynamic visual area, and the surrounding visual area and the display content are the same in shape.
In the embodiment of the present application, to keep the proportions of the display content unchanged after adjustment, the display content may be enlarged, reduced and/or moved in equal proportion. If the dynamic visual area has a different shape from the display content, the display content may not be fully displayed in the target visual area, or its layout may become disordered. Therefore, a surrounding visual area with the same shape as the display content may be set as the target visual area, where the surrounding visual area may be the smallest area of that shape containing the dynamic visual area.
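A sketch of step S20441 under the assumption that both the dynamic visual area and the display content are described by axis-aligned rectangles: it finds the smallest surrounding rectangle that contains the dynamic area and matches the display content's aspect ratio.

```python
# Hypothetical helper: compute a surrounding visual area with the same shape
# (aspect ratio) as the display content that contains the dynamic visual area.
from typing import List, Tuple

Point = Tuple[float, float]

def surrounding_visual_area(boundary: List[Point], content_aspect: float):
    """Return (x, y, w, h) of the smallest content-shaped rectangle enclosing the area."""
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    x0, y0 = min(xs), min(ys)
    w, h = max(xs) - x0, max(ys) - y0
    cx, cy = x0 + w / 2, y0 + h / 2
    if w / h < content_aspect:
        w = h * content_aspect               # too narrow: widen to the content's shape
    else:
        h = w / content_aspect               # too wide: grow the height instead
    return cx - w / 2, cy - h / 2, w, h
```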
In an embodiment of the present application, if the projection area is the area the user sees on the display screen without rotating the eyes when at a certain position, the "projection area" in step 204 may be a static visual area, and the "selecting a corresponding target visual area according to the type of the display content in the projection area" in step 204 may be implemented by the following step S2044:
step S2044: and if the display content is the fourth type display content, selecting the static visual area as a target visual area corresponding to the fourth type display content.
In the embodiment of the application, the static visual content includes the core information needed for the user to read and/or operate the fourth type of display content normally, and for a user with normal vision it lies within the static visual area of the display screen. To let a user with a visual field impairment see this core information, the user's static visual area is selected as the target visual area corresponding to the fourth type of display content, so that the size and/or position of the display content can be adjusted according to the static visual area measured for that user.
In some embodiments of the present application, the "fourth type display content" in step S2044 includes a web interface and/or a picture. Because a user generally does not rotate the eyes when viewing a static web interface and/or picture, such content relies heavily on the information contained in the static visual content, so the static visual area is selected as the target visual area.
Step 205, adjusting the display content based on the target visual area.
In one embodiment of the present application, in order to present the different kinds of information in the display content, such as core information and auxiliary information, to the user in different regions according to their function, so that the user views each kind of information with a matching field of view and thereby understands the display content better, the display content may be divided into different regions, which may include dynamic visual content, central visual content and edge visual content; the regions to which the different kinds of information belong can then be assigned according to the type of the display content.
In some embodiments of the present application, the "adjusting the display content based on the target visual area" in step 205 may be implemented by steps S2051 to S2053 as follows:
step S2051: if the target visual area is the dynamic visual area, the dynamic visual content is set to be displayed in the dynamic visual area, and the content except the dynamic visual content in the displayed content is simplified.
In the embodiment of the application, if the core information is judged, according to the type of the display content, to lie in the dynamic visual content, the dynamic visual content is fully retained and is adjusted, by equal-proportion resizing and/or moving, into the dynamic visual area of the user with a visual field impairment for display, so that the user can see the complete dynamic visual content and no core information is missed. The display content outside the dynamic visual content may contain a large amount of information, while the user's visual field range is small; to reduce its interference with the user, the unnecessary information in that remaining content can be simplified, keeping only the key information needed for the display content to run normally.
For example, if the display content is a shooting game, the dynamic visual content may contain the various shooting targets. Apart from the shooting targets and the virtual characters and/or guns that ensure the shooting game can be played normally, the environmental information and other content outside the dynamic visual content may be simplified.
Step S2052: if the target visual area is the edge visual area, edge visual content is set to be displayed in the edge visual area, and content except the edge visual content in the display content is simplified.
In the embodiment of the application, if the core information is judged, according to the type of the display content, to lie in the edge visual content, the edge visual content is fully retained and is adjusted, by equal-proportion resizing and/or moving, into the edge visual area of the user with a visual field impairment for display, so that the user can see the complete edge visual content and no core information is missed; the unnecessary information in the rest of the display content can be simplified, keeping only the key information needed for the display content to run normally.
For example, if the display content is a racing game, the edge visual content may be obstacles around the track, and if the user does not observe the obstacles around the track, the normal operation of the racing game may be affected, so that the edge visual content of the racing game is retained, and the appearance special effect of the racing car in the center visual content and/or the special effect generated by the drifting of the racing car may be simplified.
Step S2053: if the target visual area is the central visual area, the central visual content is set to be displayed in the central visual area, and the content except the central visual content in the displayed content is simplified.
In the embodiment of the application, if the core information is judged, according to the type of the display content, to lie in the central visual content, the central visual content is fully retained and is adjusted, by equal-proportion resizing and/or moving, into the central visual area of the user with a visual field impairment for display, so that the user can see the complete central visual content and no core information is missed; the unnecessary information in the rest of the display content can be simplified, keeping only the key information needed for the display content to run normally.
For example, if the display content is a multiplayer virtual role-playing game, the central visual content may be the health-bar changes when two virtual characters fight, and the position and/or skill effects of the opposing character. If a user with a visual field impairment does not observe the position of the opposing character or the moment its skills hit the user's character, normal play of the game may be affected.
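A minimal, self-contained sketch of the per-area adjustment in steps S2051 to S2053: the layer carrying the core information is anchored to the target visual area and every other layer is flagged for simplification. The Layer class and its fields are assumptions made for illustration.

```python
# Illustrative adjustment: keep the core content layer in the target visual area
# and simplify the rest of the display content.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Rect = Tuple[float, float, float, float]

@dataclass
class Layer:
    name: str
    simplified: bool = False
    target_area: Optional[Rect] = None

def adjust_layers(layers: List[Layer], core_layer: str, target_area: Rect) -> List[Layer]:
    for layer in layers:
        if layer.name == core_layer:
            layer.target_area = target_area   # e.g. the edge visual area for a racing game
        else:
            layer.simplified = True           # drop non-essential effects and detail
    return layers
```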
In an embodiment of the application, after the dynamic visual area is acquired as the target visual area in the step S2044, the step 205 of "adjusting the display content based on the target visual area" may be implemented by the following steps S2054 to S2055:
step S2054: and changing the size of the display content in equal proportion to form target display content.
Step S2055: and setting the target display content to be displayed in the target visible area.
In the embodiment of the application, when a user with a visual field impairment wants to see all the information in the display content, the dynamic visual area, being the largest range seen on the display screen from a given position, can be set as the target visual area, and all of the display content can be resized in equal proportion and/or moved so that it falls within the visible range of the user.
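A sketch of the equal-proportion resizing of steps S2054 and S2055, assuming the target visual area is given as a rectangle (x, y, w, h); the scale factor and offset it returns are what a renderer would apply to the whole display content.

```python
# Hypothetical helper: scale the display content uniformly and center it inside
# the target visual area. Rectangle layout and names are assumptions.
def fit_content_to_area(content_w: float, content_h: float,
                        area: tuple) -> tuple:
    ax, ay, aw, ah = area
    scale = min(aw / content_w, ah / content_h)   # equal-proportion scaling
    new_w, new_h = content_w * scale, content_h * scale
    offset = (ax + (aw - new_w) / 2, ay + (ah - new_h) / 2)
    return scale, offset
```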
In the embodiment of the present application, the display content includes static visual content, and the "adjusting the display content based on the target visual area" in the above step 205 may be implemented by the following step S2056:
step S2056: if the target visual area is a static visual area, static visual contents are set to be displayed in the static visual area, and the contents except the static visual contents in the displayed contents are simplified.
In the embodiment of the application, if the core information is judged, according to the type of the display content, to lie in the static visual content, the static visual content is fully retained and is adjusted, by equal-proportion resizing and/or moving, into the static visual area of the user with a visual field impairment for display, so that the user can see the complete static visual content and no core information is missed; the unnecessary information outside the static visual content can be simplified, keeping only the key information needed for the display content to run normally.
For example, if the displayed content is a web interface, the static visual content may be key information conveyed to the user in the web page, and in order to enable the user with the visual obstruction to observe the key information, the static visual content may be retained, and other information in the edge visual content may be simplified, so that the user may quickly obtain the key information when browsing the web interface.
In the embodiment of the application, the user's sensitivity to colors can also be detected to roughly judge whether the user is color-blind or has weak color vision, so that the display colors of the display content can be adjusted according to the user's ability to recognize various colors. A casual game can be used for this detection: for example, pictures of different colors are shown to the user, the user is asked what information can be seen in each picture, and the user's sensitivity to the various colors is obtained from the answers. The display colors of the display content are then adjusted so that the user can see all of its information and does not miss core information because certain colors cannot be recognized.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the display content adjusting method provided by the embodiment of the application, after a user with a visual field impairment triggers the display adjustment function of the display device, the device displays visual field test content to that user and then obtains the different types of projection areas of the user's field of view on the screen according to the user's related operations on the test content. A suitable display range is thereby provided for each type of display content, so that the user can see the core information of the different types of display content, which improves the versatility of the display.
Referring to fig. 3, fig. 3 is another schematic flow chart of a display content adjusting method according to an embodiment of the present disclosure. The specific process of the method can be as follows:
Step 301, starting the display adjustment function and displaying the visual field test content.
For example, if the user wants to adjust the position, size, and/or included elements of the display content according to the range that the user can see on the display screen when the user is at a certain position, the user may perform a corresponding trigger operation to trigger the display adjustment function of the terminal, so as to display the visual field test content to the user on the display screen.
Step 302, setting the first preset object to move from four sides of the display screen to a first center position of the display screen, wherein the brightness of the first preset object is unchanged.
For example, if the attention area on the display screen while the user's eyes rotate is being tested, all first preset objects are given initial positions close to the four sides of the display screen, and then move slowly from the four sides toward the center of the display screen while the user operates on them.
Step 303, acquiring a first preset object of the user executing the relevant operation as a first target object.
For example, a first preset object, on which a user performs a related operation for the first time, is acquired as a first target object.
Step 304, connecting the first target objects to form a boundary.
For example, the first preset objects are located in all directions of the display screen, so the first target objects are also located in different directions, and a boundary can be formed by connecting the first target objects.
Step 305, acquiring the area surrounded by the boundary as a dynamic visual area, and acquiring a central visual area and an edge visual area according to the dynamic visual area.
For example, since the dynamic visual area is a partial projection area on the display screen corresponding to a dynamic field of view for viewing the display screen when the user is at a certain position, an area surrounded by the boundary is set as the dynamic visual area. And acquiring a region with the first threshold reduced from the boundary to the second center position as a central visible region according to the relation between the central visual field and the dynamic visual field. And acquiring the area except the central visual area in the dynamic visual area as an edge visual area according to the relation between the edge visual area and the dynamic visual area.
Step 306, setting the brightness of the second preset object to a first brightness, wherein the position of the second preset object on the display screen remains unchanged and the first brightness is a brightness invisible to the user.
For example, if the area that the user sees on the display screen when the eyes are not rotated is tested, a second preset object may be set and the test may be performed according to the brightness of the second preset object.
Step 307, increasing the brightness of the second preset object to a second brightness, the second brightness being the critical brightness at which the object changes from invisible to visible to the user.
Step 308: acquire the display area that has the second brightness as the static visual area.
For example, the regions of the display screen other than the second preset object are kept at a brightness invisible to the user, the initial brightness of the second preset object is also set to a brightness invisible to the user, and the brightness of the second preset object is then gradually increased until it becomes visible to the user; the display region of the second preset object at the second brightness is regarded as the user's static visual area.
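A minimal Python sketch of steps 306-308 follows, assuming a callback user_confirms_visible() that stands in for the user's related operation (for example a key press) once the object becomes visible; the brightness scale, step size, and threshold value are hypothetical.

```python
INVISIBLE = 0.0          # first brightness: below the user's perception threshold
MAX_BRIGHTNESS = 1.0
STEP = 0.02

def user_confirms_visible(brightness: float) -> bool:
    """Placeholder for the user's related operation; a fixed threshold stands in
    for a real key press or touch confirmation in this sketch."""
    return brightness >= 0.35

def find_second_brightness() -> float:
    """Ramp the second preset object up from the invisible first brightness and
    return the critical (second) brightness at which it becomes visible."""
    brightness = INVISIBLE
    while brightness < MAX_BRIGHTNESS:
        brightness = min(brightness + STEP, MAX_BRIGHTNESS)
        if user_confirms_visible(brightness):
            return brightness
    return MAX_BRIGHTNESS  # user never reacted; treat full brightness as the limit

second_brightness = find_second_brightness()
# The screen region occupied by the second preset object at this moment is
# recorded as (part of) the static visual area.
```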
Step 309: select a corresponding target visual area according to the type of the display content.
For example, if the display content is a shooting game, the dynamic visual area is selected as the target visual area corresponding to the shooting game. If the display content is a racing game, the edge visual area is selected as the target visual area corresponding to the racing game. If the display content is a multi-player virtual role-playing game, the central visual area is selected as the target visual area corresponding to the multi-player virtual role-playing game. If the display content is a web page interface, the static visual area is selected as the target visual area corresponding to the web page interface.
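The content-type-to-area mapping of step 309 could be expressed, for instance, as a simple lookup table; the type labels, the fallback to the static visual area for unknown types, and the function name below are assumptions made only for illustration.

```python
# Hypothetical mapping from content type to the projection area used as target.
TARGET_AREA_BY_TYPE = {
    "shooting_game": "dynamic_visual_area",
    "racing_game": "edge_visual_area",
    "multiplayer_rpg": "central_visual_area",
    "web_interface": "static_visual_area",
}

def select_target_area(content_type: str) -> str:
    """Return the target visual area for a given type of display content,
    falling back to the static visual area for unknown types."""
    return TARGET_AREA_BY_TYPE.get(content_type, "static_visual_area")

print(select_target_area("racing_game"))   # -> edge_visual_area
```

Keeping the mapping in data rather than in branching code would also make it easy to extend to further content types, though the embodiment itself only enumerates the four cases above.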
Step 310: adjust the display content based on the target visual area.
For example, if the target visual area is the dynamic visual area, the dynamic visual content is displayed in the dynamic visual area and the content other than the dynamic visual content in the display content is simplified. If the target visual area is the edge visual area, the edge visual content is displayed in the edge visual area and the content other than the edge visual content is simplified. If the target visual area is the central visual area, the central visual content is displayed in the central visual area and the content other than the central visual content is simplified. If the target visual area is the static visual area, the static visual content is displayed in the static visual area and the content other than the static visual content is simplified.
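One possible reading of step 310, sketched in Python: elements carrying core information are laid out inside the selected target area, while the remaining elements are simplified by hiding them. The UiElement class, the is_core flag, and place_inside are hypothetical; the application itself does not prescribe this data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UiElement:
    name: str
    is_core: bool          # whether the element carries core information
    visible: bool = True

def place_inside(elem: "UiElement", target_area: str) -> None:
    """Stub for whatever layout routine the display engine provides."""
    print(f"placing {elem.name} inside {target_area}")

def adjust_display(elements: List[UiElement], target_area: str) -> None:
    """Keep core elements (the '... visual content') inside the target area and
    simplify everything else by hiding it."""
    for elem in elements:
        if elem.is_core:
            place_inside(elem, target_area)
        else:
            elem.visible = False              # 'simplify' the remaining content

hud = [UiElement("crosshair", True), UiElement("minimap", True),
       UiElement("decorative_frame", False)]
adjust_display(hud, "dynamic_visual_area")
```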
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the display content adjusting method provided by the embodiment of the application, after a visually impaired user triggers the display adjustment function of the display device, the display device displays the visual field test content to the user, and different types of projection areas of the user's visual field on the screen are then obtained from the user's related operations on the visual field test content. In this way a suitable display range is provided for the visually impaired user for each type of display content, the user can see the core information of the different types of display content, and the universality of the display is improved.
In order to better implement the display content adjusting method according to the embodiment of the present application, an embodiment of the present application further provides a display content adjusting apparatus. Referring to fig. 4, fig. 4 is a schematic structural diagram of a display content adjusting apparatus according to an embodiment of the present disclosure. The display content adjusting apparatus may include a display unit 401, a first acquisition unit 402, a second acquisition unit 403, a selection unit 404, and an adjusting unit 405.
The display unit 401 is configured to display the visual field test content;
a first acquiring unit 402 configured to acquire a region of interest of the user with respect to the visual field test content;
a second obtaining unit 403, configured to obtain, according to the attention area, a projection area of the field of view of the user on a display screen where display content is located;
a selecting unit 404, configured to select, in the projection area, a corresponding target visible area according to the type of the display content;
an adjusting unit 405, configured to adjust the display content based on the target visual area.
Optionally, the visual field test content includes at least two first preset objects, and the first obtaining unit 402 is further configured to:
setting the first preset object to move from four sides of the display screen to a first central position of the display screen, wherein the brightness of the first preset object is unchanged;
acquiring the first preset object on which the user performs the related operation as a first target object;
connecting the first target object to form a boundary;
and acquiring the region surrounded by the boundary as the attention region.
Optionally, the projection area includes a dynamic visual area, a central visual area, and an edge visual area, and the second obtaining unit 403 is further configured to:
acquiring the attention area as a dynamic visual area, wherein the dynamic visual area comprises a central visual area, an edge visual area and a second central position, and the second central position is the center of the dynamic visual area;
obtaining the central visual area, wherein the central visual area comprises an area obtained by shrinking the boundary toward the second central position by a first threshold value;
and acquiring the area outside the central visible area as the edge visible area.
Optionally, the selecting unit 404 is further configured to:
if the display content is first type display content, selecting the dynamic visual area as the target visual area corresponding to the first type display content;
if the display content is a second type display content, selecting the edge visible area as the target visible area corresponding to the second type display content;
and if the display content is a third type display content, selecting the central visual area as the target visual area corresponding to the third type display content.
Optionally, the first type of display content comprises a shooting game, the second type of display content comprises a racing game, and the third type of display content comprises a multi-player virtual role-playing game.
Optionally, the display content includes dynamic visual content, center visual content and edge visual content, and the adjusting unit 405 is further configured to:
if the target visual area is the dynamic visual area, setting the dynamic visual content to be displayed in the dynamic visual area, and simplifying the content except the dynamic visual content in the display content;
if the target visual area is the edge visual area, setting the edge visual content to be displayed in the edge visual area, and simplifying the content except the edge visual content in the display content;
if the target visual area is the central visual area, the central visual content is set to be displayed in the central visual area, and the content except the central visual content in the display content is simplified.
Optionally, the selecting unit 404 is further configured to:
setting the dynamic visual area as the target visual area;
the adjusting unit 405 is further configured to:
changing the size of the display content in equal proportion to form target display content;
and setting the target display content to be displayed in the target visible area.
Optionally, the selecting unit 404 is further configured to:
if the dynamic visual area is different from the display content in shape, acquiring a surrounding visual area, and setting the surrounding visual area as the target visual area, wherein the surrounding visual area comprises the dynamic visual area, and the surrounding visual area is the same as the display content in shape.
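As a hedged illustration of the surrounding visual area and the equal-proportion scaling described above, the Python sketch below computes the smallest axis-aligned rectangle that encloses the dynamic visual area and has the same shape (aspect ratio) as the display content, then derives the uniform scale factor that fits the content into that rectangle. The coordinates and the axis-aligned-rectangle assumption are illustrative and not dictated by this application.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def surrounding_visual_area(boundary: List[Point],
                            content_w: float, content_h: float):
    """Smallest axis-aligned rectangle that encloses the dynamic visual area
    and has the same shape (aspect ratio) as the display content."""
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    box_w, box_h = max_x - min_x, max_y - min_y
    aspect = content_w / content_h
    # Grow one dimension so the rectangle matches the content's aspect ratio.
    if box_w / box_h < aspect:
        box_w = box_h * aspect
    else:
        box_h = box_w / aspect
    cx, cy = (min_x + max_x) / 2, (min_y + max_y) / 2
    return (cx - box_w / 2, cy - box_h / 2, box_w, box_h)

def uniform_scale(content_w: float, content_h: float,
                  area_w: float, area_h: float) -> float:
    """Scale factor that resizes the display content in equal proportion so
    that it fits into the target visual area."""
    return min(area_w / content_w, area_h / content_h)

x, y, w, h = surrounding_visual_area(
    [(300, 200), (1600, 220), (1650, 900), (280, 880)], 1920.0, 1080.0)
print(uniform_scale(1920.0, 1080.0, w, h))
```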
Optionally, the visual field test content includes at least one second preset object, and the first obtaining unit 402 is further configured to:
setting the brightness of the second preset object as a first brightness, wherein the position of the second preset object on the display screen is unchanged, and the first brightness comprises invisible brightness of a user;
increasing the brightness of the second preset object to a second brightness, wherein the second brightness is the critical brightness at which the invisible brightness of the user changes into the visible brightness of the user;
and acquiring the display area with the second brightness as the attention area.
Optionally, the projection area includes a static visual area, and the second obtaining unit 403 is further configured to:
acquiring the attention area as the projection area;
and setting the projection area as a static visual area.
Optionally, the selecting unit 404 is further configured to:
and if the display content is a fourth type display content, selecting the static visual area as the target visual area corresponding to the fourth type display content.
Optionally, the fourth type of display content includes a web interface and/or a picture.
Optionally, the display content includes static visual content, and the adjusting unit 405 is further configured to:
and if the target visual area is the static visual area, setting the static visual content to be displayed in the static visual area, and simplifying the content except the static visual content in the display content.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the display content adjusting apparatus provided by the embodiment of the application, after a visually impaired user triggers the display adjustment function of the display device, the display unit 401 displays the visual field test content to the user, the first acquiring unit 402 acquires the user's attention area for the visual field test content, and the second acquiring unit 403 obtains different types of projection areas of the user's visual field on the screen according to the attention area. The selecting unit 404 then selects a suitable display range for each type of display content, and the adjusting unit 405 adjusts the display content based on that range, so that the visually impaired user can see the core information of different types of display content and the universality of the display is improved.
Correspondingly, the embodiment of the application also provides a computer device, which can be a terminal, and the terminal can be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer, a personal digital assistant and the like. As shown in fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer apparatus 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer-readable storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby monitoring the computer device 500 as a whole.
In this embodiment of the application, the processor 501 in the computer device 500 loads instructions corresponding to processes of one or more applications into the memory 502, and the processor 501 runs the applications stored in the memory 502, so as to implement various functions as follows:
displaying the visual field test content according to the triggering of the display adjusting function; acquiring a region of interest of a user for the visual field test content; acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area; selecting a corresponding target visual area in the projection area according to the type of the display content; adjusting the display content based on the target viewable area.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 5, the computer device 500 further includes: touch-sensitive display screen 503, radio frequency circuit 504, audio circuit 505, input unit 506 and power 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 5 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 503 can be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the computer device 500, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that execute corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 501, and can receive and execute commands sent by the processor 501. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is transmitted to the processor 501 to determine the type of the touch event, and the processor 501 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to implement input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 503 can also be used as a part of the input unit 506 to implement an input function.
In the embodiment of the present application, the processor 501 responds to the user's trigger operation on the display adjustment function, displays the visual field test content to the user, analyzes the user's related operations on the visual field test content, obtains a target visual area for different types of display content, and adjusts the display content according to the target visual area.
The radio frequency circuit 504 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device, and to exchange signals with the network device or the other computer device.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On the one hand, the audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 505 and converted into audio data. The audio data is then processed by the processor 501 and sent, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral headset and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to supply power to the various components of the computer device 500. Optionally, the power supply 507 may be logically connected to the processor 501 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 507 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in fig. 5, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, after a visually impaired user triggers the display adjustment function, the computer device 500 provided in this embodiment displays the visual field test content to the user and then obtains different types of projection areas of the user's visual field on the screen according to the user's related operations on the visual field test content, so as to provide a suitable display range for the visually impaired user for each type of display content. The visually impaired user can therefore see the core information of different types of display content, and the universality of the display is improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the display content adjustment methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
displaying the visual field test content;
acquiring a region of interest of a user for the visual field test content;
acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area;
selecting a corresponding target visual area in the projection area according to the type of the display content;
adjusting the display content based on the target viewable area.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any display content adjusting method provided in the embodiments of the present application, the beneficial effects that can be achieved by any display content adjusting method provided in the embodiments of the present application can be achieved, and detailed descriptions are omitted here for the details in the foregoing embodiments.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The display content adjusting method, apparatus, computer device, and storage medium provided by the embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementation of the present application; the description of the above embodiments is only intended to help in understanding the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (16)

1. A display content adjustment method, comprising:
displaying visual field test content;
acquiring a region of interest of a user for the visual field test content;
acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area;
selecting a corresponding target visual area in the projection area according to the type of the display content;
adjusting the display content based on the target viewable area.
2. The method of claim 1, wherein the visual field test content comprises at least two first preset objects, and the acquiring the user's attention area for the visual field test content comprises:
setting the first preset object to move from four sides of the display screen to a first central position of the display screen, wherein the brightness of the first preset object is unchanged;
acquiring the first preset object on which the user performs the related operation as a first target object;
connecting the first target object to form a boundary;
and acquiring the region surrounded by the boundary as the attention region.
3. The method of claim 2, wherein the projection area comprises a dynamic visual area, a center visual area and an edge visual area, and the obtaining the projection area of the user's visual field on the display screen on which the content is displayed according to the attention area comprises:
acquiring the attention area as a dynamic visual area, wherein the dynamic visual area comprises a central visual area, an edge visual area and a second central position, and the second central position is the center of the dynamic visual area;
obtaining the central visual area, wherein the central visual area comprises an area obtained by shrinking the boundary toward the second central position by a first threshold value;
and acquiring the area outside the central visible area as the edge visible area.
4. The method according to claim 3, wherein selecting a corresponding target visual area according to the type of the display content in the projection area comprises:
if the display content is first type display content, selecting the dynamic visual area as the target visual area corresponding to the first type display content;
if the display content is a second type display content, selecting the edge visible area as the target visible area corresponding to the second type display content;
and if the display content is a third type display content, selecting the central visual area as the target visual area corresponding to the third type display content.
5. The method of claim 4, wherein the first type of display content comprises a shooting game, the second type of display content comprises a racing game, and the third type of display content comprises a multi-player virtual role-playing game.
6. The method of claim 3, wherein the display content comprises dynamic visual content, center visual content, and edge visual content, and wherein adjusting the display content based on the target visual area comprises:
if the target visual area is the dynamic visual area, setting the dynamic visual content to be displayed in the dynamic visual area, and simplifying the content except the dynamic visual content in the display content;
if the target visual area is the edge visual area, setting the edge visual content to be displayed in the edge visual area, and simplifying the content except the edge visual content in the display content;
if the target visual area is the central visual area, the central visual content is set to be displayed in the central visual area, and the content except the central visual content in the display content is simplified.
7. The method according to claim 3, wherein selecting a corresponding target visual area according to the type of the display content in the projection area comprises:
setting the dynamic visual area as the target visual area;
the adjusting the display content based on the target viewable area includes:
changing the size of the display content in equal proportion to form target display content;
and setting the target display content to be displayed in the target visible area.
8. The method of claim 7, wherein the setting the dynamic visual area as the target visual area comprises:
if the dynamic visual area is different from the display content in shape, acquiring a surrounding visual area, and setting the surrounding visual area as the target visual area, wherein the surrounding visual area comprises the dynamic visual area, and the surrounding visual area is the same as the display content in shape.
9. The method of claim 1, wherein the visual field test content comprises at least one second preset object, and the acquiring the user's attention area for the visual field test content comprises:
setting the brightness of the second preset object as a first brightness, wherein the position of the second preset object on the display screen is unchanged, and the first brightness comprises invisible brightness of a user;
increasing the brightness of the second preset object to a second brightness, wherein the second brightness is the critical brightness at which the invisible brightness of the user changes into the visible brightness of the user;
and acquiring the display area with the second brightness as the attention area.
10. The method of claim 9, wherein the projection area comprises a static visual area, and wherein the obtaining the projection area of the user's field of view on the display screen where the content is displayed according to the attention area comprises:
acquiring the attention area as the projection area;
and setting the projection area as a static visual area.
11. The method according to claim 10, wherein selecting a corresponding target visual area according to the type of the display content in the projection area comprises:
and if the display content is a fourth type display content, selecting the static visual area as the target visual area corresponding to the fourth type display content.
12. The method of claim 11, wherein the fourth type of display content comprises a web interface and/or a picture.
13. The method of claim 11, wherein the display content comprises static visual content, and wherein adjusting the display content based on the target visual area comprises:
and if the target visual area is the static visual area, setting the static visual content to be displayed in the static visual area, and simplifying the content except the static visual content in the display content.
14. A display content adjusting apparatus, comprising:
the display unit is used for displaying the visual field test content;
a first acquisition unit configured to acquire a region of interest of a user with respect to the visual field test content;
the second acquisition unit is used for acquiring a projection area of the visual field of the user on a display screen where display content is located according to the attention area;
the selecting unit is used for selecting a corresponding target visual area in the projection area according to the type of the display content;
and the adjusting unit is used for adjusting the display content based on the target visual area.
15. A computer device, comprising:
a memory for storing a computer program;
a processor for implementing the steps in the display content adjustment method according to any one of claims 1 to 13 when executing the computer program.
16. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the display content adjustment method according to any one of claims 1 to 13.
CN202110425841.7A 2021-04-20 2021-04-20 Display content adjustment method, device, computer equipment and storage medium Active CN113110908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110425841.7A CN113110908B (en) 2021-04-20 2021-04-20 Display content adjustment method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110425841.7A CN113110908B (en) 2021-04-20 2021-04-20 Display content adjustment method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113110908A true CN113110908A (en) 2021-07-13
CN113110908B CN113110908B (en) 2023-05-30

Family

ID=76719204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110425841.7A Active CN113110908B (en) 2021-04-20 2021-04-20 Display content adjustment method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113110908B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114420010A (en) * 2021-12-30 2022-04-29 联想(北京)有限公司 Control method and device and electronic equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819680A (en) * 2012-07-31 2012-12-12 北京天神互动科技有限公司 System and method for processing visual field of online role-playing network game
US20180001198A1 (en) * 2016-06-30 2018-01-04 Sony Interactive Entertainment America Llc Using HMD Camera Touch Button to Render Images of a User Captured During Game Play
WO2019002559A1 (en) * 2017-06-29 2019-01-03 Koninklijke Kpn N.V. Screen sharing for display in vr
CN109189215A (en) * 2018-08-16 2019-01-11 腾讯科技(深圳)有限公司 A kind of virtual content display methods, device, VR equipment and medium
CN109799912A (en) * 2019-02-25 2019-05-24 努比亚技术有限公司 A kind of display control method, equipment and computer readable storage medium
US10360876B1 (en) * 2016-03-02 2019-07-23 Amazon Technologies, Inc. Displaying instances of visual content on a curved display
CN110456907A (en) * 2019-07-24 2019-11-15 广东虚拟现实科技有限公司 Control method, device, terminal device and the storage medium of virtual screen
CN111198608A (en) * 2018-11-16 2020-05-26 广东虚拟现实科技有限公司 Information prompting method and device, terminal equipment and computer readable storage medium
CN111258698A (en) * 2020-01-17 2020-06-09 支付宝(杭州)信息技术有限公司 Object display method and device
US20210049821A1 (en) * 2019-08-17 2021-02-18 Samsung Electronics Co., Ltd. Apparatuses and methods for establishing virtual reality (vr) call between caller vr device and callee vr device
CN112473134A (en) * 2020-12-09 2021-03-12 网易(杭州)网络有限公司 Method and device for displaying visual field area, storage medium and computer equipment

Also Published As

Publication number Publication date
CN113110908B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN112156464A (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN113101657B (en) Game interface element control method, game interface element control device, computer equipment and storage medium
CN113398590B (en) Sound processing method, device, computer equipment and storage medium
CN113350793B (en) Interface element setting method and device, electronic equipment and storage medium
CN113332726B (en) Virtual character processing method and device, electronic equipment and storage medium
CN113144601B (en) Expression display method, device, equipment and medium in virtual scene
CN113426124B (en) Display control method and device in game, storage medium and computer equipment
CN112870718A (en) Prop using method and device, storage medium and computer equipment
CN113332716A (en) Virtual article processing method and device, computer equipment and storage medium
CN112843716B (en) Virtual object prompting and viewing method and device, computer equipment and storage medium
CN113110908B (en) Display content adjustment method, device, computer equipment and storage medium
CN113181632A (en) Information prompting method and device, storage medium and computer equipment
CN114189731B (en) Feedback method, device, equipment and storage medium after giving virtual gift
CN115212567A (en) Information processing method, information processing device, computer equipment and computer readable storage medium
CN115068941A (en) Game image quality recommendation method and device, computer equipment and storage medium
CN115300904A (en) Recommendation method and device, electronic equipment and storage medium
CN114225412A (en) Information processing method, information processing device, computer equipment and storage medium
CN113521739B (en) Method, device, computer equipment and storage medium for adjusting object attribute
CN113398564B (en) Virtual character control method, device, storage medium and computer equipment
CN113426128B (en) Method, device, terminal and storage medium for adjusting appearance of custom roles
CN116808583A (en) Virtual player control method and device, computer equipment and storage medium
CN118079377A (en) Game display control method, game display control device, electronic equipment and storage medium
CN116943238A (en) Game realization method, game realization device, electronic equipment and computer readable storage medium
CN115068936A (en) Animation playing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant