CN114296627A - Content display method, apparatus, device and storage medium

Content display method, apparatus, device and storage medium

Info

Publication number
CN114296627A
Authority
CN
China
Prior art keywords
content
display
display content
area
content area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111233652.6A
Other languages
Chinese (zh)
Other versions
CN114296627B
Inventor
唐伟
徐世超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111233652.6A priority Critical patent/CN114296627B/en
Publication of CN114296627A publication Critical patent/CN114296627A/en
Application granted granted Critical
Publication of CN114296627B publication Critical patent/CN114296627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a content display method, apparatus, device, and storage medium, belonging to the field of terminal technologies. The method comprises the following steps: while two content areas are displayed, when the user's gaze point falls on first display content in one content area, second display content associated with the first display content is highlighted in the other content area to prompt the user that the two pieces of display content are related. This helps the user quickly locate associated content, improves human-computer interaction efficiency, and effectively improves the efficiency with which the user obtains information.

Description

Content display method, apparatus, device and storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a content display method, apparatus, device, and storage medium.
Background
With the development of terminal technology, a terminal can install various applications and typically displays a corresponding interface while running them; a user obtains information by browsing the content displayed on the interface. In some scenarios, when multiple pieces of content displayed on the interface are associated with one another, the user often has to move the line of sight back and forth repeatedly to browse them, which makes information acquisition inefficient.
Disclosure of Invention
The embodiments of the application provide a content display method, apparatus, device, and storage medium, which can effectively improve the efficiency with which a user obtains information. The technical solution is as follows:
in one aspect, a content display method is provided, the method including:
displaying a first content area and a second content area;
determining first display content corresponding to a gaze point in the first content area;
highlighting, in the second content area, second display content associated with the first display content.
In another aspect, a content display apparatus is provided, the apparatus including:
a display module, configured to display a first content area and a second content area;
a determining module, configured to determine first display content corresponding to a gaze point in the first content area;
the display module being further configured to highlight, in the second content area, second display content associated with the first display content.
In an optional implementation manner, the first content area and the second content area are different display areas of the same file; or the first content area and the second content area are display areas of different files; or the first content area and the second content area are different display areas of the same page; or the first content area and the second content area are display areas of different pages.
In an optional implementation manner, the first content area and the second content area are both text display areas; or, one of the first content area and the second content area is a text display area, and the other is a graphic display area; or, the first content area and the second content area are both graphic display areas.
In an optional implementation, the first display content is the same as the second display content; or the similarity between the first display content and the second display content is greater than or equal to a first threshold; or a target correspondence exists between the first display content and the second display content.
In an optional implementation, the display module includes:
a content determination unit configured to determine, in the second content area, second display content associated with the first display content based on the first display content;
and the highlighting unit is used for highlighting the second display content.
In an optional implementation, the content determining unit is configured to: acquire association information corresponding to the first display content, the association information indicating display content in the second content area that has a target correspondence with the first display content; and determine the second display content based on the association information.
In an optional implementation, the association information includes position information, the position information indicating the position, in the second content area, of the display content that has the target correspondence with the first display content; the content determining unit is configured to determine, based on the association information, the display content at the position indicated by the position information in the second content area as the second display content.
In an optional implementation, the association information includes identification information, the identification information being used to identify the first display content; the content determining unit is configured to determine, based on the association information, the display content corresponding to the identification information in the second content area as the second display content.
In an optional implementation, the apparatus further comprises:
an information determining module, configured to determine, by applying a text detection algorithm and a text recognition algorithm to the first content area and the second content area, the text information corresponding to a plurality of display contents in the first content area and the second content area;
and an information generating module, configured to generate, based on the text information corresponding to the plurality of display contents, the association information corresponding to the plurality of display contents.
In an optional implementation, the content determining unit is configured to:
extract key information corresponding to the first display content;
and determine, based on the key information, the display content corresponding to the key information in the second content area as the second display content.
In an optional implementation, the content determining unit is configured to:
obtain a historical gaze movement trajectory of the user, the historical gaze movement trajectory indicating how the gaze point turns back between the first content area and the second content area;
and if the historical gaze movement trajectory indicates that the number of turn-backs of the gaze point between the first display content and third display content is greater than or equal to a second threshold, determine the third display content as the second display content, the third display content being any one piece of display content in the second content area.
In an optional implementation, the content determining unit is further configured to:
if the historical gaze movement trajectory indicates that the number of turn-backs of the gaze point between the first display content and fourth display content is greater than or equal to a third threshold, and the dwell time of the gaze point on the fourth display content is greater than or equal to a fourth threshold, determine the fourth display content as the second display content, the fourth display content being any one piece of display content in the second content area.
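As an illustration only (the patent text provides no code), the following is a minimal sketch of how the turn-back counting described above might be implemented; the trajectory representation, content identifiers, and threshold value are all assumptions.

```python
# Illustrative sketch: counting how often the gaze point "turns back"
# between two pieces of display content along a historical trajectory.

def count_turn_backs(trajectory, content_a, content_b):
    """trajectory: ordered list of content identifiers the gaze point
    landed on, e.g. ["A", "B", "A", "C", "B"]."""
    # Keep only samples that fall on either of the two contents of interest.
    relevant = [c for c in trajectory if c in (content_a, content_b)]
    # A "turn-back" is a switch from one content to the other.
    return sum(1 for prev, cur in zip(relevant, relevant[1:]) if prev != cur)


SECOND_THRESHOLD = 3  # assumed value of the "second threshold"

trajectory = ["A", "B", "A", "B", "C", "A", "B"]
if count_turn_backs(trajectory, "A", "B") >= SECOND_THRESHOLD:
    # Content "B" in the second content area would be treated as the
    # second display content and highlighted.
    print("highlight B")
```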
In an optional implementation, the display module is configured to:
when there are a plurality of second display contents, highlight each second display content in turn in the second content area based on the display order of the second display contents and the movement trajectory of the gaze point.
In an optional implementation, the display module is configured to display target display content in the second content area in a scrolling mode; the display module is further configured to determine the second display content from the target display content based on the first display content, pause the scrolling, and highlight the second display content in the second content area.
In an optional implementation, the highlighting manner includes any one of: highlighting, bolding, displaying a prompt, adding a frame, adding shading, and adding a dynamic special effect.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory being configured to store at least one computer program, the at least one computer program being loaded and executed by the processor to implement the operations performed in the content display method of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, the storage medium storing at least one computer program, the at least one computer program being loaded and executed by a processor to implement the operations performed in the content display method of the embodiments of the present application.
In another aspect, a computer program product or a computer program is provided, comprising computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, causing the computer device to perform the content display method provided in the various optional implementations described above.
In the embodiments of the application, when two content areas are displayed and the user's gaze point falls on first display content in one content area, second display content associated with the first display content is highlighted in the other content area to prompt the user that the two pieces of display content are related. This helps the user quickly locate associated content, improves human-computer interaction efficiency, and effectively improves the efficiency with which the user obtains information.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment of a content display method according to an embodiment of the present application;
FIG. 2 is a flow chart of a content display method provided according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a content area provided according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another content area provided in accordance with an embodiment of the present application;
fig. 5 is a schematic structural diagram of an eye tracking model provided according to an embodiment of the present application;
FIG. 6 is a schematic illustration of highlighting manners provided according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a text detection algorithm provided in an embodiment of the present application;
FIG. 8 is a structural diagram of a text recognition algorithm provided in accordance with an embodiment of the present application;
FIG. 9 is a schematic diagram of a historical gaze movement trajectory provided in accordance with an embodiment of the present application;
FIG. 10 is a diagram illustrating a content display method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another content display method provided in accordance with an embodiment of the present application;
FIG. 12 is a schematic diagram of another content display method provided in accordance with an embodiment of the present application;
FIG. 13 is a schematic diagram of another content display method provided in accordance with an embodiment of the present application;
FIG. 14 is a schematic diagram of another content display method provided in accordance with an embodiment of the present application;
fig. 15 is a schematic structural diagram of a content display device according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal provided according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as recited in the appended claims.
In this application, the terms "first", "second", and the like are used to distinguish between identical or similar items whose functions are substantially the same. It should be understood that "first", "second", and "nth" have no logical or temporal dependency and do not limit the number or order of execution. It should also be understood that, although the following description uses the terms first, second, and the like to describe various elements, these elements should not be limited by the terms.
These terms are only used to distinguish one element from another. For example, a first display area can be referred to as a second display area and, similarly, a second display area can be referred to as a first display area without departing from the scope of the various examples. Both the first display area and the second display area are display areas and, in some cases, are separate and distinct display areas.
In this application, "at least one" means one or more; for example, at least one display area may be any integer number of display areas greater than or equal to one, such as one display area, two display areas, or three display areas. "A plurality of" means two or more; for example, a plurality of display areas may be any integer number of display areas greater than or equal to two, such as two display areas or three display areas.
The following describes possible techniques for the content display scheme provided by the embodiments of the present application.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specializes in studying how a computer simulates or implements human learning behaviors to acquire new knowledge or skills and to reorganize existing knowledge structures so as to keep improving its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
Computer Vision (CV) is a science that studies how to make machines "see"; more specifically, it uses cameras and computers, instead of human eyes, to identify, track, and measure targets, and further performs graphics processing so that the computer produces images more suitable for human eyes to observe or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, Optical Character Recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional (3D) reconstruction, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
The following describes an implementation environment of the content display method provided by the embodiment of the present application.
Fig. 1 is a schematic diagram of an implementation environment of a content display method according to an embodiment of the present application. The implementation environment includes: a terminal 101 and a server 102. The terminal 101 and the server 102 can be directly or indirectly connected through a wired network or a wireless network, and the present application is not limited thereto.
The terminal 101 has a display function for displaying content areas, and also has an image acquisition function for acquiring a face image of the user so that the user's gaze point can be determined. In some embodiments, the terminal 101 is an integrated device; for example, the terminal 101 is a smartphone, tablet computer, notebook computer, or desktop computer, or an electronic display screen provided with a camera, which is not limited in the embodiments of the present application. In some embodiments, the terminal 101 is a split device comprising a display device and an image acquisition device connected directly or indirectly through a wired or wireless network; for example, the display device is an electronic display screen, and the image acquisition device is a camera, a video camera, or a smartphone with an image acquisition function, which is not limited in the embodiments of the present application. In some embodiments, an application is installed and running on the terminal 101; for example, the application is an educational application, a social application, a video application, an online meeting application, a reading application, a retrieval application, or a game application. Illustratively, the terminal 101 is a terminal used by a user, and a user account of the user is logged in to the application running on the terminal 101.
In some embodiments, the server 102 is an independent physical server, and in other embodiments, the server 102 is a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, Content Delivery Networks (CDNs), big data and artificial intelligence platforms, and the like. The server 102 is configured to provide background services for the application program executed by the terminal 101.
In some embodiments, in the process of displaying content, the server 102 undertakes the primary computing work and the terminal 101 undertakes the secondary computing work; or the server 102 undertakes the secondary computing work and the terminal 101 undertakes the primary computing work; or either the server 102 or the terminal 101 undertakes the computing work alone.
In some embodiments, terminal 101 generally refers to one of a plurality of terminals, and this embodiment is illustrated only by terminal 101. Those skilled in the art will appreciate that the number of terminals 101 can be greater. For example, the number of the terminals 101 is several tens or several hundreds, or more, and the environment for implementing the content display method includes other terminals. The number of terminals and the type of the device are not limited in the embodiments of the present application.
In some embodiments, the wireless or wired networks described above use standard communication technologies and/or protocols. The Network is typically the Internet, but can be any Network including, but not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wireline or wireless Network, a private Network, or any combination of virtual private networks. In some embodiments, data exchanged over a network is represented using techniques and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links can also be encrypted using conventional encryption techniques such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques can also be used in place of or in addition to the data communication techniques described above.
In the embodiment of the application, a content display method is provided, which can be applied to a scene in which a user needs to view a plurality of content areas on a terminal at the same time. Illustratively, the application scenarios of the content display method provided by the embodiment of the present application include, but are not limited to, the following exemplary several scenarios:
scene one, online education scene.
Illustratively, many educational applications provide a function of generating homework correction entries (or test paper correction entries, and the like) based on AI technology (for example, a correction entry includes a correction suggestion, a correction result, and the like), so that a teacher can check a student's homework against the correction entries. For example, a teacher often needs to check a student's work by viewing the content area corresponding to the student's homework while also viewing the content area corresponding to the correction entries displayed on the terminal screen.
Illustratively, many educational applications also provide a homework answer checking function (or examination paper answer checking, and the like) so that students can check their homework against the answers. For example, a student often needs to synchronously view two content areas displayed on the terminal, that is, the content area corresponding to the homework answers and the area corresponding to the student's own homework.
In this scenario, by determining the display content corresponding to the user's gaze point in one content area and highlighting the associated content in the student homework (or test paper, and the like), the user can be helped to quickly locate the associated content, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
Scene two, screen navigation scene.
In some public places (such as shopping malls, hospitals, and exhibition halls), an electronic display screen is often provided to offer navigation so that users can find their destinations in time. Taking a mall as an example, a user often needs to synchronously view multiple content areas displayed on the screen, that is, the content area corresponding to a list of shop names and the content area corresponding to a map of shop locations, to determine the position of a destination. By determining the shop name corresponding to the user's gaze point and highlighting the corresponding area in the location map, the user can be helped to quickly locate the shop, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
And a third scene, an online conference scene.
At present, many online conference applications provide a screen sharing function, and conference speakers can share their terminal screens while presenting, so that participants can intuitively follow the conference content. In some cases, there is an association between multiple content areas displayed on the screen; for example, when a speaker presents a slide that includes a product picture and a detailed description of each component of the product, participants often need to synchronously view the content area corresponding to the product picture and the content area corresponding to the detailed description, that is, view the product picture and the description at the same time, to understand each component of the product.
Scene four, video watching scene.
Illustratively, when a video includes subtitles, users tend to view the video pictures and the subtitles synchronously in order to accurately understand the video content. For example, the left area of a video shows a group of people exercising and the right area shows a group of people dancing, with subtitles displayed over the video. When the user watches the video, determining that the user's gaze point is in the left area and highlighting the keyword "exercise" in the subtitles helps the user quickly understand the video content, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
Illustratively, when a video includes multiple content areas, users tend to view the multiple areas synchronously in order to accurately understand the video content. For example, in a fitness teaching video, a list of exercise names (with detailed explanations) is displayed in the upper right corner, and the user often needs to check back and forth between the exercise pictures and the name list. Determining that the user's gaze point is on a certain exercise and highlighting the corresponding name in the list helps the user quickly understand the video content, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
And a scene five, displaying the scene in multiple windows.
Currently, many terminals provide a function of displaying different pages in multiple windows. For example, the terminal displays two content areas: one displays multiple pages through multiple windows, and the other displays an icon corresponding to each page. Determining that the user's gaze point is on an icon and highlighting the window containing the corresponding page helps the user quickly locate the window to view, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
It should be noted that the above scenarios are only exemplary, and the content display method provided in the embodiments of the present application can also be applied to other scenarios that require synchronous viewing of multiple content areas. For example, in a page search scenario, a page includes two content areas, one displaying a keyword list and the other displaying multiple sections of text; the user needs to view the content area corresponding to the keyword list while viewing the content area corresponding to the text to find the text corresponding to a keyword. The application scenario of the content display method is not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a content display method according to an embodiment of the present application, and as shown in fig. 2, the content display method is described in the embodiment of the present application by taking an application to a terminal as an example. The content display method includes the steps of:
201. the terminal displays the first content area and the second content area.
In the embodiment of the application, the terminal provides an information browsing function, and a user browses information by checking a plurality of content areas displayed on a terminal screen. The plurality of content areas include a first content area and a second content area, and the terminal may display the first content area and the second content area in an arbitrary distribution manner, for example, in an up-down distribution manner, a left-right distribution manner, an overlapping distribution manner, or a random distribution manner, which is not limited in this embodiment of the present application. In some embodiments, the user browses information by performing a swipe, click, etc. operation on the plurality of content areas.
In some embodiments, the terminal includes a first display screen for displaying the first content area and a second display screen for displaying the second content area. Displaying the content areas on separate screens enlarges the display area and improves the user's viewing experience.
In some embodiments, the terminal displays the corresponding display content in the static browsing mode in the first content area and displays the corresponding display content in the scrolling browsing mode in the second content area. Wherein, the static browsing mode means that the position of the display content in the corresponding content area is fixed; the scroll browsing mode refers to that the position of the display content in the corresponding content area is changeable, for example, the terminal scrolls and displays the corresponding display content in response to a drag operation of the scroll bar in the corresponding content area by the user. In other embodiments, the terminal displays the corresponding display content in the first content area and the second content area according to the scroll browsing mode. In other embodiments, the terminal displays corresponding display contents in the first content area and the second content area according to a static browsing mode, and the display modes of the first content area and the second content area are not limited in this embodiment of the application.
In some embodiments, the first content area and the second content area are different display areas of the terminal screen: the first content area displays window identifiers, and the second content area synchronously displays a plurality of windows. In some embodiments, a window identifier is an icon, text, or a combination of an icon and text, and the like, which is not limited in this application. For example, referring to fig. 3, fig. 3 is a schematic diagram of a content area provided according to an embodiment of the present application. As shown in fig. 3, the terminal provides a multi-window display function; when multiple windows are displayed on the terminal screen (for example, one window per application), the icons corresponding to the windows are displayed in the first content area, and the windows themselves are displayed in the second content area.
In some embodiments, the first content area and the second content area are different display areas of the same file. For example, the terminal provides a split reading function for documents: when split reading is enabled for a document, the terminal displays a first content area showing pages 1 to 10 of the document and a second content area showing pages 11 to 20. Referring to fig. 4, fig. 4 is a schematic diagram of another content area provided according to an embodiment of the present application. As shown in (a) of fig. 4, the file is a video including subtitles; the first content area displays the picture content of the video, and the second content area displays the subtitles. It should be noted that the subtitles and the picture content may overlap or be distributed vertically, and the like, which is not limited in the embodiments of the present application. As shown in (b) of fig. 4, the file is a fitness teaching video; the first content area displays the exercise pictures of the video, and the second content area displays the list of exercise names. What is shown in the drawings is merely an example; the embodiments of the present application do not limit the specific content of the video, and any video including multiple content areas with associated content, such as a movie, a sports game video, or a live shopping video, is applicable to the content display method provided in the embodiments of the present application.
In some embodiments, the first content area and the second content area are display areas of different files. For example, the terminal provides a split-screen browsing function, and different documents are displayed in the first content area and the second content area, respectively, for the user to view. In the embodiment of the present application, the type of the file is not limited, and the file is, for example, a text file, a picture file, a video file, or the like.
In some embodiments, the first content region and the second content region are different display regions of the same page. For example, the terminal provides a partitioned browsing function of a web page, and the terminal displays different content areas for the same page, wherein a first content area displays text content and a second content area displays related recommendations for the text content.
In some embodiments, the first content region and the second content region are display regions of different pages. For example, the terminal provides a function of simultaneously displaying corresponding pages of a plurality of applications, wherein the application a is a music-type application, the application B is a document-reading-type application, and the terminal displays corresponding pages of the two applications in the first content area and the second content area, respectively. For another example, the terminal provides a function of simultaneously displaying two pages corresponding to one application, the two pages corresponding to the first content area and the second content area, respectively. It should be noted that, in the embodiment of the present application, the type of the page is not limited, and for example, the page is a dynamic page, a static page, or the like.
In some embodiments, the first content area and the second content area are both text display areas; or one of the first content area and the second content area is a text display area and the other is a graphic display area; or both the first content area and the second content area are graphic display areas. That is, the embodiments of the present application do not limit the types of the display contents in the first content area and the second content area. The type of graphic is likewise not limited; for example, the graphic is a picture, a dynamic emoticon, a page widget, or a control.
While the foregoing description has described specific forms of the first content area and the second content area in the embodiments of the present application, it should be understood that the embodiments of the present application can be applied to any one of the above-described alternative embodiments or a combination of multiple alternative embodiments, and the embodiments of the present application are not limited thereto.
202. The terminal determines first display content corresponding to the gaze point in the first content area.
In the embodiments of the present application, the gaze point refers to the position on the terminal screen at which the user's line of sight falls while browsing; in some embodiments, it is also referred to as the fixation point. The first display content is any one piece of the display content in the first content area. For example, the first display content is a picture, a video frame, a word, a piece of text, or a control in the first content area, which is not limited in the embodiments of the present application.
In some embodiments, the first content area includes a plurality of display entries, and the terminal determines the display entry corresponding to the gaze point in the first content area as the first display content. A display entry includes a plurality of texts, or a plurality of graphics, or a mixture of texts and graphics, and the like, which is not limited in the embodiments of the present application. Illustratively, taking display entries that include texts as an example, the terminal determines, based on the gaze point, the text in some display entry that the gaze point corresponds to, and determines the display entry to which that text belongs as the first display content.
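For illustration, the following is a minimal sketch of resolving the gaze point to a display entry by hit-testing bounding boxes; the entry layout and field names are assumptions, since the patent does not prescribe a data structure.

```python
# Assumed data layout: each display entry carries the bounding box it
# occupies in the first content area; the gaze point is a pixel coordinate.

from dataclasses import dataclass

@dataclass
class DisplayEntry:
    entry_id: str
    x: int          # top-left corner of the entry's bounding box
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def entry_at_gaze_point(entries, gaze_x, gaze_y):
    """Return the display entry containing the gaze point, or None."""
    for entry in entries:
        if entry.contains(gaze_x, gaze_y):
            return entry
    return None
```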
In some embodiments, when the first display content is a video picture segment in the first content area, the terminal determines the first display content as follows: the terminal acquires gaze information of the gaze point in the first content area, the gaze information indicating the gaze area and gaze duration of the gaze point in the first content area; if the gaze information indicates that the gaze duration of the gaze point in a target gaze area is greater than or equal to a gaze threshold, the first display content is determined based on the gaze information. In some embodiments, the display duration of the first display content equals the gaze duration; that is, the terminal determines the video picture segment at which the user gazes continuously as the first display content. In some embodiments, the gaze threshold is a preset threshold, for example 10 s, which is not limited in this application.
Illustratively, when the first content area and the second content area are different display areas of the same file, for example a video including subtitles displayed in the first content area, with the left area of the video showing a group of people exercising and the right area showing a group of people dancing: if the user's gaze information indicates that the gaze duration of the gaze point in the left area reaches the gaze threshold, for example 10 s, the first display content is determined to be the video picture of the left area during those 10 s. For another example, the file is a fitness teaching video that includes the area where a person is located, the area where equipment is located, a background area, and the like: if the gaze information indicates that the gaze duration in the area where the person is located reaches the gaze threshold, for example 10 s, the first display content is determined to be the video picture of that area during those 10 s, that is, the picture of the exercise. The foregoing examples are merely illustrative, and the embodiments of the present application do not limit the specific content of the video.
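A minimal sketch of the gaze-duration check described above, assuming timestamped gaze samples and the 10 s threshold used in the examples; the sample format is an assumption.

```python
# Sketch under stated assumptions: accumulate continuous gaze duration per
# region from consecutive gaze samples and compare it to the gaze threshold.

GAZE_THRESHOLD_S = 10.0  # e.g. the 10 s threshold mentioned above

def region_over_threshold(samples, threshold=GAZE_THRESHOLD_S):
    """samples: list of (region_id, dwell_seconds) for consecutive gaze fixes.

    Returns the first region whose accumulated continuous gaze duration
    reaches the threshold, or None.
    """
    current_region, accumulated = None, 0.0
    for region, dwell in samples:
        if region == current_region:
            accumulated += dwell        # gaze stayed in the same region
        else:
            current_region, accumulated = region, dwell  # gaze moved on
        if accumulated >= threshold:
            return current_region
    return None
```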
In some embodiments, the terminal has an eye tracking function and can determine the user's gaze point from the user's face image. In some embodiments, when the terminal acquires face images of multiple users, it determines a target user from among them and determines the target user's gaze point based on the target user's face image. In some embodiments, the target user is the user closest to the terminal screen. Illustratively, the terminal acquires the face images of the users, calculates each user's distance from the terminal screen, and determines the target user accordingly. For example, if the terminal acquires face images of three users whose distances from the screen are 1 m, 30 cm, and 50 cm respectively, the terminal determines the user at 30 cm as the target user, which is not limited in the embodiments of the present application. In this way, the terminal determines a single target user when multiple users are detected, avoiding confusion and improving content display efficiency.
In other embodiments, the target user is a preset user. Illustratively, the terminal stores the face information of the target user and determines the target user from among the multiple users based on their face images and the stored face information. Taking an online education scenario as an example, the target user is a student user or a teacher user; taking a screen navigation scenario as an example, the target user is a member user of the mall, and so on. In this way, services can be provided for a designated user, which improves the user's viewing experience and human-computer interaction efficiency.
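The two ways of choosing a target user described above (nearest to the screen, or a preset user matched by face information) might look as follows; the user record format and the face-matching helper are hypothetical.

```python
# Illustrative sketch: pick the preset user if a face match succeeds,
# otherwise fall back to the user closest to the terminal screen.

def pick_target_user(users, preset_face=None, match_face=None):
    """users: list of dicts like {"id": ..., "distance_m": ..., "face": ...}.

    match_face is a hypothetical matcher: (face, preset_face) -> bool.
    """
    if preset_face is not None and match_face is not None:
        for user in users:
            if match_face(user["face"], preset_face):
                return user
    # Otherwise take the user nearest to the terminal screen,
    # e.g. 0.30 m wins over 0.50 m and 1.0 m.
    return min(users, key=lambda u: u["distance_m"])
```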
The following describes the process by which the terminal determines the gaze point based on a face image, taking any one user as an example. Schematically, the terminal acquires a face image of the user, inputs it into an eye tracking model, and the eye tracking model processes the image to obtain the user's gaze point. This process can also be understood as locating the user's pupil position through image processing and obtaining the pupil center coordinates, thereby calculating the gaze point so that the terminal knows where the user is looking. For example, the eye tracking model is a model based on a Convolutional Neural Network (CNN). Referring to fig. 5, fig. 5 is a schematic structural diagram of the eye tracking model provided according to an embodiment of the present application. As shown in fig. 5, the face image is used as the model input to obtain a right-eye image, a left-eye image, a face image, and the face position (that is, the position of the face in the whole image, also referred to as the face grid); after these four kinds of information are processed separately, they are fused to obtain a two-dimensional coordinate position indicating the user's gaze point. Of course, the user's gaze point may also be determined in other ways, which is not limited in the embodiments of the present application. By completing the determination of the gaze point locally on the terminal, the time consumed by image acquisition and processing is shortened and the gaze point is determined faster.
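The patent does not specify a network architecture, so the following PyTorch sketch is only an assumed, minimal version of the four-branch model shown in fig. 5: right-eye, left-eye, and face crops plus a face grid are processed separately and fused into a two-dimensional gaze coordinate. All layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class GazeModel(nn.Module):
    def __init__(self, grid_size=25):
        super().__init__()
        def branch():  # small CNN shared in shape by eye and face branches
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
            )
        self.right_eye, self.left_eye, self.face = branch(), branch(), branch()
        # The face grid is a binary mask marking where the face lies in the frame.
        self.grid = nn.Sequential(
            nn.Flatten(), nn.Linear(grid_size * grid_size, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128 * 4, 2)  # fused features -> (x, y) gaze point

    def forward(self, right_eye, left_eye, face, grid):
        feats = torch.cat([self.right_eye(right_eye), self.left_eye(left_eye),
                           self.face(face), self.grid(grid)], dim=1)
        return self.head(feats)

# Example: one 224x224 crop per branch and a 25x25 face grid.
model = GazeModel()
xy = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224),
           torch.randn(1, 3, 224, 224), torch.randn(1, 1, 25, 25))
print(xy.shape)  # torch.Size([1, 2])
```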
In addition, the process of determining the gaze point can be handled locally by the terminal or remotely by the server: the terminal acquires the user's face image and sends it to the server; the server determines the gaze point; and the terminal receives the result returned by the server to obtain the gaze point. With remote processing, the terminal does not need to be configured with an eye tracking function, which simplifies its structure, and the terminal does not need to process the images itself, which reduces its computational burden.
203. The terminal highlights second display content associated with the first display content in the second content area.
In the embodiment of the present application, the association means that two display contents are related. The second display content refers to display content associated with the first display content in the second content region. For example, the second display content is a picture, a word, a piece of text, or a control, etc. associated with the first display content in the second content area, which is not limited in this embodiment of the application.
In some embodiments, the highlighting manner includes any one of: highlighting, bolding, displaying a prompt, adding a frame, adding shading, and adding a dynamic special effect.
The highlighting manners are schematically illustrated below with reference to fig. 6. Fig. 6 is a schematic illustration of highlighting manners provided according to an embodiment of the present application. As shown in (a) of fig. 6, highlighting refers to adding a background color to the display content or increasing its display brightness to make it stand out. As shown in (b) of fig. 6, bolding means that, when the display content is text, the stroke weight of the text is increased to emphasize it. As shown in (c) and (d) of fig. 6, text prompts and graphic prompts mean that prompt messages are displayed, in text or graphic form, near the corresponding display content. As shown in (e) of fig. 6, adding a frame means drawing a frame around the display content. As shown in (f) of fig. 6, adding shading means adding background lines over the display content. As shown in (g) of fig. 6, adding a dynamic special effect means adding dynamic explanatory text or a rendered special effect on or near the display content. It should be understood that the above examples are merely illustrative; the embodiments of the present application do not limit the highlighting manner, and any manner that emphasizes certain display content and prompts the user may serve as a highlighting manner of the present application. Through highlighting, the user can intuitively and conveniently determine which display content in the second content area is associated with the first display content, which helps the user quickly locate the associated content, improves human-computer interaction efficiency, and effectively improves the efficiency with which the user obtains information.
In some embodiments, the terminal highlights the second display content in different manners based on the degree of association between the first display content and the second display content. For example, if the degree of association is high (e.g., greater than or equal to a preset threshold), the second display content is highlighted; if it is low (e.g., less than the preset threshold), the second display content is emphasized by adding a frame. For another example, if the degree of association is high, the second display content is highlighted with a transparency of 10%; if it is low, it is highlighted with a transparency of 50%. In this way, the user can intuitively determine which display content in the second content area has a higher degree of association with the first display content and which has a lower degree, which helps the user judge whether the associated content is appropriate, further improving human-computer interaction efficiency as well as the user's viewing experience and information acquisition efficiency.
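A minimal sketch of mapping the degree of association to a highlighting manner, as in the transparency example above; the score range, threshold value, and style dictionary are assumptions.

```python
# Illustrative sketch: choose a highlighting style by association degree,
# assuming a numeric association score in [0, 1].

PRESET_THRESHOLD = 0.8  # assumed value of the preset threshold

def pick_highlight(association: float):
    if association >= PRESET_THRESHOLD:
        # High association: a strong, nearly opaque highlight.
        return {"manner": "highlight", "transparency": 0.10}
    # Low association: a fainter highlight (or, alternatively, a frame).
    return {"manner": "highlight", "transparency": 0.50}
```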
In some embodiments, the relationship between the first display content and the second display content is any one of the following.
(1) The first display content is the same as the second display content. For example, both the first display content and the second display content are the text "123"; when the terminal determines that the gaze point falls on the first display content, it highlights the text "123" in the second content area.
(2) The similarity between the first display content and the second display content is greater than or equal to a first threshold. That is, the first display content is similar to the second display content. In some embodiments, the first threshold is a preset threshold, for example, the first threshold is 80%, which is not limited in this application.
(3) A target correspondence exists between the first display content and the second display content, the target correspondence indicating an association between the two. Taking an online education scenario as an example, the first display content is a homework correction suggestion that corrects one word into another, and the second display content is that word in the student's homework. Taking a mall screen navigation scenario as an example, the first display content is shop C in the shop name list, and the second display content is the position corresponding to shop C in the shop location map. Taking an online conference scenario as an example, the first display content is a component in the product picture presented by the speaker, and the second display content is the detailed introduction of that component. Taking a video viewing scenario as an example, the first display content is a video picture of a group of people exercising, and the second display content is the keyword "exercise" in the subtitles; or the first display content is the video picture corresponding to the "deep squat" exercise, and the second display content is the corresponding exercise name and detailed explanation. Taking a multi-window display scenario as an example, the first display content is a window identifier, and the second display content is the window corresponding to that identifier.
The relationship between the first display content and the second display content has been described in the above several cases; it should be understood that the embodiments of the present application can apply to any of them, and the embodiments of the present application are not limited thereto. By highlighting display content in the second content area that is the same as, similar to, or in a target correspondence with the first display content, the user can be helped to quickly locate corresponding content when needing to view corresponding content across multiple content areas, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
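For illustration, the three cases above for text contents might be checked as follows; the similarity measure (difflib's sequence ratio) and the 0.8 first threshold are assumptions, not choices made by the patent.

```python
# Sketch of the three association cases: exact match, similarity >= first
# threshold, or a stored target correspondence.

import difflib

FIRST_THRESHOLD = 0.8  # e.g. the 80% first threshold mentioned above

def is_associated(first: str, second: str, correspondences: dict) -> bool:
    if first == second:                                    # case (1): identical
        return True
    ratio = difflib.SequenceMatcher(None, first, second).ratio()
    if ratio >= FIRST_THRESHOLD:                           # case (2): similar
        return True
    return correspondences.get(first) == second            # case (3): mapped
```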
In some embodiments, this step 203 includes the following steps 2031 and 2032:
2031. the terminal determines second display content associated with the first display content in the second content area based on the first display content.
The terminal determines the second display content in various ways, for example, by acquiring the association information of the first display content when the target correspondence between the first display content and the second display content is known. For another example, in a case where the target correspondence between the first display content and the second display content is unknown, the second display content is determined from the second content area according to the information indicated by the first display content, and so on, which is not limited in this embodiment of the present application. It should be noted that, various ways for the terminal to determine the second display content will be described in detail in the following embodiments, and are not described herein again.
2032. The terminal highlights the second display content.
In the embodiment of the application, under the condition that two content areas are displayed, when the sight line drop point of a user is in the corresponding first display content in one content area, the second display content related to the first display content is displayed in the other content area in a highlighted mode to prompt the user that the two display contents are related, so that the user is helped to quickly locate the related content, the man-machine interaction efficiency is improved, and the information obtaining efficiency of the user is effectively improved.
The process of the terminal determining the second display content based on the first display content in step 203 is described in detail below in several cases.
First, the terminal determines the second display content based on the association information of the first display content.
In this case, the target correspondence between the first display content and the second display content is already known before the terminal determines the sight line drop point, and the process of determining the second display content includes the following two steps:
Step 1: the terminal acquires the association information corresponding to the first display content.
Wherein the association information indicates display content in the second content region having a target correspondence with the first display content.
In some embodiments, the association information includes position information, where the position information indicates the position in the second content area of the display content having the target correspondence with the first display content. For example, the position information is a two-dimensional coordinate, and each display content in the first content area and the second content area corresponds to a two-dimensional coordinate that uniquely identifies it. For another example, the position information is an arrangement order: the corresponding display contents in the second content area are arranged in a certain order, and each display content corresponds to one position in that order, and so on.
In some embodiments, the association information includes identification information, and the identification information is used for identifying the first display content. For example, the identification information is an id, and each display content in the first content area corresponds to an id that uniquely identifies it. For another example, the identification information is a tag, that is, each display content in the first content area corresponds to a tag that uniquely identifies it, and so on.
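Illustratively, the position information and the identification information might be combined into one association record, as in the following sketch; the field names are hypothetical examples only:

```python
from dataclasses import dataclass

@dataclass
class AssociationInfo:
    content_id: str         # identification information: id of the first display content
    linked_position: tuple  # position information: (x, y) of the associated display
                            # content in the second content area
    linked_id: str          # id of the associated display content, if known

# Example: the display content "s1" is bound to the content "w7" at position (120, 340).
info = AssociationInfo(content_id="s1", linked_position=(120, 340), linked_id="w7")
```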
In some embodiments, the terminal has an information processing function, configured to identify the display contents in the first content area and the second content area before determining the gaze drop point, and obtain associated information corresponding to each display content. Illustratively, when the first content area and the second content area are both text display areas, the process of the terminal obtaining the associated information corresponding to each display content is as follows: determining text information corresponding to a plurality of display contents in the first content area and the second content area by applying a text detection algorithm and a text recognition algorithm based on the first content area and the second content area; and generating associated information corresponding to the plurality of display contents based on the text information corresponding to the plurality of display contents.
Next, referring to fig. 7 and fig. 8, a process of generating the association information corresponding to each display content by applying the text detection algorithm and the text recognition algorithm will be described. Fig. 7 is a schematic structural diagram of a text detection algorithm provided in an embodiment of the present application, and fig. 8 is a schematic structural diagram of a text recognition algorithm provided in an embodiment of the present application.
As shown in fig. 7, the text detection algorithm is a Connectionist Text Proposal Network (CTPN), a CNN-based text detection model, in which a VGG16 network is used to extract features of the terminal screen image, with the output of Conv5 used as the feature map; a 3 × 3 sliding window scans the feature map, and a plurality of anchor points (anchors) are predicted from the features; the reshaped features are input into a bidirectional Long Short-Term Memory (LSTM) network; the output result is input into a fully connected layer for classification and regression to obtain text rectangular boxes; and the rectangular boxes are merged into sequence boxes of text to obtain the final text positions.
As shown in fig. 8, the text recognition algorithm is a Convolutional Recurrent Neural Network (CRNN), which includes a CNN layer, a Recurrent Neural Network (RNN) layer, and a transcription layer. The CNN layer is configured to extract image features at the text positions obtained by the CTPN and generate feature vectors; the RNN layer is configured to identify the feature vectors using a bidirectional LSTM to obtain a probability distribution for each frame of the feature sequence; and the transcription layer is configured to decode the sequence using the Connectionist Temporal Classification (CTC) algorithm and a forward-backward algorithm to obtain the final text information.
After the text information is obtained, associated contents are bound to each other according to the text information corresponding to each display content in the first content area and in the second content area, yielding the corresponding association information. In some embodiments, when the display content is a display item, the text detection algorithm and the text recognition algorithm are applied to the text contained in each display item to obtain its text information, from which the corresponding association information is generated.
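Illustratively, this binding step might be sketched as follows, where ocr is a hypothetical stand-in for the CTPN and CRNN stages described above and same_entity is a hypothetical matching predicate; neither name is prescribed by this application:

```python
def bind_association_info(first_area_image, second_area_image, ocr, same_entity):
    """Bind each display content in the first area to matching content in the second.

    ocr(image) -> list of (text, box): hypothetical stand-in for the CTPN
    detection stage followed by the CRNN recognition stage.
    same_entity(a, b) -> bool: hypothetical predicate deciding whether two
    recognized texts refer to the same content.
    Returns a dict mapping first-area indices to second-area positions.
    """
    first_items = ocr(first_area_image)
    second_items = ocr(second_area_image)
    association = {}
    for i, (text_a, _box_a) in enumerate(first_items):
        for text_b, box_b in second_items:
            if same_entity(text_a, text_b):
                association[i] = box_b  # position of the associated content
                break
    return association
```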
In some embodiments, in the case that the first content area and the second content area are both graphic display areas, the terminal determines image information corresponding to a plurality of display contents in the first content area and the second content area by applying an image recognition algorithm based on the first content area and the second content area, and generates associated information corresponding to the plurality of display contents based on the image information corresponding to the plurality of display contents.
Of course, the above process is only exemplary. In some embodiments, the first content area and the second content area belong to the same file. Taking the file as a video including subtitles as an example, the first content area displays the picture content of the video and the second content area displays the subtitles. The terminal applies an image recognition algorithm to the video to extract video information (such as character behaviors), and applies the text detection algorithm and the text recognition algorithm to the subtitles to extract the text information of the subtitles, thereby generating association information for the video based on the video information and the text information. For example, if the video information of the left area of the video within 1-10 s indicates that the video picture shows "a group of people exercising the body", a tag "exercising the body" is generated for that video picture. Similarly, taking the file as a fitness teaching video as an example, the terminal applies an image recognition algorithm to the video to extract video information (such as fitness actions) and generates the corresponding association information, which is not described again here. In some embodiments, the first content area and the second content area are different display areas of the terminal screen: the first content area displays window identifiers, the second content area synchronously displays a plurality of windows, and the terminal generates corresponding association information (such as a window id) for each window identifier in the first content area.
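Illustratively, the generation of time-ranged tags for a video might be sketched as follows; recognize_action is a hypothetical stand-in for the image recognition algorithm, and the window length is an illustrative value:

```python
def tag_video_segments(frames, recognize_action, fps=25, window_s=10):
    """Generate a text tag for each fixed-length segment of a video.

    recognize_action(frame_batch) -> str: hypothetical image recognition model
    describing what the frames show (e.g., "exercising the body").
    Returns a list of (start_s, end_s, tag) association records.
    """
    tags = []
    step = fps * window_s
    for start in range(0, len(frames), step):
        batch = frames[start:start + step]
        tags.append((start / fps, (start + len(batch)) / fps, recognize_action(batch)))
    return tags
```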
It should be noted that the terminal may also apply other manners to generate the related information, which is not limited in the embodiment of the present application. In addition, the generation process of the related information may be performed not only locally by the terminal, but also remotely by the server, which is not limited in this embodiment of the present application.
Step 2: the terminal determines the second display content based on the association information.
Specifically, the terminal determines, according to the association information, the display content corresponding to the association information in the second content area as the second display content.
In some embodiments, where the association information includes the position information, the terminal determines, based on the association information, the display content corresponding to the position information in the second content area as the second display content. In other embodiments, where the association information includes the identification information, the terminal determines, based on the association information, the display content corresponding to the identification information in the second content area as the second display content.
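Illustratively, building on the AssociationInfo record sketched above, the lookup might proceed as follows; the index structures are hypothetical assumptions:

```python
def find_second_content(association_info, by_position, by_id):
    """Resolve association information to a display content in the second content area.

    association_info: AssociationInfo record as sketched above.
    by_position: dict mapping (x, y) positions to display contents in the second area.
    by_id: dict mapping ids to display contents in the second area.
    """
    if association_info.linked_position in by_position:   # position information case
        return by_position[association_info.linked_position]
    return by_id.get(association_info.linked_id)          # identification information case
```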
In the above process, in the case where the target correspondence between the first display content and the second display content is known, the terminal determines the second display content by acquiring the association information of the first display content and, on that basis, highlights the second display content, thereby helping the user quickly locate the associated content, improving human-computer interaction efficiency, and effectively improving the efficiency with which the user obtains information.
Second, the terminal determines the second display content based on key information of the first display content.
In this case, the target correspondence between the first display content and the second display content is unknown before the terminal determines the sight line drop point, and the process of determining the second display content includes the following two steps:
Step 1: the terminal extracts key information corresponding to the first display content.
In some embodiments, the first display content is text display content, and the terminal applies a text recognition algorithm to extract the key information of the first display content, such as keywords. For example, the terminal applies a word2vec model: it segments the text into words, looks up the word vector corresponding to each word, and sums the word vectors of all words to obtain a sentence vector of the text, which is used as the key information of the first display content.
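Illustratively, this summation of word vectors might be sketched as follows, assuming a hypothetical tokenizer and a pre-trained word2vec lookup table:

```python
import numpy as np

def sentence_vector(text, tokenize, word_vectors, dim=300):
    """Sum the word2vec vectors of all words to obtain a sentence vector.

    tokenize: hypothetical word segmentation function, text -> list of words.
    word_vectors: mapping from word to its pre-trained word2vec vector.
    """
    total = np.zeros(dim)
    for word in tokenize(text):          # word segmentation
        if word in word_vectors:         # skip out-of-vocabulary words
            total += word_vectors[word]  # look up and accumulate the word vector
    return total                         # used as the key information
```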
In some embodiments, the first display content is graphic display content, and the terminal applies an image recognition algorithm to extract the key information of the first display content, such as image semantic information. In some embodiments, the first display content is a video picture; the terminal applies an image recognition algorithm to extract the image semantic information of the video frames in the video picture frame by frame, and generates the key information of the video picture based on the extracted image semantic information, which is not limited in this embodiment of the present application.
Step 2: the terminal determines, based on the key information, the display content corresponding to the key information in the second content area as the second display content.
In some embodiments, the first display content is text display content and the second content area is a text display area. The terminal applies a text recognition algorithm to extract the key information corresponding to each of a plurality of display contents in the second content area, calculates the text similarity between the key information of the first display content and the key information of each display content, and determines the display content meeting a text similarity condition as the second display content. For example, the text similarity condition indicates that the text similarity is greater than or equal to a text threshold; for another example, the text similarity condition indicates that the text similarity is the maximum, which is not limited in this embodiment of the present application.
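Illustratively, the similarity matching might be sketched as follows, using cosine similarity over the sentence vectors sketched above as one possible text similarity; the threshold is an illustrative value. The same pattern applies, with the key information and similarity measure swapped, to the graphic and semantic similarity cases described below:

```python
import numpy as np

def match_by_similarity(first_vec, candidates, text_threshold=0.8):
    """Pick the second display content whose key information best matches first_vec.

    first_vec: sentence vector of the first display content (see sketch above).
    candidates: list of (display_content, key_vector) pairs from the second content area.
    Returns the best-matching candidate at or above the threshold, or None.
    """
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0

    best, best_score = None, text_threshold
    for content, vec in candidates:
        score = cosine(first_vec, vec)
        if score >= best_score:
            best, best_score = content, score
    return best
```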
In some embodiments, the first display content is graphic display content and the second content area is a graphic display area. The terminal applies an image recognition algorithm to extract the key information corresponding to each of a plurality of display contents in the second content area, calculates the graphic similarity between the key information of the first display content and the key information of each display content, and determines the display content meeting a graphic similarity condition as the second display content. For example, the graphic similarity condition indicates that the graphic similarity is greater than or equal to a graphic threshold; for another example, the graphic similarity condition indicates that the graphic similarity is the maximum, which is not limited in this embodiment of the present application.
In some embodiments, the first content area and the second content area belong to the same file. Taking the file as a video including subtitles as an example, the first display content is a video picture and the second content area displays the subtitles. The terminal applies the text detection algorithm and the text recognition algorithm to extract the key information of the subtitles, calculates the semantic similarity between the key information of the first display content and the key information of the subtitles, and determines the text meeting a semantic similarity condition in the subtitles as the second display content. For example, the semantic similarity condition indicates that the semantic similarity is greater than or equal to a semantic threshold; for another example, the semantic similarity condition indicates that the semantic similarity is the maximum, which is not limited in this embodiment of the present application.
In addition, the above-mentioned extraction process of the key information may be performed not only locally through the terminal, but also remotely through the server, which is not limited in this embodiment of the application.
In the process, under the condition that the target corresponding relation between the first display content and the second display content is unknown, the terminal determines the second display content by extracting the key information of the first display content, and on the basis, the second display content is highlighted, so that a user is helped to quickly locate the associated content, the man-machine interaction efficiency is improved, and the information obtaining efficiency of the user is effectively improved.
Third, the terminal determines the second display content based on the historical sight line movement track of the user.
In this case, the target correspondence between the first display content and the second display content is unknown before the terminal determines the sight line drop point, and the process of determining the second display content includes the following two steps:
step 1: the terminal acquires the historical sight line movement track of the user.
Wherein the historical sight line movement track indicates the turn-back situation of the sight line drop point between the first content area and the second content area. Referring to fig. 9, fig. 9 is a schematic diagram of a historical sight line movement track provided according to an embodiment of the present application. As shown in fig. 9, the first content area corresponds to display contents A1 and B1, the second content area corresponds to display contents A2 and B2, and the historical sight line movement track of the user indicates the turn-back situation of the sight line drop point between these display contents.
In some embodiments, the terminal generates the historical gaze movement trajectory based on a facial image of the user and stores it locally. In other embodiments, the terminal sends the face image of the user to the server, the server performs remote processing to generate the historical movement track of the sight line, and the terminal can obtain the historical movement track of the sight line by sending an acquisition request to the server, which is not limited in this embodiment of the present application.
Step 2: if the historical sight line movement track indicates that the number of turn-backs of the sight line drop point between the first display content and third display content is greater than or equal to a second threshold, the terminal determines the third display content as the second display content, where the third display content is any corresponding display content in the second content area.
Wherein the second threshold is a preset threshold, for example, 3. When the historical sight line movement track indicates that the number of turn-backs of the sight line drop point between the first display content and the third display content is greater than or equal to the second threshold, it indicates that the user often turns back between the first display content and the third display content, that is, a target correspondence exists between them. The terminal therefore determines the third display content as the second display content, and when the sight line drop point of the user stays on the first display content again, the second display content is highlighted. Illustratively, with continued reference to fig. 9, taking the second threshold of 3 as an example, the number of turn-backs between display content A1 and display content A2 is 2, and the number of turn-backs between display content B1 and display content B2 is 3; accordingly, when the sight line drop point of the user is on display content B1, display content B2 is highlighted. The above process can also be understood as learning the movement law of the sight line drop point of the user: if the sight line drop point often turns back between two points, an association between the two points is established, and when the sight line of the user stays on one of the points again, the other point is highlighted.
In some embodiments, if the historical sight line movement track indicates that the number of turn-backs of the sight line drop point between the first display content and fourth display content is greater than or equal to a third threshold, and the duration for which the sight line drop point stays on the fourth display content is greater than or equal to a fourth threshold, the fourth display content is determined as the second display content, where the fourth display content is any corresponding display content in the second content area. The third threshold and the fourth threshold are both preset thresholds, for example, the third threshold is 2 and the fourth threshold is 1 second, which is not limited in this embodiment of the present application. Meeting both conditions indicates that the user often turns back between the first display content and the fourth display content and pays attention to the fourth display content; the terminal therefore determines the fourth display content as the second display content, and when the sight line drop point of the user stays on the first display content again, the second display content is highlighted.
In some embodiments, if the historical sight line movement track indicates that, within a target duration, the number of turn-backs of the sight line drop point between the first display content and fifth display content is the largest, the fifth display content is determined as the second display content, where the fifth display content is any corresponding display content in the second content area. The target duration is a preset duration, for example, 10 seconds, which is not limited in this embodiment of the present application. The largest number of turn-backs within the target duration indicates that the user pays more attention to the first display content and the fifth display content during that period; the terminal therefore determines the fifth display content as the second display content, and when the sight line drop point of the user stays on the first display content again, the second display content is highlighted.
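Illustratively, the turn-back counting underlying the above rules might be sketched as follows; the trajectory encoding is a hypothetical assumption, and the dwell-time and target-duration variants would additionally check stay durations and restrict the counting to the target duration:

```python
from collections import Counter

def find_associated_content(trajectory, first, second_threshold=3):
    """Pick the second display content by counting gaze turn-backs.

    trajectory: ordered ids of display contents the sight line drop point visited,
    e.g. ["A1", "A2", "A1", "B1", "B2", "B1", "B2"] (a hypothetical encoding).
    first: id of the first display content the gaze currently stays on.
    Each transition between `first` and another content counts toward the
    turn-back tally for that content.
    """
    turn_backs = Counter()
    for prev, curr in zip(trajectory, trajectory[1:]):
        if first in (prev, curr) and prev != curr:
            other = curr if prev == first else prev
            turn_backs[other] += 1
    candidate, count = max(turn_backs.items(), key=lambda kv: kv[1],
                           default=(None, 0))
    return candidate if count >= second_threshold else None
```

With the example trajectory above and first = "B1", display content B2 accumulates 3 turn-backs and is returned, matching the situation illustrated in fig. 9.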
In the process, under the condition that the target corresponding relation between the first display content and the second display content is unknown, the terminal determines the second display content by acquiring the historical sight line moving track of the user, and on the basis, the second display content is highlighted, so that the user is helped to quickly locate the associated content, the man-machine interaction efficiency is improved, and the watching experience of the user and the information acquisition efficiency are effectively improved.
The process of highlighting the second display content by the terminal in step 203 is described in detail below in several cases.
First, in the second content area, the terminal displays the target display content in accordance with the scroll browsing mode.
The target display content is, for example, a document or a long image, which is not limited in this embodiment of the present application. The scroll browsing mode may be executed automatically by the terminal, or in response to a drag operation of the user on the scroll bar, which is also not limited in this embodiment of the present application. In this embodiment, when the sight line drop point is on the first display content, the terminal determines the second display content from the target display content based on the first display content, pauses the scroll browsing mode, and highlights the second display content in the second content area. For example, the terminal displays a document in the second content area in the scroll browsing mode and displays a keyword list associated with the document in the first content area; when the sight line drop point is on a certain keyword (i.e., the first display content), the terminal pauses the scroll browsing mode, determines the text content corresponding to the keyword from the document, and highlights that text content.
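Illustratively, the pause-and-highlight flow might be sketched as follows; the class and method names are hypothetical and the document is treated as plain text:

```python
class ScrollingArea:
    """Second content area displaying a document in the scroll browsing mode."""

    def __init__(self, document_text):
        self.document_text = document_text
        self.scrolling = True  # scroll browsing mode is active

    def on_gaze_at_keyword(self, keyword):
        """Called when the sight line drop point stays on a keyword (first display content)."""
        position = self.document_text.find(keyword)  # locate the associated text
        if position != -1:
            self.scrolling = False                   # pause the scroll browsing mode
            self.highlight(position, len(keyword))

    def highlight(self, position, length):
        # Placeholder for the actual highlighting (e.g., background color change).
        print(f"highlight {length} characters at offset {position}")
```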
By the method, under the condition that the terminal displays the target display content in the second content area according to the scroll browsing mode, the terminal highlights the second display content in the mode of suspending the scroll browsing mode, so that the user is helped to quickly locate the associated content, the man-machine interaction efficiency is improved, and the watching experience and the information acquisition efficiency of the user are effectively improved.
Second, the number of the second display contents is multiple.
The terminal can determine, from the second content area, a plurality of second display contents associated with the first display content based on the first display content. For example, taking the first display content as one keyword, a plurality of text contents corresponding to the keyword exist in the second content area.
In this embodiment, the terminal sequentially highlights each second display content in the second content area based on the display order of the plurality of second display contents and the movement track of the sight line drop point. The display order indicates the positions of the plurality of second display contents in the second content area, and highlighting in sequence includes: highlighting the second of the second display contents within a certain time after the first one is highlighted, highlighting the third one after a further interval, and so on. For example, the number of second display contents is 3, and they are arranged in the second content area from top to bottom; based on the display order and the movement track of the sight line drop point, the terminal highlights the second one when the sight line drop point leaves the first one, and highlights the third one when the sight line drop point leaves the second one.
In some embodiments, the terminal cancels the highlighting of the previous second display content while highlighting the current second display content. In other embodiments, the terminal keeps the previous second display content highlighted while highlighting the current second display content, which is not limited in this embodiment of the present application.
Of course, in the case that the number of the second display contents is multiple, the terminal may also highlight each second display content at the same time, which is not limited in the embodiment of the present application.
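Illustratively, the sequential highlighting might be sketched as follows; the names are hypothetical, and whether the previous second display content remains highlighted is the design choice described above:

```python
class SequentialHighlighter:
    """Highlight a plurality of second display contents one by one in display order."""

    def __init__(self, contents, keep_previous=False):
        self.contents = contents            # second display contents, in display order
        self.keep_previous = keep_previous  # whether earlier highlights stay visible
        self.index = 0
        self.highlight(contents[0])         # highlight the first one

    def on_gaze_left(self, content):
        """Called when the sight line drop point leaves a highlighted content."""
        if content is self.contents[self.index] and self.index + 1 < len(self.contents):
            if not self.keep_previous:
                self.unhighlight(content)
            self.index += 1
            self.highlight(self.contents[self.index])

    def highlight(self, content):
        print(f"highlight: {content}")

    def unhighlight(self, content):
        print(f"unhighlight: {content}")
```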
Through the mode, under the condition that the number of the second display contents is multiple, the terminal highlights the second display contents through the mode of highlighting in sequence, so that the terminal not only helps a user to quickly locate the associated contents, but also can remind the user to watch the associated contents in order, the man-machine interaction efficiency is improved, and the watching experience of the user and the information obtaining efficiency are effectively improved.
The following describes the content display method as applied to different scenes, with reference to fig. 10 to 14.
Fig. 10 is a schematic diagram of a content display method according to an embodiment of the present application. As shown in fig. 10, the content display method is applied to an online education scene. Schematically, as shown in (a) of fig. 10, the terminal displays a first content area and a second content area, where the first content area is the homework correction items on the right (such as correction suggestions) and the second content area is the student homework on the left. In this scenario, the target correspondence between the first display content and the second display content is known. During the correction operation, if the sight line drop point of the user is on the display content corresponding to serial number 1 on the right, the display content corresponding to serial number 1 on the left is highlighted (for example, by highlighting); if the sight line drop point is on the display content corresponding to serial number 2 on the right, the display content corresponding to serial number 2 on the left is highlighted (for example, in bold). This helps the teacher user quickly locate the associated content, improves human-computer interaction efficiency, and improves the efficiency with which the user obtains information. As shown in (b) of fig. 10, the terminal displays a first content area and a second content area, where the first content area is the student homework on the left and the second content area is the homework answers on the right. In this scenario, the target correspondence between the first display content and the second display content is known. When the student checks the homework answers, if the sight line drop point is on the display content corresponding to serial number 3 on the left, the display content corresponding to serial number 3 on the right is highlighted (for example, by highlighting). This helps the student user quickly locate the associated content, improves human-computer interaction efficiency, and improves the efficiency with which the user obtains information.
Fig. 11 is a schematic diagram of another content display method provided according to an embodiment of the present application. As shown in fig. 11, the content display method is applied to a screen navigation scene. Illustratively, the terminal displays a first content area and a second content area, where the first content area is a shop list in the lower part of the screen and the second content area is a shop location distribution map in the upper part of the screen. In this scenario, the target correspondence between the first display content and the second display content is unknown, and the user may perform a large number of retrieval behaviors when viewing the guide. By learning the sight line movement law of the user, if the sight line drop point is on the display content corresponding to shop A in the lower part, the display content corresponding to shop A in the upper part is highlighted (for example, by adding a shadow), which helps the user quickly locate the shop position, provides reading assistance, improves human-computer interaction efficiency, and improves the viewing experience of the user and the efficiency of obtaining information.
Fig. 12 is a schematic diagram of another content display method provided according to an embodiment of the present application. As shown in fig. 12, the content display method is applied to an online conference scene. In this scenario, when the conference speaker talks about a certain component B of the product, the sight line drop points of the participants often fall on component B; at this time, the detailed description corresponding to component B is highlighted (for example, by adding a dynamic special effect), which helps the participants quickly locate the relevant description, improves human-computer interaction efficiency, and improves the efficiency of obtaining information and the conference efficiency.
Fig. 13 is a schematic diagram of another content display method provided according to an embodiment of the present application. As shown in fig. 13, the content display method is applied to a video viewing scene. Schematically, as shown in (a) of fig. 13, the terminal displays a first content area displaying the picture content of a video and a second content area displaying the subtitles. In this scenario, if the sight line drop point of the user is on the video picture C corresponding to the left area in the first content area, the text corresponding to video picture C in the subtitles is highlighted (for example, in bold characters), which helps the user quickly understand the video content, improves human-computer interaction efficiency, and improves the efficiency with which the user obtains information. As shown in (b) of fig. 13, the terminal displays a first content area and a second content area, where the first content area displays a fitness action and the second content area displays an action name list. In this scenario, if the sight line drop point of the user is on a certain fitness action D, the action name of fitness action D in the action name list is highlighted, which likewise helps the user quickly understand the video content, improves human-computer interaction efficiency, and improves the efficiency with which the user obtains information.
Fig. 14 is a schematic diagram of another content display method provided according to an embodiment of the present application. As shown in fig. 14, the content display method is applied to a multi-window display scene. Illustratively, the terminal displays a first content area and a second content area, where the first content area displays window identifiers and the second content area synchronously displays a plurality of windows. In this scenario, if the sight line drop point of the user is on window identifier E, the window corresponding to window identifier E among the plurality of windows is highlighted (for example, by adding a text prompt), so that the user can quickly locate the window to be checked, which improves human-computer interaction efficiency and the efficiency with which the user obtains information.
Fig. 15 is a schematic structural diagram of a content display device according to an embodiment of the present application. The apparatus is used for executing the steps when the content display method is executed, and referring to fig. 15, the apparatus comprises: a display module 1501 and a determination module 1502.
A display module 1501, configured to display a first content area and a second content area;
a determining module 1502, configured to determine first display content corresponding to the gaze drop point in the first content area;
the display module 1501 is further configured to highlight the second display content associated with the first display content in the second content area.
In an optional implementation manner, the first content area and the second content area are different display areas of the same file; or the first content area and the second content area are display areas of different files; or the first content area and the second content area are different display areas of the same page; or the first content area and the second content area are display areas of different pages.
In an optional implementation manner, the first content area and the second content area are both text display areas; or, one of the first content area and the second content area is a text display area, and the other is a graphic display area; or, the first content area and the second content area are both graphic display areas.
In an alternative implementation, the first display content is the same as the second display content; or the similarity between the first display content and the second display content is greater than or equal to a first threshold; or, a target corresponding relation exists between the first display content and the second display content.
In an optional implementation, the display module 1501 includes:
a content determination unit configured to determine, in the second content area, second display content associated with the first display content based on the first display content;
and the highlighting unit is used for highlighting the second display content.
In an optional implementation, the content determining unit is configured to: acquiring associated information corresponding to the first display content, wherein the associated information indicates display content in the second content area and having a target corresponding relation with the first display content; based on the association information, the second display content is determined.
In an optional implementation manner, the association information includes position information, where the position information indicates a position where display content having the target correspondence with the first display content is located in the second content region; the content determining unit is configured to determine, as the second display content, the display content in the second content area corresponding to the position information based on the association information.
In an optional implementation manner, the association information includes identification information, and the identification information is used for identifying the first display content; the content determining unit is configured to determine, as the second display content, the display content corresponding to the identification information in the second content area based on the association information.
In an optional implementation, the apparatus further comprises:
the information determining module is used for applying a text detection algorithm and a text recognition algorithm based on the first content area and the second content area to determine text information corresponding to a plurality of display contents in the first content area and the second content area;
and the information generating module is used for generating the associated information corresponding to the plurality of display contents based on the text information corresponding to the plurality of display contents.
In an optional implementation, the content determining unit is configured to:
extracting key information corresponding to the first display content;
and determining the display content corresponding to the key information in the second content area as the second display content based on the key information.
In an optional implementation, the content determining unit is configured to:
obtaining a historical sight line moving track of a user, wherein the historical sight line moving track indicates the turn-back condition of the sight line falling point between the first content area and the second content area;
and if the historical sight line moving track indicates that the turn-back times of the sight line falling point between the first display content and the third display content are greater than or equal to a second threshold value, determining the third display content as the second display content, wherein the third display content is any one corresponding display content in the second content area.
In an optional implementation manner, the content determining unit is further configured to:
and if the historical sight line moving track indicates that the turning times of the sight line falling point between the first display content and the fourth display content are greater than or equal to a third threshold value, and the stay time of the sight line falling point on the fourth display content is greater than or equal to a fourth threshold value, determining the fourth display content as the second display content, wherein the fourth display content is any corresponding display content in the second content area.
In an alternative implementation, the display module 1501 is configured to:
the number of the second display contents is plural, and each of the second display contents is highlighted in turn in the second content area based on the display order of the second display contents and the movement locus of the sight-line drop point.
In an optional implementation manner, the display module 1501 is configured to display target display content in the second content area according to a scrolling browsing mode; the display module is further configured to determine the second display content from the target display content based on the first display content, suspend the scrolling mode, and highlight the second display content in the second content area.
In an alternative implementation, the highlighting includes any one of: highlighting, bolding, adding a text prompt, adding a frame, adding a shadow, and adding a dynamic special effect.
It should be noted that: in the content display device provided in the above embodiment, when displaying content, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the content display apparatus and the content display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
In an exemplary embodiment, a computer device is also provided. Taking the computer device as a terminal as an example, fig. 16 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1600 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1602 is used to store at least one program code for execution by the processor 1601 to implement the content display method provided by the method embodiments of the present application.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a display 1605, a camera assembly 1606, audio circuitry 1607, a positioning assembly 1608, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1605 can be one, disposed on the front panel of the terminal 1600; in other embodiments, the display screens 1605 can be at least two, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in other embodiments, display 1605 can be a flexible display disposed on a curved surface or a folded surface of terminal 1600. Even further, the display 1605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 can also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of terminal 1600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The positioning component 1608 is configured to locate the current geographic location of the terminal 1600 for navigation or LBS (Location Based Service). The positioning component 1608 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 1609 is used to provide power to the various components of terminal 1600. Power supply 1609 may be alternating current, direct current, disposable or rechargeable. When power supply 1609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
Acceleration sensor 1611 may detect acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1601 may control the display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1612 can detect the body direction and rotation angle of the terminal 1600, and can cooperate with the acceleration sensor 1611 to acquire the 3D motion of the user on the terminal 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 can implement the following functions: motion sensing (such as changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
Pressure sensors 1613 may be disposed on the side frames of terminal 1600 and/or underlying display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, a user's holding signal of the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1614 is configured to collect a fingerprint of the user, and the processor 1601 is configured to identify the user based on the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 is configured to identify the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical key or vendor Logo is provided on the terminal 1600, the fingerprint sensor 1614 may be integrated with the physical key or vendor Logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is adjusted down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
A proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the display 1605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the display 1605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The embodiment of the present application further provides a computer-readable storage medium, which is applied to a computer device, and the computer-readable storage medium stores at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the operations performed by the computer device in the content display method of the foregoing embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer program code stored in a computer readable storage medium. The processor of the computer device reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer device performs the content display method provided in the above-described various alternative implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (19)

1. A method for displaying content, the method comprising:
displaying a first content area and a second content area;
determining first display content corresponding to a sight line drop point in the first content area;
highlighting, in the second content region, second display content associated with the first display content.
2. The method of claim 1,
the first content area and the second content area are different display areas of the same file; or,
the first content area and the second content area are display areas of different files; or,
the first content area and the second content area are different display areas of the same page; or,
the first content area and the second content area are display areas of different pages.
3. The method of claim 1,
the first content area and the second content area are both text display areas; or,
one of the first content area and the second content area is a text display area, and the other one is a graphic display area; or,
the first content area and the second content area are both graphic display areas.
4. The method of claim 1,
the first display content is the same as the second display content; or,
a similarity between the first display content and the second display content is greater than or equal to a first threshold; or,
and a target corresponding relation exists between the first display content and the second display content.
5. The method of claim 1, wherein the highlighting, in the second content area, second display content associated with the first display content comprises:
determining, in the second content area, second display content associated with the first display content based on the first display content; and
highlighting the second display content.
6. The method of claim 5, wherein the determining, in the second content area, second display content associated with the first display content based on the first display content comprises:
acquiring associated information corresponding to the first display content, wherein the associated information indicates display content that is in the second content area and has a target correspondence with the first display content; and
determining the second display content based on the associated information.
7. The method of claim 6, wherein:
the associated information comprises position information, the position information indicating a position, in the second content area, of the display content having the target correspondence with the first display content; and
the determining the second display content based on the associated information comprises:
determining, based on the associated information, display content at the position indicated by the position information in the second content area as the second display content.
8. The method of claim 6, wherein:
the associated information comprises identification information, the identification information being used to identify the first display content; and
the determining the second display content based on the associated information comprises:
determining, based on the associated information, display content corresponding to the identification information in the second content area as the second display content.
9. The method of claim 6, further comprising:
applying a text detection algorithm and a text recognition algorithm to the first content area and the second content area to determine text information corresponding to a plurality of display contents in the first content area and the second content area; and
generating associated information corresponding to the plurality of display contents based on the text information corresponding to the plurality of display contents.
10. The method of claim 5, wherein the determining, in the second content area, second display content associated with the first display content based on the first display content comprises:
extracting key information corresponding to the first display content; and
determining, based on the key information, display content corresponding to the key information in the second content area as the second display content.
11. The method of claim 5, wherein the determining, in the second content area, second display content associated with the first display content based on the first display content comprises:
obtaining a historical sight line movement track of a user, wherein the historical sight line movement track indicates how the sight line drop point moves back and forth between the first content area and the second content area; and
if the historical sight line movement track indicates that the number of times the sight line drop point moves back and forth between the first display content and third display content is greater than or equal to a second threshold, determining the third display content as the second display content, wherein the third display content is any display content in the second content area.
12. The method of claim 11, further comprising:
if the historical sight line movement track indicates that the number of times the sight line drop point moves back and forth between the first display content and fourth display content is greater than or equal to a third threshold, and a dwell time of the sight line drop point on the fourth display content is greater than or equal to a fourth threshold, determining the fourth display content as the second display content, wherein the fourth display content is any display content in the second content area.
13. The method of claim 1, wherein the second display content comprises a plurality of second display contents, and the highlighting, in the second content area, second display content associated with the first display content comprises:
highlighting each of the second display contents in the second content area in sequence based on a display order of the second display contents and a movement track of the sight line drop point.
14. The method of claim 1, wherein the displaying a first content area and a second content area comprises:
displaying target display content in the second content area in a scrolling browse mode; and
the highlighting, in the second content area, second display content associated with the first display content comprises: determining the second display content from the target display content based on the first display content, pausing the scrolling browse mode, and highlighting the second display content in the second content area.
15. The method of claim 1, wherein the highlighting comprises any one of: highlighting, bolding, displaying a prompt, adding a border, adding a shadow, and adding a dynamic special effect.
16. A content display apparatus, characterized in that the apparatus comprises:
a display module, configured to display a first content area and a second content area; and
a determining module, configured to determine first display content corresponding to a sight line drop point in the first content area;
wherein the display module is further configured to highlight, in the second content area, second display content associated with the first display content.
17. A computer device, characterized in that the computer device comprises a processor and a memory for storing at least one computer program, wherein the at least one computer program is loaded and executed by the processor to implement the content display method according to any one of claims 1 to 15.
18. A computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the content display method according to any one of claims 1 to 15.
19. A computer program product, characterized in that the computer program product comprises at least one computer program which is loaded and executed by a processor to implement the content display method according to any one of claims 1 to 15.
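To make the claimed method easier to picture, the following Java sketches illustrate, under invented names, how some of the claimed steps might be realized. They are illustrative assumptions only, not the implementation of this application; every type, method, and threshold in them (GazeHighlighter, ContentArea, AssociationIndex, and so on) is hypothetical. This first sketch follows claims 1, 5, and 6: find the display content under the sight line drop point in the first area, look up its associated content via stored associated information, and highlight it in the second area.

import java.util.List;

public class GazeHighlighter {
    // Hypothetical stand-ins for the claimed concepts; none of these types
    // are defined by the application itself.
    public interface DisplayContent {}

    public interface ContentArea {
        DisplayContent contentAt(int x, int y); // content under a screen point, or null
        void highlight(DisplayContent content); // e.g., bold, border, shadow (claim 15)
    }

    public interface AssociationIndex {
        // Associated information in the sense of claim 6: which content in
        // the second area corresponds to the given first display content.
        List<DisplayContent> lookup(DisplayContent first, ContentArea secondArea);
    }

    private final ContentArea firstArea;
    private final ContentArea secondArea;
    private final AssociationIndex index;

    public GazeHighlighter(ContentArea firstArea, ContentArea secondArea,
                           AssociationIndex index) {
        this.firstArea = firstArea;
        this.secondArea = secondArea;
        this.index = index;
    }

    // Called with each new sight line drop point in screen coordinates.
    public void onGazePoint(int x, int y) {
        DisplayContent first = firstArea.contentAt(x, y);
        if (first == null) {
            return; // the gaze is not on any display content in the first area
        }
        for (DisplayContent second : index.lookup(first, secondArea)) {
            secondArea.highlight(second);
        }
    }
}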
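Claim 9 builds the associated information by running text detection and recognition over both areas. A sketch of the grouping step, assuming a hypothetical RecognizedText result type, could look like this; the claim names the algorithms, not these types.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AssociationBuilder {
    // Hypothetical OCR result: recognized text plus the id of the display
    // content it was extracted from.
    public record RecognizedText(String contentId, String text) {}

    // Claim 9 in miniature: group display contents from both areas by their
    // recognized text, so that identical text appearing in both areas
    // becomes a stored association (the "same content" case of claim 4).
    public static Map<String, List<String>> buildAssociations(List<RecognizedText> ocrResults) {
        Map<String, List<String>> byText = new HashMap<>();
        for (RecognizedText r : ocrResults) {
            byText.computeIfAbsent(r.text(), k -> new ArrayList<>()).add(r.contentId());
        }
        // Keep only texts shared by more than one display content.
        byText.values().removeIf(ids -> ids.size() < 2);
        return byText; // text -> ids of mutually associated display contents
    }
}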
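Claims 11 and 12 instead infer the association from the user's historical sight line movement track. The sketch below counts back-and-forth movements between two display contents (claim 11) and, in a second variant, also requires a minimum dwell time (claim 12); the GazeSample record and both threshold parameters are assumptions made for illustration.

import java.util.List;

public class GazeTrajectoryAssociator {
    // One fixation sample: which display content was looked at, and for how long.
    public record GazeSample(String contentId, long dwellMillis) {}

    // Claim 11: treat contentB as associated with contentA when the gaze has
    // moved back and forth between them at least returnThreshold times.
    public static boolean isAssociated(List<GazeSample> history,
                                       String contentA, String contentB,
                                       int returnThreshold) {
        int switches = 0;
        String previous = null;
        for (GazeSample s : history) {
            String id = s.contentId();
            if (!id.equals(contentA) && !id.equals(contentB)) {
                continue; // fixations on other content do not count
            }
            if (previous != null && !previous.equals(id)) {
                switches++; // the gaze moved from one content to the other
            }
            previous = id;
        }
        return switches >= returnThreshold;
    }

    // Claim 12 adds a dwell condition: the gaze must also rest on contentB
    // for at least dwellThresholdMillis in total.
    public static boolean isAssociatedWithDwell(List<GazeSample> history,
                                                String contentA, String contentB,
                                                int returnThreshold,
                                                long dwellThresholdMillis) {
        long dwellOnB = history.stream()
                .filter(s -> s.contentId().equals(contentB))
                .mapToLong(GazeSample::dwellMillis)
                .sum();
        return dwellOnB >= dwellThresholdMillis
            && isAssociated(history, contentA, contentB, returnThreshold);
    }
}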
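Claim 14 combines highlighting with a scrolling browse mode in the second area: the scroll is paused as soon as the associated content is determined. A minimal sketch, assuming a hypothetical ScrollableArea interface:

public class ScrollPauseHighlighter {
    // Hypothetical scrolling second area; none of these methods come from
    // the application itself.
    public interface ScrollableArea {
        void pauseScrolling();
        void highlight(String contentId);
        // Id of the content associated with firstContentId among the target
        // display content currently shown by the scrolling area, or null.
        String findAssociated(String firstContentId);
    }

    // Claim 14 in miniature: once the second display content is determined
    // from the scrolling target display content, pause the scroll, then
    // highlight the match.
    public void onGazeOnFirstContent(String firstContentId, ScrollableArea area) {
        String associatedId = area.findAssociated(firstContentId);
        if (associatedId != null) {
            area.pauseScrolling();
            area.highlight(associatedId);
        }
        // Otherwise the area keeps scrolling until associated content appears.
    }
}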
CN202111233652.6A 2021-10-22 2021-10-22 Content display method, device, equipment and storage medium Active CN114296627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111233652.6A CN114296627B (en) 2021-10-22 2021-10-22 Content display method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114296627A (en) 2022-04-08
CN114296627B CN114296627B (en) 2023-06-23

Family

ID=80964467

Country Status (1)

Country Link
CN (1) CN114296627B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114996225A (en) * 2022-07-18 2022-09-02 成都中科合迅科技有限公司 Development method for user-defined visual combined instrument control

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050193330A1 (en) * 2004-02-27 2005-09-01 Exit 33 Education, Inc. Methods and systems for eBook storage and presentation
US20150326925A1 (en) * 2014-05-06 2015-11-12 At&T Intellectual Property I, L.P. Embedding Interactive Objects into a Video Session
JP2017146672A (en) * 2016-02-15 2017-08-24 富士通株式会社 Image display device, image display method, image display program, and image display system
CN109670456A (en) * 2018-12-21 2019-04-23 北京七鑫易维信息技术有限公司 A kind of content delivery method, device, terminal and storage medium
CN111368114A (en) * 2018-12-25 2020-07-03 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium
CN111488057A (en) * 2020-03-30 2020-08-04 维沃移动通信有限公司 Page content processing method and electronic equipment
CN111638835A (en) * 2020-04-28 2020-09-08 维沃移动通信有限公司 Note generation method and electronic equipment
US20200382845A1 (en) * 2019-05-31 2020-12-03 Apple Inc. Notification of augmented reality content on an electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant