CN108429927B - Smart television and method for searching virtual commodity information in user interface


Info

Publication number
CN108429927B
CN108429927B, CN201810130360.1A, CN201810130360A
Authority
CN
China
Prior art keywords
interface
user
user interface
payment
focus
Prior art date
Legal status
Active
Application number
CN201810130360.1A
Other languages
Chinese (zh)
Other versions
CN108429927A (en)
Inventor
周杉
谢尧
王会朝
李娜
王丹
Current Assignee
Qingdao Hisense Media Network Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202110605678.2A (published as CN113422999B)
Priority to CN201810130360.1A (published as CN108429927B)
Publication of CN108429927A
Application granted
Publication of CN108429927B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
        • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
            • H04N 21/41 Structure of client; Structure of client peripherals
                • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
                    • H04N 21/4122 Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
            • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                    • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                • H04N 21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
                    • H04N 21/4438 Window management, e.g. event handling following interaction with the user interface
            • H04N 21/47 End-user applications
                • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                    • H04N 21/4722 End-user interface for requesting additional data associated with the content
                • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
                • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
                    • H04N 21/47815 Electronic shopping
        • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
            • H04N 21/81 Monomedia components thereof
                • H04N 21/8146 Monomedia components involving graphical data, e.g. 3D object, 2D graphics

Abstract

The application provides a method for searching for virtual commodity information in a user interface based on a display device, which relates to the display field and comprises the following steps: displaying a first user interface based on a selection input in a purchasing application, wherein the first user interface comprises commodity information and a first interface associated with a first virtual commodity, and the first virtual commodity comprises a learning-class course; in response to an instruction input to the first interface, displaying a second user interface comprising a second interface associated with a second virtual commodity, the second virtual commodity comprising a membership for purchasing a service associated with the learning-class course; and in response to an instruction input to the second interface, displaying a third user interface comprising payment information associated with the second virtual commodity. By letting the user browse commodity content seamlessly while purchasing a virtual commodity, the method can improve the user experience.

Description

Smart television and method for searching virtual commodity information in user interface
Technical Field
The application relates to display receiving terminals, and in particular to a smart television and a method for searching for virtual commodity information in a user interface based on a display device.
Background
Smart TVs generally focus on online interactive media, internet TV, and on-demand streaming media rather than traditional broadcast media. They can provide richer content and services for users, and television manufacturers are dedicated to developing convenient functions that make their products easier to use and improve the user experience.
It is therefore important to provide existing smart TVs with a simpler, more intuitive interface and visual content links that fit seamlessly with user habits for browsing and/or executing the various functions of the smart TV.
Disclosure of Invention
The application aims to provide a smart television that meets the need for a more intuitive user interface and seamless user interaction. This disclosure addresses these needs through various aspects, examples, and/or configurations thereof. Furthermore, while the present disclosure is described in terms of exemplary embodiments, it should be appreciated that individual claims may be presented for each aspect of the disclosure. The present disclosure may provide a number of advantages depending on the particular aspects, examples, and/or configurations.
First, the present application provides a method for searching for virtual commodity information in a user interface based on a display device, including: displaying a first user interface based on a selection input in a purchasing application, the first user interface including a first interface and commodity information associated with a first virtual commodity, the first virtual commodity including a learning-class course; in response to an instruction input to the first interface, displaying a second user interface including a second interface associated with a second virtual commodity, the second virtual commodity including a membership for purchasing a service associated with the learning-class course; and in response to an instruction input to the second interface, displaying a third user interface that includes payment information associated with the second virtual commodity.
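To make the three-step flow above concrete, the following Kotlin sketch models the interface transitions as a small state machine. It is a minimal illustrative sketch only: the class and function names (UserInterface, PurchaseFlow, and so on) are assumptions and are not taken from the patent.

```kotlin
// Minimal sketch of the three-interface purchase flow described above.
// All names here are illustrative assumptions, not part of the patent.
sealed class UserInterface {
    // First user interface: commodity info for the learning-class course plus a first interface (entry control).
    data class CourseDetail(val courseId: String) : UserInterface()
    // Second user interface: membership options (second interfaces) for the course-related service.
    data class MembershipOptions(val courseId: String) : UserInterface()
    // Third user interface: payment information for the selected membership.
    data class PaymentDetail(val membershipId: String) : UserInterface()
}

class PurchaseFlow {
    var current: UserInterface? = null
        private set

    // Selection input in the purchasing application -> display the first user interface.
    fun onCourseSelected(courseId: String) {
        current = UserInterface.CourseDetail(courseId)
    }

    // Instruction input to the first interface -> display the second user interface.
    fun onFirstInterfaceConfirmed(courseId: String) {
        current = UserInterface.MembershipOptions(courseId)
    }

    // Instruction input to a second interface -> display the third user interface (payment).
    fun onMembershipSelected(membershipId: String) {
        current = UserInterface.PaymentDetail(membershipId)
    }
}

fun main() {
    val flow = PurchaseFlow()
    flow.onCourseSelected("course-001")          // first user interface
    flow.onFirstInterfaceConfirmed("course-001") // second user interface
    flow.onMembershipSelected("vip-basic")       // third user interface
    println(flow.current)                        // PaymentDetail(membershipId=vip-basic)
}
```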
Optionally, the first user interface displays a text window, a picture window, and a video window for the user to browse commodity information associated with the first virtual commodity, and the second user interface displays at least two second interfaces, each of which includes role information describing the range of users to which it applies.
Optionally, the third user interface displays prompt information and at least two payment labels associated with the payment information; some or all of the payment labels include an original payment price and a discount price, and the prompt information includes a personalized identification indicating the range of users to which the second virtual commodity applies.
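As an illustration of the structure just described, the sketch below models the payment labels and prompt information of the third user interface as plain Kotlin data classes. The field names and currency unit are assumptions made for this example only.

```kotlin
// Illustrative data model for the third user interface (names and units are assumptions).
data class PaymentLabel(
    val title: String,
    val originalPriceFen: Int,        // original payment price, assumed here to be in fen (1/100 yuan)
    val discountPriceFen: Int? = null // optional discount price shown alongside the original price
)

data class PaymentPrompt(
    val applicableUserRange: String   // personalized identification of the applicable user range
)

data class ThirdUserInterface(
    val prompt: PaymentPrompt,
    val labels: List<PaymentLabel>    // at least two payment labels, per the description above
)
```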
Optionally, the method includes: receiving and responding to a trigger instruction input to the purchasing application by displaying a commodity home page associated with the first virtual commodity, the commodity home page including a main navigation area, a preferred recommendation area, and an additional recommendation area; in response to a default or selection input that positions the focus on a classification option bar in the main navigation area, displaying in the additional recommendation area a commodity recommendation position for the user to browse commodity information related to the first virtual commodity, the classification option bar indicating the different user ranges to which part of the commodity information displayed at the commodity recommendation position applies; and receiving and responding to an instruction input after the focus is switched to the commodity recommendation position by linking to the first user interface.
Optionally, the method includes: receiving and responding to a trigger instruction input to the purchasing application by displaying a commodity home page associated with the first virtual commodity, the commodity home page including a main navigation area and a preferred recommendation area located to the right of the main navigation area, the preferred recommendation area including a member fusion interface for the user to browse member content and a carousel window for playing a video related to the first virtual commodity; receiving and responding to an input that switches the focus from the classification option bar in the main navigation area to the preferred recommendation area by positioning the focus in the carousel window; and receiving and responding to an instruction input to the carousel window by switching the carousel window to the first user interface, which plays the video in full-screen mode.
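The focus movement just described can be pictured as a small controller that reacts to directional and confirmation key events. The following is a hedged sketch under assumed names; the patent does not specify focus handling at this level of detail.

```kotlin
// Sketch of focus movement on the commodity home page (names and key handling are assumptions).
enum class FocusTarget { CLASSIFICATION_BAR, CAROUSEL_WINDOW, MEMBER_FUSION_INTERFACE }

class HomePageController {
    var focus: FocusTarget = FocusTarget.CLASSIFICATION_BAR
        private set
    var playingFullScreen: Boolean = false
        private set

    // Moving right switches focus from the classification option bar in the
    // main navigation area to the carousel window in the preferred recommendation area.
    fun onMoveRight() {
        if (focus == FocusTarget.CLASSIFICATION_BAR) focus = FocusTarget.CAROUSEL_WINDOW
    }

    // Moving down inside the preferred recommendation area shifts focus to the member fusion interface.
    fun onMoveDown() {
        if (focus == FocusTarget.CAROUSEL_WINDOW) focus = FocusTarget.MEMBER_FUSION_INTERFACE
    }

    // An OK instruction on the carousel window links to the first user interface,
    // which plays the video in full-screen mode.
    fun onOk() {
        if (focus == FocusTarget.CAROUSEL_WINDOW) playingFullScreen = true
    }
}
```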
Optionally, the method includes: receiving and responding to an input that switches the focus to the member fusion interface by positioning the focus on the member fusion interface; and receiving and responding to an instruction input to the member fusion interface by displaying a payment fusion interface, the payment fusion interface including a navigation area and a payment detail area, the navigation area including a plurality of grading columns, and the classification information displayed on some or all of the grading columns indicating the user range to which the payment information displayed in the payment detail area applies.
Optionally, the first user interface plays the video in full-screen mode and continuously displays the first interface during playback, and the second user interface displays the second interface together with other function interfaces arranged in a row with it.
Optionally, the method includes: receiving and responding to a return instruction input to the third user interface by recalling the first user interface; receiving and responding to an input that positions the focus in the first user interface by positioning the focus on the first interface; and receiving and responding to an instruction re-input to the first interface by recalling the second user interface.
Optionally, the method includes: receiving and responding to a return instruction input to the third user interface by recalling the second user interface; receiving and responding to an input selecting one of the second interfaces in the second user interface by moving the focus to any of the plurality of second interfaces; and receiving and responding to an instruction input to the different second interface on which the focus is positioned by linking to a different third user interface.
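The two return behaviours above amount to keeping a simple back stack of user interfaces: a return instruction on the payment page pops back to the interface beneath it, and confirming a different second interface pushes a different payment page. The sketch below illustrates this under assumed names; it is not the patent's own implementation.

```kotlin
// Minimal back-stack sketch for the return behaviour described above (names are assumptions).
class InterfaceStack {
    private val stack = ArrayDeque<String>()

    // Display a user interface by pushing it onto the stack.
    fun open(screen: String) = stack.addLast(screen)

    // A return instruction pops the current interface and recalls the one beneath it
    // (the first or second user interface, depending on how the payment page was reached).
    fun back(): String? {
        if (stack.isNotEmpty()) stack.removeLast()
        return stack.lastOrNull()
    }
}

fun main() {
    val nav = InterfaceStack()
    nav.open("second user interface")
    nav.open("payment page A")        // third user interface reached from one second interface
    println(nav.back())               // -> second user interface
    nav.open("payment page B")        // a different second interface links to a different payment page
}
```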
Second, the present application provides a smart tv, including: a display screen configured to display content associated with a virtual good in a user interface; a memory; and a processor in communication with the memory and the display screen, the processor configured to perform any of the methods described above.
The foregoing is a brief summary of the application to explain certain aspects of the application. This summary is not an extensive or exhaustive overview of the application and its various aspects, examples, and/or configurations. It is intended to neither identify key or critical elements of the application nor delineate the scope of the application but to present some concepts of the application in a simplified form as an introduction to the detailed description that follows. It should be understood that other aspects, examples, and/or configurations of the disclosure may utilize one or more features, alone or in combination, set forth above or described in detail below.
Drawings
FIG. 1A is a first view of an example environment of a smart television;
FIG. 1B is a second view of an example environment of a smart television;
FIG. 2 is a first view of an example of a smart tv;
FIG. 3 is a block diagram of an example of smart television hardware;
FIG. 4 is a block diagram of an example of smart television software and/or firmware;
FIG. 5 is a second block diagram of an example of smart television software and/or firmware;
FIG. 6 is a third block diagram of an example of smart television software and/or firmware;
FIG. 7 is a block diagram of an example of a content data service;
FIG. 8 is a front view of an example smart television screen;
FIG. 9 is an illustrative pictorial example of a user interface for a content/silo selector;
FIG. 10 illustrates an example GUI of a first virtual good home page in a buy-class application;
FIG. 11A is a view of the primary navigational area of the present example;
FIG. 11B is a content selection area view of the present example;
FIG. 12A illustrates an exemplary GUI of the payment fusion interface when focus is moved to the "old" tab;
FIG. 12B is a GUI of the payment fusion interface with focus moved to the "high and middle" option bar;
FIG. 13A is a full screen GUI of a user toggling a carousel window in a content display page multiple times from the GUI of FIG. 12;
FIG. 13B shows an example of a GUI of the link after triggering the "buy lesson with OK" button in FIG. 13A;
FIG. 13C is an example GUI of a link after triggering the "buy a little VIP" control of FIG. 13B;
FIG. 14A is a GUI displayed after triggering a commodity recommendation position in the additional recommendation area;
FIG. 14B is the GUI displayed after triggering the "buy second VIP" control of FIG. 14A;
FIG. 14C is the GUI overlaid on FIG. 14A after triggering the "buy second VIP" control of FIG. 14B;
FIG. 14D is another GUI displayed after triggering the "buy second VIP" control of FIG. 14B;
FIG. 15 is a GUI showing only the corresponding icons with the text of the primary navigation area hidden when the user moves the focus once to the right;
FIG. 16 is a GUI in which the user moves the focus downward multiple times in the additional recommendation area 164;
FIG. 17A is a flow diagram of an example of a method 2100 for implementing application display by moving focus;
FIG. 17B is a flowchart of an example of a method 2200 for enabling application display by moving focus;
FIG. 17C is a flowchart of an example of a method 2300 of implementing an application display by moving focus;
FIG. 17D is a flowchart of an example of a method 2400 for implementing application display by moving focus;
FIG. 18A is a flowchart of one example of a method 3100 of implementing an application display upon detecting a return instruction to a payment details page;
FIG. 18B is a flow diagram of an example of a method 3200 of detecting a return instruction to a payment details page to implement an application display;
FIG. 18C is a flow diagram of one example of a method 3300 of detecting a return instruction to a payment details page to implement an application display.
In these figures, similar components and/or features may have the same reference label. Also, various components of the same type may be distinguished by a letter that follows the reference label. If only the first reference label is used in the specification, the description applies to any of the similar components having that same first reference label, regardless of the letter that follows.
Detailed Description
In the following description, numerous specific details are set forth to provide a more thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without these specific details.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description and claims of the present invention, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context specifically indicates otherwise. It is also to be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
And depending on the context, the term "if" may be interpreted to mean "when", "upon", "in response to" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if [ stated condition or event ] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [ stated condition or event ]" or "in response to detecting [ stated condition or event ]".
Further, the following terms are explained and illustrated in the present application.
The term "web TV" is the original TV content broadcast over the world Wide Web. The major web TV distributors are YouTube, Myspace, Newgroups, Blip.
"network television" (also known as internet television, online television) is a digital distribution of television content delivered over the internet. Web tv, which is a short program or video created by various companies and individuals, should not be confused with web tv, which is an emerging internet technology standard used by television broadcasters, and Internet Protocol Television (IPTV), which is an emerging internet technology standard. Internet television is a general term that refers to the delivery of television programs and other video content over the internet by video streaming technology, typically used by large conventional television broadcasters. But not to the technology used to deliver the content (see internet protocol television).
"internet protocol television" (IPTV) refers to a system that uses the internet protocol suite to deliver television services over a packet-switched network, such as the internet, rather than via traditional terrestrial, satellite signal, and cable formats. IPTV services can be grouped into three major groups: live television, with or without interactivity related to the current television program; time-shifted television: program rewarming (rebroadcasting a television program that is hours or days ago), rebroadcasting (playing the current television program from the beginning); and Video On Demand (VOD): a video directory is browsed, which directory is independent of television programming. IPTV differs significantly from internet television in that it has a continuous standardization process (e.g., european telecommunications standards institute) and advantageous deployment schemes for consumer telecommunications networks that provide high-speed access to end-user locations via set-top boxes or other client devices.
"smart tv" sometimes referred to as hybrid tv describes the trend of integrating internet and web2.0 and above functionality in a tv or set-top box, as well as the convergence of computer part functionality and these tv/set-top box technologies. Compared with the traditional television receiver and the set-top box, the method focuses more on online interactive media, internet television, set-top box content and on-demand streaming media, and focuses less on or improves the traditional broadcast media.
A "television" is a telecommunications medium, device (or apparatus) or series of related devices, programs and/or transmission equipment for transmitting and receiving monochrome (black and white) or color motion pictures, with or without accompanying sound. Television is most commonly used to display broadcast television signals. Broadcast television systems typically travel by wire or radio over designated channels in the 54-890 MHz band. A visual display device without a tuner should be referred to as a video monitor rather than a television. Televisions differ from other monitors or displays in the distance a user maintains from the television while viewing media, and televisions have tuners or other circuitry for receiving broadcast television signals.
"cable television" refers to a system for delivering television programming to subscribers via coaxial cable, either as Radio Frequency (RF) signals or as optical pulse signals via fiber optic cable. This is in contrast to conventional broadcast television (terrestrial television) in which the television signal is transmitted over the air by radio waves and received by a television antenna on the television. The term "channel" or "television channel" as used in this application may be a physical channel or a virtual channel, which are both paths for a television station or a television network to transmit programs. The physical channels in analog television have a certain amount of bandwidth, typically 6, 7 or 8MHz, occupying a predetermined channel frequency. In cable or satellite television, a virtual channel is representative of the data stream of a particular television media provider (e.g., television station such as CDS, TNT, HBO, etc.).
The term "satellite television" refers to television programming transmitted via a communications satellite and received via an outdoor antenna (typically a parabolic dish, commonly referred to as a satellite dish), and in domestic applications, the satellite receiver may be an external set-top box or a satellite tuner module built into the television receiver.
The term "live television" as used in this application refers to television production broadcast in real time or substantially synchronized with the time of occurrence of an event.
The term "video on demand" (VOD) as used in this application refers to systems and processes that allow a user to select and view/listen to video or audio content on demand. The VOD system may stream the content to view the real-time content or download it to a storage medium for later review.
A "blog" (also known as a "weblog") is a web site or portion of a web site that is supplemented with new content from time to time. Blogs are typically maintained by individuals, such as by adding comments, activity descriptions or other material such as pictures, videos, etc. on a regular basis. These contents are usually displayed in reverse chronological order.
"blog service" refers to a service that publishes blogs, which may be time-stamped by private or multiple users.
The term "social networking service" is a service provider that establishes an online community in which members have the same interest and/or activity, or are interested in learning about the interests and activities of others. Most social networking services are web-based, providing users with a variety of interactive means, such as email and instant messaging services.
The term "social network" refers to a web-based social network.
The terms "instant messaging" and "instant messaging" refer to a form of real-time text communication between two or more people, typically based on text input.
"Internet search engine" refers to a web search engine designed to search information on the world Wide Web and FTP servers. Search results are typically displayed in a result list, referred to as a SERPS or "search engine results page". The information may include web pages, images, information, and other types of files. Some search engines also collect data available in a database or open directory. Web search engines, when operated, store much of the web page information and then retrieve it from the HTML itself. These web pages are retrieved by a web crawler (sometimes referred to as a web spider, an automated web browser that tracks each link on the web site). The content of each page is then analyzed to determine how to index (e.g., extract text from a title, heading, or special field called a metatag). Data relating to the web pages is stored in an index database for future queries. Some search engines (e.g., Google)TM) Storing all or part of the content of the source page (called the cache) and information about the web page, other search engines (e.g. AltaVista)TM) Each word of each page found is stored.
The term "electronic address" refers to any reachable address, including telephone numbers, instant messaging processes, email addresses, global resource locators ("URLs"), universal resource identifiers ("URIs"), formal addresses ("AORs"), electronic aliases in databases (e.g., addresses), and combinations thereof.
The terms "online community," "electronic community," or "virtual community" refer to a group of people that communicate primarily over a computer network, rather than face-to-face, with the motivation for social, professional, educational, or other purposes. In interaction, a variety of media forms may be used, including Wikipedia, blogs, chat rooms, Internet forums, instant messaging, email, and other forms of electronic media. Many forms of media are used in social software, either alone or in combination, including text-based chat rooms and forums that use voice, video text or avatars.
The term "computer-readable medium" as used in this application refers to any tangible storage and/or transmission medium that participates in providing execution instructions to a processor. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes NVRAM, magnetic or optical disks, and the like. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, optical disk, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM (random access memory), a PROM (programmable read only memory), and EPROM (erasable programmable read only memory), a FLASH-EPROM, a solid state medium such as a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to an email or other self-contained information archive or set of archives is considered a distribution medium that corresponds to a tangible storage medium. When the computer-readable medium is configured as a database, it should be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the application is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and subsequent development media in which the software implementations of the application reside.
As used herein, the term "media" in "multimedia" refers to content in one of a set of different content formats. Multimedia may include, but is not limited to, one or more text, audio, still picture, animation, video, or interactive content formats.
The term "screen" as used herein refers to a physical structure containing one or more hardware components that enable a device to display a user interface and/or receive user input. The screen may include any combination of gesture capture regions, touch display screens, and/or configurable regions. The device may embed one or more actual screens in its hardware. However, the screen may also contain peripheral devices that are connected to or disconnected from the device. In some instances, multiple external devices may be connected on a device. For example, another screen with a remote control unit may be connected to the smart tv.
The term "display screen" refers to a portion of one or more screens for displaying computer output content to a user. The display screen may be a single screen display screen or a multi-screen display screen (referred to as a composite display screen). A single physical screen may contain multiple display screens that are managed as separate logical display screens. Thus, different content may be displayed on separate display screens, albeit in a portion of the same physical screen.
The term "gesture" refers to a user behavior that expresses an intended idea, action, meaning, effort, and/or result. User actions include operating a device (e.g., turning the device on or off, changing the device's orientation, moving a trackball or scroll wheel, etc.), movement of a body part relative to the device, movement of an implement or tool relative to the device, audio input, etc. The gestures may act directly on the device (e.g., on a screen) or interact with the device through the device.
The term "gesture capture" refers to the sensing or detection of an entity and/or type of user gesture. Gesture capture may occur in one or more regions of the screen. The gesture area may or may not be located on the display screen, referred to as a touch display screen, referred to as a gesture capture area.
The term "remote control" refers to a group of electronic devices (most commonly television receivers, DVD players and/or home cinema systems)The device can be controlled wirelessly within a short line of sight. The remote control typically uses infrared and/or Radio Frequency (RF) signals, and may include WiFi, wireless USB, BluetoothTMConnections, motion sensor enabled functions, and/or voice control. Touch screen remote controls are hand-held remote controls, replacing most of the physical built-in hard keys in a typical remote control with a touch screen user interface.
The term "display image" used in the present application refers to image content formed on a display screen. A typical display image is television broadcast content. The display image may occupy all or a portion of the display screen.
The term "display orientation" as used in this application refers to the display orientation of a rectangular display screen when viewed by a user. The two most common display directions are the column direction and the row direction. In the line mode, the width of the picture is greater than its height (e.g., 4: 3; or 16: 9).
The term "panel" as used in this application may refer to a user interface displayed on at least a portion of a display screen. The panel may be interactive (e.g., accept user input) or merely provide information (e.g., not accept user input). The panel may be translucent so that the panel may be obscured but not obscure the content on the display screen. The panels may vary according to user input from buttons or a remote control interface.
The term "silo" as used in this application may be a logical representation of an input, source, or application. The input may be an electronic device (e.g., DVD, video recorder, etc.) connected to the television through a port (e.g., HDMI, video/audio input port, etc.) or a network (e.g., local and wide area networks, etc.). Unlike a device, an input may be connected to one or more devices as an electrical or physical connection configuration. The source, and in particular the content source, may be a data service (e.g., media center, file system, etc.) that provides the content. The application may be a software service that provides a particular type of functionality (e.g., live television, video on demand, user applications, picture display, etc.). A silo, as a logical representation, may have other associated definitions or attributes, such as settings, functions, or other characteristics.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element. Further, while the present application is described in terms of exemplary examples, it should be understood that claims may be presented in this application in a separate manner in respect of each of its aspects.
As used in this application, the terms "determine," "calculate," and "compute," and variations thereof, are used interchangeably and include any type of methodology, process, mathematical operation, or technique.
Hereinafter, when the present disclosure refers to "selecting," "selected," "to select," or "selecting" a user interface element in a GUI, these terms should be understood to include using a mouse or other input device, clicking or "hovering" over the user interface element, or using one or more fingers or styli to touch a screen, tap, or make a gestural action on the user interface element. The user interface elements may be virtual buttons, menu buttons, selectors, switches, sliders, erasers, knobs, thumbnails, links, icons, radio buttons, check boxes, and any other mechanism for receiving input from a user.
Smart Television (TV) environment:
reference is made to some embodiments of the smart tv 100 shown in fig. 1A and 1B. The smart tv 100 may be used for entertainment, business applications, social interactions, content creation and/or consumption, and/or further include one or more other devices for organizing and controlling communications with the smart tv 100. It can therefore be appreciated that smart tv can be used to enhance the user interaction experience, whether at home or at work.
In some instances, the smart tv 100 may be configured to receive and understand various user and/or device inputs. For example, the user may interact with the smart tv 100 through one or more physical or electronic controls, which may include buttons, switches, touch screens/zones (e.g., capacitive touch screens, resistive touch screens, etc.), and/or other controls associated with the smart tv 100. In some cases, the smart tv 100 may include one or more interactive controls. Additionally or alternatively, one or more controls may be associated with a remote control. The remote control may communicate with the smart tv 100 through wired and/or wireless signals. It will thus be appreciated that the remote control may communicate via radio frequency (RF), infrared (IR), and/or a specific wireless communication protocol (e.g., Bluetooth™, Wi-Fi, etc.). In some cases, the physical or electronic controls described above may be configured (e.g., programmed) to suit the user's preferences.
Alternatively, a smart phone, tablet, computer, notebook, netbook, or other smart device may be used to control the smart tv 100. For example, the smart tv 100 may be controlled using an application running on the smart device. The application may be configured to provide the user with various smart tv 100 controls in an intuitive user interface (UI) on a screen associated with the smart device. The user's selection input on the UI may be configured to control the smart tv 100 via the application using one or more communication features associated with the smart device.
The smart television 100 may be configured to receive input through a variety of input devices including, but in no way limited to, video, audio, radio, light, tactile input, and combinations thereof. Furthermore, these input devices may be configured to enable the smart tv 100 to see and recognize user gestures and respond to them. For example, the user may talk to the smart tv 100 in a conversational manner; the smart television 100 can receive and understand voice commands in a manner similar to the intelligent personal assistants and voice-controlled navigation applications found on smart devices (such as Siri for Apple devices, Skyvi for Android devices, Robin, Iris, and others).
In addition, the smart tv 100 may be configured as a communication device that can establish a network connection 104 in a number of different ways, including wired 108, wireless 112, over a cellular network 116, or by connecting to a telephone network operated by a telephone company over a telephone line 120. These connections 104 enable the smart tv 100 to access one or more communication networks. A communication network encompasses any known communication medium or collection of communication media and may use any type of protocol to transfer information or signals between endpoints. The communication network may include wired and/or wireless communication technologies. The internet is an example of a communication network 132 that, together with many computers, computer networks, and other communication devices around the world, forms an Internet Protocol (IP) network interconnected by many telephone systems and other means.
In some instances, the smart tv 100 may be equipped with a variety of communication tools, which may allow it to communicate over a Local Area Network (LAN) 124, a Wireless Local Area Network (WLAN) 128, and other networks 132. These networks may act as redundant connections to ensure network access; in other words, if one connection is interrupted, the smart tv 100 will re-establish and/or maintain the network connection 104 using another connection path. The smart television 100 also uses these network connections 104 to send and receive information, interact with an Electronic Program Guide (EPG) 136, receive software updates 140, contact customer service 144 (e.g., to obtain help or services), and/or access a remotely stored digital media library 148. In addition, these connections allow the smart tv 100 to make phone calls, send and/or receive email messages, send and/or receive text messages (e.g., email and instant messages), surf the web using an internet search engine, publish blogs through a blog service, and connect to and interact with online communities maintained by social media websites and/or social networking services (e.g., Facebook, Twitter, LinkedIn, Pinterest, Google+, MySpace, etc.). When these network connections 104 are used in combination with other components of the smart tv 100 (described in more detail below), video teleconferences, electronic meetings, and other types of communications may also be held on the smart tv 100. The smart tv 100 may capture and store images and sounds using a connected camera, microphone, and other sensors.
Additionally or alternatively, the smart tv 100 may create and save screenshots of media, images and data displayed on an associated screen of the smart tv 100.
As shown in fig. 1B, the smart tv 100 may interact with other electronic devices 168 via wired 108 and/or wireless 112 connections. As described herein, components of the smart television 100 allow the device 100 to connect to devices 168, including but not limited to a DVD player 168a, a Blu-ray player 168b, a portable digital media device 168c, a smart phone 168d, a tablet device 168e, a personal computer 168f, an external cable box 168g, a keyboard 168h, a pointing device 168i, a printer 168j, a game controller and/or gamepad 168k, a satellite dish 168l, an external display device 168m, and other Universal Serial Bus (USB), Local Area Network (LAN), Bluetooth™, or High Definition Multimedia Interface (HDMI) compliant devices, and/or wireless devices. When connected to the external cable box 168g or the satellite dish 168l, the smart tv 100 may access more media content.
Furthermore, as described in detail below, the smart tv 100 may receive digital and/or analog signal broadcasts of a tv station. It may operate as one or more of cable television, internet protocol television, satellite television, web television, and/or smart television. The smart television 100 may also be configured to control and interact with other intelligent components, such as a security system 172, a door entry/controller 176, a remote video camera 180, a lighting system 184, a thermostat 188, a refrigerator 192, and other devices.
The smart television:
fig. 2 illustrates the components of the smart tv 100. As shown in fig. 2, the smart tv 100 may be supported by a movable base or support 204 that is connected to a frame 208. The frame 208 surrounds the edges of the display screen 212 without obscuring its front face. The display screen 212 may comprise a Liquid Crystal Display (LCD), plasma screen, Light Emitting Diode (LED) screen, or other type of screen.
The smart television 100 may include an integrated speaker 216 and at least one microphone 220. In some examples, a first region of the frame 208 includes a horizontal gesture capture region 224 and a second region includes a vertical gesture capture region 228. The gesture capture areas 224 and 228 contain areas that can receive input by recognizing user gestures, and in some examples, the user need not actually touch the surface of the screen 212 of the smart tv 100 at all. The gesture capture regions 224 and 228 do not contain pixels that may perform a display function or capability.
In some examples, one or more image capture devices 232 (e.g., cameras) are added to capture still and/or video images. The image capture device 232 may contain or be connected to other elements, such as a flash or other light source 236 and a ranging device 240 to assist in focusing of the image capture device. In addition, the smart tv 100 may also identify the respective users using the microphone 220, the gesture capture areas 224 and 228, the image capture device 232, and the ranging device 240. Additionally or alternatively, the smart tv 100 may learn and remember preferences of individual users. In some instances, learning and memory (e.g., recognizing and recalling stored information) may be associated with user recognition.
In some examples, an infrared transmitter and receiver 244 may be further configured to connect to the smart tv 100 via a remote control device (not shown) or other infrared device. Additionally or alternatively, the remote control device may transmit wireless signals by other means besides radio frequency, light and/or infrared.
In some examples, the audio jack 248 is hidden behind a foldable or removable panel. The audio jack 248 contains, for example, a tip-ring-sleeve (TRS) connector to allow a user to connect headphones or another external audio device.
In some examples, the smart tv 100 also includes several buttons 252. For example, fig. 2 shows buttons 252 on the top of the smart tv 100, although they may be located elsewhere. As shown, the smart tv 100 includes six buttons 252 (a through f) that can be configured for particular inputs. For example, the first button 252 may be configured as an on/off button for controlling system power of the entire smart tv 100. The buttons 252 may be configured, together or separately, to control various aspects of the smart tv 100. Some non-limiting examples include overall system volume, brightness, the image capture device, the microphone, and video conferencing hold/end. Instead of separate buttons, two buttons may be combined into a rocker button, which may be useful in certain situations, such as controlling a function like volume or brightness.
In some instances, one or more of the buttons 252 may be used to support different user commands. For example, a normal press typically lasts less than 1 second, similar to a quick tap. A medium press typically lasts 1 second or more but no more than 12 seconds. A long press typically lasts 12 seconds or more. The function of the buttons generally depends on the application that is active on the smart tv 100. For example, in a video conferencing application, a normal, medium, or long press may mean ending the video conference, increasing or decreasing the volume, increasing the input response speed, or switching the microphone on or off, depending on the particular button. Depending on the particular button, a normal, medium, or long press may also control the image capture device 232 to increase or decrease zoom, take a picture, or record video.
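The press-duration thresholds above (under about 1 second, 1 to 12 seconds, and 12 seconds or more) can be expressed as a small classifier. The thresholds come from the description; everything else in this sketch is an assumption for illustration.

```kotlin
// Classifies a button press by its duration, using the thresholds described above.
enum class PressType { NORMAL, MEDIUM, LONG }

fun classifyPress(durationMillis: Long): PressType = when {
    durationMillis < 1_000  -> PressType.NORMAL  // under about 1 second
    durationMillis < 12_000 -> PressType.MEDIUM  // 1 second up to about 12 seconds
    else                    -> PressType.LONG    // about 12 seconds or more
}

fun main() {
    println(classifyPress(300))     // NORMAL
    println(classifyPress(5_000))   // MEDIUM
    println(classifyPress(15_000))  // LONG
}
```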
Hardware functions:
fig. 3 illustrates some components of a smart tv 100 according to an example of the present application. The smart tv 100 comprises a display screen 304.
One or more display controllers 316 may be used to control the operation of the display screen 304. The display controller 316 may control the operation of the display screen 304, including input and output (display) functions. The display controller 316 may also control the operation of the display screen 304 and interact with other inputs, such as infrared and/or radio input signals (e.g., door access/gate controllers, alarm system components, etc.). In accordance with other examples, the functionality of the display controller 316 may be incorporated into other components, such as the processor 364.
Processor 364 may include a general-purpose programmable processor or controller that executes application programming or instructions. In accordance with at least some examples, processor 364 includes multiple processor cores and/or executes multiple virtual processors. In accordance with other examples, processor 364 may comprise a plurality of physical processors. As a particular example, the processor 364 may comprise a specially configured Application Specific Integrated Circuit (ASIC) or other integrated circuit, a digital signal processor, a controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like. The processor 364 is generally configured to execute program code or instructions to perform various functions of the smart tv 100.
To support the connection function or capability, the smart tv 100 may include an encode/decode and/or compress/decompress module 366 to receive and manage digital tv information. The encode/decode compress/decompress module 366 may decompress and/or decode analog and/or digital information transmitted from a public broadcast television network or a private television network, received via the antenna 324, the I/O module 348, the wireless connection module 328, and/or the other wireless communication module 332. The television information may be sent to the display screen 304 and/or to attached speakers receiving the analog or digital signals. Any encoding/decoding and compression/decompression may be performed based on a variety of formats (e.g., audio, video, and data). The encryption module 324 communicates with the encode/decode compression/decompression module 366 so that all data received from or transmitted to a user or vendor is kept confidential.
In some examples, the smart tv 100 includes an additional or other wireless communication module 332. For example, the other wireless communication module 332 may include Wi-Fi, Bluetooth™, WiMax, infrared, or other wireless communication links. The wireless connection module 328 and the other wireless communication module 332 may each be interconnected with a common or dedicated antenna 324 and a common or dedicated I/O module 348.
In some examples, to support communication functions or capabilities, smart tv 100 may include wireless connection module 328. For example, wireless connection module 328 may include a GSM, CDMA, FDMA and/or analog cellular telephone transceiver capable of transmitting voice, multimedia and/or data over a cellular network.
An input/output module 348 and associated ports may be added to support communication with other communication devices, servers, and/or peripherals, etc., over a wired network or link. Examples of the input/output module 348 include an Ethernet port, a Universal Serial Bus (USB) port, a Thunderbolt™ or Light Peak interface, an Institute of Electrical and Electronics Engineers (IEEE) 1394 port, or other interfaces.
An audio input/output interface/device 344 may be added to output analog audio to an interconnected speaker or other device, and to receive analog audio input from a connected microphone or other device. For example, the audio input/output interface/device 344 may include an associated amplifier and analog-to-digital converter. Alternatively or additionally, the smart tv 100 may include an integrated audio input/output device 356 and/or an audio jack to which an external speaker or microphone is connected. For example, adding an integrated speaker and integrated microphone provides support for near-end speech or speakerphone operation.
A port interface 352 may be added. The port interface 352 comprises a peripheral or general purpose port that provides support for the device 100 to connect to other devices or components (e.g., docking stations) that may or may not provide additional or different functionality to the device 100 after interconnection. In addition to supporting the exchange of communication signals between device 100 and other devices or components, docking port 136 and/or port interface 352 may provide power to device 100 or to output power from device 100. The docking port 352 also contains an intelligent component that includes a docking module that controls communication or other interaction between the smart television 100 and the connected devices or components. The docking module may interact with software applications to remotely control other devices or components (e.g., media centers, media players, and computer systems).
The smart tv 100 may also include a memory 308 for the processor 364 to execute application programming or instructions and for temporary or long-term storage of program instructions and/or data. For example, the memory 308 may include RAM, DRAM, SDRAM, or other solid state memory. In some examples, a data store 312 is added. Similar to the memory 308, the data storage 312 may include one or more solid-state memories. In some examples, data storage 312 may include a hard disk drive or other random access memory.
For example, hardware buttons 358 may be used for certain control operations. One or more image capture interfaces/devices 340 (e.g., cameras) may be added to capture still and/or video images. In some examples, the image capture interface/device 340 may include a scanner, code reader, or motion sensor. The image capture interface/device 340 may contain or be connected to other elements, such as a flash or other light source. The image capture interface/device 340 may interact with a user ID module 350 that helps identify the identity of the user of the smart tv 100.
The smart tv 100 may also include a Global Positioning System (GPS) receiver 336. According to some examples of the invention, the GPS receiver 336 may further include a GPS module to provide absolute positioning information to other components of the smart tv 100. It will therefore be appreciated that other satellite positioning system receivers may be used instead of or in addition to GPS.
The components of the smart television 100 may draw power through the main power source and/or the power control module 360. For example, the power control module 360 includes a battery, an ac-to-dc converter, power control logic, and/or ports for interconnecting the smart tv 100 to an external power source.
Firmware and software:
FIG. 4 shows an example of software system components and modules 400. Software system 400 may contain one or more layers including, but not limited to, an operating system kernel 404, one or more libraries 408, an application framework 412, and one or more applications 416. One or more layers 404 and 416 may communicate with each other to perform the functions of the smart tv 100.
The Operating System (OS) kernel 404 contains the primary functions that allow software to interact with the hardware associated with the smart tv 100. The kernel 404 may comprise a collection of software that manages computer hardware resources and provides services to other computer programs or software code. The operating system kernel 404 is a primary component of the operating system and acts as an intermediary between application programs and data processing performed with the hardware components. Portions of the operating system kernel 404 may contain one or more device drivers 420. A device driver 420 may be any code in the operating system that helps operate or control a device or hardware connected to or associated with the smart television. The drivers 420 may contain code to operate video, audio, and/or other multimedia components of the smart television 100. Examples of drivers include display screen, camera, Flash, Binder (IPC), keyboard, WiFi, and audio drivers.
Library 408 may contain code or other components that are accessed and executed during operation of software system 400. Libraries 408 may include, but are not limited to, one or more operating system runtime libraries 424, a television system Hypertext Application Language (HAL) library 428, and/or a data services library 432. Operating system runtime library 424 may contain code required by operating system kernel 404 and other operating system functions performed during the operation of software system 400. The library may contain code that is initiated during the operation of software system 400.
The tv services Hypertext Application Language (HAL) library 428 may contain code required by the television services for execution by the application framework 412 or the applications 416. The tv services HAL library 428 is specific to the smart tv 100 and controls the different smart tv functions. Furthermore, the tv services HAL library 428 may also be written in application languages other than a hypertext application language, or in different code types or code formats.
The data services library 432 may contain one or more components or code to execute components that implement data service functionality. Data service functions may be performed in the application framework 412 and/or the application layer 416. FIG. 6 shows examples of data service functions and component types that may be included.
The application framework 412 may contain a general abstraction for providing functionality that may be selected by one or more applications 416 to provide specific application functionality or software for those applications. Thus, the framework 412 can include one or more different services or other components that can be accessed by the applications 416 to provide functionality shared across two or more applications. Such functionality includes, for example, management of one or more windows or panels, planes, activities, content, and resources. The application framework 412 may include, but is not limited to, one or more television services 434, a television services framework 440, television resources 444, and user interface components 448.
The television services framework 440 may provide additional abstractions for different television services and allows uniform access to and operation of services related to television functions. The television services 436 are general services provided in the television services framework 440, which may be accessed by applications in the application layer 416. The television resources 444 provide code for accessing television resources, including any type of stored content, video, audio, or other functionality provided by the smart television 100. The television resources 444, television services 436, and television services framework 440 together perform the various television functions associated with the smart television 100.
The one or more user interface components 448 may provide general components for the display of the smart tv 100. The user interface component 448 can be accessed as a generic component through various applications provided by the application framework 412. The user interface component 448 may be accessed to provide services for panels and silos as described in figure 5.
The application layer 416 contains and executes applications associated with the smart tv 100. The application layer 416 may include, but is not limited to, one or more live television applications 452, video-on-demand applications 456, media center applications 460, application center applications 464, and user interface applications 468. The live television application 452 may provide live television through different signal sources. For example, the live television application 452 may provide television using input from cable television, radio broadcast, satellite service, or other types of live television services. The live television application 452 may then display a multimedia presentation or a video and audio presentation of the live television signal on the display screen of the smart television 100.
The video-on-demand application 456 may provide video from different storage sources. Unlike the live television application 452, video on demand 456 provides a video display from a stored source. The video-on-demand source may be associated with a user, with the smart tv, or with some other type of service. For example, video on demand 456 may be provided from an iTunes library stored in the cloud, from local hard disk storage containing stored video programs, or from some other source.
The media center application 460 may provide the applications needed for various media presentations. For example, the media center 460 may handle the display of images or audio that is neither live television nor video on demand but is still accessible to the user. The media center 460 may obtain the media displayed on the smart tv 100 by accessing different sources.
The application center 464 may provide, store, and use applications. The application may be a game, a productivity application or some other application commonly associated with computer systems or other devices but which may run in a smart tv. The application center 464 may obtain these applications from different sources, store them in local memory, and then execute them for the user on the smart tv 100.
The user interface application 468 may provide services for a particular user interface associated with the smart television 100. These user interfaces may include the silos and panels described in figure 5. An example of user interface software 500 is shown in FIG. 5. Here, the application framework 412 includes one or more code components that help control user interface events, while one or more applications in the application layer 416 implement the user interface of the smart tv 100. The application framework 412 may include a silo switch controller 504 and/or an input event transmitter 508; there may be more or fewer code components in the application framework 412 than shown in FIG. 5. The silo switch controller 504 contains code that manages the switching between one or more silos. A silo can be a vertical user interface function on the smart television that contains information available to users. The switch controller 504 may manage the switching between two silos upon the occurrence of an event at the user interface. The input event transmitter 508 may receive event information for the user interface from the operating system. Such event information may include button selections on a remote control or television or other types of user interface input. The input event transmitter may then send this event information to the silo manager 532 or the panel manager 536, depending on the event type. The silo switch controller 504 may interact with the silo manager 532 to effect changes to the silo.
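A minimal sketch of the event routing just described, written in plain Java: an input event transmitter receives events from the operating system and forwards them either to the silo manager or to the panel manager, depending on the event type. All class, method, and key names here are illustrative assumptions, not the patent's actual code.

```java
// Hypothetical sketch of the event routing described above; names are illustrative.
import java.util.function.Consumer;

enum EventType { SILO_SWITCH, PANEL_ACTION }

final class InputEvent {
    final EventType type;
    final String key;            // e.g. "KEY_RIGHT", "KEY_OK"
    InputEvent(EventType type, String key) { this.type = type; this.key = key; }
}

final class InputEventTransmitter {
    private final Consumer<InputEvent> siloManager;   // stands in for silo manager 532
    private final Consumer<InputEvent> panelManager;  // stands in for panel manager 536

    InputEventTransmitter(Consumer<InputEvent> siloManager, Consumer<InputEvent> panelManager) {
        this.siloManager = siloManager;
        this.panelManager = panelManager;
    }

    // Receive event information from the operating system and forward it
    // to the appropriate user-interface manager, depending on the event type.
    void dispatch(InputEvent event) {
        if (event.type == EventType.SILO_SWITCH) {
            siloManager.accept(event);
        } else {
            panelManager.accept(event);
        }
    }

    public static void main(String[] args) {
        InputEventTransmitter t = new InputEventTransmitter(
                e -> System.out.println("silo manager handles " + e.key),
                e -> System.out.println("panel manager handles " + e.key));
        t.dispatch(new InputEvent(EventType.SILO_SWITCH, "KEY_RIGHT"));
        t.dispatch(new InputEvent(EventType.PANEL_ACTION, "KEY_OK"));
    }
}
```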
The application layer 416 may contain the user interface application 468 and/or a silo application 512, and may include more or fewer user interface applications than shown in FIG. 5, as needed to control the smart tv 100. The user interface application may include a silo manager 532, a panel manager 536, and one or more panels 516-528. The silo manager 532 manages the display and/or functionality of the silos. The silo manager 532 may receive or transmit information from the silo switch controller 504 or the input event transmitter 508 to modify the displayed silo and/or to determine the type of input the silo receives.
The panel manager 536 may display panels in the user interface, manage switching between the panels, and handle user interface inputs received in the panels. Accordingly, the panel manager 536 may communicate with different user interface panels, such as the global panel 516, the volume panel 520, the settings panel 524, and/or the notification panel 528. The panel manager 536 may display these types of panels depending on the input from the input event transmitter 508. The global panel 516 may contain information related to the home screen or the user's highest-level information. The volume panel 520 displays information related to audio volume controls or other volume settings. The information displayed by the settings panel 524 may relate to audio or video settings or other settable characteristics of the smart tv 100. The notification panel 528 may provide information related to user notifications, such as notifications about video-on-demand displays, favorites, currently available programs, or other information. The content of a notification relates to media or to some type of setting or operation of the smart tv 100. The panel manager 536 may communicate with the panel controller 552 of the silo application 512.
The panel controller 552 may control some of the panel types described above. Thus, the panel controller 552 may communicate with a top panel application 540, an application panel 544, and/or a bottom panel application 548. These panels differ from one another when displayed in the user interface of the smart tv 100. Thus, the panel controller may set the panels 516-528 to a certain display orientation (determined by the top panel application 540, the application panel 544, or the bottom panel application 548) depending on the system configuration or the type of display currently in use.
FIG. 6 is an example of the data service 432 and data management operations. Data management 600 may include one or more code components associated with different types of data. For example, the data service 432 may contain several code components that handle video-on-demand, electronic program guide, or media data. The data service 432 may have more or fewer component types than shown in FIG. 6. Each of the different types of data may include a data model 604-612. These data models determine what information the data service stores and how it will be stored. Thus, the data models can manage any data regardless of where it comes from and how it will be received and managed in the smart tv system. Accordingly, the data models 604, 608, and/or 612 may provide the ability to translate, or influence the translation of, data from one form to another form that is usable by the smart tv 100.
The various data services (video on demand, electronic program guide, media) each have a data sub-service 620, 624, and/or 628 for communicating with one or more internal and/or external content providers 616. The data sub-services 620, 624, and 628 communicate with the content providers 616 to obtain data, which is then stored in databases 632, 636, and 640. To communicate with a content provider, a sub-service 620, 624, or 628 may initiate or enable one or more source plug-ins 644, 648, and 652. The source plug-ins 644, 648, and 652 differ for each content provider 616. Thus, if the data has multiple content sources, each data sub-service 620, 624, and 628 may determine and then enable or launch a different source plug-in 644, 648, and/or 652. In addition, the content provider 616 may also provide information to a resource arbiter 656 and/or a thumbnail cache manager 660. The resource arbiter 656 may communicate with resources 664 external to the data service 432; accordingly, the resource arbiter 656 may communicate with cloud storage, network storage, or other types of external storage in the resources 664. That information is then provided to the data sub-services 620, 624, 628 through the content provider module 616. Similarly, the thumbnail cache manager 660 receives thumbnail information from one of the data sub-services 620, 624, 628 and stores the information in a thumbnail database 666. The thumbnail cache manager 660 may also retrieve information from the thumbnail database 666 to provide to one of the data sub-services 620, 624, 628.
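The following Java sketch illustrates the per-provider plug-in selection described above: a data sub-service keeps one source plug-in per content provider, picks the matching plug-in, and stores what it fetches in its database. The names, provider identifiers, and queries are placeholders assumed for this sketch.

```java
// Illustrative sketch (not the patent's code) of a data sub-service choosing a
// source plug-in per content provider.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface SourcePlugin {
    List<String> fetch(String query);            // fetch raw records from one provider
}

final class VodSubService {
    // one plug-in per content provider, e.g. provider id -> plug-in 644/648/652
    private final Map<String, SourcePlugin> plugins = new HashMap<>();
    private final List<String> database = new ArrayList<>();  // stands in for database 632

    void registerPlugin(String providerId, SourcePlugin plugin) {
        plugins.put(providerId, plugin);
    }

    // Decide which plug-in matches the provider, launch it, and store the results.
    void refresh(String providerId, String query) {
        SourcePlugin plugin = plugins.get(providerId);
        if (plugin == null) {
            throw new IllegalArgumentException("no source plug-in for " + providerId);
        }
        database.addAll(plugin.fetch(query));
    }

    List<String> stored() { return database; }

    public static void main(String[] args) {
        VodSubService vod = new VodSubService();
        vod.registerPlugin("providerA", q -> List.of("movie-1 (" + q + ")"));
        vod.registerPlugin("providerB", q -> List.of("movie-2 (" + q + ")"));
        vod.refresh("providerA", "action");
        vod.refresh("providerB", "action");
        System.out.println(vod.stored());
    }
}
```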
Fig. 7 shows an exemplary content aggregation structure 1300. The structure may include a user interface layer 1304 and a content aggregation layer 1308. The user interface layer 1304 may include a television application 1312, a media player 1316, and applications 1320. The television application 1312 enables a viewer to view channels received via an appropriate transmission medium, such as cable, satellite, and/or the internet. The media player 1316 may play other types of media received over an appropriate transmission medium, such as the internet. The applications 1320 include other television-related (pre-installed) applications, such as content viewing, content searching, device viewing and setup algorithms, and may also cooperate with the media player 1316 to provide information to the viewer.
The content source layer 1308 contains, as data services, a content source service 1328, a content aggregation service 1332, and a content presentation service 1336. The content source service 1328 manages content source investigators, including: local and/or network file systems; digital network device managers, which discover handheld or non-handheld devices (e.g., digital media servers, players, renderers, controllers, printers, uploaders, downloaders, network connection functions, and interoperability units) via known techniques such as multicast universal plug and play (UPnP) discovery, retrieve, parse, and encrypt device descriptors for each discovered device, notify the content source service of newly discovered devices, and provide information (such as indexes) about previously discovered devices; internet protocol television (IPTV); digital television (DTV), including high definition and enhanced television; third-party services, such as the services referenced above; and applications, such as android applications.
The content source investigator may track content sources, typically configured as binary. The content source service 1328 may launch a content source investigator and maintain an open and persistent communication channel. The communication includes a query or command and a response pair. The content aggregation service 1332 manages content metadata obtainers, such as video, audio, and/or image metadata obtainers. The content presentation service 1336 provides content indexing interfaces, such as an android application interface and a digital device interface.
The content source service 1328 may send communications 1344 to and receive from the content aggregation service 1332. The communication contains notifications regarding the latest and deleted digital devices and/or content and search queries and results. The content aggregation service 1332 can send communications 1348 to and receive from the content presentation service 1336, including device and/or content lookup notifications, advisories and notifications of content of interest, and search queries and results.
When a search is performed, particularly when the user is searching or browsing for content, the content presentation service 1336 may receive a user request from the user interface layer 1304, open a socket, and send the request to the content aggregation service 1332. The content aggregation service 1332 first returns results from the local database 1340. The local database 1340 contains indexes or data models and indexed metadata. The content source service 1328 further issues search and browse requests to all content source investigators and other data management systems. The results are sent to the content aggregation service 1332, which updates the database 1340 to reflect the further search results and provides both the original content aggregation database search results and the data update results reflecting the additional content source service search results to the content presentation service 1336 through the previously opened socket. The content presentation service 1336 then provides the results to one or more components of the user interface layer 1304 for presentation to the viewer. When the search phase is over (e.g., it is terminated by a user or user action), the user interface layer 1304 closes the socket. As shown, media may be provided from the content aggregation service 1332 directly to the media player 1316 for display to the user.
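The two-phase behavior above (local database results first, then an update merged in from the content source investigators, both delivered over the same open channel) can be sketched as follows. This is a hedged illustration only; the callback stands in for the open socket, and all names and sample data are assumptions.

```java
// Hedged sketch of the two-phase search flow described above.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

final class ContentSearch {
    private final List<String> localDatabase;   // indexed metadata (database 1340)
    private final List<String> remoteSources;   // stands in for content source investigators

    ContentSearch(List<String> localDatabase, List<String> remoteSources) {
        this.localDatabase = localDatabase;
        this.remoteSources = remoteSources;
    }

    // onResults plays the role of the open socket back to the presentation layer:
    // it is called once with the local hits and again with the merged update.
    void search(String query, Consumer<List<String>> onResults) {
        List<String> hits = filter(localDatabase, query);
        onResults.accept(hits);                                // phase 1: local database results

        List<String> remoteHits = filter(remoteSources, query);
        localDatabase.addAll(remoteHits);                      // update the database
        List<String> merged = new ArrayList<>(hits);
        merged.addAll(remoteHits);
        onResults.accept(merged);                              // phase 2: data update results
    }

    private static List<String> filter(List<String> items, String query) {
        List<String> out = new ArrayList<>();
        for (String item : items) {
            if (item.contains(query)) out.add(item);
        }
        return out;
    }

    public static void main(String[] args) {
        ContentSearch search = new ContentSearch(
                new ArrayList<>(List.of("local: math grade 1", "local: english grade 2")),
                List.of("source: math grade 3"));
        search.search("math", results -> System.out.println("results -> " + results));
    }
}
```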
As in fig. 8, video content (e.g., television programs, videos, televisions, etc.) is displayed on the front side of the screen 212. The window 1100 obscures a portion of the screen 212 while the remainder continues to display the video content; as the height of the window 1100 changes, the portion of the screen 212 displaying the video content may move up or down and/or be compressed. Alternatively, the window 1100 may be superimposed over the video content so that a change in the height of the window 1100 does not affect the display position of the video content.
Window 1100 can include one or more items of information, such as: a front panel navigation bar associated with the currently displayed image and/or content, detailed information (e.g., title, date/time, audio/visual indicators, ratings and genres, etc.), a hotkey bar, an information entry bar associated with a browse request and/or a search request.
In some examples, window 1100 includes information regarding the appropriate information associated with the content (e.g., name, duration, and/or remaining content viewing time), setup information, television or system control information, application (active) icons (e.g., pre-installed and/or downloaded applications), application centers, media centers, web browsers, input sources.
Fig. 9 is an illustrative pictorial example of a user interface for a content/silo selector. The illustration 1400 includes a content source selector 1404. The content source selector 1404 includes icons for one or more silos 1408-1424.
Content source selector 1404 may include two or more icons 1408-1424 representing different silos. For example, icons 1408 through 1420 represent different content application silos. The different content application silos may comprise a live tv silo, represented by icon 1408. A live television silo is a logical representation of a broadcast television signal application that may provide television content to a user of the television 100. A Video On Demand (VOD) silo is represented by reference 1412. The VOD silo provides a path for access to video or other types of media that may be selected and provided to the user on demand. The media center silo is represented by icon 1416. The media center silo contains applications that provide images and/or movies developed or stored by users; the media center provides a way for the user to store his or her media using the smart tv 100. The application silo is represented by icon 1420. Application silos provide games and other user applications that can be accessed and used on the television. The input source silo 1424 represents any type of device or other storage mechanism connected to the television 100 through an input port or other electrical connection, such as HDMI or another input interface, or an aggregation of such input interfaces.
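As a simple illustration of the silo model above, the following Java enum lists the silos represented by icons 1408-1424 and a selector that switches between them. This is only a sketch under assumed names; the icon strings are placeholders.

```java
// Illustrative model (an assumption, not the patent's code) of the silos 1408-1424
// and a selector that switches between them.
enum Silo {
    LIVE_TV("icon 1408"), VIDEO_ON_DEMAND("icon 1412"),
    MEDIA_CENTER("icon 1416"), APPLICATIONS("icon 1420"), INPUT_SOURCE("icon 1424");

    final String icon;
    Silo(String icon) { this.icon = icon; }
}

final class ContentSourceSelector {
    private Silo current = Silo.LIVE_TV;

    // Switch silos in response to a user selection, as the silo switch controller would.
    void select(Silo silo) {
        current = silo;
        System.out.println("switched to " + silo + " (" + silo.icon + ")");
    }

    Silo current() { return current; }

    public static void main(String[] args) {
        ContentSourceSelector selector = new ContentSourceSelector();
        selector.select(Silo.VIDEO_ON_DEMAND);
        selector.select(Silo.APPLICATIONS);
    }
}
```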
FIG. 10 illustrates an exemplary Graphical User Interface (GUI) in a purchase-class application, such as the home GUI displayed to a user of an aggregated learning application. The home GUI may display two areas on the display screen: a content display area 160 and a main navigation area 170.
The main navigation area 170 is configured to instruct the content display area 160 to display a plurality of options, and the recommendation position in each option displays classification information associated with a virtual product. Each option bar consists of an icon and text, and an option bar can hide the text and display only the corresponding icon, i.e., the reduced form of the main navigation area. In some examples, the primary navigation area 170 is displayed at an edge of the screen, such as the left edge; the user may also choose, in the system settings, to display it at the right edge, or to move it to the lower side and change the layout orientation.
The content display section 160 includes three parts: a status display section 161, a priority recommendation section 163, and an additional recommendation section 164. Item information associated with the first virtual item is presented in the priority recommendation section 163 and the additional recommendation section 164; this item information may be a recommendation picture or recommendation position of a learning-class course, such as a "blessing hearing" picture or a video carousel window.
The status display section 161 provides the user with basic information indicating the network environment and the external environment in which the smart tv 100 is located, such as a combination of one or more of the user's login status, network connection status, signal source, local time, local weather, and city of residence.
The priority recommendation field 163 is used to provide an interface for searching for information associated with a second virtual good, such as the member fusion interface shown as "open member". It may also provide a function button for downloading the mobile phone version of the aggregated learning application, and a function button for providing the user with commodity information associated with the first virtual commodity, such as advertisements and/or tutorials from a cooperating third-party learning platform.
The additional recommendation field 164 is used to provide the user with the resources in the resource list at the recommendation positions, according to the user's personal habits and content classification.
In some examples, the primary navigation area 170 shades the location of the user-selected option bar, and the content display area 160 marks the location of the user-selected tab with the focus 150. Although a combination of shading and boxes is used in this description, other methods or configurations may be used to select and/or identify icons; for example, the icons and the text background may be adjusted so that their color, shading, or tint differs. Alternatively or additionally, the shadow 140 and focus 150 may include enlarged or magnified icons and text.
The user can simply move the focus to a certain option bar to view it, or select the tab at the location of the focus. Once the selection is made, another GUI is provided to the user for interacting with the selected content.
As shown in fig. 11A, the main navigation area 170 specifically includes a plurality of options. Some or all of the options set classification information based on the user range of the population for which the virtual commodity is suitable, such as a preschool column, a primary school column, a high school column, a professional column, and/or an old-age column; other options set classification information as aggregated content based on common habits of users, such as one or more of a personal records field, a search instruction entry field, a fine recommendations field, a hobbies field, and/or a language learning field. In this embodiment, the main navigation bar 170 displays seven option bars at a time, and the remaining option bars are scrolled into view.
The option bars of the main navigation area 170 consist of icons and text, and an option bar may hide its text and display only the corresponding icon. These two display modes correspond to two states of the main navigation area 170: an expanded state, in which both icon and text are shown, and a simplified state, in which only the icon is displayed and the text is hidden.
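A minimal Java sketch of these two display states, expanded (icon plus text) versus simplified (icon only); the class, method names, and sample icons are assumptions made purely for illustration.

```java
// Minimal sketch of the expanded/simplified states of the main navigation area 170.
import java.util.List;

final class MainNavigationArea {
    private boolean expanded = true;
    private final List<String[]> optionBars;      // each entry: {icon, text}

    MainNavigationArea(List<String[]> optionBars) { this.optionBars = optionBars; }

    // Collapse when focus leaves the navigation area; expand when it returns.
    void setExpanded(boolean expanded) { this.expanded = expanded; }

    void render() {
        for (String[] bar : optionBars) {
            System.out.println(expanded ? bar[0] + " " + bar[1] : bar[0]);
        }
    }

    public static void main(String[] args) {
        MainNavigationArea nav = new MainNavigationArea(List.of(
                new String[]{"[my]", "My"},
                new String[]{"[q]", "Search"},
                new String[]{"[*]", "Pick"}));
        nav.render();               // expanded: icon and text
        nav.setExpanded(false);
        nav.render();               // simplified: icon only, text hidden
    }
}
```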
A personal record column, shown as "my" 171, is used to display content related to the user, such as a personal collection, in the content display area 160. A search instruction entry field, such as "search" 172, is used to let the user enter relevant command keywords for searching. A pick recommendation column, shown as "pick" 173, is used to recommend to the user, in the content display area, the item information of hot or newly released first virtual items corresponding to "pick" 173, which may be recommended based on user clicks, browsing duration, or update time. The school-stage tabs are used to display, in the content display area 160, the relevant content according to the school stage selected by the user; exemplary school-stage options are the "preschool" option 174, the "elementary school" option 175, the "junior school" option 176, and the "high school" option 177.
Fig. 11B is a content selection area view of the present example. The status display area 161 includes a member center option column 1611, a network connection option column 1612, a signal source option column 1613, a time option column 1614, a weather option column 1615, a city option column 1616, and a navigation indication column 1617 arranged to the right, wherein the navigation indication column is used for displaying the focus selection of the user in the main navigation area 170.
The priority recommendation field 163 includes a learning record option field 1631 for recording the historical lesson content learned by the user and providing a recommendation list based on that historical learning content; a member fusion interface 1632 for guiding the user to the payment details page; a mobile phone APP download option bar 1633 for providing the user with a two-dimensional code for downloading the mobile phone application; a carousel window 1634; a carousel thumbnail window 1635; and a main recommendation field 1636. The contents displayed in the main recommendation section 1636 are the latest online and/or updated lessons, videos, and/or applications. The number of recommendation positions in the main recommendation area 1636 is no more than 16, arranged in two rows.
The member fusion interface 1632 displays "open member" and LOGO graphics for the user to browse the payment function and virtual goods of the interface, and is disposed in the left column of the priority recommendation area 163 for priority browsing.
Fig. 12A is a payment fusion GUI of the present embodiment. A payment fusion page 1810 is displayed in response to an instruction input to the member fusion interface 1632 when the user selects to view payment information. The payment fusion page 1810 includes a navigation area and a payment detail area, and payment detail information corresponding to a rating column is displayed in the payment detail area based on a user selection input to that rating column in the navigation area. For example, when the focus moves to the "old age" rating bar 1811, payment detail information including a payment tag area 1813 is displayed in the payment detail area 182 on its right side. A plurality of payment tags are recommended in the payment tag area 1813 based on the payment habits of the user in the time dimension, for the user to browse payment information associated with the second virtual goods; for example, annual, semiannual, quarterly, and monthly prices indicating the payment amount required for membership are displayed line by line in the payment tags 1821, which are 253, 128, 45, and 30 in this order. A prompt field 1812 may also be included, in which prompt information is displayed for the user to view merchandise information associated with the membership or course merchandise, such as "old VIP" and "264 class hours in total".
FIG. 12B is another payment fusion GUI of the present embodiment. To refine the user range vertically, the rating columns displayed in the navigation area of the payment fusion page 1820 partially or entirely include at least two levels of columns, and the payment detail information associated with a second-level column below a first-level column is displayed based on the user's selection of the first-level column. As shown in fig. 12B, when the focus is positioned on the "high school" first-level column 1821, the "grade one of high school", "grade two of high school", and "grade three of high school" second-level columns are displayed on its right side, and the payment detail area 182 displays by default the payment detail information corresponding to the "grade one of high school" second-level column, including the payment tag area 1823. The plurality of payment tags in the payment tag area 1823 are available for payment for the virtual goods corresponding to the selected column, and each payment tag displays the original price and the discounted price, for example an original annual price of "725 yuan" and a discounted price of "490 yuan".
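The two-level navigation described above (first-level column reveals second-level columns, each with payment tags carrying an original and a discounted price) can be modelled with a small data structure. In the Java sketch below, the 725/490 figures echo the example in the description; everything else, including the class names, is a placeholder assumption.

```java
// Hedged sketch of the two-level columns and payment tags on the payment fusion page.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

record PaymentTag(String period, int originalPrice, int discountPrice) {}

final class PaymentFusionPage {
    // first-level column -> second-level columns -> payment tags
    private final Map<String, Map<String, List<PaymentTag>>> columns = new LinkedHashMap<>();

    void add(String firstLevel, String secondLevel, List<PaymentTag> tags) {
        columns.computeIfAbsent(firstLevel, k -> new LinkedHashMap<>()).put(secondLevel, tags);
    }

    // Moving focus onto a first-level column shows, by default, the payment detail
    // of its first second-level column, as described above.
    List<PaymentTag> focusFirstLevel(String firstLevel) {
        Map<String, List<PaymentTag>> second = columns.get(firstLevel);
        return second.values().iterator().next();
    }

    public static void main(String[] args) {
        PaymentFusionPage page = new PaymentFusionPage();
        page.add("high school", "grade 1",
                List.of(new PaymentTag("annual", 725, 490), new PaymentTag("quarterly", 210, 150)));
        page.add("high school", "grade 2",
                List.of(new PaymentTag("annual", 725, 490)));
        System.out.println(page.focusFirstLevel("high school"));
    }
}
```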
The prompt information 1833 listed in the payment tag area 1823 includes personalized recommendation contents suitable for the user range corresponding to the rating bar, such as "one VIP higher", "three jumps", "6 folds all over the world", and "one VIP higher than the last year is not selected to be upgraded, and the validity period is not changed", so as to improve the user experience.
In some embodiments, based on the selection input by the user, the focus is located to the carousel window 1634 in the priority recommendation area 163, and the input of the trigger instruction to the carousel window 1634 switches to the full-screen interface, as shown in fig. 13A, which is the full-screen GUI of the present embodiment.
A first interface 1911 for jumping to a payment page is displayed over the video played in the full-screen user interface 1910. The first interface 1911 maintains its display state during full-screen playback so that the user can select it at any time while watching. The first interface 1911 may display prompt information describing the instruction input manner and commodity information describing the first virtual commodity being searched, such as "press OK key" and "course".
Fig. 13B is a video detail GUI of the present embodiment. In response to a first instruction (e.g., a key instruction) input to the first interface 1911, the user proceeds to a video details page 1920 for viewing item detail information related to the first virtual item, the item detail information including text information for commenting on the virtual item and a second interface 1921 for guiding the user to search for a second virtual item associated with the first virtual item, and the second interface 1921 displays item information associated with the second virtual item, such as "buy a little VIP", as item information for seamlessly searching for a different virtual item.
The focus in the video details page 1920 is located on the carousel window by default. Based on a right-shift command input by the user, the focus first moves from the carousel window to the full-screen play control and then to the second interface 1921 on its right; a payment details GUI can be called by inputting an instruction to the second interface 1921, as shown in fig. 13C, which is the payment details GUI of the present embodiment.
In response to a second instruction (such as a voice or key instruction) input to the second interface 1921, a payment detail page 1930 corresponding to the second virtual good is displayed. At least two payment tags are displayed on the payment detail page 1930, along with prompt information associated with the second virtual good; the payment tag area and the prompt area are laid out in two rows, and different payment detail GUIs display at least partially different prompt information and/or payment tags. Price information is displayed on each payment tag to give the user a view of multiple dimensions, including the total price and the averaged price, for example a total annual price of "490" and its corresponding averaged price of "1.2". The second virtual commodity provides a payment service for the first virtual commodity: if an annual membership is opened by paying 490 yuan, the member can watch all "first grade of primary school" courses for one year, whereas courses for which no membership has been opened can only be watched for 5 minutes.
In the payment detail page 1930, the prompt messages are arranged in the row direction below the payment tags. The length L2 occupied by the plurality of payment tags arranged side by side is smaller than the page length L1 of the payment detail page 1930, and the payment detail page 1930 can display an expansion frame 1931 for the user to select in order to view the additional payment tags not shown on the detail page, or/and those additional payment tags can be hidden in a non-visible area of the payment detail page for a concise display. The discounted price and the original price in a payment tag are displayed in two rows, with the original price in the lower row marked with a strikethrough line, so that the payment information is displayed in an enhanced manner.
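The layout rule above, keeping the occupied length L2 of the visible payment tags below the page length L1 and offering an expansion frame for the remainder, reduces to simple arithmetic. The Java sketch below is illustrative only; the widths, gap, and names are assumptions, not values from the patent.

```java
// Illustrative layout arithmetic for deciding how many payment tags fit within L1
// and whether an expansion frame 1931 is needed for the hidden tags.
import java.util.List;

final class PaymentTagRow {
    // Returns the number of visible tags so that their occupied length L2 stays below L1.
    static int visibleCount(List<Integer> tagWidths, int gap, int pageLengthL1) {
        int occupiedL2 = 0;
        int count = 0;
        for (int width : tagWidths) {
            int next = occupiedL2 + (count == 0 ? width : gap + width);
            if (next >= pageLengthL1) break;     // keep L2 strictly smaller than L1
            occupiedL2 = next;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<Integer> widths = List.of(300, 300, 300, 300, 300);   // five tags, placeholder widths
        int visible = visibleCount(widths, 20, 1000);              // page length L1 = 1000
        boolean needExpansionFrame = visible < widths.size();
        System.out.println("visible tags: " + visible + ", expansion frame: " + needExpansionFrame);
    }
}
```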
In some examples, the item detail page is linked based on a user selection input corresponding to the item information in the additional recommendation field 164, such as inputting a voice command or moving the focus to the position associated with "celebrity class-high school (personal education version a)"; the course detail page is displayed after the corresponding recommendation position is triggered, as shown in fig. 14A, which is the course detail GUI of the present embodiment.
The course details page 2110 includes a first interface 2111 on which merchandise information associated with the second virtual merchandise is displayed and which provides the user with an entry for searching for merchandise associated with the second virtual merchandise. The page may also include other function controls arranged in line with the first interface 2111, such as a full-screen play control, a collection control, and a like control, whose front-to-back layout order may be set based on the importance of each function to the user's needs.
The course details page 2110 further includes merchandise information distributed around the display area of the first interface 2111 and associated with the first virtual merchandise, which may include text information describing the function of the first virtual merchandise, a video window for playing the first virtual merchandise, and/or merchandise information of other first virtual merchandise recommended based on the first virtual merchandise, such as a peripheral device (a handle or a keyboard) adapted to a game application, a casual video with similar content, or an application installation package with similar functions, so as to enrich the recommended content. The first interface 2111 and the other function controls are aligned in a single row, with the spacing between adjacent controls no greater than the minimum length occupied by any control in the row, so that the combined display provides the user with a compact and easy-to-select video detail page layout. For example, the controls are distributed in the front-to-back order of the full-screen play control, the "purchase second-highest VIP" control, the collection control, and the like control, with the spacing between controls smaller than the length occupied by the like control in the row direction.
The list column displayed in the course detail page 2110 comprises a title 2112 and a sub-recommendation area 2113 displayed under the title 2112. A number of recommendation positions are displayed in the visible area of the sub-recommendation area 2113, and further recommendation positions are partially or completely hidden in a non-visible area. In response to a movement instruction (such as a left-right gesture or voice) input by the user in the sub-recommendation area 2113, the hidden recommendation positions in the non-visible area and the recommendation positions displayed in the visible area are interchanged; for example, when the user moves the operation focus to the right in the sub-recommendation area 2113, recommendation position 2113a and the preceding recommendation positions move to the left by one recommendation-position length, so that recommendation position 2113a changes from a semi-hidden state to a visible state, thereby expanding the commodity information, making selection convenient for the user, and increasing user data for the content service provider. To better balance rich commodity information against ease of selection among the plurality of recommendation positions, the number of recommendation positions displayable in the sub-recommendation area 2113 is equal to the number of recommendation positions in the list column from which the recommended content originates, and lies between 5 and 20. The title bar 1642 may display identification information based on the user's personalized habit settings, such as "guess you like" or "interests and hobbies, quality of life", and the list column to which it belongs generates a recommendation list according to the user's selection habits and/or in combination with recommendation hot spots and places it in the sub-recommendation area 2113.
A corresponding item detail page is linked based on a first virtual item displayed in the user-selected additional recommendation area, such as peripheral item information adapted to a game. The item detail page is displayed with another GUI in which the content display area 160 is shifted or overlaid, where shifting includes a status change such as hiding, deleting, or moving position. The item detail page includes an item experience advertisement, a picture, or/and textual information describing the function associated with the first virtual item, and a first interface for guiding the user to an interface fusion page associated with a second virtual item (such as a VIP or SVIP). The interface fusion page comprises at least two second interfaces; the first interface and the second interfaces display at least partially associated text information, such as "second-high VIP" and "second-high-grade VIP", and may be arranged in different or identical interface forms. An interface can be a graphical interface configured in the user interface, such as a control button or/and a picture, or a program interface, such as a program called based on a voice command, in which the mapping relationship between the commodity name and the voice field is stored.
Fig. 14B is an interface fusion GUI of the present embodiment. The interface fusion page 2120 triggers a preset event, such as a single-click, double-click, or slide event, based on an instruction (e.g., a single click or voice) input by the user to the first interface 2111 in the course details page 2110. In response to the preset event, three second interfaces 2121, 2122, and 2123 are displayed on another GUI, and the focus is placed on the second interface 2121 by default. In response to a right-shift selection input by the user, the focus is positioned on the second interface 2122, which is placed in a stretched state while the interfaces to its left and right remain in a tiled state. Through interface fusion, search links are provided for the different user ranges to which the same second virtual commodity applies, thereby realizing seamless connection of commodity information and improving the user experience.
The goods information displayed on the second interface 2122 may be set based on the user range of the group to which the second virtual goods applies. The text information at least includes "open", characterizing the search use of the interface, and "VIP" or/and "member", describing the virtual goods. The goods information displayed on the respective second interfaces differs, so as to switch to different payment detail pages; for example, the second interfaces 2121-2123 are respectively suited to users in the first, second, and third grades of high school, so corresponding user information is set to provide brief and comprehensive search information for the user, and the content displayed on the payment details page is as shown in figures 12A, 12B, and 13C, which is not described herein again.
The background of the interface fusion page 2120 may be a transparent, translucent, or opaque color, such as gray or blue-white, that differs from the rendering color of the second interfaces, so that the commodity information presented on the second interfaces is set off and easy to browse; for example, the interface fusion page 2120 and the second interface 2121 display dark gray and light gray, respectively, while the second interface 2122 where the focus is located displays white.
Fig. 14C is another interface fusion GUI according to the present embodiment. The interface fusion page 2130 is overlaid on the video detail page based on the user's selection of the first interface 2111; the rendering color (such as the RGB value) of the video detail page is higher than that of the interface fusion page 2130, and the transparency of the interface fusion page 2130 is lower than that of the video detail page, so that the recommended contents of the different user interfaces are displayed in a fused manner. The interface fusion page 2130 includes second interfaces respectively applicable to users in the first, second, and third grades of high school, and character information is displayed on each second interface describing the user range to which it applies, shown as "first-highest VIP", "second-highest VIP", and "third-highest VIP". To enhance the user experience and the VIP activation rate, areas of the interface fusion page 2130 other than the second interfaces display prompting information, in pictures or text, for prompting the user with the login account and the activation status of the VIP associated with the recommended content source. For example, "Dear user, hello!" listed above the display area of the second interfaces shows the login account, and "This course is included in the following VIPs; select the one you are interested in to activate it!" shows the activation status.
In order to reduce the interference of the page layout with the user's browsing, the second interfaces and/or the prompt messages are arranged in the row direction and distributed in the middle area of the interface fusion page, and the rendering colors of the interface fusion page and the video detail page are similar or identical; for example, the video detail page and the interface fusion page are white and dark blue, respectively, so as to recede the video detail page. The character information in the second interface may be rendered in a color that stands out from the interface fusion page; for example, the interface fusion page and the character information are dark blue and yellow, respectively, so that the character information is displayed in an enhanced manner.
In some embodiments, the payment details page further includes payment hint information for the user to view attribute content related to the recommended content source or/and associated with the user scope, the payment hint information including the user scope and course coverage, for example: the user range is 'first grade of primary school', the discount rate is 'time limit 8-fold', the course coverage range is 'first grade of primary school' and 'famous teacher explains' star course; attribute content associated with the user scope is highlighted, for example: displaying a large font, a bolded font, or a font displayed on the background picture.
Fig. 14D is another interface fusion page GUI of the present embodiment. For the purpose of displaying multiple interfaces in a fused manner, the interface fusion page 2140 includes a main interface area 2141, an additional interface area 2142, and a commodity recommendation area 2143. A plurality of other recommended interfaces different from the second interfaces are displayed in the main interface area 2141; a plurality of second interfaces are displayed in the row direction in the additional interface area 2142; and a commodity recommendation position associated with the first virtual commodity is displayed in the commodity recommendation area, on which commodity information, such as pictures or/and text, is displayed as a simple search path.
The main interface area 2141 may include a two-dimensional code that contains the link address (URL) of the payment details page and is used to transmit the payment link to another terminal (e.g., a mobile phone, a Pad, or the server of a wearable device), which displays the payment details page after the payment link is identified. The main interface area may also include a product name associated with the first virtual product (e.g., a preschool-to-primary bridging class) and/or additional interfaces, including an advertisement filtering interface, a picture quality interface, and a personalized content interface, followed by the prompt information corresponding to each interface, shown in turn as "ad-free privilege", "super high definition picture quality", and "member-exclusive activities". A trigger instruction to one of these interfaces may send a corresponding request, and the content returned in response to the request is displayed on another GUI, which is not repeated here.
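As a hedged sketch of the two-dimensional code described above, the payload can simply be the URL of the payment details page. The example below encodes it with the open-source ZXing library, which is an implementation choice of this sketch rather than anything the patent specifies; the URL and course identifier are made-up placeholders.

```java
// Hedged sketch: encode the payment details page link as a QR code (ZXing assumed).
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

final class PaymentQrCode {
    static BitMatrix encodePaymentLink(String baseUrl, String courseId) throws WriterException {
        String payload = baseUrl + "?course=" + courseId;   // link address of the payment details page
        return new QRCodeWriter().encode(payload, BarcodeFormat.QR_CODE, 33, 33);
    }

    public static void main(String[] args) throws WriterException {
        BitMatrix matrix = encodePaymentLink("https://example.com/pay", "course-123");
        // Render as ASCII; a phone scanning the real bitmap would open the payment page.
        for (int y = 0; y < matrix.getHeight(); y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < matrix.getWidth(); x++) {
                row.append(matrix.get(x, y) ? "##" : "  ");
            }
            System.out.println(row);
        }
    }
}
```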
The commodity recommendation area 2143 is updated based on a payment order request transmitted for the payment tag that the user selects on the payment details page; VIP opening information fed back in response to the request is received, and commodity information associated with the first virtual commodity is then displayed in the commodity recommendation area. To improve the user experience, before the membership is opened, the commodity information may be preset and corresponding prompt information displayed, such as "the following hot courses can be viewed after opening the membership".
The second interface whose display state stands out from that of the other second interfaces triggers a link address, based on an instruction input by the user on that second interface, for linking to the payment details page. In response to a return instruction, the display returns directly from the payment details page to the video detail page, so that the user can select the first interface on the video detail page and recall the interface fusion page.
To simplify the search path, the user may also back out of the payment details page to the interface fusion page and, by re-selecting a character tab, link to the payment details page again; different text or/and picture formats are displayed on the payment details page to highlight the payment detail information, which is not described herein again.
In some examples, the status display area 161 is horizontally disposed within the content display area 160, at its top, and is used for displaying the current operation state and information prompts.
In order to improve the user experience, the status display area 161, the priority recommendation area 163, and the additional recommendation area 164 are integrally designed. I.e., an immersive state display area is provided to keep the overall interface style of the content display area 160 consistent.
In some examples, the status display area may or may not be displayed. Generally it is preferable to display it, so that the operating state can be provided.
When the focus is not in the area of the status display area 161, each of the tabs of the status display area 161 displays only the corresponding icon, and the spacing distance between adjacent icons is minimized. When the tab of the status display area 161 takes focus, the selected tab shows an icon and text.
The member center tab 1611 is used to obtain the login status of the user. After the user acquires the focus, if the user is in the logged-in state, displaying the head portrait of the user, the user name and the residual time of the member; and if the user is in the non-login state, displaying a login tab and a registration tab for the user to login or register the account.
The network connection tab 1612 is used to display a network connection status. When the focus is not acquired, only icons representing connection or non-connection are displayed; after the user acquires the focus, if the user is in a connected state, displaying an icon representing the connection and a network name; if the connection is not established, an icon indicating the connection is not established and a word "unregistered" is displayed.
The signal source tab 1613 is used to display the signal source name.
The time tab 1614 displays the local time, which, if in China, is Beijing time. When the focus is not acquired, only the specific time is displayed; after the user acquires the focus, the specific time and the date are displayed. When the user selects the tab, the system time-setting page can be entered, where the time can also be set to the 24-hour or 12-hour format.
Weather tab 1615 is used to display the selected city weather. When the focus is not acquired, a weather icon is displayed together with the lowest air temperature and the highest air temperature of the day. After the focus is obtained, specific weather information and/or air quality information, such as an icon representing cloudiness, a word of "cloudy sunny", a minimum air temperature and a maximum air temperature of the day, and the content of PM2.5 can be displayed.
City tab 1616 is used to display the region. After the focus is obtained, provinces and cities are displayed.
When the focus is located in the state display area 161, the focus may be moved in the horizontal direction for multiple times, and the user may enter the corresponding content page or setting page by selecting any icon in the state display area 161, which is not described herein. The icons in the status display area 161 are not limited to the icon styles given in the above embodiments, and the arrangement sequence between the tabs located on the left side in the status display area 161 in the above embodiments may be set optionally according to the preference of the user, and in this embodiment, only one of the sequences is shown, which should not be taken as a limitation to the present application, and the protection scope of the present application also includes the cases where the icon styles are different and the ordering manner is different.
The display contents of the priority recommendation region 163 and the additional recommendation region 164 are determined by the position of the focus in the main navigation region 170. By default, when the user enters the homepage of the aggregation application, the focus is on the "pick" tab of the main navigation area 170. The user may bring the focus into the content display area 160 by clicking a remote control, a touch screen, a gesture, or the like.
In some embodiments, the focus is moved to the item recommendation position 1641 in the additional recommendation region based on a selection command input by the user in the content display region 160, the first interface is displayed in the item recommendation region 1641a in response to a trigger instruction input to the item recommendation position 1641a, the video detail page is called in response to a trigger instruction input to the first interface, at least two second search interfaces are displayed on the video detail page, and the payment detail page is linked to as a simple search path in response to a selection input by the user to the second interface.
As shown in fig. 15, when the user moves the focus once to the right by clicking a remote controller, a touch screen, or a gesture, the text of the main navigation area is hidden, and only the corresponding icon is displayed, indicating that the "picked" icon is in a shadow state. Meanwhile, the focus of the content display area 160 falls on the learning record option bar 1631 of the priority recommendation area 163 by default.
When the main navigation area 170 hides the text, its width decreases and the content display area 160 as a whole is translated to the left.
In some examples, the user may move the focus up into the status display area, to its first tab (the member center tab 1611), or down to the "open VIP" tab 1632 or to the carousel area.
The user may move the focus to view the content displayed at the different recommendation positions in the priority recommendation region 163. When the focus is moved horizontally to the right among the recommendation positions in the main recommendation area 1636, the positions of the learning record option column 1631, the open member option column 1632, and the mobile APP download option column 1633 remain unchanged on the display screen, while the carousel window 1634, the carousel thumbnail window 1635, and the main recommendation area 1636 shift on the display screen: new recommendation positions appear on the right as the left-most displayed portion disappears.
The recommendation positions in the additional recommendation area 164 may be displayed in a double-row or single-row layout. In the double-row layout, when the focus moves left or right within one row, the other row moves left or right at the same time.
In this example, when the user moves the focus horizontally to the last recommended bit of the main recommendation area to the right, the focus cannot move to the right any more, and if the user still triggers an instruction to move the focus to the right, a prompt effect of moving the recommended bit of the main recommendation area to the end occurs. Such as: the plurality of recommendation positions displayed in the main recommendation area move rightwards for a very short distance and quickly return to the original position so as to prompt the user to reach the last recommendation position.
The user moves the focus downward into the additional recommendation field 164, each recommendation bit being arranged in a horizontal direction. When the number of the recommendation bits 1642a displayed in a first row in the sub-recommendation bit area 1642 is less than 9, displaying in a single line; and when the number of the display units is more than 9, the display units are displayed in double rows, and the number of the display units is not more than 16 at most. Each recommendation slot 1642a may display a thumbnail or poster of the recommended course.
In some instances, when focus is not on the determined recommendation bits 1642a, the literal name of the recommended course is displayed below the range of each recommendation bit 1642 a.
When the focus stays at the determined recommendation position 1641a, detailed information of the content of the recommendation position, such as the duration of a course, a teacher for giving a guidance, a subject of the course and the like, can be checked. The position of the title area 1642 remains unchanged on the display screen while the focus is moved in the horizontal direction to view the recommendation bit 1641 a. The dynamic effect of the focus moving horizontally between each or two rows of recommendation bits 1641a in the additional recommendation area 164 is the same as the effect of the focus moving between the recommendation bits 1641a in the main recommendation area in the above embodiment, and is not described herein again.
When the focus moves only in the content display area 160, i.e., when the user is browsing the content display area 160, the focus can be moved in the horizontal direction multiple times. When the focus reaches the left-most column of icons in the content display area 160, a further move to the left brings it into the main navigation area 170; when the focus reaches the last icon in the content display area 160, it cannot continue to move to the right, and a dynamic cue, such as a bounce, indicates that the border has been reached.
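A minimal sketch, with assumed names, of the horizontal focus rules just described: moving left past the first column hands focus to the main navigation area, and moving right past the last icon keeps focus in place and plays a bounce cue.

```java
// Illustrative focus rules for horizontal movement in the content display area 160.
final class FocusController {
    private int column = 0;                 // focus column inside the content display area
    private final int lastColumn;
    private boolean inNavigationArea = false;

    FocusController(int lastColumn) { this.lastColumn = lastColumn; }

    void moveLeft() {
        if (column == 0) {
            inNavigationArea = true;        // focus enters the main navigation area 170
            System.out.println("focus -> main navigation area");
        } else {
            column--;
        }
    }

    void moveRight() {
        if (inNavigationArea) {
            inNavigationArea = false;       // focus returns to the content display area 160
            column = 0;
        } else if (column == lastColumn) {
            System.out.println("bounce cue: border reached");  // dynamic prompt, no movement
        } else {
            column++;
        }
    }

    public static void main(String[] args) {
        FocusController focus = new FocusController(2);
        focus.moveRight(); focus.moveRight(); focus.moveRight(); // third press hits the border
        focus.moveLeft(); focus.moveLeft(); focus.moveLeft();    // third press enters navigation
    }
}
```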
FIG. 16 is a GUI in which the user moves the focus downward multiple times in the additional recommendation field 164. The main navigation area 170 displays only icons, the additional recommendation area 164 moves upward as a whole to display multiple rows of recommendation positions in the content display area 160, and the status display area 161 and the priority recommendation area 163 are hidden as they are pushed to the topmost end of the display screen. When the focus moves to the recommendation position located in the fourth column of the first row under "guess you like" and the remote control inputs an instruction to trigger that recommendation position, the recommendation position can be stretched outward to a preset size, a first interface is displayed within it, and the interface fusion page can be called by inputting a trigger instruction to the first interface.
As described above, when the user enters the homepage of the aggregated learning application, the focus is located on the "pick" option column of the main navigation area 170; the specific implementation may be combined with the description of the other embodiments and is not repeated here. However, it should be understood that the present application may be embodied in many ways beyond the specific details set forth in the present application.
As shown in FIG. 17A, the present embodiment provides an example of a method 2100 for implementing application display by moving the focus after entering a purchasing application (e.g., the learning application). Although FIG. 17A shows a general order of execution for method 2100, method 2100 may contain more or fewer steps, or the steps may be arranged in an order different from that shown in FIG. 17A. Method 2100 may be implemented as a set of computer-executable instructions executed by a computer system or processor and encoded or stored on a computer-readable medium, or embodied as circuitry in an application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA) capable of performing the method. In the following, method 2100 is explained in connection with the systems, components, modules, data structures, user interfaces, and the like described with reference to FIGS. 1-16.
FIG. 17A conceptually illustrates a method 2100 of some embodiments in which a user moves the focus to the content display area. As shown, method 2100 begins by receiving (at 2110) an instruction that triggers entry into the learning application interface (home page). The instruction may be received through a remote-control button, a gesture, a set of user interactions touching the screen, or some variation thereof.
The process then displays (at 2120) a user interface on the GUI that includes a main navigation area and a content display area, with the focus defaulting to the "pick" option bar. When the user chooses to move the focus to the right (at 2130), the focus moves from the "pick" option bar into the content display area: it may first be positioned on the "member center" of the status bar, or on the recommendation position in the first row and first column of the priority recommendation area, such as the "learning record" option bar. The content display area shifts to the left as a whole; with the focus located above the priority recommendation area, when the user chooses to move the focus upward, the "learning record" option bar is switched to the "member center" on the GUI, as shown in FIG. 15.
The process then detects (at 2140) a focus-movement command entered on the GUI; in response to the command to switch the focus downward, the device moves the focus from "learning record" to a commodity recommendation position in the content recommendation area, on which commodity information associated with a first virtual commodity is displayed. For example: when the focus is positioned on the carousel window, the trial video is loaded and played in the carousel window; in response to an input instruction, the commodity recommendation position may trigger a request package carrying a download link and may draw another GUI. The recommendation positions and the carousel window are presented in the preferred recommendation area and in the additional recommendation area below it, so that the user can browse the recommended commodity information in a sliding or waterfall-flow manner in both the horizontal and vertical directions.
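Read as a sequence, steps 2110-2140 describe a deterministic flow from application entry to a focused commodity recommendation position. The sketch below models that flow with plain Kotlin types; every name is an assumption chosen for illustration, not the claimed implementation.

```kotlin
// Illustrative model of the method 2100 flow; all names are assumptions.
enum class FocusTarget { PICK_OPTION, MEMBER_CENTER, LEARNING_RECORD, COMMODITY_RECOMMENDATION }

data class HomePageState(val focus: FocusTarget)

fun enterApplication(): HomePageState =
    HomePageState(FocusTarget.PICK_OPTION)                   // 2110/2120: home page shown, default focus

fun onMoveRight(state: HomePageState): HomePageState =
    state.copy(focus = FocusTarget.LEARNING_RECORD)          // 2130: focus enters the priority recommendation area

fun onMoveDown(state: HomePageState): HomePageState =
    state.copy(focus = FocusTarget.COMMODITY_RECOMMENDATION) // 2140: focus lands on a commodity recommendation position

fun main() {
    println(onMoveDown(onMoveRight(enterApplication()))) // HomePageState(focus=COMMODITY_RECOMMENDATION)
}
```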
As shown in FIG. 17B, method 2200 detects a user-input selection command to move the focus in the content display area and positions the focus on the carousel window in the preferred recommendation area (2210). In response to a trigger command input to the carousel window (2220), the carousel window is stretched to overlay the home page GUI and the video is played full screen, with the first interface continuing to be displayed during full-screen play; when an input return command is received, the carousel window zooms out, reverts to its default position, and cancels the first interface. Alternatively, in response to a trigger instruction input to the first interface, the device links to another GUI that replaces the display state of the full-screen play window, with reference to FIGS. 11B and 13A.
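Method 2200 can be pictured as toggling the carousel window between an inline state and a full-screen state while the first interface stays attached until the return command. This is a minimal sketch under assumed names, not the patented implementation.

```kotlin
// Minimal sketch of the carousel full-screen toggle in method 2200; names are assumptions.
data class CarouselState(
    val fullScreen: Boolean,
    val firstInterfaceVisible: Boolean // the first interface remains visible during full-screen play
)

fun onTrigger(state: CarouselState): CarouselState =
    state.copy(fullScreen = true, firstInterfaceVisible = true)      // 2220: stretch to cover the home page GUI

fun onReturn(state: CarouselState): CarouselState =
    CarouselState(fullScreen = false, firstInterfaceVisible = false) // zoom back to the default position, cancel the first interface

fun main() {
    var state = CarouselState(fullScreen = false, firstInterfaceVisible = true)
    state = onTrigger(state); println(state) // CarouselState(fullScreen=true, firstInterfaceVisible=true)
    state = onReturn(state);  println(state) // CarouselState(fullScreen=false, firstInterfaceVisible=false)
}
```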
In response to a trigger instruction (2230) input to the first interface during full-screen play, the device links to a video detail GUI, which is displayed while the full-screen GUI is hidden. A second search control is rendered on the video detail page; it displays commodity information associated with a second virtual commodity, including the name of the second virtual commodity and role information describing the user scope to which the second virtual commodity applies, with reference to FIG. 13B. The second virtual commodity may be used to activate a service for the video, for example: after the user pays virtual currency to the service provider, a membership is activated and the corresponding videos have no viewing time limit, i.e., the paid viewing service is activated; conversely, when no membership is activated, viewing is time-limited, i.e., the trial viewing service applies.
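The viewing-service rule in this paragraph is a simple conditional: with a membership activated the paid viewing service applies and there is no viewing time limit, otherwise the trial viewing service with a time limit applies. The sketch below only illustrates that rule; the names and the trial duration are assumptions.

```kotlin
// Illustrative viewing-service rule tied to membership state; names and trial duration are assumptions.
data class ViewingService(val name: String, val timeLimitSeconds: Int?)

fun serviceFor(memberActivated: Boolean): ViewingService =
    if (memberActivated) ViewingService("paid viewing", timeLimitSeconds = null) // no viewing time limit
    else ViewingService("trial viewing", timeLimitSeconds = 300)                 // hypothetical 5-minute trial

fun main() {
    println(serviceFor(memberActivated = true))  // ViewingService(name=paid viewing, timeLimitSeconds=null)
    println(serviceFor(memberActivated = false)) // ViewingService(name=trial viewing, timeLimitSeconds=300)
}
```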
In response to a selection command entered in the video detail GUI, the apparatus positions the focus on the second search control; upon detecting a trigger instruction entered on the second interface (2240), it transitions to a payment detail GUI and renders a plurality of payment tags in a row on that GUI. The payment tags implicitly or explicitly include commodity information (e.g., VIP or membership) associated with the second virtual commodity, so that the user can view the payment price of the second virtual commodity; payment may be completed based on short-range or long-range communication with another device.
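The payment detail GUI is essentially a row of payment tags, each carrying a price and the membership information it applies to. The data model below is only a sketch; the field names and the sample tags are assumptions, not data from the disclosure.

```kotlin
// Illustrative data model for the payment tags rendered in a row on the payment detail GUI; names are assumptions.
data class PaymentTag(
    val title: String,          // e.g. "VIP" or a membership tier
    val originalPrice: Double,  // original payment price
    val discountPrice: Double?, // discounted price, if any
    val userScope: String       // the user range to which the second virtual commodity applies
)

fun renderPaymentRow(tags: List<PaymentTag>): String =
    tags.joinToString(" | ") { tag ->
        val price = tag.discountPrice ?: tag.originalPrice
        "${tag.title} (${tag.userScope}): $price"
    }

fun main() {
    val row = listOf(
        PaymentTag("VIP", originalPrice = 30.0, discountPrice = 19.9, userScope = "Grade 5"),
        PaymentTag("Member", originalPrice = 25.0, discountPrice = null, userScope = "Preschool")
    )
    println(renderPaymentRow(row)) // VIP (Grade 5): 19.9 | Member (Preschool): 25.0
}
```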
As shown in FIG. 17C, the member fusion interface is located in the column to the left of the carousel window. Method 2300 detects a selection command input in the preferred recommendation area and moves the focus from the carousel window to the member fusion interface (2310); in response to a trigger command input to the member fusion interface, the device links from the home page GUI to the payment fusion GUI (2320). The payment fusion GUI includes a navigation area and a payment detail area; the navigation area displays a plurality of options, and the payment detail information displayed in the payment detail area differs for some or all of the options, as shown in FIGS. 12A and 12B.
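The payment fusion GUI pairs a navigation area of options with a payment detail area whose contents change as the focused option changes. The mapping can be sketched as below; the option names, descriptions, and prices are placeholders assumed for illustration only.

```kotlin
// Illustrative mapping from navigation options to payment detail content; all values are placeholders.
data class PaymentDetail(val description: String, val price: Double)

val paymentDetailsByOption: Map<String, PaymentDetail> = mapOf(
    "Interest"       to PaymentDetail("Interest-class VIP, monthly", 19.9),
    "Primary school" to PaymentDetail("Primary-school VIP, monthly", 29.9),
    "Preschool"      to PaymentDetail("Preschool VIP, monthly", 15.9)
)

// Switching focus among the navigation options refreshes the payment detail area.
fun onNavigationFocus(option: String): PaymentDetail? = paymentDetailsByOption[option]

fun main() {
    println(onNavigationFocus("Primary school")) // PaymentDetail(description=Primary-school VIP, monthly, price=29.9)
}
```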
As shown in FIG. 17D, method 2400 begins by receiving an instruction that triggers entry into the learning application interface (home page). The instruction may be received through a remote-control button, a gesture, a set of user interactions touching the screen, or some variation thereof.
A user interface including a main navigation area and a content display area is displayed on the home page GUI, with the default focus on the "pick" option bar. In response to a command to switch the focus to the right, the device places the focus on the "member center" recommendation position; then, in response to further commands to switch the focus downward (2410), the device moves the focus to the additional recommendation area, whose recommendation positions are displayed sliding upward in a waterfall manner. The focus is positioned on a commodity recommendation position associated with a first virtual commodity in the "high school advising" recommendation area; commodity information for "war preparedness 2018" is displayed in the recommendation position, and brief description information reading "famous interpretation of 2017 high-level reference texts" is displayed below the corresponding display area of the recommendation position (not shown in the figure).
In response to a trigger instruction (2420) input to the commodity recommendation position, the device links to a newly created GUI on which commodity information related to the first virtual commodity is displayed based on feedback from a remote request, and a first search control is drawn; commodity information related to a second virtual commodity is rendered on the first search control and is used to guide the user to search for payment information of the second virtual commodity.
In response to a trigger instruction (2430) input to the first interface, the device links to the interface fusion GUI, which is displayed over the retreating commodity detail GUI and includes at least two second interfaces, with commodity information associated with the second virtual commodity displayed on some or all of them. In response to a selection command (2440) input to any one of the second interfaces, the device links to the corresponding payment detail page, which is overlaid on the interface fusion page, as shown in FIG. 14C.
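Because each page in this flow is overlaid on the previous one rather than replacing it, the navigation in method 2400 behaves like a stack of GUI layers. The sketch below illustrates that idea with assumed names; it is not the claimed implementation.

```kotlin
// Illustrative GUI layer stack for the overlay-style navigation in method 2400; names are assumptions.
class GuiStack {
    private val layers = ArrayDeque<String>()

    fun open(page: String) { layers.addLast(page) }  // overlay a new page on top of the current one
    fun back(): String? { layers.removeLastOrNull(); return layers.lastOrNull() }
    fun top(): String? = layers.lastOrNull()
}

fun main() {
    val stack = GuiStack()
    stack.open("commodity detail GUI")   // 2420: linked from the commodity recommendation position
    stack.open("interface fusion GUI")   // 2430: triggered from the first interface
    stack.open("payment detail page")    // 2440: triggered from a second interface
    println(stack.top())  // payment detail page, overlaid on the interface fusion page
    println(stack.back()) // returning reveals the interface fusion GUI underneath
}
```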
In some embodiments, for vertical payment, the interface fusion GUI includes a plurality of second interfaces respectively applicable to different user scopes. Different role information is displayed in the second interfaces so that the user can view the different user scopes to which the same first virtual commodity applies, and the second interfaces can be presented as controls or as two-dimensional codes, for example VIPs for "interest", "fifth grade of primary school", "preschool ages 0-2", and the like. Other interfaces with different functions, such as an advertisement filtering interface and a picture quality interface, may also be arranged alongside the second interfaces to merge the commodity information associated with the second virtual commodity, as shown in FIG. 14D.
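The "vertical payment" arrangement groups several second interfaces, each bound to a different user scope of the same first virtual commodity, alongside other functional interfaces such as ad filtering or picture quality. The sketch below is a hypothetical data model for that arrangement; every label is an assumption.

```kotlin
// Hypothetical model of the interface fusion GUI contents for vertical payment; all labels are assumptions.
sealed class FusionEntry {
    data class SecondInterface(val label: String, val userScope: String, val asQrCode: Boolean = false) : FusionEntry()
    data class FunctionInterface(val label: String) : FusionEntry() // e.g. advertisement filtering, picture quality
}

val fusionEntries: List<FusionEntry> = listOf(
    FusionEntry.SecondInterface("Interest VIP", userScope = "Interest classes"),
    FusionEntry.SecondInterface("Grade 5 VIP", userScope = "Primary school, grade 5"),
    FusionEntry.SecondInterface("Preschool VIP", userScope = "Ages 0-2", asQrCode = true),
    FusionEntry.FunctionInterface("Advertisement filtering"),
    FusionEntry.FunctionInterface("Picture quality")
)

fun main() {
    fusionEntries.forEach { println(it) }
}
```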
As shown in FIG. 18A, in method 3100 a payment detail GUI is displayed full screen on the display. The device detects an input return instruction (3110) and, in response, invokes the commodity detail GUI corresponding to the payment detail GUI in the search link (3120), with the focus defaulting to the carousel window on the commodity detail GUI. The device then detects a user-input right-shift instruction and positions the focus on the full-screen play control; upon a further right-shift instruction, the focus is positioned on the first interface. In response to a trigger instruction input to the focused first interface, the device re-invokes the interface fusion page (3130), which is overlaid on the commodity detail GUI; the rendering color and transparency of the commodity detail page are higher than those of the interface fusion page, so that the commodity information is displayed in a fused manner.
In some embodiments, as shown in FIG. 18B, to shorten the search path, method 3200 detects a return instruction entered on the payment detail GUI (3210) and, in response, links to the interface fusion page (3220), on which a plurality of second interfaces are distributed in the row direction and the focus defaults to the second interface in the first column. The device then detects a panning selection entered by the user and repositions the focus on another second interface (3320); in response to a trigger instruction entered on the focused second interface (3330), the device links to the payment detail GUI associated with the commodity information displayed on that second interface, as shown in FIG. 18C.
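Method 3200 shortens the search path by having the return instruction land back on the interface fusion page with the focus on the first second interface, from which one sideways move and one trigger reach a different payment detail GUI. A minimal sketch of that round trip follows; all names are assumptions.

```kotlin
// Minimal sketch of the shortened search path in method 3200; all names are assumptions.
data class FusionPage(val secondInterfaces: List<String>, val focusIndex: Int = 0)

fun onReturnFromPayment(secondInterfaces: List<String>): FusionPage =
    FusionPage(secondInterfaces)                // 3220: focus defaults to the first column

fun panRight(page: FusionPage): FusionPage =
    page.copy(focusIndex = minOf(page.focusIndex + 1, page.secondInterfaces.lastIndex)) // 3320: move focus sideways

fun trigger(page: FusionPage): String =
    "payment detail GUI for ${page.secondInterfaces[page.focusIndex]}"                  // 3330: link to the associated page

fun main() {
    var page = onReturnFromPayment(listOf("Grade 5 VIP", "Interest VIP", "Preschool VIP"))
    page = panRight(page)
    println(trigger(page)) // payment detail GUI for Interest VIP
}
```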
The exemplary systems and methods of the present application have been described in relation to an entertainment system. However, to avoid unnecessarily obscuring the present application, the foregoing description omits some known structures and devices. Such omissions are not to be construed as limiting the scope of the claims. Specific details are provided herein to facilitate an understanding of the present application. However, it should be understood that the present application may be embodied in many ways beyond the specific details detailed in the present application.
Moreover, while the exemplary aspects, examples, and/or configurations illustrated herein show the various components of the system collocated, certain components of the system may be located remotely, at distant portions of a distributed network such as a LAN and/or the Internet, or within a dedicated system. It should therefore be appreciated that the components of the system can be combined into one or more devices, such as a set-top box or television set, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the various components can be located in a switch (e.g., a PBX and media server, a gateway), in one or more communication devices, at one or more user premises, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a telecommunications device and an associated computing device.
Further, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later-developed element capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links can be any suitable carrier for electrical signals, including coaxial cables, copper wire, and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications.
Further, while some flow diagrams have been discussed and illustrated in relation to a particular sequence of events, it should be understood that changes, additions, and omissions to this sequence may occur without materially affecting the operation of the disclosed examples, configurations, and aspects.
A number of variations and modifications of the present application may be employed. It is possible to provide only some of the features of the present application without providing the remaining features.
In another example, the systems and methods of the present application may be implemented in conjunction with a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete-element circuit, a programmable logic device or gate array (e.g., PLD, PLA, FPGA, PAL), any comparable means, or the like. In general, any device or means capable of carrying out the methods described herein can be used to implement the various aspects of the present disclosure. Exemplary hardware that can be used for the disclosed examples, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet-enabled, digital, analog, hybrid, and others), and other hardware known in the art. Some of these devices include a processor (e.g., one or more microprocessors), memory, non-volatile storage, input devices, and output devices. Furthermore, the methods described herein may also be implemented using other software-implemented processes, including but not limited to distributed processing or component/object distributed processing, parallel processing, or virtual machine processing.
In another example, the disclosed methods may also be readily implemented in connection with software using object or object-oriented software development environments, as these environments may provide convenient source code that may be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI devices. Whether software or hardware is used in implementing a system according to the present application depends on the speed and/or efficiency requirements of the system, the particular function and particular software or hardware system, or microprocessor or microcomputer system being used.
In another example, the disclosed methods may be implemented in part in software that can be stored on a storage medium and executed on a programmed general-purpose computer equipped with a controller and memory, a special-purpose computer, a microprocessor, or the like. In these examples, the systems and methods herein can be implemented as a program embedded on a personal computer (e.g., an applet or a CGI script), as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system or system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Although the present application describes components and functions implemented in certain aspects, examples, and/or configurations with reference to particular standards and protocols, the aspects, examples, and/or configurations are not limited to such standards and protocols. Other similar standards and protocols not mentioned herein exist and are considered to be included in this application. Moreover, the standards and protocols mentioned herein, and other similar standards and protocols not mentioned herein, are periodically superseded by faster and more efficient equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in this application.
The subject application, in various aspects, examples, and/or configurations, includes the components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, examples, configurations, subcombinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, examples, and/or configurations after understanding the present application. The subject application, in various aspects, examples, and/or configurations, also includes providing devices and processes in the absence of items not depicted and/or described herein, or in various aspects, examples, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease of implementation, and/or reducing cost of implementation.
The foregoing discussion is presented for purposes of illustration and description and is not intended to limit the application to the form or forms disclosed herein. In the foregoing detailed description, for example, various features of the application are grouped together in one or more aspects, examples, and/or configurations for the purpose of streamlining the disclosure. Features of the aspects, examples, and/or configurations of the application may be combined in alternate aspects, examples, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, example, and/or configuration. Thus, the following claims are hereby incorporated into this detailed description, with each claim standing on its own as a separate preferred example of the application.
Moreover, although the foregoing description has included description of one or more aspects, examples, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, as determined by the skilled artisan after understanding the disclosure. Applicants' intent is to obtain rights to include alternative aspects, examples, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those included in the claims, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed in the application, and without intending to publicly dedicate any patentable subject matter.

Claims (8)

1. A method for searching virtual commodity information in a user interface based on a display device, the method comprising:
based on a selection input to a carousel window in a purchasing application, switching the carousel window to be displayed as a first user interface played in full screen, wherein the first user interface comprises a video played in full screen, a first interface over the full-screen video, and commodity information associated with a first virtual commodity, the first virtual commodity comprising a learning-class course;
in response to an instruction input to the first interface, displaying a second user interface, wherein the second user interface comprises a non-full-screen video window for playing the first virtual commodity, a full-screen play option for switching to full-screen play, and a second interface associated with a second virtual commodity, the second virtual commodity comprising a membership for purchasing a service associated with the learning-class course and providing a payment service for the first virtual commodity; and
in response to an instruction input to the second interface, displaying a third user interface that includes payment information associated with the second virtual commodity.
2. The method of claim 1, wherein a prompt and at least two payment labels associated with the payment information are displayed in the third user interface, some or all of the payment labels including an original payment price and a discount price, and the prompt including a personalized identifier indicating a user scope associated with the second virtual commodity.
3. The method of claim 1, wherein the switching the carousel window to be displayed as the first user interface played in full screen based on the selection input to the carousel window in the purchasing application comprises:
receiving a trigger instruction input to the purchasing application and, in response, displaying a commodity home page associated with the first virtual commodity, wherein the commodity home page comprises a main navigation area and a preferred recommendation area located to the right of the main navigation area, the preferred recommendation area comprising a member fusion interface for the user to browse membership content and a carousel window for playing a video related to the first virtual commodity;
receiving an input that switches the focus from the classification option bar in the main navigation area to the preferred recommendation area and, in response, positioning the focus in the carousel window; and
receiving an instruction input to the carousel window and, in response, switching the carousel window to the first user interface, in which the video is played in full screen.
4. The method of claim 3, further comprising:
receiving an input that switches the focus to the member fusion interface and, in response, positioning the focus on the member fusion interface; and
receiving an instruction input to the member fusion interface and, in response, displaying a payment fusion interface, wherein the payment fusion interface comprises a navigation area and a payment detail area, the navigation area comprises a plurality of grading columns, and classification information displayed on some or all of the grading columns indicates the user range to which the payment information displayed in the payment detail area is applicable.
5. The method of claim 3, wherein:
the first user interface plays the video in full screen and continuously displays the first interface during the playing time, and the second user interface displays the second interface together with other function interfaces distributed in a row direction with the second interface.
6. The method of claim 1, further comprising:
receiving a return instruction input to the third user interface and, in response, invoking the first user interface;
receiving an input that positions the focus in the first user interface and, in response, positioning the focus on the first interface; and
receiving an instruction re-input to the first interface and, in response, invoking the second user interface.
7. The method of claim 1, further comprising:
receiving a return instruction input to the third user interface and, in response, invoking the second user interface;
receiving an input of a selection in the second user interface and, in response, panning the focus to any one of the plurality of second interfaces; and
receiving an instruction input to the different second interface on which the focus is positioned and, in response, linking to a different third user interface.
8. An intelligent television, comprising:
a display screen configured to display content associated with a virtual good in a user interface;
a memory;
and a processor in communication with the memory and the display screen, the processor being configured to perform the method of any one of claims 1-7.
CN201810130360.1A 2018-02-08 2018-02-08 Smart television and method for searching virtual commodity information in user interface Active CN108429927B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110605678.2A CN113422999B (en) 2018-02-08 2018-02-08 Display method and display device
CN201810130360.1A CN108429927B (en) 2018-02-08 2018-02-08 Smart television and method for searching virtual commodity information in user interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810130360.1A CN108429927B (en) 2018-02-08 2018-02-08 Smart television and method for searching virtual commodity information in user interface

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110605678.2A Division CN113422999B (en) 2018-02-08 2018-02-08 Display method and display device

Publications (2)

Publication Number Publication Date
CN108429927A CN108429927A (en) 2018-08-21
CN108429927B true CN108429927B (en) 2021-06-04

Family

ID=63156626

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110605678.2A Active CN113422999B (en) 2018-02-08 2018-02-08 Display method and display device
CN201810130360.1A Active CN108429927B (en) 2018-02-08 2018-02-08 Smart television and method for searching virtual commodity information in user interface

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110605678.2A Active CN113422999B (en) 2018-02-08 2018-02-08 Display method and display device

Country Status (1)

Country Link
CN (2) CN113422999B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669165A (en) * 2019-09-27 2021-04-16 徐蔚 Unified access method applying digital personal code chain
CN110675875B (en) * 2019-09-30 2022-02-18 思必驰科技股份有限公司 Intelligent voice conversation technology telephone experience method and device
CN111510753B (en) * 2019-11-04 2022-10-21 海信视像科技股份有限公司 Display device
CN111815419B (en) * 2020-07-17 2023-09-15 网易(杭州)网络有限公司 Recommendation method and device for virtual commodity in game and electronic equipment
CN112148941B (en) * 2020-09-24 2023-07-25 网易(杭州)网络有限公司 Information prompting method, device and terminal equipment
WO2022083554A1 (en) * 2020-10-19 2022-04-28 聚好看科技股份有限公司 User interface layout and interaction method, and three-dimensional display device
CN113347482B (en) * 2021-06-18 2023-10-27 聚好看科技股份有限公司 Method for playing data and display device
CN114168045A (en) * 2021-06-24 2022-03-11 武汉理工数字传播工程有限公司 Dictation learning method, electronic equipment and storage medium
CN113703646A (en) * 2021-06-24 2021-11-26 武汉理工数字传播工程有限公司 Method for learning image and text, electronic equipment and storage medium
CN114666642A (en) * 2022-02-22 2022-06-24 海信视像科技股份有限公司 Display device, split screen control method, and storage medium
CN115202530B (en) * 2022-05-26 2024-04-09 当趣网络科技(杭州)有限公司 Gesture interaction method and system of user interface

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028121A (en) * 2016-06-27 2016-10-12 乐视控股(北京)有限公司 Resource integration method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980275A (en) * 2010-11-01 2011-02-23 深圳市同洲电子股份有限公司 System, digital television terminal, device and method for realizing commodity order
CN105898589A (en) * 2015-12-09 2016-08-24 乐视网信息技术(北京)股份有限公司 Payment method and device for video play and TV device
US20170171628A1 (en) * 2015-12-15 2017-06-15 Le Holdings (Beijing) Co., Ltd. Method and electronic device for quickly playing video
CN105893023A (en) * 2015-12-31 2016-08-24 乐视网信息技术(北京)股份有限公司 Data interaction method, data interaction device and intelligent terminal
CN105894352A (en) * 2016-03-30 2016-08-24 乐视控股(北京)有限公司 Method and apparatus for on-line purchasing membership service
CN106941624B (en) * 2017-04-28 2019-12-27 北京小米移动软件有限公司 Processing method and device for network video trial viewing
CN107622419A (en) * 2017-09-26 2018-01-23 安徽特旺网络科技有限公司 A kind of B2C on-line shopping systems

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028121A (en) * 2016-06-27 2016-10-12 乐视控股(北京)有限公司 Resource integration method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Big-screen education has arrived: Hisense deeply deploys online education; Xin Yan; Liumeitiwang, https://lmtw.com/mzw/content/detail/id/129876/keyword_id/-1; 2016-04-21; pages 5-6 *
This summer, beyond Journey to the West and My Fair Princess, you have a new choice; Hisense Refrigerators and Freezers; Sohu, https://www.sohu.com/a/160850753_751581; 2017-07-28; page 3 *

Also Published As

Publication number Publication date
CN113422999B (en) 2022-11-15
CN113422999A (en) 2021-09-21
CN108429927A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN108429927B (en) Smart television and method for searching virtual commodity information in user interface
US11449145B2 (en) Systems and methods for providing social media with an intelligent television
CN108055589B (en) Intelligent television
CN108055590B (en) Method for displaying graphic user interface of television picture screenshot
US20140068689A1 (en) Systems and methods for providing social media with an intelligent television
CN108111898B (en) Display method of graphical user interface of television picture screenshot and smart television
CN108600817B (en) Smart television and method for facilitating browsing of application installation progress in display device
CN103748586A (en) Intelligent television
WO2014092814A1 (en) Silo manager
WO2014092815A1 (en) Location-based context for ui components
WO2014046817A2 (en) Application panel manager

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant