CN115550740A - Display device, server and language version switching method - Google Patents


Info

Publication number
CN115550740A
CN115550740A (Application CN202110729316.4A)
Authority
CN
China
Prior art keywords
media asset
control
data
asset
language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110729316.4A
Other languages
Chinese (zh)
Inventor
李园园
公荣伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd filed Critical Juhaokan Technology Co Ltd
Priority to CN202110729316.4A
Publication of CN115550740A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/232 Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/485 End-user interface for client configuration
    • H04N21/4856 End-user interface for client configuration for language selection, e.g. for the menu or subtitles
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8549 Creating video summaries, e.g. movie trailer

Abstract

The application provides a display device, a server, and a language version switching method. The display device comprises a display and a controller configured to: receive a trigger instruction of a media asset control in a media asset list page; in response to the instruction, acquire media asset detail data corresponding to the media asset control; and, if the media asset detail data comprises first media asset data corresponding to a first language version of the media asset control and second media asset data corresponding to a second language version, generate a second media asset detail page, wherein the second media asset detail page comprises a player control and a first list control and a second list control which are arranged in sequence, the first list control corresponding to the first media asset data and the second list control corresponding to the second media asset data. By displaying, on the media asset detail page, a plurality of list controls respectively corresponding to different language versions, a user can switch to the language version the user wants to watch on the media asset detail page, which improves the convenience of language version switching and the user experience of the display device.

Description

Display device, server and language version switching method
Technical Field
The present application relates to the field of display device technologies, and in particular, to a display device, a server, and a method for switching language versions.
Background
The smart television is an important display device for watching movies. When a user wants to watch a movie, the user can search for it through a search entry provided by the smart television. In order to meet personalized viewing requirements, many films have resources in multiple language versions, so when a user enters a search word in the search entry, the smart television may find more than one matching media asset. In the related art, one display logic is that, in the search result interface, each language version of a movie occupies its own display position, and the user can directly select a certain language version of a certain movie to watch; another display logic is that each movie occupies only one display position in the search result interface, and the user can call up a control menu during playback and switch the language version of the movie through that menu.
Disclosure of Invention
In order to solve the technical problem that language version switching experience is poor, the application provides a display device, a server and a language version switching method.
In a first aspect, the present application provides a display device comprising:
a display for presenting a user interface;
a controller connected with the display, the controller configured to:
receiving a triggering instruction of a media asset control in a media asset list page;
responding to the triggering instruction of the media asset control, and acquiring media asset detail data corresponding to the media asset control; if the media asset detail data comprises first media asset data corresponding to a first language version of the media asset control and does not comprise second media asset data corresponding to a second language version, generating a first media asset detail page, wherein the first media asset detail page comprises a player control and a first list control, the player control is used for playing audio and video data corresponding to the first media asset data, and the first list control is used for displaying a media asset title and a media asset picture in the first media asset data;
and if the media asset detail data comprises first media asset data corresponding to a first language version of the media asset control and second media asset data corresponding to a second language version of the media asset control, generating a second media asset detail page, wherein the second media asset detail page comprises a player control and a first list control and a second list control which are sequentially arranged, the first list control is used for displaying media asset titles and media asset pictures in the first media asset data, the second list control is used for displaying the media asset titles and the media asset pictures in the second media asset data, the player control is used for playing audio and video data corresponding to the first list control, and the second list control is configured to control display equipment to play audio and video data corresponding to the second list control in response to triggering.
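The following is an illustrative sketch, not part of the original disclosure, of how a controller might choose between the first and second media asset detail pages depending on how many language versions are present in the media asset detail data; all type, function, and field names (e.g., MediaAssetData, build_detail_page) are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MediaAssetData:
    """One language version of a media asset (hypothetical fields)."""
    title: str       # media asset title, e.g. "Movie A Mandarin edition"
    poster_url: str  # media asset picture
    language: str    # language version parameter
    play_url: str    # media asset playing address

def build_detail_page(detail_data: List[MediaAssetData]) -> dict:
    """Return a simple description of the media asset detail page to render."""
    # The player control is bound to the first (default) language version.
    player_control = {"play_url": detail_data[0].play_url}
    list_controls = [
        {"title": d.title, "poster": d.poster_url, "play_url": d.play_url}
        for d in detail_data
    ]
    # One language version -> first detail page (single list control);
    # two or more -> second detail page with list controls arranged in sequence.
    kind = "first_detail_page" if len(list_controls) == 1 else "second_detail_page"
    return {"kind": kind, "player_control": player_control, "list_controls": list_controls}
```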
In a second aspect, the present application provides a server configured to:
receiving a media resource detail request of a display device, wherein the media resource detail request comprises a media resource identifier;
responding to the request for the details of the media assets, acquiring the details of the media assets corresponding to the media asset identifier, and generating interface data of a media asset detail page according to the details of the media assets, wherein the details of the media assets are generated by a server according to the media assets corresponding to the media asset identifier, a plurality of language versions of the media asset data are contained in a media asset library, and each piece of media asset data is respectively provided with a media asset title, a media asset picture, a language version parameter and a media asset playing address;
and sending interface data of the media asset detail page to display equipment.
In a third aspect, the present application provides a method for switching language versions, including:
responding to a trigger instruction of the media asset control, and acquiring media asset detail data corresponding to the media asset control; if the media asset details comprise first media asset data corresponding to a first language version of the media asset control and do not comprise second media asset data corresponding to a second language version, generating a first media asset details page, wherein the first media asset details page comprises a player control and a first list control, the player control is used for playing audio and video data corresponding to the first media asset data, and the first list control is used for displaying a media asset title and a media asset picture in the first media asset data;
and if the media asset details comprise first media asset data corresponding to a first language version and second media asset data corresponding to a second language version of the media asset control, generating a second media asset detail page, wherein the second media asset detail page comprises a player control and a first list control and a second list control which are sequentially arranged, the first list control is used for displaying media asset titles and media asset pictures in the first media asset data, the second list control is used for displaying the media asset titles and the media asset pictures in the second media asset data, the player control is used for playing audio and video data corresponding to the first list control, and the second list control is configured to control display equipment to play audio and video data corresponding to the second list control in response to triggering.
The display device, the server and the language version switching method provided by the application have the beneficial effects that:
according to the method and the device, the multiple list controls of the film are displayed on the detailed media page, each list control corresponds to the media data of different language versions, and a user can switch the language version which the user wants to watch on the detailed media page, so that the multiple language versions of the film do not need to be displayed on the detailed media page such as a search interface through multiple display positions, and only one display position is needed for displaying the film; according to the method and the device, the language version of the film can be determined before the film is played in a full screen mode, the convenience of language version switching is improved, and the user experience of the display device is improved.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is apparent that, for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to some embodiments;
Fig. 2 is a block diagram of the hardware configuration of the control device 100 according to some embodiments;
Fig. 3 is a block diagram of the hardware configuration of the display device 200 according to some embodiments;
Fig. 4 is a schematic diagram of the software configuration in the display device 200 according to some embodiments;
Fig. 5 is an interface diagram of a video-on-demand program according to some embodiments;
Fig. 6 is a timing diagram of the processing of media asset data by a server according to some embodiments;
Fig. 7 is a timing diagram of the processing of media asset data by a server according to some embodiments;
Fig. 8 is a schematic diagram of a search result interface according to some embodiments;
Fig. 9 is a flow diagram of a method for generating a media asset detail page according to some embodiments;
Fig. 10 is a timing diagram of a user switching the language version of a film according to some embodiments;
Fig. 11 is a schematic diagram of a media asset detail page according to some embodiments;
Fig. 12 is a schematic diagram of language switching during full-screen play of a media asset according to some embodiments.
Detailed Description
To facilitate understanding of the technical solution of the present application, some concepts related to the present application are described below.
To make the purpose and embodiments of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described exemplary embodiments are only a part, not all, of the embodiments of the present application.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to all of the elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the functionality associated with that element.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, the user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device may use infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods, and the remote controller controls the display device 200 in a wireless or wired manner. The user may input user instructions through keys on the remote controller, voice input, control panel input, etc., to control the display apparatus 200.
In some embodiments, the smart device 300 (e.g., mobile terminal, tablet, computer, laptop, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device.
In some embodiments, the display device 200 may also be controlled in a manner other than through the control apparatus 100 and the smart device 300; for example, a voice command from the user may be received directly through a module for acquiring voice commands configured inside the display device 200, or through a voice control device provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be communicatively connected to the server 400 through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display apparatus 200. The server 400 may be a cluster or a plurality of clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction recognizable and responsive by the display device 200, serving as an interaction intermediary between the user and the display device 200.
Fig. 3 shows a hardware configuration block diagram of the display apparatus 200 according to an exemplary embodiment.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments the controller comprises a processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output by the controller and displaying video content, image content, a menu manipulation interface, and a user manipulation UI interface.
In some embodiments, the display 260 may be a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of a control signal and a data signal with the external control apparatus 100 or the server 400 through the communicator 220.
In some embodiments, the user interface may be configured to receive control signals from the control apparatus 100 (e.g., an infrared remote controller, etc.).
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception and demodulates audio/video signals and additional data, such as EPG data, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices, that is, the tuner demodulator 210 may also be located in an external device of the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other operable control. Operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon.
In some embodiments, the controller includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), a first interface to an nth interface for input/output, a communication bus (Bus), and the like.
The CPU processor is used for executing operating system and application program instructions stored in the memory and, according to various interactive instructions received from external input, executing various applications, data, and content, so as to finally display and play various audio and video content. The CPU processor may include a plurality of processors, for example, a main processor and one or more sub-processors.
In some embodiments, a graphics processor for generating various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The graphic processor comprises an arithmetic unit, which performs operation by receiving various interactive instructions input by a user and displays various objects according to display attributes; the system also comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, the video processor includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module is used for demultiplexing the input audio and video data stream. And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like. And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display. And the frame rate conversion module is used for converting the frame rate of the input video. And the display formatting module is used for converting the received video output signal after the frame rate conversion, and changing the signal to be in accordance with the signal of the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, and perform decompression and decoding, and processing such as denoising, digital-to-analog conversion, and amplification processing according to a standard codec protocol of the input signal, so as to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, the system of the display device may include a Kernel (Kernel), a command parser (shell), a file system, and an application. The kernel, shell, and file system together make up the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel starts, activates kernel space, abstracts hardware, initializes hardware parameters, etc., runs and maintains virtual memory, scheduler, signals and inter-process communication (IPC). And after the kernel is started, loading the Shell and the user application program. The application program is compiled into machine code after being started, and a process is formed.
Referring to fig. 4, in some embodiments, the system is divided into four layers, which are an Application (Applications) layer (abbreviated as "Application layer"), an Application Framework (Application Framework) layer (abbreviated as "Framework layer"), an Android runtime (Android runtime) and system library layer (abbreviated as "system runtime library layer"), and a kernel layer from top to bottom.
In some embodiments, at least one application program runs in the application program layer, and the application programs may be windows (windows) programs carried by an operating system, system setting programs, clock programs or the like; or an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an Application Programming Interface (API) and a programming framework for the application. The application framework layer includes a number of predefined functions. The application framework layer acts as a processing center that decides to let the applications in the application layer act. The application program can access the resources in the system and obtain the services of the system in execution through the API interface.
As shown in fig. 4, in the embodiment of the present application, the application framework layer includes a manager (Managers), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used for interacting with all activities running in the system; the Location Manager (Location Manager) is used for providing the system service or application with the access of the system Location service; a Package Manager (Package Manager) for retrieving various information related to an application Package currently installed on the device; a Notification Manager (Notification Manager) for controlling display and clearing of Notification messages; a Window Manager (Window Manager) is used to manage the icons, windows, toolbars, wallpapers, and desktop components on a user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the various applications as well as general navigational fallback functions, such as controlling exit, opening, fallback, etc. of the applications. The window manager is used for managing all window programs, such as obtaining the size of a display screen, judging whether a status bar exists, locking the screen, intercepting the screen, controlling the change of the display window (for example, reducing the display window, displaying a shake, displaying a distortion deformation, and the like), and the like.
In some embodiments, the system runtime library layer provides support for the upper layer, i.e., the framework layer; when the framework layer is used, the Android operating system runs the C/C++ libraries included in the system runtime library layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer includes at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, a sensor driver (such as a fingerprint sensor, temperature sensor, pressure sensor, etc.), a power driver, and the like.
The hardware or software architecture in some embodiments may be based on the description in the above embodiments, or on other hardware or software architectures similar to the above embodiments, as long as the technical solution of the present application can be implemented.
In some embodiments, the display device may directly enter a display interface of a signal source selected last time after being started, or a signal source selection interface, where the signal source may be a preset video-on-demand program, or may be at least one of an HDMI interface, a live tv interface, and the like, and after a user selects different signal sources, the display may display contents obtained from different signal sources.
In some embodiments, the display device may directly enter an interface of a preset video-on-demand program after being started, and the interface of the video-on-demand program may include at least a navigation bar 510 and a content display area located below the navigation bar 510, where content displayed in the content display area may change according to a change of a selected control in the navigation bar, as shown in fig. 5. The programs in the application program layer can be integrated in the video-on-demand program and displayed through one control of the navigation bar, and can also be further displayed after the application control in the navigation bar is selected.
In some embodiments, as shown in FIG. 5, the interface of the video-on-demand program may be provided with a search control 520, and upon entering the interface of the video-on-demand program, the focus of the display device may be positioned on the "recommendation" control by default. If the user wants to watch a movie, the user can move the focus of the display device to the search control 520 through the direction key of the remote controller, and then press the confirm key of the remote controller to trigger the search control 520.
In some embodiments, the display device may be configured to enter a media asset search interface in response to the search control 520 being triggered. The media asset searching interface can be provided with an input box control, and a user can input target information such as media asset names, actor names, director names and the like in the input box control. And after receiving the target information input by the user in the input box control, the display equipment generates a medium resource searching instruction, and communicates with the server according to the medium resource searching instruction, so that the server searches the medium resources related to the target information input by the user in the medium resource library to obtain a searching result. And the display equipment generates a search result interface according to the search result returned by the server, and displays the search result on the search result interface.
In some embodiments, the search result interface may include a plurality of media asset controls, each media asset control may display a media asset poster of a media asset video, and a user may click one media asset control to access the media asset detail page of that control. The media asset detail page displays a small playing window, a media asset introduction, and a playing list, where the playing list may be provided with at least one episode control, each episode control may correspond to one episode of the media asset video, and the episode control is configured to play the corresponding video in full screen in response to being triggered. The user can directly press the confirmation key on the remote controller to play the small playing window in full screen, or trigger an episode control in the playing list to achieve full-screen playing.
In some embodiments, in addition to the search result interface, other interfaces may also display multiple media asset controls. For example, the home page of the video-on-demand program shown in FIG. 5 may also present multiple media asset controls. In the present application, interfaces such as the search interface that contain a plurality of media asset controls corresponding to different media assets are collectively referred to as media asset list pages; the search result interface is taken as an example below to introduce the display of the media asset controls of a media asset list page and the interfaces opened when they are triggered.
In some embodiments, multiple language versions of videos may exist in the same movie, so that in order to facilitate a user to quickly find a movie that the user wants to watch, the display device may be configured to display, in the search result interface, only one asset control for asset videos belonging to different language versions of the same movie, and display, in the asset detail page, asset posters of all language versions of the movie through multiple list controls.
To achieve the above technical effect, in some embodiments, the server on the display device side includes a content capture network element. When the content capture network element performs media asset warehousing, it can process the media asset data so that the display device can display videos of different language versions as a plurality of episodes of a film on the media asset detail page.
In some embodiments, the server on the display device side may further include a media asset library storage system: the content capture network element captures media asset data from a third-party server, processes the captured media asset data, and stores the processed media asset data in the media asset library storage system. The content capture network element and the media asset library storage system may be disposed on one hardware device, or on different hardware devices.
In some embodiments, the content capture network element may be configured to periodically capture media asset data from the third-party server through a data interface of the third-party server.
In some embodiments, the asset data captured by the server of the display device from the third-party server includes album data and single-set video data, wherein the album data includes data of a plurality of videos and the single-set video data includes data of a single video.
In some embodiments, after the content capture network element captures a plurality of pieces of asset data, it may be detected whether each piece of asset data contains an album identifier, and if the album identifier is detected in one piece of asset data, it may be determined that the piece of asset data is album data, and if the album identifier is not detected, it may be determined that the piece of asset data is single-set video data.
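A minimal illustrative sketch of this routing step follows; the field name "album_id" standing in for the album identifier is an assumption, not a disclosed field.

```python
def classify_assets(records: list) -> tuple:
    """Split captured records into album data and single-set video data."""
    albums = [r for r in records if "album_id" in r]       # album identifier present
    singles = [r for r in records if "album_id" not in r]  # single-set video data
    return albums, singles
```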
In some embodiments, the processing procedure of the album data by the server can be seen in fig. 6, which is a timing diagram of the processing of media asset data by the server according to some embodiments.
As shown in fig. 6, when the content capture network element processes album data, it may determine which languages are supported by the video corresponding to the album data according to the language_list field therein, split the album data, and add a language identifier to the play information of each split episode.
For example, taking album data of a movie as an example, the album data includes feature diversity data, description data, media asset classification data, and other media asset description data. The content capture network element can extract the feature diversity data provided with a feature identifier from the album data, detect the language_list field in the feature diversity data, and, if multiple languages are obtained from the language_list field, split the feature diversity data into multiple pieces of derived feature diversity data according to language type to obtain new album data, where the new album data includes the multiple pieces of derived feature diversity data and the description data of the album.
For example, in one data capture, the content capture network element captures 10 pieces of media asset data from the third-party server, each piece of media asset data provided with a media asset ID and corresponding to a video or a video album, wherein 5 pieces of media asset data are album data of 5 films and the other 5 pieces are single-set video data of 5 films; the album data may be provided with an album identifier, so that the content capture network element can distinguish the album data from the single-set video data according to the album identifier. The content capture network element can convert the 10 media asset IDs into media asset IDs of the display device side, so that the display device side can process them conveniently.
Taking the album data of one of the movies as an example, if the content capture network element detects a "language_list=english;mandarin" field in the feature diversity data of the album data, the feature diversity data can be split into two pieces of derived feature diversity data, where one piece includes a "language_list=english" field and the playing address of the English version of the video, and the other piece includes a "language_list=mandarin" field and the playing address of the Mandarin version of the video; the "language_list=english" field and the "language_list=mandarin" field can be referred to as language identifiers, and the playing addresses can be obtained from the album data.
In some embodiments, the derived feature diversity data includes, in addition to the language field and the play address, description data such as a diversity media asset title, a poster, and a diversity ID, wherein the media asset title of the derived feature diversity data can be generated from the media asset title and the language identifier of the album data; for example, if the media asset title is "movie A" and the language identifier includes "mandarin", a media asset title "movie A mandarin edition" can be generated. The diversity ID may be generated by the content capture network element, and the diversity ID is a secondary ID.
In some embodiments, the content capture network element may further set two IDs in each piece of derived feature diversity data, one being the ID of the album data, the ID being a primary ID, and one being a secondary ID set for the derived feature diversity data.
In some embodiments, the primary ID indicates that the description data corresponding to the ID can be used to generate a media asset recommendation page or a media asset control of a media asset search result interface, and the secondary ID indicates that the description data corresponding to the ID can be used to generate a list control of a media asset detail page.
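The splitting described above (Fig. 6) can be illustrated with the following hedged sketch; the field names (language_list, asset_id, play_urls), the secondary-ID generation via uuid, and the title format are assumptions for illustration rather than the disclosed implementation.

```python
import uuid

def split_album_by_language(album: dict) -> dict:
    """Split one album record into per-language derived feature diversity data."""
    languages = album["language_list"].split(";")   # e.g. "english;mandarin"
    episodes = []
    for lang in languages:
        episodes.append({
            "primary_id": album["asset_id"],        # ID of the album data (primary ID)
            "secondary_id": str(uuid.uuid4()),      # ID generated for this episode
            "language_list": lang,                  # language identifier
            "title": f'{album["title"]} {lang} edition',
            "poster": album["poster"],
            "play_url": album["play_urls"][lang],   # per-language playing address
        })
    # New album data: original description data plus the derived episodes.
    return {**album, "episodes": episodes}
```

For instance, album data with language_list "english;mandarin" and title "movie A" would yield two episodes titled "movie A english edition" and "movie A mandarin edition" under the same primary ID.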
In some embodiments, after the content capture network element splits the media asset data captured from the third-party server to obtain new album data, the content capture network element may send the new album data, that is, the split media asset data, to the media asset storage system for processing.
In some embodiments, the asset database storage system may store the new album data, extract the secondary ID, the poster and the asset title corresponding to the secondary ID from the derivative feature diversity data in the album data, generate episode information, and store the episode information in the album data.
In some embodiments, the media asset storage system may further obtain and store a corresponding relationship between the primary ID and the secondary ID from the album data, and further calculate a total collection number of the album data, where the total collection number is the number of the secondary IDs corresponding to the primary ID, set a total collection number parameter according to the total collection number, so that a value of the total collection number parameter is the same as the number of the secondary IDs corresponding to the primary ID, and store the total collection number parameter in the album data.
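A hypothetical sketch of this warehousing step, continuing the assumed data model from the previous sketch, is given below; the storage layout (an in-memory dict with "albums" and "id_map") is illustrative only.

```python
def store_album(storage: dict, album: dict) -> None:
    """Store album data, generate episode info, and set the total-set-number parameter."""
    episodes = [
        {"secondary_id": ep["secondary_id"], "title": ep["title"], "poster": ep["poster"]}
        for ep in album["episodes"]
    ]
    album["episode_info"] = episodes
    album["total_sets"] = len(episodes)  # equals the number of secondary IDs under the primary ID
    primary_id = album["asset_id"]
    storage.setdefault("id_map", {})[primary_id] = [ep["secondary_id"] for ep in episodes]
    storage.setdefault("albums", {})[primary_id] = album
```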
Fig. 6 illustrates a processing procedure of the album data by the server of the display device, and a processing procedure of the single set of video data by the server of the display device can be seen in fig. 7, which is a timing diagram of a media asset data processing of the server according to some embodiments.
In some embodiments, the plurality of pieces of media asset data captured by the content capture network element may include data of different language versions of the same movie. Taking two pieces of media asset data that are the Chinese version data and the English version data of movie A as an example, the server may match the captured media asset data by media asset title, director, and actor, where the media asset title may be matched in a fuzzy manner and the director and actors in an exact manner; if the two pieces of media asset data are matched successfully, they are merged into one media asset according to language type.
For example, also taking an example that the content capture network element captures 10 pieces of asset data from the third-party server, the above embodiment has described that 5 pieces of asset data are album data, and the remaining 5 pieces of asset data are single video data. And the content capture network element performs matching of the media asset titles, the director and the actors on the 5 pieces of media asset data, and combines the two pieces of media asset data into one piece of album data according to the language type after confirming that the media asset titles of the two pieces of media asset data are matched successfully in a fuzzy way, the director is matched successfully in a complete way and the actors are matched successfully in a complete way.
Illustratively, the media asset title of one piece of media asset data is "movie B" and the media asset title of the other piece is "movie B English", where the media asset data titled "movie B" includes a "language_list=mandarin" field or another field indicating that the language type is Mandarin, the media asset data titled "movie B English" includes a "language_list=english" field or another field indicating that the language type is English, and the director and actors of the two pieces of media asset data are the same; the two pieces of media asset data will therefore be matched successfully. The content capture network element may set both the ID of the media asset data titled "movie B English" and the ID of the media asset data titled "movie B" as secondary IDs, and then generate album data that includes the contents of the two pieces of media asset data.
Further, the content capture network element may also generate a primary ID for the new album data, and set the ID of the original asset data as a secondary ID under the primary ID, so that the two assets corresponding to the two pieces of asset data become the episode in the album data.
In some embodiments, the content capture network element may further generate description data of an album corresponding to the album data according to the description data of the two pieces of media asset data. Illustratively, the content capture network element may select a piece of asset data, and extract a poster, an asset title, classification data, and the like from the description data thereof as description data of the album, wherein the language version information is deleted when the asset title of the album is generated.
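The matching and merging flow of Fig. 7 can be sketched roughly as follows; the fuzzy title match is reduced here to a simple containment check, and choosing the shorter title as the album title (to drop language version information) is an assumption made for illustration.

```python
def titles_fuzzy_match(a: str, b: str) -> bool:
    """Crude stand-in for fuzzy title matching, e.g. 'movie B' vs 'movie B English'."""
    return a in b or b in a

def try_merge(video_a: dict, video_b: dict, new_primary_id: str):
    """Merge two single-set video records into one album record, or return None."""
    if not titles_fuzzy_match(video_a["title"], video_b["title"]):
        return None
    if video_a["director"] != video_b["director"] or video_a["actors"] != video_b["actors"]:
        return None  # director and actors must match exactly
    episodes = [
        {"secondary_id": v["asset_id"],          # original asset IDs become secondary IDs
         "language_list": v["language_list"],
         "title": v["title"],
         "poster": v["poster"],
         "play_url": v["play_url"]}
        for v in (video_a, video_b)
    ]
    base = min((video_a, video_b), key=lambda v: len(v["title"]))
    return {
        "asset_id": new_primary_id,              # newly generated primary ID
        "title": base["title"],                  # album title without language version info
        "poster": base["poster"],
        "episodes": episodes,
    }
```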
In some embodiments, after merging the asset data captured from the third-party server to obtain album data, the content capture network element may send the album data, that is, the merged asset data, to the asset storage system for processing.
In some embodiments, the asset database storage system may store the new album data, extract the secondary ID, the poster and the asset title corresponding to the secondary ID from the derivative feature diversity data in the album data, generate episode information, and store the episode information in the album data.
In some embodiments, the media asset storage system may further obtain and store a corresponding relationship between the primary ID and the secondary ID from the album data, further calculate a total set number of the album data, where the total set number is the number of the secondary IDs corresponding to the primary ID, set a total set number parameter according to the total set number, so that a value of the total set number parameter is the same as the number of the secondary IDs corresponding to the primary ID, and store the total set number parameter in the album data.
As can be seen from fig. 6 and 7, the server of the display device can process the media asset data provided by the third-party server, and merge the movie data of different language versions into one album data in a splitting or merging manner, so that the display device can display the movies of different language versions on the media asset detail page.
The following takes the user searching for the asset a as an example, and introduces the interaction process among the user, the display device and the server.
In some embodiments, a user may click on a search control 520 in the interface shown in fig. 5, and the display device enters the media asset search interface in response to the search control 520 being triggered. After a user inputs target information such as a media asset name, an actor name, a director name and the like through an input box control on a media asset searching interface, display equipment generates a media asset searching instruction according to the received target information, and generates a media asset searching request according to the media asset searching instruction.
In some embodiments, the display device sends the media asset search request to the terminal-oriented subsystem to obtain interface data of a search result interface sent by the terminal-oriented subsystem.
In some embodiments, the media asset search request may include target information input by a user and version information of a video-on-demand program on the display device, where the target information is used to enable the terminal-oriented subsystem to acquire media asset data to be sent to the display device, and the version information of the video-on-demand program is used to enable the terminal-oriented subsystem to determine layout data of a search result interface to be sent to the display device, so that the display device generates a media asset detail page according to the media asset data and the layout data. For example, different versions of video-on-demand programs are provided with display bits of different sizes on a search result interface, the terminal-oriented subsystem can determine the size of the display bits of the search result interface on the display device according to the version information of the video-on-demand programs, then generate the media asset control data corresponding to the size, and the arrangement data sent to the display device includes the media asset control data, so that the display device can display the media asset control on the display bits.
In some embodiments, after receiving a media asset search request, the terminal-oriented subsystem may extract target information from the media asset search request, then call a search interface of the media asset storage system, search for media asset data corresponding to the target information, where the search range may be description data corresponding to the primary ID, and after receiving media asset data returned by the media asset storage system, return the media asset data and the arrangement data to the display device as media asset detail data, where the media asset data includes the primary ID, a media asset title corresponding to the primary ID, and a media asset poster.
In some embodiments, the terminal-oriented subsystem may also combine the media asset data and the layout data after obtaining the media asset data and the layout data corresponding to the media asset search request, and send the combined data to the display device as interface data of the search result interface, so that the display device directly generates the search result interface according to the combined data.
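A hedged sketch of this request handling is shown below; the request fields (target_info, vod_version), the layout table keyed by version, and the storage.search_primary call are all assumptions rather than disclosed interfaces.

```python
def handle_search_request(request: dict, storage, layouts: dict) -> dict:
    """Assemble interface data for the search result interface."""
    target = request["target_info"]        # e.g. a media asset name, actor name, or director name
    version = request["vod_version"]       # version of the video-on-demand program
    layout = layouts[version]              # e.g. {"slots": 12, "poster_size": "large"}
    hits = storage.search_primary(target)  # search among primary-ID description data
    assets = [
        {"asset_id": h["asset_id"], "title": h["title"], "poster": h["poster"]}
        for h in hits[: layout["slots"]]
    ]
    return {"total": len(hits), "assets": assets, "layout": layout}
```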
Taking the case where the display device generates the search result interface according to the arrangement data and the media asset data as an example, in some embodiments, after receiving the media asset data sent by the terminal-oriented subsystem, the display device may extract a media asset poster from each piece of media asset data and then generate a search result interface including multiple media asset controls, where the media asset controls are displayed in the search result interface and may be used to display the media asset posters.
In some embodiments, the terminal-oriented subsystem searches for the related assets of "asset a" in the asset storage system, including asset 1-asset 2847, and determines that 12 display bits exist in the search result interface of the display device according to the version information of the video-on-demand program, and then may first send the total amount of assets and the data of the first 12 assets to the display device, where the total amount of assets is 2847. After receiving the total amount of the assets and the data of the first 12 assets, the display device extracts the asset posters and asset ids of the 12 assets, generates 12 asset controls according to the 12 asset posters and asset ids, and each asset control can display one asset poster and corresponds to one asset id. Each asset control may be controlled to generate an asset detail request for the asset id in response to being triggered. The display device generates the search result interface of fig. 8 according to the media asset control and the total amount of the media assets, and the focus of the display device can be located on the first media asset control by default.
Referring to fig. 8, the search results interface may default to displaying 12 asset controls, each occupying a display slot, which may display a asset poster. If the current interface has the media assets which the user wants to watch, the user can select a media asset control according to the media asset poster, click the media asset control and generate a trigger instruction of the media asset control. Illustratively, the asset A1 is an asset which the user wants to watch, and since the focus of the display device is located on the first asset control, the user can directly click the confirmation key on the remote controller to generate the trigger instruction of the first asset control. If no media assets which the user wants to watch exist in the current interface, the user can press a direction down key of the remote controller to enable the display equipment to obtain data of 13 th to 24 th media assets from the terminal facing subsystem, and the like.
In some embodiments, after a user clicks one of the asset control in the interface shown in fig. 8, the display device may generate an asset detail page corresponding to the asset control, and a method for generating the asset detail page may refer to fig. 9, which is a schematic flow diagram of a method for generating an asset detail page according to some embodiments, and as shown in fig. 9, the method for generating an asset detail page may include the following steps:
Step S110: receiving a trigger instruction of a media asset control in the media asset list page.
In some embodiments, after selecting the media asset control of media asset A1 in fig. 8, the user inputs a confirmation instruction to the display device, and the display device may generate a trigger instruction of that media asset control.
Step S120: in response to the trigger instruction of the media asset control, acquiring the media asset detail data corresponding to the media asset control.
In some embodiments, in response to the trigger instruction of the media asset control, the display device may obtain the media asset id corresponding to that control, generate a media asset detail request containing the media asset id, and send the request to the terminal-oriented subsystem.
In some embodiments, after receiving the media asset detail request, the terminal-oriented subsystem may extract the media asset ID from the request, where the media asset ID is a primary ID, call a search interface of the media asset library storage system to retrieve the media asset data corresponding to the primary ID, and then generate the media asset detail page for that ID according to this media asset data and the arrangement data of the media asset detail page. The arrangement data of the media asset detail page may be determined according to the version information of the video-on-demand program, and the arrangement data corresponding to different versions of the video-on-demand program may differ. The media asset data corresponding to the primary ID includes the total aggregation parameter of the primary ID and the media asset data of each secondary ID. Illustratively, if the total aggregation parameter of media asset A1 is 2, the media asset data corresponding to the primary ID includes two pieces of secondary-ID media asset data and the description data of the primary ID; one piece may be referred to as the first media asset data, with secondary ID1, and the other as the second media asset data, with secondary ID2. In some embodiments, the arrangement data of the media asset detail page includes configuration data of a list control, configuration data of a player control, and configuration data of the description data. The configuration data of the list control specifies that each list control corresponds to one piece of secondary-ID media asset data and displays the media asset poster and the media asset title in that data, where the title contains language version information, the poster is arranged above the title, and the list control, when triggered, plays the audio and video data at its corresponding play address in full screen. The configuration data of the player control specifies the play address of the audio and video data to be played. The configuration data of the description data specifies that the display content is a media asset title and a media asset introduction that do not contain language version information. It should be noted that the configuration data of the list control, the player control, and the description data all include their position information on the media asset detail page.
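To make this structure easier to follow, the following is a minimal Kotlin sketch of one possible shape for the media asset detail data: a primary ID with its description data and total aggregation parameter, one entry per secondary-ID language version, and the three kinds of configuration data carried by the arrangement data. All names are hypothetical and only illustrate the structure; they are not the actual schema of the media asset library storage system.

```kotlin
// Hypothetical data model for the media asset detail data; names are illustrative only.
data class SecondaryAsset(
    val secondaryId: String,
    val title: String,        // contains language version info, e.g. "Media asset A1 (original-sound edition)"
    val posterUrl: String,
    val languageId: String,   // language identifier of this version
    val playUrl: String       // play address of the audio and video data
)

data class PrimaryAssetDetail(
    val primaryId: String,
    val title: String,              // media asset title without language version info
    val introduction: String,       // media asset introduction without language version info
    val totalVersions: Int,         // the total aggregation parameter
    val versions: List<SecondaryAsset>
)

// The three kinds of configuration data in the arrangement data of the detail page.
data class ListControlConfig(val position: String)     // poster above title; plays full screen when triggered
data class PlayerControlConfig(val position: String)   // holds the play address once filled in
data class DescriptionConfig(val position: String)     // shows title and introduction without language info

data class DetailPageLayout(
    val listControl: ListControlConfig,
    val playerControl: PlayerControlConfig,
    val description: DescriptionConfig
)
```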
In some embodiments, the terminal-oriented subsystem may pack the arrangement data of the media asset detail page together with the media asset data corresponding to the primary ID and send the packed data to the display device as the media asset detail data, and the display device generates the interface data of the media asset detail page from that data.
In some embodiments, the terminal-oriented subsystem combines the arrangement data of the media asset detail page with the media asset data corresponding to the primary ID to obtain combined data and sends the combined data to the display device, and the display device generates the media asset detail page from the combined data.
Taking as an example the case in which the terminal-oriented subsystem packs the arrangement data of the media asset detail page and the media asset data corresponding to the primary ID and sends them to the display device, the display device may generate the media asset detail page through step S130 or step S140, depending on the content of the media asset data corresponding to the primary ID.
Step S130: if the media asset detail data includes first media asset data corresponding to a first language version of the media asset control and does not include second media asset data corresponding to a second language version, generating a first media asset detail page, where the first media asset detail page includes a player control and a first list control, the player control is used to play the audio and video data corresponding to the first media asset data, and the first list control is used to display the media asset title and the media asset picture in the first media asset data.
In some embodiments, if the media asset data corresponding to the primary ID contains only one piece of secondary-ID media asset data, and the language version corresponding to the language identifier in that data is a first language version, for example the original-sound version, the display device may combine the arrangement data with the media asset data corresponding to the primary ID to generate a first media asset detail page. The first media asset detail page includes a player control and a first list control, where the player control is configured to play the audio and video data at the first play address in the first media asset data, and the first list control is configured to display the media asset title and the media asset picture in the first media asset data.
Step S140: if the media asset detail data includes first media asset data corresponding to a first language version of the media asset control and second media asset data corresponding to a second language version, generating a second media asset detail page, where the second media asset detail page includes a player control and a first list control and a second list control arranged in order, the first list control is used to display the media asset title and the media asset picture in the first media asset data, the second list control is used to display the media asset title and the media asset picture in the second media asset data, the player control is used to play the audio and video data corresponding to the first list control, and the second list control is configured to control the display device to play the audio and video data corresponding to the second list control in response to being triggered.
In some embodiments, if the media asset data corresponding to the primary ID contains two pieces of secondary-ID media asset data, where the language version corresponding to the language identifier in one piece is a first language version, such as the original-sound version, and the language version corresponding to the language identifier in the other piece is a second language version, such as the Mandarin version, the display device may combine the arrangement data with the media asset data corresponding to the primary ID to generate a second media asset detail page, in which the player control plays the audio and video data at the first play address in the first media asset data.
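The branch between step S130 and step S140 can be summarized by the short sketch below: if the packed data carries only one secondary-ID version, the device builds the first media asset detail page; if it carries two, it builds the second. The types and the function are hypothetical stand-ins for the device's actual page-building logic.

```kotlin
// Illustrative only; VersionData and DetailPage are hypothetical names.
data class VersionData(val title: String, val posterUrl: String, val playUrl: String)

sealed class DetailPage {
    // First media asset detail page: one player control and one list control.
    data class Single(val playerUrl: String, val list1: VersionData) : DetailPage()
    // Second media asset detail page: one player control and two list controls in order.
    data class Dual(val playerUrl: String, val list1: VersionData, val list2: VersionData) : DetailPage()
}

// Assumes the server returned either one or two language versions, as in the examples above.
fun buildDetailPage(versions: List<VersionData>): DetailPage =
    if (versions.size == 1)
        DetailPage.Single(playerUrl = versions[0].playUrl, list1 = versions[0])                         // step S130
    else
        DetailPage.Dual(playerUrl = versions[0].playUrl, list1 = versions[0], list2 = versions[1])      // step S140
```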
For step S140 in fig. 9, the specific method of combining the media asset data corresponding to the primary ID with the arrangement data of the media asset detail page may refer to the description of fig. 10, and the method for step S130 may be adapted accordingly.
Fig. 10 is a timing diagram of the overall interaction between a user and a display device according to some embodiments. In fig. 10, the terminal may be the display device. The process in which the user inputs the target information of a media asset search to the terminal and obtains the page shown in fig. 8 is as described above; the media asset on-demand instruction input by the user is the confirmation instruction entered after the user selects one media asset control. In fig. 10, the process in which the terminal-oriented subsystem combines the media asset data and the arrangement data is as follows: when the total aggregation parameter in the media asset data is greater than 1, the interface data of the media asset detail page is generated according to that parameter and the language version information.
In some embodiments, the method for generating the interface data of the media asset detail page includes the following steps, illustrated by the code sketch after step 4):
1) Determining the number of list controls according to the value of the total aggregation parameter.
In some embodiments, because the value of the total aggregation parameter is 2, the number of list controls is determined to be two, and the configuration data of the list control is copied once to obtain the configuration data of two list controls.
2) Generating interface data of a list control according to the media asset data corresponding to each secondary ID.
The media asset data corresponding to each secondary ID is filled into the configuration data of a list control to generate the interface data of that list control. For example, if the media asset title in the media asset data corresponding to secondary ID1 is "media asset A1 (original-sound edition)", the value of the media asset title t1 is set to "media asset A1 (original-sound edition)", that is, t1 = "media asset A1 (original-sound edition)" is set in the configuration data of the first list control; if the media asset title in the media asset data corresponding to secondary ID2 is "media asset A1 (English edition)", the value of the media asset title t2 is set to "media asset A1 (English edition)", that is, t2 = "media asset A1 (English edition)" is set in the configuration data of the second list control.
3) Generating interface data of the player control according to the system language.
In some embodiments, the media asset detail page request sent by the display device may include the system language of the display device. The terminal-oriented subsystem may check, for each of the two pieces of media asset data, whether the language corresponding to its language version information is consistent with the system language. If the language of one piece of media asset data is consistent with the system language, the play address in that piece of media asset data is set as the interface data of the player control; if neither piece of media asset data has a language consistent with the system language, the play address of the media asset data whose language ranks first in a preset language order may be set as the interface data of the player control.
In some embodiments, if the media asset detail page request sent by the display device does not include the system language of the display device, the terminal-oriented subsystem may set the play address of a default language as the play address in the configuration data of the player control, where the candidates are the play address in the first media asset data and the play address in the second media asset data, and the default language is determined by the display device; the play address in the first media asset data is the first play address, and the play address in the second media asset data is the second play address.
4) Generating interface data of the description data according to the description data of the primary ID.
The terminal-oriented subsystem may fill the description data corresponding to the primary ID into the configuration data of the description data to obtain the interface data of the description data. For example, if in the configuration data of the description data the media asset title T is the title corresponding to the primary ID, the media asset title "media asset A1" is extracted from the description data of the primary ID and set as the value of T, that is, T = "media asset A1", so that the display device can display this title on the media asset detail page.
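The sketch below pulls steps 1) to 4) together as one server-side function: one list-control title per secondary-ID version, a player-control play address chosen by the system language or, failing that, a preset language order, and the primary-ID title for the description data. The types, field names and the particular preset order are assumptions for illustration, not the subsystem's actual implementation.

```kotlin
// Hedged sketch of steps 1)-4); all names and the preferred-language order are hypothetical.
data class Version(val languageId: String, val title: String, val playUrl: String)

data class InterfaceData(
    val listTitles: List<String>,   // t1, t2, ... one per list control
    val playerUrl: String,          // play address filled into the player control
    val descriptionTitle: String    // T, the title without language version info
)

val preferredLanguageOrder = listOf("orig", "cmn", "eng")   // hypothetical preset ordering

fun buildInterfaceData(
    primaryTitle: String,
    versions: List<Version>,        // assumed non-empty, one entry per secondary ID
    systemLanguage: String?
): InterfaceData {
    // 1) + 2) one list control per secondary-ID media asset, titled with its language version
    val listTitles = versions.map { it.title }

    // 3) choose the player control's play address: match the system language if possible,
    //    otherwise fall back to the language ranked first in the preset order
    val chosen = versions.firstOrNull { it.languageId == systemLanguage }
        ?: versions.minByOrNull { v ->
            preferredLanguageOrder.indexOf(v.languageId).let { if (it < 0) Int.MAX_VALUE else it }
        }!!

    // 4) the description data uses the primary-ID title, which carries no language version info
    return InterfaceData(listTitles, chosen.playUrl, primaryTitle)
}
```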
After obtaining the interface data of the media asset detail page through the above steps, the terminal-oriented subsystem may return the interface data to the display device.
In some embodiments, if the user selects media asset A2 in fig. 8 and there is only one piece of media asset data under the primary ID corresponding to media asset A2, the interface data of the media asset detail page generated by the terminal-oriented subsystem includes the interface data of only one list control.
In some embodiments, after receiving the interface data of the above-mentioned media asset detail page, the display device may generate and display the media asset detail page.
If the interface data of the media asset detail page received by the display device contains the interface data of only one list control, the media asset detail page generated by the display device may be referred to as a first media asset detail page, and the play address of the player control can then be set directly to the play address in the media asset data of the primary ID; if the interface data contains the interface data of multiple list controls, the media asset detail page generated by the display device may be referred to as a second media asset detail page.
In some embodiments, if the terminal-oriented subsystem provides the player control while the play address is set by the display device, the display device may determine the default language of the media asset control according to the system language of the display device and the language identifiers in the control data of the list controls. If a list control exists whose language identifier is consistent with the system language, the system language is taken as the default language of media asset A1. For example, if the language identifier in one list control is a first language identifier whose corresponding language is Mandarin, and the language identifier in the other list control is a second language identifier whose corresponding language is English, then, because the system language is Mandarin, the language corresponding to the first language identifier is set as the default language of media asset A1.
In some embodiments, the display device further places the list control of the default language first and sorts the list controls of the other languages according to a preset language sorting rule, where the preset rule may be sorting by play count, with a larger play count ranked before a smaller one. Through this sorting, the arrangement order of the list controls is obtained. The language sorting rule may also take other forms, such as placing the original-sound language first and ordering the other languages by play count.
In some embodiments, the display device may directly determine the arrangement order of the list controls according to the preset language sorting rule and take the language corresponding to the top list control as the default language; for example, the top list control is the first list control, the media asset corresponding to the first list control is the first media asset, and the media asset corresponding to the second list control is the second media asset.
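A compact sketch of the device-side rule described in the last few paragraphs: the list control whose language identifier matches the system language is placed first, the remaining controls are sorted by play count, and the first control's play address becomes the default. The field names and the tie-breaking rule are assumptions; the other orderings mentioned above (for example, original-sound first) would be implemented the same way.

```kotlin
// Illustrative sketch; ListControl and its fields are hypothetical names.
data class ListControl(val languageId: String, val title: String, val playCount: Long, val playUrl: String)

fun orderListControls(controls: List<ListControl>, systemLanguage: String): List<ListControl> {
    val (matching, others) = controls.partition { it.languageId == systemLanguage }
    // the control matching the system language comes first; the rest are sorted by play count, descending
    return matching + others.sortedByDescending { it.playCount }
}

// The first control in the resulting order supplies the default (first) play address.
fun defaultPlayUrl(ordered: List<ListControl>): String = ordered.first().playUrl
```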
In some embodiments, after determining the default language of media asset A1, the display device may load the audio and video data of the media asset from the play address corresponding to that list control and generate a video window smaller than the full screen. The video window is the player control and is used to play the audio and video data; the play address corresponding to the default language may be referred to as the first play address, and the audio and video data at the first play address may be the first media asset data.
In some embodiments, the display device may generate the media asset detail page to display the video window, the list controls and other information, where the arrangement order of the list controls on the media asset detail page is the order determined in the above embodiments.
Fig. 11 is a schematic view of a media asset detail page according to some embodiments. As shown in fig. 11, on the media asset detail page the video window may play the media asset video of the default language, and two list controls are disposed below the video window, where the media asset title of the first list control is "media asset A1 (original-sound edition)" and the media asset title of the second list control is "media asset A1 (Mandarin edition)". A full-screen play control may be disposed on one side of the video window and is configured to play the video currently playing in the video window in full screen when triggered.
In some embodiments, on the media asset detail page the focus of the display device may be located on the video window by default. After the user presses the confirmation key on the remote controller, a confirmation instruction is generated, and the display device may play the video in the video window in full screen, either continuing from the current playing position or playing the video corresponding to the video window from the beginning.
In some embodiments, on the media asset detail page, the user may press a direction key on the remote controller to generate a focus moving instruction that moves the focus of the display device to a list control. If the user moves the focus to the first list control and presses the confirmation key, a confirmation instruction is generated, and the display device may play the video currently playing in the video window in full screen.
In some embodiments, if the user moves the focus of the display device to the second list control and presses the confirmation key, the list control is triggered. According to the received trigger instruction, the display device may obtain the language version information corresponding to that list control; because this language version information is not consistent with that of the audio and video data playing in the video window, the display device stops loading the audio and video data currently being played, obtains the play address of the second media asset corresponding to the second list control, loads the audio and video data at that play address, and then plays it in full screen. The audio and video data at the play address of the second media asset may be referred to as the second media asset data.
In some embodiments, when the focus is located on the first list control, the display device controls the player control to play the audio and video data corresponding to the first list control; when the focus is located on the second list control, the display device controls the player control to play the audio and video data corresponding to the second list control.
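The focus-and-trigger behaviour of the two list controls can be sketched as a small controller: when a list control other than the one currently playing is confirmed, loading of the current audio and video data stops and the selected version's play address is played in full screen. The Player interface and class names below are hypothetical, not a real device API.

```kotlin
// Minimal sketch of the switching behaviour described above; names are assumptions.
interface Player {
    fun stopLoading()
    fun playFullScreen(url: String)
}

class DetailPageController(
    private val player: Player,
    private val playUrls: List<String>   // one play address per list control, in display order
) {
    private var currentIndex = 0         // list control whose version is currently playing

    /** Called when the user confirms on the list control at [index]. */
    fun onListControlTriggered(index: Int) {
        if (index != currentIndex) {
            player.stopLoading()                         // stop loading the audio/video currently playing
            currentIndex = index
        }
        player.playFullScreen(playUrls[currentIndex])    // play the selected language version in full screen
    }
}
```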
Fig. 12 is a schematic view of language switching during full-screen playing of a media asset according to some embodiments. As shown in fig. 12, during full-screen playing the user may press the menu key on the remote controller to call up a play menu; a switch list control may be provided in the play menu, and the user may click it to switch the language version of the currently played video.
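As a rough sketch of this play menu, the fragment below models the menu key calling up the menu and a confirm on the switch list control invoking a callback that changes the language version. The key names, class and callback are assumptions for illustration, not the remote controller's actual key handling.

```kotlin
// Hypothetical sketch of the play menu during full-screen playback; not a real remote-control API.
enum class RemoteKey { MENU, CONFIRM }

class FullScreenPlayMenu(
    private val versions: List<String>,        // e.g. ["original-sound edition", "Mandarin edition"]
    private val switchTo: (String) -> Unit     // callback that switches the currently played version
) {
    private var visible = false
    var highlighted = 0                        // entry of the switch list control currently highlighted

    fun onKey(key: RemoteKey) {
        when (key) {
            RemoteKey.MENU -> visible = true                   // call up the play menu
            RemoteKey.CONFIRM -> if (visible) {
                switchTo(versions[highlighted])                // switch the language version
                visible = false                                // dismiss the menu
            }
        }
    }
}
```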
As can be seen from the above embodiments, in the embodiments of the present application each movie occupies one display position on the search result interface, so that multiple different movies can be displayed there, which helps the user quickly find the movie to watch; and by displaying multiple list controls of a movie on the media asset detail page, the user can choose the language version to watch on that page, so the language version is determined before the movie is played in full screen, improving the user experience of language version switching.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, comprising:
a display for presenting a user interface;
a controller connected with the display, the controller configured to:
receiving a triggering instruction of a media asset control in a media asset list page;
responding to the trigger instruction of the media asset control, and acquiring media asset detail data corresponding to the media asset control; if the media asset detail data comprises first media asset data corresponding to a first language version of the media asset control and does not comprise second media asset data corresponding to a second language version, generating a first media asset detail page, wherein the first media asset detail page comprises a player control and a first list control, the player control is used for playing audio and video data corresponding to the first media asset data, and the first list control is used for displaying a media asset title and a media asset picture in the first media asset data;
and if the media asset detail data comprises first media asset data corresponding to a first language version of the media asset control and second media asset data corresponding to a second language version of the media asset control, generating a second media asset detail page, wherein the second media asset detail page comprises a player control and a first list control and a second list control which are sequentially arranged, the first list control is used for displaying media asset titles and media asset pictures in the first media asset data, the second list control is used for displaying the media asset titles and the media asset pictures in the second media asset data, the player control is used for playing audio and video data corresponding to the first list control, and the second list control is configured to control display equipment to play audio and video data corresponding to the second list control in response to triggering.
2. The display device of claim 1, wherein generating a second asset detail page comprises:
acquiring a system language of display equipment, a first language identifier in the first media asset data and a second language identifier in the second media asset data;
when the language of the first language identification representation is the same as the system language, determining that the language of the first language identification representation is the default language of the media asset control;
and generating a second media asset detail page to enable the first list control to load the media asset titles and the media asset pictures in the first media asset data, the second list control to load the media asset titles and the media asset pictures in the second media asset data, and the player control plays the audio and video data corresponding to the first list control according to a playing address of a first media asset, wherein the playing address of the first media asset is obtained from the first media asset data corresponding to the first language identifier.
3. The display device of claim 1, wherein generating a second asset detail page comprises:
controlling the first list control to load the asset titles and asset pictures in the first asset data, and controlling the second list control to load the asset titles and asset pictures in the second asset data;
acquiring a play address of a first medium resource of the first medium resource corresponding to a first list control arranged at the head;
and playing the audio-visual data corresponding to the first list control according to the playing address of the first media asset.
4. The display device according to claim 1, wherein the controller is further configured to:
receiving a focus moving instruction when the focus is positioned in the first list control;
responding to the focus moving instruction, and controlling the focus of the display device to move to the second list control according to the moving direction corresponding to the focus moving instruction and towards the second list control;
receiving a confirmation instruction;
responding to the confirmation instruction, acquiring a playing address of the second media asset from second media asset data corresponding to the second list control, and loading the second media asset data of the playing address of the second media asset;
and starting the full screen player control to play the second media asset data corresponding to the playing address of the second media asset in full screen.
5. The display device according to claim 1, wherein the controller is further configured to:
when a focus is positioned on the player control, if a confirmation instruction is received, the full-screen player control is started to play audio and video data corresponding to the audio and video played by the player control in a full screen manner in response to the confirmation instruction;
when the focus is positioned in a first list control, controlling the player control to play audio and video data corresponding to the first list control;
and when the focus is positioned at a second list control, controlling the player control to play audio and video data corresponding to the second list control.
6. The display device of claim 1, wherein the asset title of the asset control of the asset list page does not contain language version information, and wherein the titles of the first list control and the second list control both contain language version information.
7. A server, wherein the server is configured to:
receiving a media resource detail request of a display device, wherein the media resource detail request comprises a media resource identifier;
responding to the request for the details of the media assets, acquiring the details of the media assets corresponding to the media asset identifier, and generating interface data of a media asset detail page according to the details of the media assets, wherein the details of the media assets are generated by a server according to the media assets corresponding to the media asset identifier, a plurality of language versions of the media asset data are contained in a media asset library, and each piece of media asset data is respectively provided with a media asset title, a media asset picture, a language version parameter and a media asset playing address;
and sending the interface data of the media asset detail page to display equipment.
8. The server of claim 7, wherein generating interface data for a asset detail page from the asset detail data comprises:
generating a plurality of interface data of the list control respectively corresponding to one piece of media asset data according to a plurality of pieces of media asset data in the media asset detail data;
generating interface data of a player control, wherein the size of the player control in the interface data of the player control is smaller than the full-screen size of display equipment;
and the interface data of the media asset detail page comprises the interface data of the list control and the interface data of the player control.
9. A method for switching language versions is characterized by comprising the following steps:
responding to a triggering instruction of the media asset control, and acquiring media asset detail data corresponding to the media asset control; if the media asset details comprise first media asset data corresponding to a first language version of the media asset control and do not comprise second media asset data corresponding to a second language version, generating a first media asset details page, wherein the first media asset details page comprises a player control and a first list control, the player control is used for playing audio and video data corresponding to the first media asset data, and the first list control is used for displaying a media asset title and a media asset picture in the first media asset data;
and if the media asset details comprise first media asset data corresponding to a first language version and second media asset data corresponding to a second language version of the media asset control, generating a second media asset details page, wherein the second media asset details page comprises a player control and a first list control and a second list control which are sequentially arranged, the first list control is used for displaying media asset titles and media asset pictures in the first media asset data, the second list control is used for displaying the media asset titles and the media asset pictures in the second media asset data, the player control is used for playing audio and video data corresponding to the first list control, and the second list control is configured to control display equipment to play the audio and video data corresponding to the second list control in response to triggering.
10. The language version switching method according to claim 9,
generating a second asset detail page, comprising:
acquiring a system language of display equipment, a first language identifier in the first media asset data and a second language identifier in the second media asset data;
when the language of the first language identification representation is the same as the system language, determining that the language of the first language identification representation is the default language of the media asset control;
and generating a second media asset detail page to enable the first list control to load the media asset titles and the media asset pictures in the first media asset data, the second list control to load the media asset titles and the media asset pictures in the second media asset data, and the player control plays the audio and video data corresponding to the first list control according to the playing address of the first media asset, wherein the playing address of the first media asset is acquired from the first media asset data corresponding to the first language identifier.
CN202110729316.4A 2021-06-29 2021-06-29 Display device, server and language version switching method Pending CN115550740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729316.4A CN115550740A (en) 2021-06-29 2021-06-29 Display device, server and language version switching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729316.4A CN115550740A (en) 2021-06-29 2021-06-29 Display device, server and language version switching method

Publications (1)

Publication Number Publication Date
CN115550740A (en) 2022-12-30

Family

ID=84717570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729316.4A Pending CN115550740A (en) 2021-06-29 2021-06-29 Display device, server and language version switching method

Country Status (1)

Country Link
CN (1) CN115550740A (en)

Similar Documents

Publication Publication Date Title
CN111464844A (en) Screen projection display method and display equipment
WO2021189697A1 (en) Video display method, terminal, and server
WO2021203530A1 (en) Display device and television program pushing method
CN112333509B (en) Media asset recommendation method, recommended media asset playing method and display equipment
CN112601117B (en) Display device and content presentation method
CN111836115B (en) Screen saver display method, screen saver skipping method and display device
WO2022048203A1 (en) Display method and display device for manipulation prompt information of input method control
CN112055245B (en) Color subtitle realization method and display device
CN113490032A (en) Display device and medium resource display method
CN113301420A (en) Content display method and display equipment
CN112733050A (en) Display method of search results on display device and display device
CN112272331A (en) Method for rapidly displaying program channel list and display equipment
CN113613047B (en) Media file playing control method and display device
WO2022012299A1 (en) Display device and person recognition and presentation method
CN114390332A (en) Display device and method for rapidly switching split-screen application
CN115550740A (en) Display device, server and language version switching method
CN113453069A (en) Display device and thumbnail generation method
CN114915810A (en) Media asset pushing method and intelligent terminal
CN114390329A (en) Display device and image recognition method
CN112367550A (en) Method for realizing multi-title dynamic display of media asset list and display equipment
CN112601116A (en) Display device and content display method
CN112328553A (en) Thumbnail capturing method and display device
CN112261463A (en) Display device and program recommendation method
CN111787350A (en) Display device and screenshot method in video call
CN115150667B (en) Display device and advertisement playing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination