CN113796091B - Display method and display device of singing interface

Info

Publication number
CN113796091B
CN113796091B (application CN202080025019.5A)
Authority
CN
China
Prior art keywords
song
singing
style
action
action instruction
Prior art date
Legal status
Active
Application number
CN202080025019.5A
Other languages
Chinese (zh)
Other versions
CN113796091A
Inventor
王光强
孙浩然
李珑
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Priority claimed from CN202010095812.4A
Priority claimed from CN202010193786.9A
Application filed by Juhaokan Technology Co Ltd
Publication of CN113796091A
Application granted
Publication of CN113796091B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application

Abstract

Some embodiments provide a display method, a display device, and a server for a singing interface. In some embodiments, the technical scheme is as follows: receiving a singing theme display request sent by a display device when a song is selected, where the singing theme display request includes a song identifier characterizing the song; determining a singing theme according to the song identifier, where different singing themes correspond to different songs; and sending the singing theme to the display device, so that the display device displays an image corresponding to the singing theme on a singing interface that plays the song.

Description

Display method and display device of singing interface
The present application claims priority to Chinese patent application No. 201910886826.5, filed on September 19, 2019 and entitled "A display method, display device and server for a singing interface"; Chinese patent application No. 202010193786.9, filed on March 18, 2020 and entitled "A display method, display device and server for a singing interface"; and Chinese patent application No. 202010095812.4, filed on February 17, 2020 and entitled "An action control method and display device for a virtual image", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of software technologies, and in particular, to a method, a device, and a server for displaying a singing interface.
Background
As more and more households use smart televisions, users place ever higher demands on their entertainment functions. To enrich the entertainment functions of the smart television, a karaoke application can be installed on it so that users can complete song-singing activities on the smart television.
However, in the related art, when a user sings a song on the smart television, the singing interface displayed by the smart television only shows lyrics. Such single-content display degrades the user experience and reduces the fun of singing songs on the smart television. How to make the singing interface meet user needs when a user sings a song on the smart television has therefore become an urgent problem for those skilled in the art.
Disclosure of Invention
In a first aspect, some embodiments provide a method for displaying a singing interface, applied to a server, the method including:
receiving a singing theme display request sent by a display device when a song is selected, where the singing theme display request includes a song identifier representing the song;
determining a singing theme according to the song identifier, where different singing themes correspond to different songs;
and sending the singing theme to the display device, so that the display device displays an image corresponding to the singing theme on a singing interface.
In a second aspect, some embodiments provide a method for displaying a singing interface, applied to a display device, the method including:
receiving a user's selection of a song;
in response to the user's selection, sending a singing theme display request corresponding to the song to a server, where the request includes a song identifier representing the song, the song identifier is used by the server to determine a singing theme, and different singing themes correspond to different songs;
receiving the singing theme sent by the server;
and displaying an image corresponding to the singing theme on a singing interface that plays the song.
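For illustration only, the following is a minimal Java sketch of the request/response exchange described in the first and second aspects above. The class, method, song identifiers, and theme names are assumptions for this example; the embodiments do not prescribe any particular API, and the network transport is collapsed into a direct call.

```java
import java.util.Map;

// Illustrative sketch of the first/second-aspect exchange. All names here
// are invented for this example and do not come from the embodiments.
public class SingingThemeExchange {

    // Server side: different singing themes correspond to different songs.
    static final Map<String, String> THEME_BY_SONG_ID = Map.of(
            "001", "theme_starry_night",
            "002", "theme_cherry_blossom");

    // Server side: determine the singing theme from the song identifier.
    static String determineTheme(String songId) {
        return THEME_BY_SONG_ID.getOrDefault(songId, "theme_default");
    }

    // Display-device side: react to the user's selection of a song.
    static void onSongSelected(String songId) {
        // 1. Send a singing theme display request carrying the song
        //    identifier (the "network" is a direct call for brevity).
        String theme = determineTheme(songId);
        // 2. Receive the singing theme and display the corresponding image
        //    on the singing interface that plays the song.
        System.out.println("Displaying image for " + theme
                + " while playing song " + songId);
    }

    public static void main(String[] args) {
        onSongSelected("001");
    }
}
```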
In a third aspect, some embodiments provide a server, including:
an acquisition unit configured to receive a singing theme display request sent by a display device when a song is selected, where the singing theme display request includes a song identifier representing the song;
a first sending unit configured to determine a singing theme according to the song identifier, where different singing themes correspond to different songs,
and to send the singing theme to the display device, so that the display device displays an image corresponding to the singing theme on a singing interface.
In a fourth aspect, some embodiments provide a display device, including:
a communicator for communicating with a server;
a display for displaying images and a user interface, the user interface including a selector for indicating that an item in the user interface is selected;
a controller configured to:
receive a control signal, input from a user input interface, indicating launch of a first application, and present a user interface of the first application on the display in response to the user input;
receive, through the selector, a user input in the user interface indicating that the user selects a song to be sung, and in response to the user input, send, through the communicator, a request carrying the song identifier of the selected song to the server, so that the server determines the accompaniment audio data, the singing background video image data, and the lyrics of the song to be sung according to the song identifier;
receive the accompaniment audio data and the singing background video image data of the song to be sung sent by the server, where the accompaniment audio data corresponds to real-time lyrics; present the singing background video image in the user interface of the first application; and render the lyrics corresponding to the real-time accompaniment audio, then superimpose and synthesize them with the singing background video image so that the lyrics are displayed over the singing background video image.
In a fifth aspect, some embodiments provide a display device, including:
a display;
a controller configured to:
receive a user's selection of a song;
in response to the user's selection, send a singing theme display request corresponding to the song to a server, where the request includes a song identifier representing the song, the song identifier is used by the server to determine a singing theme, and different singing themes correspond to different songs;
receive the singing theme sent by the server;
and control the display to display an image corresponding to the singing theme on a singing interface that plays the song.
In a sixth aspect, some embodiments provide a method for displaying a singing interface, including the following steps:
displaying, when a song file is played, an avatar and a lyric text obtained by parsing the song file;
detecting a tag on a time axis of the song file, where the time axis is used to control display of the lyric text;
in response to detecting the tag, acquiring an action instruction corresponding to the tag;
and controlling an action of the avatar according to the action instruction.
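A minimal sketch of this sixth-aspect flow follows, assuming invented tag names, action identifiers, and a polling-style playback callback; none of these names come from the embodiments.

```java
import java.util.List;
import java.util.Map;

// Sketch: tags on the song file's time axis trigger action instructions
// that drive the avatar. Tag names and action identifiers are invented.
public class AvatarActionDriver {

    record Tag(long timeMs, String name) {}

    static final List<Tag> TIMELINE_TAGS = List.of(
            new Tag(12_000, "chorus_start"),
            new Tag(45_000, "bridge"));

    static final Map<String, String> ACTION_BY_TAG = Map.of(
            "chorus_start", "dance_spin",
            "bridge", "wave_hands");

    private int nextTag = 0;

    // Called periodically with the current playback position of the song
    // file; the same time axis also controls the lyric display.
    void onPlaybackTick(long positionMs) {
        while (nextTag < TIMELINE_TAGS.size()
                && TIMELINE_TAGS.get(nextTag).timeMs() <= positionMs) {
            Tag tag = TIMELINE_TAGS.get(nextTag++);
            // In response to detecting the tag, acquire the corresponding
            // action instruction and control the avatar's action with it.
            String action = ACTION_BY_TAG.get(tag.name());
            System.out.println("t=" + positionMs + "ms: avatar performs " + action);
        }
    }

    public static void main(String[] args) {
        AvatarActionDriver driver = new AvatarActionDriver();
        driver.onPlaybackTick(13_000); // triggers "chorus_start"
        driver.onPlaybackTick(46_000); // triggers "bridge"
    }
}
```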
In a seventh aspect, some embodiments provide a display device, including:
a display configured to display actions of an avatar and a lyric text;
a speaker configured to output the sound of a song file;
a controller configured to: display, when the song file is played, an avatar and a lyric text obtained by parsing the song file;
detect a tag on a time axis of the song file, where the time axis is used to control display of the lyric text;
in response to detecting the tag, acquire an action instruction corresponding to the tag;
and control an action of the avatar according to the action instruction.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1;
A hardware configuration block diagram of the display device 200 in accordance with the embodiment is exemplarily shown in fig. 2;
a hardware configuration block diagram of the control device 100 in accordance with the embodiment is exemplarily shown in fig. 3;
a functional configuration diagram of the display device 200 according to the embodiment is exemplarily shown in fig. 4;
a schematic diagram of the software configuration in the display device 200 according to an embodiment is exemplarily shown in fig. 5;
a schematic configuration of an application program in the display device 200 according to an embodiment is exemplarily shown in fig. 6;
a schematic diagram of a display interface for displaying a K song control is shown in an exemplary manner in accordance with an embodiment in fig. 7;
a schematic diagram of a user interface of the display device 200 is exemplarily shown in fig. 8;
a schematic diagram of a user interface of the application center of the display device 200 is exemplarily shown in fig. 9;
a song list interface is shown schematically in fig. 10;
another song list interface is illustrated in fig. 11;
yet another song list interface is illustrated in fig. 12;
FIG. 13 is a view of a mode selection user interface entered after selecting a song from the song list shown in FIG. 10;
yet another mode selection user interface is illustrated schematically in FIG. 14;
FIG. 15 is a user interface entered after selecting real-time mic-connected singing in the user interface shown in FIG. 13;
A schematic diagram of a singing interface is shown schematically in fig. 16 in accordance with an embodiment;
a schematic diagram of yet another singing interface in accordance with an embodiment is shown schematically in fig. 17;
a schematic diagram of yet another singing interface in accordance with an embodiment is shown schematically in fig. 18;
a flowchart of a method for displaying a singing interface according to an embodiment is exemplarily shown in fig. 19;
a flowchart of a display method of still another singing interface according to an embodiment is exemplarily shown in fig. 20;
a flowchart of a display method of still another singing interface according to an embodiment is exemplarily shown in fig. 21;
a flowchart of a display method of still another singing interface according to an embodiment is exemplarily shown in fig. 22.
A flowchart of a method of displaying a singing interface according to an embodiment is exemplarily shown in fig. 23;
a data flow diagram of a method of displaying a singing interface according to an embodiment is exemplarily shown in fig. 24;
a flowchart of a display method according to an embodiment is exemplarily shown in fig. 25;
a flowchart of a method for acquiring action instructions according to an embodiment is illustrated in fig. 26;
a flowchart of another method for acquiring action instructions according to an embodiment is illustrated in FIG. 27;
Another flowchart of a display method of a singing interface in accordance with an embodiment is exemplarily shown in fig. 28;
a block diagram of a display device according to an embodiment is exemplarily shown in fig. 29.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of exemplary embodiments of the present application more apparent, the technical solutions of exemplary embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, not all embodiments.
All other embodiments, which can be made by a person skilled in the art without inventive effort, based on the exemplary embodiments shown in the present application are intended to fall within the scope of the present application. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure may be separately implemented as a complete solution.
It should be understood that the terms "first," "second," "third," and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this disclosure refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
The term "remote control" as used herein refers to a component of an electronic device (such as a display device as disclosed herein) that can be controlled wirelessly, typically over a relatively short distance. Typically, the electronic device is connected to the electronic device using infrared and/or Radio Frequency (RF) signals and/or bluetooth, and may also include functional modules such as WiFi, wireless USB, bluetooth, motion sensors, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in a general remote control device with a touch screen user interface.
The term "gesture" as used herein refers to a user behavior by which a user expresses an intended idea, action, purpose, and/or result through a change in hand shape or movement of a hand, etc.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1. As shown in fig. 1, a user may operate the display apparatus 200 through the mobile terminal 300 and the control device 100.
The control device 100 may be a remote controller that controls the display apparatus 200 wirelessly or by other wired means, using infrared protocol communication, Bluetooth protocol communication, or other short-distance communication. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power key on the remote controller to control the functions of the display device 200.
In some embodiments, mobile terminals, tablet computers, notebook computers, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device. The application program, by configuration, can provide various controls to the user in an intuitive User Interface (UI) on a screen associated with the smart device.
By way of example, both the mobile terminal 300 and the display device 200 may install software applications, implementing connection and communication through a network communication protocol so as to achieve one-to-one control operation and data communication. For example, a control command protocol can be established between the mobile terminal 300 and the display device 200, a remote-control keyboard can be synchronized to the mobile terminal 300, and the function of controlling the display device 200 can be implemented by controlling the user interface on the mobile terminal 300. The audio/video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, achieving a synchronized display function.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may be permitted to make communication connections via a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. By way of example, display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The servers 400 may be one or more groups, and may be one or more types of servers. Other web service content such as video on demand and advertising services are provided through the server 400.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
In addition to the broadcast receiving television function, the display device 200 may additionally provide an intelligent network television function with computer support. Examples include web TV, smart TV, Internet Protocol TV (IPTV), and the like.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is illustrated in fig. 2. As shown in fig. 2, the display device 200 includes a controller 210, a modem 220, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
A display 280 for receiving image signals from the video processor 260-1 and for displaying video content and images and components of the menu manipulation interface. The display 280 includes a display screen assembly for presenting pictures, and a drive assembly for driving the display of images.
The communication interface 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communication interface 230 may be a Wifi module 231, a bluetooth module 232, a wired ethernet module 233, or other network communication protocol chip or near field communication protocol chip, and an infrared receiver (not shown in the figure).
The display device 200 may establish control signal and data signal transmission and reception with an external control device or a content providing device through the communication interface 230. And an infrared receiver, which is an interface device for receiving infrared control signals of the control device 100 (such as an infrared remote controller).
The detector 240 is a component that the display device 200 uses to collect signals from the external environment or to interact with the outside. The detector 240 includes a light receiver 242, a sensor for collecting the intensity of ambient light, by which display parameters can be adaptively changed.
The image collector 241, such as a camera or video camera, can be used to collect external environment scenes, to collect user attributes or gestures for interaction, to adaptively change display parameters, and to recognize user gestures so as to enable interaction with the user.
The input/output interface 250 is used for data transmission between the display device 200, under control of the controller 210, and other external devices, such as receiving video signals, audio signals, or command instructions from an external device.
The input/output interface 250 may include, but is not limited to, the following: any one or more of a High Definition Multimedia Interface (HDMI) interface 251, an analog or data high definition component input (component) interface 253, a composite video input (AV) interface 252, a USB input interface 254, an RGB port (not shown), and the like may be used.
In other exemplary embodiments, the input/output interface 250 may also form a composite input/output interface from the plurality of interfaces described above.
The video processor 260-1 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to the standard codec protocol of the input signal, to obtain a signal that can be directly displayed or played on the display device 200.
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the external audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like, to obtain a sound signal that can be played in a speaker.
In other exemplary embodiments, video processor 260-1 may include one or more chip components. The audio processor 260-2 may also include one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or integrated together in one or more chips with the controller 210.
The audio output 270 receives the sound signal output by the audio processor 260-2 under the control of the controller 210. Besides the speaker 272 carried by the display device 200 itself, it includes an external sound output terminal 274 that can output to a sound-generating device of an external device, such as an external sound interface or an earphone interface.
A user input interface for receiving an input signal of a user and then transmitting the received user input signal to the controller 210. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
By way of example, a user inputs a user command through the remote controller 100 or the mobile terminal 300, and the display device 200 responds to the user input through the controller 210.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
The controller 210 controls the operation of the display device 200 and responds to the user's operations through various software control programs stored on the memory 290.
As shown in fig. 2, the controller 210 includes RAM213 and ROM214, and a graphics processor 216, processor 212, communication interfaces, such as: first interface 218-1 through nth interfaces 218-n, and a communication bus. The RAM213 and the ROM214 are connected to the graphics processor 216, the processor 212, and the communication interface 218 via buses.
The controller 210 may control the overall operation of the display apparatus 200. For example, in response to receiving a user command to select a UI object displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation of connecting to a hyperlink page, a document, an image, or the like, or executing an operation of a program corresponding to the icon. The user command for selecting the UI object may be an input command through various input means (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to a voice uttered by the user.
A block diagram of the configuration of the control device 100 according to an exemplary embodiment is illustrated in fig. 3. As shown in fig. 3, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control device 100 is configured to control the display device 200: it may receive a user's input operation instruction and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, acting as an intermediary between the user and the display device 200. For example, when the user operates the channel up/down keys on the control apparatus 100, the display apparatus 200 responds to the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications for controlling the display apparatus 200 according to user's needs.
In some embodiments, as shown in fig. 1, a mobile terminal 300 or other intelligent electronic device may function similarly to the control device 100 after installing an application that manipulates the display device 200. For example, the user may implement the functions of the physical keys of the control device 100 through function keys or virtual buttons of a graphical user interface installed on the mobile terminal 300 or other intelligent electronic device.
The controller 110 includes a processor 112, RAM 113 and ROM 114, a communication interface 130, and a communication bus. The controller 110 is used to control the running and operation of the control device 100, the communication collaboration among its internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display device 200. The communication interface 130 may include at least one of a WiFi chip, a bluetooth module, an NFC module, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touchpad 142, a sensor 143, keys 144, and other input interfaces. Such as: the user can implement a user instruction input function through actions such as voice, touch, gesture, press, and the like, and the input interface converts a received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the corresponding instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display device 200. In some embodiments, an infrared interface or a radio frequency interface may be used. For example, when an infrared signal interface is used, the user input instruction is converted into an infrared control signal according to an infrared control protocol and sent to the display device 200 through the infrared sending module. For another example, when a radio frequency signal interface is used, the user input instruction is converted into a digital signal, modulated according to a radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmission terminal.
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an output interface. For example, the control device 100 is provided with a communication interface 130 such as a WiFi, Bluetooth, or NFC module, which may encode and send the user input instruction to the display device 200 through the WiFi protocol, Bluetooth protocol, or NFC protocol.
A schematic diagram of the functional configuration of the display device 200 according to an exemplary embodiment is illustrated in fig. 4. As shown in fig. 4, the memory 290 is used to store an operating system, application programs, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. Memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically used for storing an operation program for driving the controller 210 in the display device 200, and storing various application programs built in the display device 200, various application programs downloaded by a user from an external device, various graphical user interfaces related to the application, various objects related to the graphical user interfaces, user data information, and various internal data supporting the application. The memory 290 is used to store system software such as OS kernel, middleware and applications, and to store input video data and audio data, and other user data.
Memory 290 is specifically used to store drivers and related data for the video processor 260-1 and audio processor 260-2, the display 280, the communication interface 230, the modem 220, the detector 240, the input/output interface 250, and the like.
In some embodiments, memory 290 may store software and/or programs, the software programs used to represent an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, APIs, or application programs), and the kernel may provide interfaces to allow the middleware and APIs, or applications to access the controller to implement control or management of system resources.
By way of example, the memory 290 includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 executes various software programs in the memory 290 such as: broadcast television signal receiving and demodulating functions, television channel selection control functions, volume selection control functions, image control functions, display control functions, audio control functions, external instruction recognition functions, communication control functions, optical signal receiving functions, power control functions, software control platforms supporting various functions, browser functions and other applications.
A block diagram of the configuration of the software system in the display device 200 according to an exemplary embodiment is illustrated in fig. 5.
As shown in FIG. 5, operating system 2911, which includes executing operating software for handling various basic system services and for performing hardware-related tasks, acts as a medium for data processing completed between applications and hardware components. In some embodiments, portions of the operating system kernel may contain a series of software to manage display device hardware resources and to serve other programs or software code.
In other embodiments, portions of the operating system kernel may contain one or more device drivers, which may be a set of software code in the operating system that helps operate or control the devices or hardware associated with the display device. The driver may contain code to operate video, audio and/or other multimedia components. Examples include a display screen, camera, flash, wiFi, and audio drivers.
Wherein, accessibility module 2911-1 is configured to modify or access an application program to realize accessibility of the application program and operability of display content thereof.
The communication module 2911-2 is used for connecting with other peripheral devices via related communication interfaces and communication networks.
User interface module 2911-3 is configured to provide an object for displaying a user interface, so that the user interface can be accessed by each application program, and user operability can be achieved.
Control applications 2911-4 are used for controllable process management, including runtime applications, and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application 2912. In some embodiments, it is implemented both within the operating system 2911 and within the application 2912, for listening to various user input events, and it executes one or more sets of predefined operation handlers in response to the recognition results of various types of events or sub-events.
The event monitoring module 2914-1 is configured to monitor a user input interface to input an event or a sub-event.
The event recognition module 2914-2 is configured with definitions of various events for the various user input interfaces, recognizes the input events or sub-events, and dispatches them to the processes that execute the corresponding one or more sets of handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200 or an input from an external control device (e.g., the control device 100), such as sub-events of voice input, gesture input through gesture recognition, or remote-control key instruction input from a control device. For example, one or more sub-events on the remote control may take a variety of forms, including but not limited to one or a combination of pressing the up/down/left/right keys, the OK key, or a key long-press, as well as operations of non-physical keys, such as move, hold, and release.
Interface layout manager 2913 directly or indirectly receives user input events or sub-events from event delivery system 2914 for updating the layout of the user interface, including but not limited to the location of controls or sub-controls in the interface, and various execution operations associated with the interface layout, such as the size or location of the container, the hierarchy, etc.
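As an illustration of how such an event transmission system might be organized, the sketch below registers predefined handlers and dispatches monitored input events to them. All names (EventTransmissionSystem, InputEvent, the sample event type) are assumptions, not identifiers from the described modules.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: a monitoring side collects input events and a recognition side
// dispatches them to registered operation handlers.
public class EventTransmissionSystem {

    public record InputEvent(String type, String payload) {}

    private final List<Consumer<InputEvent>> handlers = new ArrayList<>();

    // Register a predefined operation handler that responds to recognized
    // events or sub-events.
    public void register(Consumer<InputEvent> handler) {
        handlers.add(handler);
    }

    // A user input event (remote-control key, voice, gesture, ...) arrives
    // from the user input interface and is passed to each handler.
    public void onUserInput(InputEvent event) {
        for (Consumer<InputEvent> handler : handlers) {
            handler.accept(event);
        }
    }

    public static void main(String[] args) {
        EventTransmissionSystem system = new EventTransmissionSystem();
        system.register(e -> System.out.println("handling " + e.type()));
        system.onUserInput(new InputEvent("KEY_OK", "select"));
    }
}
```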
As shown in fig. 6, the application layer 2912 contains various applications that may also be executed on the display device 200. The application may include, but is not limited to, one or more applications, and the display interface may display icons for the applications, such as: live television application icons, video on demand application icons, media center application icons, application center icons, game application icons, and the like.
Some embodiments of the present application provide a method for displaying a singing interface, applied to a display device, the method including:
receiving a user's selection of a song;
in response to the user's selection, sending a singing theme display request corresponding to the song to a server, where the request includes a song identifier representing the song, the song identifier is used by the server to determine a singing theme, and different singing themes correspond to different songs;
In some embodiments, the display method of the singing interface provided by the present application is applicable not only to the dual-chip display device 200; the display device 200 may also be a non-dual-chip display device.
In some embodiments, the K song process is described in detail in connection with a display device in which the present solution may be implemented. Figs. 8 to 15 exemplarily show the user interface changes during the mic-connected chorus process.
When the display shows the user interface of fig. 8, the user can enter the application center interface of fig. 9 by operating the control device 100 (e.g., remote control 100A). For example, the user may move the position of the selector in the view display area through input on the control device; when the selector selects and confirms an item for entering the application center (e.g., "my application" in fig. 8), the display shows the application center interface of fig. 9. As shown in fig. 9, icons corresponding to the respective applications, such as application 1, application 2, ..., and the "K song application", are displayed in the application center interface.
In some embodiments, as shown in fig. 7, a K song control may also be displayed on the display interface of the smart television. When the focus (selector) moves to the K song control, the display interface corresponding to the K song control is displayed, presenting several options from which the user can select a K song mode as needed. Different display pages are entered according to the different K song modes; for example, when the singer song-requesting control is selected, a list of singers appears. The user clicks a singer through the remote controller, and the page enters the song list page of songs sung by that singer. After the user clicks the song to be sung, the page jumps to the singing interface displaying the singing theme.
Next, in the user interface shown in fig. 9, when the user operates the control device to start the "K song application", the controller may start the "K song application" in response to the user instruction, and present the application interface of the "K song application" shown in fig. 10 on the display.
In some embodiments, the K song application does not act as a separate application but as part of the application shown in fig. 8: in addition to titles such as "movie", "education", "game", and "application", the TAB column of the interactive interface includes a "K song" title. The user may enter the corresponding title interface by moving the focus to a different title; for example, after moving the focus to the "K song" title, the user enters the K song interface where song resources are presented.
In some embodiments, the interface shown in fig. 10 may be an implementation of the application interface or the K song interface, in which a song list is presented; from it, the user may select a song to sing by operating the control device. The interfaces shown in figs. 11 and 12 are other possible implementations of the application interface or the K song interface.
In some embodiments, the interfaces shown in figs. 11 and 12 may provide a category theme control (e.g., "Fantasy K Song") on a top-level interface, and clicking it takes the user to the corresponding song list interface.
Fig. 13 is a user interface (mode selection interface) entered after a song is selected from the song list shown in any of the interfaces of figs. 10-12. The user interface displays items corresponding to the selected song, specifically including "normal mode singing", "real-time mic-connected singing", "AR fun singing", and "I want to listen to songs". The user selects among the different mode controls by moving the focus; when the display shows the user interface of fig. 13, the user can select "real-time mic-connected singing" by operating the control device to input an instruction for initiating mic connection.
Fig. 14 is a schematic diagram of another mode selection interface, which may be implemented by creating a floating layer on the upper-level interface.
Fig. 15 is the user interface entered after selecting "real-time mic-connected singing" in the user interfaces shown in fig. 13 or 14, specifically the user interface before mic connection is initiated. The user interface illustratively shows a plurality of items (controls) corresponding to "real-time mic-connected singing", specifically including a description of the steps for initiating mic connection, at least one functional icon corresponding to a matching manner (illustratively in the form of a control), such as "quick match" and "same-city quick match", a friend list including at least one online friend, and an item for inviting one of the online friends. When the display shows the user interface of fig. 15, the user may invite one of the online friends in the friend list to sing together by operating an item in the interface through the control device.
In some embodiments, the user may choose to initiate a chorus request toward an online friend, where the request includes an identifier of the friend and an identifier of the song, so that after creating a chorus room, the server sends an invitation request carrying the song identifier and/or the room identifier to the other display device corresponding to the friend identifier; after receiving the request, the other device joins the chorus room service to complete the chorus of the song.
In some embodiments, instead of the friend list control generated from the address book, the user may select the quick match control. The display device then initiates a chorus request that includes a quick-match identifier and the song identifier, so that after creating a chorus room, the server sends an invitation request carrying the song identifier and/or the room identifier to another display device corresponding to any online user; after receiving the request, that device joins the chorus room service to complete the chorus of the song.
In some embodiments, instead of the friend list control generated from the address book, the user may select the same-city match control. The display device then initiates a chorus request that includes a same-city-match identifier and the song identifier, so that after creating a chorus room, the server sends an invitation request carrying the song identifier and/or the room identifier to another display device corresponding to any user whose IP address or geographic location belongs to the same city area; after receiving the request, that device joins the chorus room service to complete the chorus of the song. The three variants are sketched in code after this paragraph.
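The three request variants above differ only in the identifiers they carry. The following is a hedged sketch of one possible request structure; the field names, enum values, and factory methods are invented, since the embodiments only specify which identifiers each request contains.

```java
// Sketch of the mic-connected chorus request variants: friend invitation,
// quick match, and same-city match. All names are illustrative.
public class ChorusRequest {

    enum MatchMode { FRIEND_INVITE, QUICK_MATCH, SAME_CITY_MATCH }

    final MatchMode mode;
    final String songId;
    final String friendId; // only present for FRIEND_INVITE

    ChorusRequest(MatchMode mode, String songId, String friendId) {
        this.mode = mode;
        this.songId = songId;
        this.friendId = friendId;
    }

    // Request carrying the friend identifier and the song identifier.
    static ChorusRequest inviteFriend(String friendId, String songId) {
        return new ChorusRequest(MatchMode.FRIEND_INVITE, songId, friendId);
    }

    // Request carrying a quick-match identifier and the song identifier.
    static ChorusRequest quickMatch(String songId) {
        return new ChorusRequest(MatchMode.QUICK_MATCH, songId, null);
    }

    // Request carrying a same-city-match identifier and the song identifier;
    // the server matches users whose IP address or geographic location
    // belongs to the same city area.
    static ChorusRequest sameCityMatch(String songId) {
        return new ChorusRequest(MatchMode.SAME_CITY_MATCH, songId, null);
    }

    public static void main(String[] args) {
        ChorusRequest request = ChorusRequest.quickMatch("001");
        System.out.println(request.mode + " for song " + request.songId);
    }
}
```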
In some embodiments, "any user" refers to a user who is using the karaoke application or browsing the karaoke interface. The server receives a status identifier uploaded by the display device to distinguish users who are using the karaoke application or browsing the karaoke interface from other users, thereby preventing same-city matching and quick matching from disturbing other users.
In some embodiments, "any user" is a user who has requested same-city matching and/or quick matching on a display device but has not selected a song; after receiving a same-city matching request and/or quick matching request that does not carry a song identifier, the server may treat that user as "any user" for matching.
In some embodiments, the other display device of "any user" is a display device that has been granted same-city match and/or quick match permissions.
In some embodiments, when a user sings a song on the smart television, an image representing the singing theme can be displayed on the singing interface of the smart television, which increases the fun of singing along on the smart television. Figs. 16 and 17 illustrate the display effects of different singing themes.
As shown in fig. 7, when the user moves the focus (selector) to the K song control on the display interface of the smart television, the display interface corresponding to the K song control is displayed, presenting several options from which the user can select a K song mode as needed. Different display pages are entered according to the different K song modes; for example, when the singer song-requesting control is selected, a list of singers appears. The user clicks a singer through the remote controller, and the page enters the song list page of songs sung by that singer (as shown in fig. 12). After the user clicks the song to be sung, the page jumps to the singing interface displaying the singing theme, as shown in figs. 16-18. In some embodiments, "chorus" refers not only to multi-person chorus but also to solo singing.
In some embodiments, the display of the image corresponding to the singing theme may be triggered by other methods in the related art, as long as a singing theme display request including the song identifier is triggered in the application.
In some embodiments, the user may select a song to be sung on the smart television, and the smart television sends a singing theme display request carrying the song identifier to the server. The song identifier is the unique identifier of a song. For example, the song identifier of song A may be 001 and that of song B may be 002; the specific representation is not limited here.
Some embodiments show a method for displaying a singing interface, which is applied to a server, as shown in fig. 19, and the method includes:
step S100, receiving a singing theme display request sent by a display device when a song is selected, wherein the singing theme display request comprises a song identifier for representing the song.
In some embodiments, the display device sends a singing theme display request including a song identifier to the server when the song is selected, and the server determines a singing theme, lyrics, accompaniment sounds, etc. corresponding to the song according to the song identifier.
Step S200, determining a singing theme according to the song identifier, where different singing themes correspond to different songs; and sending the singing theme to the display device, so that the display device displays an image corresponding to the singing theme on a singing interface that plays the song.
It should be noted that the singing theme includes at least one of a background, a special effect, and an action combination. The background refers to the background image (still or moving) of the singing interface. The special effect refers to special-effect scenes, such as falling snowflakes, petals, meteor rain, or rain, displayed on a floating layer of the singing interface by loading corresponding files during the singing of a song, where the floating layer lies above the video layer that displays the image corresponding to the background. The action combination refers to the combination of all character actions performed by a virtual character during a song. When a user sings a song on the smart television, the singing interface can use the virtual character to perform character actions and thus enrich the singing interface; the character actions may be various decomposed dance actions such as spinning, making a heart sign, hugging, waving, or jumping. The smart television can generate action frames according to the received action instructions; after superimposing the action frames on the background image, or superimposing the lyrics and the action frames on the background image, the generated video layer data is sent to the display, so that the background image and the character actions are displayed on the video layer and the lyrics are displayed on the floating layer.
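As a rough data model for the singing theme just described, the sketch below groups a background, a special effect, and an action combination into one object; the field names and sample resource names are assumptions.

```java
import java.util.List;

// Sketch of a singing theme: a background image (still or moving), a
// special-effect scene rendered on the floating layer, and an ordered
// action combination for the virtual character. Names are illustrative.
public class SingingTheme {

    final String backgroundImage;          // shown on the video layer
    final String specialEffect;            // e.g. snowflakes, meteor rain
    final List<String> actionCombination;  // decomposed character actions

    SingingTheme(String backgroundImage, String specialEffect,
                 List<String> actionCombination) {
        this.backgroundImage = backgroundImage;
        this.specialEffect = specialEffect;
        this.actionCombination = actionCombination;
    }

    public static void main(String[] args) {
        SingingTheme theme = new SingingTheme(
                "bg_starry_night.png",
                "effect_meteor_rain",
                List.of("spin", "heart_sign", "wave", "jump"));
        System.out.println("Background: " + theme.backgroundImage);
    }
}
```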
In some embodiments, the singing theme does not include the presentation of lyrics. After the singing theme is determined, the display color of the lyrics is determined according to the background color of the singing theme so that the lyric content can be visually recognized, avoiding the trouble caused to the user when the lyric color is close to that of the background image. In some implementations, a black background image may correspond to a white lyric color, a pink background image may correspond to a blue lyric color, and so on.
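One possible way to implement this color selection is to compare the perceived luminance of the background against a threshold, as in the sketch below. The luminance formula and the threshold are assumptions for illustration; the embodiments only give the black/white and pink/blue examples.

```java
import java.awt.Color;

// Sketch: choose a lyric color that contrasts with the theme's background
// color so the lyric content stays legible.
public class LyricColorPicker {

    static Color lyricColorFor(Color background) {
        // Perceived luminance (ITU-R BT.601 weights).
        double luminance = 0.299 * background.getRed()
                + 0.587 * background.getGreen()
                + 0.114 * background.getBlue();
        // Dark backgrounds (e.g. black) get white lyrics; light
        // backgrounds (e.g. pink) get a darker color such as blue.
        return luminance < 128 ? Color.WHITE : Color.BLUE;
    }

    public static void main(String[] args) {
        System.out.println(lyricColorFor(Color.BLACK)); // white lyrics
        System.out.println(lyricColorFor(Color.PINK));  // blue lyrics
    }
}
```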
In some embodiments, the singing theme display request further includes a unique identifier of the display device, and after determining the singing theme corresponding to the song, the server sends the singing theme to the display device according to the unique identifier. The unique identifier may contain at least one of the ID of the display device, its MAC address, an account name, and the like.
Some embodiments also provide a display method of a singing interface, applied to a display device, the method further including: receiving the singing theme sent by the server, and displaying an image corresponding to the singing theme on a singing interface that plays the song. In some embodiments, decoding the singing theme sent by the server refers to demodulating it according to the coding manner in which the server sent it.
In some embodiments, the song identifier is also used by the server to determine the lyrics and audio data corresponding to the song.
The display device receives the lyrics and audio data corresponding to the song sent by the server, displays the lyrics over the image corresponding to the singing theme, and controls the speaker to output sound according to the audio data, where the video image is used to display at least one of the background, special effect, and action combination contained in the singing theme. Most songs sung on the smart television have corresponding lyrics, while some songs have none. The lyrics and the corresponding audio data (such as the accompaniment sound, which refers to all audio data in a song and may include background sounds from the recording and other later-added sounds) may be stored locally or downloaded from the server, and the lyrics are displayed over the image corresponding to the singing theme so that the user can clearly see them while singing.
In some embodiments, the display device includes a controller, a display screen, and a speaker, and receiving the singing theme sent by the server includes:
the controller receiving the singing theme and the lyrics corresponding to the song, where the lyrics corresponding to the song are determined by the server according to the song identifier.
Displaying the image corresponding to the singing theme on the singing interface that plays the song includes the following steps:
To display the lyrics over the image corresponding to the singing theme, one optional way is: the controller draws display layers, including a video layer and a floating layer, loads the image corresponding to the singing theme on the video layer, and loads the lyrics on the floating layer;
the controller superimposes the video layer and the floating layer to generate video data and sends it to the display screen, where the data sent to the display screen may be YUV data and/or RGB data. The controller's processing of the video data may include parsing the data sent by the server, superimposing the parsed data with locally generated data to produce the video data, and then sending the video data to the display in YUV and/or RGB format, where the display controls the RGB pixels to achieve the final interface display;
and the display screen displays the singing interface corresponding to the video data.
Another implementation is to directly superimpose the lyrics on the image corresponding to the singing theme and load the superimposed data into the video layer for display.
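A compact sketch of the layer superposition described above follows, assuming a frame is composed in software from a theme image (video layer) and a lyric line (floating layer) before being handed to the display as RGB data; the drawing details are illustrative and not taken from the embodiments.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch: superimpose the floating layer (lyrics) on the video layer
// (theme image) and hand the resulting RGB frame to the display screen.
public class SingingFrameComposer {

    static BufferedImage compose(BufferedImage themeImage, String lyricLine) {
        BufferedImage frame = new BufferedImage(
                themeImage.getWidth(), themeImage.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g = frame.createGraphics();
        // Video layer: the image corresponding to the singing theme.
        g.drawImage(themeImage, 0, 0, null);
        // Floating layer: the current lyric line drawn on top.
        g.drawString(lyricLine, 40, frame.getHeight() - 60);
        g.dispose();
        return frame; // RGB pixel data sent to the display screen
    }

    public static void main(String[] args) {
        BufferedImage bg = new BufferedImage(1280, 720,
                BufferedImage.TYPE_INT_RGB);
        BufferedImage frame = compose(bg, "sample lyric line");
        System.out.println(frame.getWidth() + "x" + frame.getHeight());
    }
}
```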
In some embodiments, the controller is further configured to receive audio data determined by the server according to the song identification;
the controller decodes the audio data and sends the decoded audio data to a loudspeaker so as to output audio.
In some embodiments, the server may send the background (or special effect, action combination, or theme) directly to the display device in the form of data packets, which the display device parses in order to display the background (or special effect, action combination, or theme).
In some embodiments, the singing theme is represented by a singing theme code that includes at least one of a background code, a special-effect code, and a plurality of character action code sets making up an action combination. A code is identification data standing for a background (or special effect, action combination, or theme); transmitting the identification data (the code) instead of the resource itself reduces the overall amount of data transmitted. The terminal stores the backgrounds (and/or special effects and/or action combinations and/or themes), and the corresponding resource can be retrieved from the mapping relation between codes and backgrounds (or special effects, action combinations, or themes).
In some embodiments, if the singing theme is represented by a singing theme code, the server transmits the code to the display device, which receives it. The display device then displays the image corresponding to the singing theme on the singing interface that plays the song, according to the theme code.
In some embodiments, the singing theme code includes at least one of a background code for determining the background image loaded on the video layer, a special-effect code for determining the special-effect image loaded on the suspension layer, and a plurality of character action code sets constituting an action combination, which determine the actions in the image frames of the virtual character on the video layer at different times.
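As an illustration, here is a minimal Python sketch of how a display device might resolve a received singing theme code into locally stored resources via the code-to-resource mapping; the code values and resource names are assumptions for the example, not values from this application.

```python
from dataclasses import dataclass, field

# Local code-to-resource tables stored on the terminal (illustrative contents).
LOCAL_BACKGROUNDS = {"bg01": "stage_blue.png", "bg02": "starry_sky.png"}
LOCAL_EFFECTS = {"fx01": "falling_snow", "fx02": "spotlights"}
LOCAL_ACTIONS = {"act1": "wave", "act5": "spin", "act6": "clap"}

@dataclass
class ThemeCode:
    background_code: str
    effect_code: str
    action_codes: list[str] = field(default_factory=list)  # action combination

def resolve_theme(code: ThemeCode) -> dict:
    # Each code maps to a stored resource; only the small codes travel over
    # the network, not the resources themselves.
    return {
        "background": LOCAL_BACKGROUNDS[code.background_code],
        "effect": LOCAL_EFFECTS[code.effect_code],
        "actions": [LOCAL_ACTIONS[c] for c in code.action_codes],
    }

theme = resolve_theme(ThemeCode("bg01", "fx01", ["act6", "act5", "act1"]))
```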
In some embodiments, the display device determines whether all theme resources corresponding to the singing theme code are stored in the display device, where the theme resources include a background, a special effect, and a character action.
If all the theme resources corresponding to the singing theme code are stored in the display device, the singing theme is displayed using those stored resources.
In some embodiments, some theme resources may be preset in the display device, or a singing theme has been displayed on the device before so its resources have already been downloaded. A theme resource corresponding to the singing theme code that is already stored in the display device therefore does not need to be downloaded from the server again; if it is not stored, a download is requested from the server.
In some embodiments, because the theme resources of a singing theme occupy considerable capacity, the server does not send the singing theme itself to the display device but sends the singing theme code, which is far smaller than the theme. If the display device already stores all the theme resources corresponding to the code, having the server send the full singing theme would waste resources. Here a code is a value that stands for a given resource; the storage space and/or transmission traffic it occupies is far smaller than that of the resource itself.
Of course, in some embodiments, it is also within the scope of the present application to send the singing theme including the theme resource directly to the display device after the server determines the singing theme.
If not all the theme resources corresponding to the singing theme code are stored in the display device, a resource download request for the missing resources is sent to the server. The server receives the request and sends the corresponding theme resources to the display device, which then displays the singing theme.
In some embodiments, the server determines a singing theme and sends the singing theme code and theme resources to the display device.
In some embodiments, the server sends the singing theme code and the theme resources to the display device together, and the display device determines whether it already stores all the theme resources corresponding to the code. If it does, it interrupts reception of the theme resources; if not, it continues receiving the resources sent by the server. Sending the code and the resources simultaneously thus saves the download time that would otherwise be spent issuing a separate resource download request to the server.
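A minimal Python sketch of the check-locally-then-download flow: the device compares the resources named by the theme code against its local store and requests only what is missing. request_download is a hypothetical stand-in for the device's actual request to the server, and the resource codes are illustrative.

```python
def request_download(code: str) -> None:
    # Hypothetical placeholder for the resource download request to the server.
    print(f"requesting theme resource {code} from server")

def prepare_theme(theme_codes: set[str], local_store: set[str]) -> None:
    missing = theme_codes - local_store
    if not missing:
        # All theme resources are already cached: display immediately,
        # no request to the server is needed.
        return
    for code in sorted(missing):
        request_download(code)  # fetch only the resources not stored locally
        local_store.add(code)

prepare_theme({"bg01", "fx01", "act6"}, local_store={"bg01"})
```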
In one possible implementation, as shown in fig. 20, the step of determining, by the server, a singing theme according to song identification includes:
Step S201: determine whether the song identification is stored in a historical singing record, where the historical singing record stores the song identifications of songs the user has sung in the past;
In one possible implementation, if the song identification is stored in the historical singing record kept on the server, the historical singing theme corresponding to the song identification is determined to be the singing theme, where the historical singing theme is the theme that was matched when the user sang the song before. Determining the singing theme from the user's historical behavior makes it better fit the user's habits.
In some embodiments, the server stores in the historical singing record the song identifications of the songs the user has sung. When a singing theme display request is received, the song identification in the request is matched against the identifications stored in the record. If the identification is found there, the user has sung the song before, which means the song was already matched to a singing theme at that time: either the most suitable theme was screened out automatically or the user set a theme for the song. Therefore, if the song identification is stored in the historical singing record, the historical singing theme is determined to be the singing theme.
The historical singing record includes a user account or smart-television ID, a song identification, and the singing theme corresponding to that identification. The user account uniquely identifies a user, and the smart-television ID uniquely identifies a smart television. The singing theme may be represented by its singing theme code.
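A minimal sketch of the history lookup in step S201, assuming the historical singing record is keyed by user account (or smart-television ID) plus song identification; the record's contents are illustrative.

```python
# Historical singing record: (user account or smart-TV ID, song id) -> theme.
history = {
    ("user001", "song042"): "theme_ballad_night",  # theme matched on a past singing
}

def theme_from_history(user_or_tv_id: str, song_id: str) -> str | None:
    # Reuse the historical singing theme if the song identification is stored
    # in the record; None means step S202 (tag matching) runs instead.
    return history.get((user_or_tv_id, song_id))
```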
Step S202: if the song identification is not stored in the historical singing record, determine the tag corresponding to the song identification;
In some embodiments, the corresponding tags may be determined directly from the song identification without querying the historical singing record.
Step S203: determine the singing theme according to the tag.
The tag includes at least one of a singer tag, a style tag, a mood tag, and a classification tag. In some embodiments, the style tag includes at least one of ballad, rock, jazz, pop, and the like; the mood tag includes at least one of anxious, happy, sad, calm, and the like; the classification tag includes at least one of movie, variety show, country, classical, and the like; and the singer tag includes singers such as Hua Chenyu, Zhou Jielun, Wang Fei, and so on. The specific contents of the singer, style, mood, and classification tags are not limited to those disclosed above.
In some embodiments, if the song identification is not stored in the historical singing record, the song has not yet been matched to a singing theme. To determine a singing theme better suited for display on the singing interface while the user sings the song, some embodiments use the tag corresponding to the song identification to determine the singing theme.
In one possible implementation, the tag includes a style tag, and determining the singing theme according to the tag and the mapping relation between tags and singing themes includes the following step:
determining the singing theme according to the style tag and the mapping relation between style tags and singing themes. The mapping relation is pre-stored in the server; it may take the form of a mapping table or a deep-learning model, and its specific implementation is not limited here as long as the mapping is achieved.
In some embodiments, a singing theme conforming to the style tag is selected according to the style tag. Each singing theme carries a tag, so in some embodiments the conforming singing theme can be determined directly from the style tag. When several singing themes conform to the style tag, one of them may be selected.
In one possible implementation, as shown in fig. 21, the tag includes a style tag;
determining the singing theme according to the tag includes the following steps:
Step S2031: determine the background of the singing theme according to the style tag and the mapping relation between style tags and singing backgrounds;
Step S2032: generate an action combination from the character actions within a preset ranking, ordered by number of uses, and the rhythm corresponding to the style tag;
To make the singing theme better match the user's preferences, in some embodiments the character actions the user has used are sorted by number of uses and the actions within a preset ranking are selected. The sort may run from most used to least used or the reverse: if the actions are sorted from most to least used, the preset ranking is the first few positions; if sorted from least to most used, it is the last few. Either way, the aim of the selection is to screen out the most-used character actions, since these reflect the user's preferences.
The number of uses of each background, character action, and special effect per user is shown in Table 1.
Table 1. Number of uses per user of each background, character action, and special effect.
In Table 1, the character actions used by user 001, arranged from most to least used, are: character action 6, character action 5, character action 1, character action 2, and character action 3. If, for example, the preset ranking is the first three positions, the character actions within the preset ranking are character action 6, character action 5, and character action 1. These actions are combined with the rhythm corresponding to the style to generate the action combination.
In some embodiments, the preset ranking may also be a popularity ranking, i.e., the ranking of the character actions with the highest numbers of uses after the actions are ordered by the use counts of all users.
In addition, the virtual character needs to perform its actions in time with the rhythm, so in some embodiments the action combination is generated by combining the character actions with the rhythm corresponding to the style tag.
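As an illustration of step S2032, a minimal Python sketch follows: the user's character actions are ranked by number of uses, the actions within a preset ranking are kept, and they are paired with the rhythm that the style tag maps to. The use counts (echoing the Table 1 example) and the style-to-rhythm table are assumptions.

```python
use_counts = {"action6": 51, "action5": 40, "action1": 33,
              "action2": 12, "action3": 4}          # per-user use counts
STYLE_RHYTHM = {"rock": 120, "ballad": 70}          # assumed BPM per style tag

def build_action_combination(style_tag: str, top_n: int = 3) -> dict:
    # Sort from most to least used and keep the first top_n positions.
    ranked = sorted(use_counts, key=use_counts.get, reverse=True)[:top_n]
    # Combine the selected character actions with the style's rhythm.
    return {"actions": ranked, "bpm": STYLE_RHYTHM[style_tag]}

combo = build_action_combination("rock")
# {'actions': ['action6', 'action5', 'action1'], 'bpm': 120}
```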
In some embodiments, the background is determined according to the style tag; in some embodiments, it may also be determined according to the user's preference: the user may set the background on the smart-television side, or it may be determined automatically from the ranking of background use counts. The automatic determination may take place on the smart-television side, or the server may determine the background in the manner provided by this embodiment after receiving the singing theme display request and then send it to the smart television for display.
Step S2033: generate the singing theme from the background and the action combination.
In some embodiments, the singing theme includes the action combination and the background. The singing theme may further include special effects; the tag may further include a mood tag, and the special effects may be determined from the mood tag.
In some embodiments, the mood tag is determined from the content of the song and may be sad, happy, melancholy, or quiet; when the mood tag is sad, the special effect may be falling snow. Special effects may also follow the user's preference: in some embodiments the user sets them, or they are determined from the ranking of special-effect use counts. In some embodiments, the singing theme may include only the background, only the action combination, or only the special effects; in other embodiments it may include the action combination and special effects, or the background and special effects, each determined in the manner described above.
After determining each component of the singing theme, the server issues the singing theme to the smart television.
In some embodiments, themes and the mapping relation between themes and songs may be preset for the songs available for singing and either downloaded to the display device in advance or obtained from the server in real time.
In one possible implementation, as shown in fig. 22, the step of determining, by the server, a singing theme according to the tag includes:
Step S2034: count the number of tags;
Step S2035: determine the candidate singing themes corresponding to each tag; a candidate singing theme includes at least one of a background, special effects, and a character action combination;
The step of counting the tags may be performed first, or the step of determining the candidate singing themes may be performed first; no order is required between steps S2034 and S2035, and the two steps may also be performed simultaneously.
Step S2036: in response to the number of tags being 1, determine one of the candidate singing themes as the singing theme;
In some embodiments, the number of tags corresponding to the song identification is counted. If the number of tags is 1 — for example, the tags include only a style tag — a candidate singing theme corresponding to the style tag is determined. In some embodiments, each tag corresponds to at least one candidate singing theme.
Step S2037: in response to the number of tags being greater than 1, determine the candidate singing theme matched by the most tags as the singing theme.
In one possible implementation, determining the candidate singing theme matched by the most tags as the singing theme includes:
if exactly one candidate singing theme is matched by the most tags, determining that candidate to be the singing theme;
For example, the song identification may correspond to 3 tags, in some embodiments tag A, tag B, and tag C. The number of candidate singing themes per tag is not fixed: for example, tag A corresponds to candidate themes A, B, and C; tag B corresponds to candidate theme A; and tag C corresponds to candidate themes A and B.
Candidate singing theme A corresponds to tag A, tag B, and tag C, so it is the candidate matched by the most tags and is finally determined to be the singing theme.
If more than one candidate singing theme is matched by the most tags, the singing theme is determined according to the weights of those candidates.
As another example, suppose tag A corresponds to candidate themes A and B; tag B corresponds to candidate themes B and C; and tag C corresponds to candidate themes C and A. Candidate themes A, B, and C each correspond to two tags, so all three are candidates matched by the most tags. The singing theme is then determined from the weights of these three candidates, and the determined theme is issued to the smart television.
In one possible implementation, the weight of each candidate singing theme matched by the most tags is determined as follows:
obtain the weights of all tags corresponding to that candidate singing theme;
add those tag weights together to obtain the candidate's weight.
In some embodiments, when more than one candidate singing theme is matched by the most tags, the weights of all tags corresponding to each candidate are used to determine the candidate's weight.
For example, suppose the candidates matched by the most tags are candidate themes A, B, and C. Candidate A corresponds to tags A and B, so the weights of tags A and B are summed as candidate A's weight; candidate B corresponds to tags B and C, whose weights are summed as candidate B's weight; and candidate C corresponds to tags C and A, whose weights are summed as candidate C's weight.
The candidate singing theme with the highest weight is selected as the singing theme. The tag weights may be set according to the user's preference or in other ways; they are not limited here.
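For illustration, a minimal Python sketch of steps S2034–S2037 together with the weight tie-break: each tag nominates its candidate themes, the theme matched by the most tags wins, and ties are broken by summing the weights of the tags that nominated each tied theme. Tag names, candidates, and weights are assumptions for the example.

```python
from collections import defaultdict

TAG_CANDIDATES = {"tagA": ["themeA", "themeB"],
                  "tagB": ["themeB", "themeC"],
                  "tagC": ["themeC", "themeA"]}
TAG_WEIGHTS = {"tagA": 0.5, "tagB": 0.3, "tagC": 0.2}

def pick_theme(tags: list[str]) -> str:
    nominating = defaultdict(list)  # theme -> tags that nominated it
    for tag in tags:
        for theme in TAG_CANDIDATES[tag]:
            nominating[theme].append(tag)
    best_count = max(len(ts) for ts in nominating.values())
    tied = [t for t, ts in nominating.items() if len(ts) == best_count]
    if len(tied) == 1:
        return tied[0]  # a unique most-matched candidate is the theme
    # Tie: sum the weights of all tags corresponding to each tied candidate.
    return max(tied, key=lambda t: sum(TAG_WEIGHTS[x] for x in nominating[t]))

print(pick_theme(["tagA", "tagB", "tagC"]))  # themeB (weight 0.5 + 0.3 = 0.8)
```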
In some embodiments, the theme includes at least one of a background, special effects, and an action combination, but not the lyrics; the server is further configured to determine the lyrics corresponding to the song from the song identification and send them to the display device, which displays the lyrics over the image corresponding to the singing theme. The image corresponding to the singing theme is the image produced by superimposing the background image and/or the character action frames and/or the special-effect image; it does not include the lyrics.
Some embodiments show a server comprising:
an acquisition unit configured to receive a singing theme display request sent by the display device when a song is selected, where the singing theme display request includes a song identifier characterizing the song;
a first sending unit configured to determine the singing theme according to the song identification, where different singing themes correspond to different songs, and to send the singing theme to the display device so that the display device displays the image corresponding to the singing theme on the singing interface.
Some embodiments show a display device comprising:
a communicator for communicating with the server;
a display for displaying an image and a user interface, and a selector in the user interface for indicating that an item is selected in the user interface;
a controller configured to:
receive a control signal from the user input interface indicating launch of a first application, and in response to the user input present the user interface of the first application on the display; receive, via the selector, a user input in the user interface indicating that the user selects a song to sing, and in response send the server, through the communicator, a request carrying the song identification of the selected song so that the server determines the accompaniment audio data, the singing background video image data, and the lyrics of the song to be sung according to the song identification; receive the accompaniment audio data and the singing background video image data sent by the server, where the accompaniment audio data corresponds to real-time lyrics; and present the singing background video image in the user interface of the first application, render the lyrics corresponding to the real-time accompaniment audio, and superimpose and composite the rendered lyrics over the singing background video image for display.
In some embodiments, the display device further includes an audio collector configured to collect audio data input by the user; the controller superimposes the audio data input by the user on the accompaniment audio data and outputs the result through the speaker.
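As a sketch of this superposition, assuming both streams are decoded to 16-bit PCM sample arrays of equal length and rate; a real device would mix in its audio pipeline, and the clipping here simply guards against overflow.

```python
import numpy as np

def mix(user_pcm: np.ndarray, accompaniment_pcm: np.ndarray) -> np.ndarray:
    # Sum in a wider type, then clamp back to the 16-bit sample range.
    mixed = user_pcm.astype(np.int32) + accompaniment_pcm.astype(np.int32)
    return np.clip(mixed, -32768, 32767).astype(np.int16)
```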
Some embodiments show a display device comprising:
a display;
a controller configured to:
receive a user's selection of a song; in response to the selection, send the server a singing theme display request corresponding to the song, where the request includes the song identification characterizing the song, the server uses the song identification to determine the singing theme, and different singing themes correspond to different songs; receive the singing theme sent by the server; and control the display to show the image corresponding to the singing theme on the singing interface that plays the song.
In some embodiments, receiving the singing theme sent by the server and controlling the display to show the image corresponding to the singing theme on the singing interface that plays the song includes the following steps:
receive the singing theme and the lyrics corresponding to the song, where the lyrics are determined by the server according to the song identification; draw a display layer comprising a video layer and a suspension layer, load the image corresponding to the singing theme on the video layer, and load the lyrics on the suspension layer; superimpose the video layer and the suspension layer to generate video data; and send the video data to the display screen to control it to show the singing interface corresponding to the video data.
In the above technical solutions, some embodiments show a display method, a display device, and a server for a singing interface. In some embodiments, a singing theme display request sent by the display device when a song is selected is received, where the request includes a song identifier characterizing the song; the singing theme is determined according to the song identification, with different singing themes corresponding to different songs; and the singing theme is sent to the display device, which displays the image corresponding to the singing theme on the singing interface that plays the song. In some embodiments, when a user sings a song on the smart television, the singing theme is displayed on the television's singing interface, enriching the interface and making singing on the smart television more engaging.
A flowchart of a method of displaying a singing interface according to an embodiment is exemplarily shown in fig. 23; a data flow diagram of the method is exemplarily shown in fig. 24. With the singing interface display method provided by the embodiments of this application, when the smart television is used for karaoke, scene actions can be matched automatically to different scenes so that the avatar moves accordingly, giving participants a sense of immersion in the scene. The method is applied to a display device, such as a smart television in which a 3D-avatar karaoke app is installed. Specifically, referring to figs. 23 and 24, the display method of the singing interface includes the following steps:
S1: when a song file is played, display the avatar and the lyric text obtained by parsing the song file.
The smart television starts playing a song file, which may be a song in MV form; the avatar's actions can follow different songs, different lyrics, and so on to present different poses. When the song file is played, the controller in the display device selects, in response to the triggered song control, the corresponding song file and the avatar corresponding to that song file for display in the user interface of the display device.
Specifically, before displaying the avatar and the lyric text obtained by parsing the song file, the method further includes:
S01: receive the user's selection of a song control and load the song file corresponding to that control.
S02: load a preset avatar model and a preset background image file, and parse the song file to obtain the lyric text.
The user triggers a song control through the remote controller to select a favorite song; after the selection, the controller loads the song file chosen by clicking the song control. Meanwhile, the controller loads the avatar model and the preset background image file: the avatar model stores different types of avatars used to present actions, and the background image file provides the background image for the song file, such as the stage style and effects shown in the user interface while the user sings.
The song file includes an audio file, a video file, and lyric text: the audio file is output by the display device's speaker, the video file is shown in the user interface, and the lyric text is displayed in the user interface as a lyric prompt while the user sings.
A flowchart of a display method according to an embodiment is exemplarily shown in fig. 25. After loading the avatar model and the background image file, the controller of the display device selects an avatar and a background image corresponding to the song file chosen by the user for display. In the present embodiment, referring to fig. 25, the avatar and the lyric text obtained by parsing the song file are displayed as follows:
S11: determine the avatar based on the preset avatar model and the background image based on the preset background image file, and display the background image and the avatar on a video layer.
S12: display the lyric text on a suspension layer according to the lyric text and a time axis, where the suspension layer lies above the video layer.
The user may choose one of the avatars in the preset avatar model as the avatar for the current karaoke session and one of the images in the preset background image file as its background image; these are associated with the song file selected by the user and displayed in the user interface.
For display, the background image and the avatar are placed on the same layer to generate the video layer. The lyric text and the time axis are displayed in the form of a suspension layer located above the video layer.
When the song file is played, the user interface of the display device shows the avatar, the background image, the lyric text, and so on; at this moment, the avatar performs no action and shows its default appearance, i.e., a static or initial pose preset by the system.
S2: detect a tag on the time axis of the song file, where the time axis controls the display of the lyric text.
To match the avatar's actions during karaoke to the playing scenes of the song, a series of action libraries can be provided for each song file; these libraries determine the richness of the avatar's actions.
For example, a greeting action can be set before the song begins; during singing, the right hand can hold the microphone near the mouth while the left hand gestures; an action interacting with the user can be set for the pause between verses; spinning and similar actions can be set at the climax; and a curtain-call (bow) action and the like can be set when the song ends.
To match actions automatically when the song file reaches a certain progress, the method of this embodiment adds corresponding tags in advance at time nodes of the song file's lyric text — that is, tags are added on the time axis — so that they can be matched against actions in the action library.
A tag is a flag bit that identifies an action instruction, added at a preset time node on the lyrics corresponding to the song media. Tags can be added to the time axis of each song file according to the user's preference or a system preset: a flag bit identifying an action is added at a preset time node of the time axis as a tag of the current song file. For example, when the song file has played for 50 s, a first tag is added, corresponding to the action "call in"; at 60 s, a second tag is added, corresponding to the action "shake the arm".
To display actions accurately so that they match the song file, the device detects in real time, while the song file plays, whether a tag exists at the current lyric playing progress, i.e., whether a flag bit identifying an action exists on the lyric time axis.
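A minimal Python sketch of this tag detection, assuming the tags are stored as flag bits keyed by time node (in seconds) on the lyric time axis; the 50 s and 60 s nodes follow the example above, and the tag strings reuse tag names mentioned elsewhere in the text.

```python
# Flag bits on the lyric time axis: time node (s) -> tag.
TIMELINE_TAGS = {50.0: "huishou1",   # first tag in the 50 s example
                 60.0: "bengtiao"}   # second tag in the 60 s example

def detect_tag(position_s: float, tolerance_s: float = 0.1) -> str | None:
    # Poll the time axis at the current playback position; None means no
    # flag bit exists at this progress.
    for node, tag in TIMELINE_TAGS.items():
        if abs(position_s - node) <= tolerance_s:
            return tag
    return None
```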
S3: in response to detecting the tag, acquire the action instruction corresponding to the tag.
If a tag exists at some time node on the lyric time axis while the song file plays, the controller responds to it by acquiring the corresponding action instruction. The action instruction includes an animation identifier and a music style: the animation identifier denotes the animation that realizes the action, and the music style denotes the style of the currently playing song file, such as a fast-tempo song, a slow-tempo song, a sad song, or a cheerful song.
With this display method of the singing interface, different animations can be invoked for different music styles to realize the avatar's actions, so that while singing the user can enter the atmosphere of the song through the avatar's actions shown in the user interface and become immersed in it, for a better experience.
A method flow diagram for retrieving action instructions in accordance with an embodiment is illustrated in fig. 26. In one possible embodiment, referring to fig. 26, in response to detecting the tag, acquiring the action instruction corresponding to the tag includes:
S311: in response to detecting the tag, access the action instruction library corresponding to the song style identifier according to the tag and the song style identifier, where the song style identifier is preset in the song file and different action instruction libraries correspond to different song style identifiers.
S312: receive the action instruction corresponding to the tag from that action instruction library.
The style of the song can be determined from the song file, yielding the song style identifier corresponding to the file. If a tag exists at some time node of the time axis while the song file plays, it is matched against the corresponding action instruction library, according to the tag and the song style identifier, to determine the action instruction corresponding to the tag. The action instruction controls the avatar's action, i.e., realizes the corresponding animation so that the avatar performs the corresponding movement.
Tags and action instructions form a mapping relation: one tag corresponds to one action instruction, but across different action types — that is, under different music-style scenes — the same action instruction may correspond to several tags. For example, the action instruction "wave" may carry the tag "huishou", a one-to-one correspondence. In a cheerful music scene, where the action occurs frequently, the tag may be "huishou1"; in a sad music scene, where it occurs less often, the tag may be "huishou2". The same instruction "wave" thus corresponds to multiple tags across different music scenes, while each tag still corresponds to exactly one action instruction — and those instructions may be the same one. Hence, once a tag is detected, the corresponding action instruction can be determined.
Different types of action instructions are stored in a preset action instruction library in the display device so as to accord with music scenes of different music styles and different melodies (rhythms). The preset action instruction library can be a complete action library, and comprises action instructions conforming to music scenes of different music styles and different melodies, namely, all types of action instructions; the preset action instruction library may also be composed of a plurality of sub-action instruction libraries, where each sub-action instruction library corresponds to one action type, for example, one action instruction library corresponds to one music style, and one action instruction library corresponds to one melody.
After the tag is detected, the controller invokes the action control module in the display device, and the module matches the tag against the action instruction library. When matching against the preset library to determine the instruction, a target action instruction library is first determined from the song style identifier; the target library stores the instructions matching that identifier. Once the target library is determined, the detected tag is matched against all instructions in it and, given the mapping relation between tags and instructions, the action instruction corresponding to the tag is determined within the target library.
For example, if the song style identifier denotes a cheerful music style, the action instruction library for the cheerful style is selected from the preset libraries as the target library; if the tag is "bengtiao", matching against the target library determines the action instruction "jump", which fits the cheerful style.
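The two-stage lookup — the song style identifier selects the target action instruction library, then the detected tag selects the instruction inside it — can be sketched as follows; the library contents are illustrative assumptions built from the tags named in the text.

```python
# One sub-library per song style identifier; tag -> action instruction.
ACTION_LIBRARIES = {
    "cheerful": {"bengtiao": "jump", "huishou1": "wave"},
    "sad": {"huishou2": "wave"},   # same instruction, different tag per scene
}

def lookup_instruction(style_id: str, tag: str) -> str | None:
    library = ACTION_LIBRARIES.get(style_id)  # target action instruction library
    if library is None:
        return None
    return library.get(tag)  # None: the tag is invalid for this style

print(lookup_instruction("cheerful", "bengtiao"))  # jump
```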
Another method flow diagram for fetching action instructions according to an embodiment is illustrated in FIG. 27. In another possible embodiment, referring to fig. 27, in response to detecting the tag, acquiring the action instruction corresponding to the tag includes:
S321: in response to detecting the tag, access the action instruction library corresponding to the rhythm identifier according to the tag and the rhythm identifier, where the rhythm identifier is preset in the song file and different action instruction libraries correspond to different rhythm identifiers.
S322: receive the action instruction corresponding to the tag from that action instruction library.
The song files may also include songs of different rhythms, such as fast-rhythm and slow-rhythm songs. In that case, the preset action instruction library in the display device contains action instructions for music scenes of different rhythms, with each rhythm corresponding to its own action instructions.
When the action instruction is matched with a preset action instruction library for determining the action instruction, a target action instruction library is determined according to the rhythm identification, and the action instruction matched with the rhythm identification is stored in the target action instruction library. After the target action instruction library is determined, matching with all action instructions in the target action instruction library according to the detected labels, and determining the action instructions corresponding to the labels in the target action instruction library in view of the mapping relation between the labels and the action instructions.
For example, if the rhythm identifier denotes a fast-rhythm music style, the action instruction library for fast-rhythm music is selected from the preset libraries as the target library; if the tag is "baishou", matching against the target library determines the action instruction "wave hand", which fits the fast-rhythm style.
Thus, with the method provided by this embodiment, an adapted action instruction library can be selected for different music styles and different rhythms, and the action instruction is then uniquely determined by detecting the tag on the time axis while the song file plays, realizing the avatar's actions and combining the avatar's animation with the song file.
S4: control the avatar's action according to the action instruction.
The action instruction carries the animation realization. After the instruction corresponding to the tag at a given time point on the time axis is determined during playback, the controller sends it to the action control module, which triggers the action display; the instruction controls the avatar to perform the corresponding action. That is, when the current tag matches an instruction in the action instruction library, the action control module triggers that instruction and the action is presented by the avatar on the display device's user interface.
At this time, the song file's video, the background image, the lyric text, and the avatar are displayed in the user interface simultaneously, and the avatar may present different action animations. The avatar presenting the action animation is combined with the background image, presenting different animations as the song file plays, thereby enriching the user interface.
However, if no target action instruction library can be selected from the preset libraries for the song style identifier or rhythm identifier — that is, the action instruction corresponding to the tag cannot be received and the current tag is invalid — the current operation is abandoned: the matching against the preset library ends, playback of the song file continues, and detection proceeds to the tag at the next time node.
Another flowchart of a display method of a singing interface according to an embodiment is exemplarily shown in fig. 28. If no tag is detected on the time axis while the song file plays, then to keep the avatar in motion, the method provided by this embodiment, referring to fig. 28, further includes:
S61: acquire a default action instruction in response to no tag being detected.
S62: control the avatar's action according to the default action instruction.
If no tag is detected at a given time node on the lyric time axis corresponding to the song file, playback continues. Since no tag was detected, the avatar cannot be driven by a matched action instruction; to preserve the richness of the user interface, a preset default action instruction is acquired instead.
When no tag is detected, the controller of the display device invokes a pre-stored default action instruction, which may control the avatar to perform a default action, e.g., swaying in place or shaking the arms in place. In this way, even when no tag is detected during playback, the avatar can still be controlled to present a default action rather than standing still, preserving the visual effect of the user interface.
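A short sketch of the S61/S62 fallback, reusing the hypothetical detect_tag and lookup_instruction helpers from the sketches above; the default action name is an assumption standing in for the pre-stored default instruction.

```python
DEFAULT_INSTRUCTION = "sway in place"  # assumed pre-stored default action

def instruction_for(position_s: float, style_id: str) -> str:
    tag = detect_tag(position_s)
    if tag is None:                    # no flag bit at this time node
        return DEFAULT_INSTRUCTION
    # An invalid tag (no instruction in the target library) also falls back.
    return lookup_instruction(style_id, tag) or DEFAULT_INSTRUCTION
```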
With the method provided by this embodiment, a chorus function can be developed within the social system of a social television, deepening the relationships established between users and enhancing user stickiness and activity. This enriches the ways the television's karaoke platform can be used: users can sing at home with whomever they want to sing with, drawing users closer to the television, making the social environment more comfortable and the television warmer, breaking the limitations of physical space, and opening up karaoke as a social scene.
According to the above technical solution, in the display method of the singing interface, when the song file is played, the avatar is displayed along with the lyric text obtained by parsing the song file; a tag on the song file's time axis is detected, where the time axis controls the display of the lyric text; in response to detecting the tag, the corresponding action instruction is acquired; and the avatar's action is controlled according to that instruction. Thus, during karaoke on a smart television, the method provided by the embodiments of this application can automatically match the corresponding action when the corresponding lyric position plays, according to the different scenes along the lyric time axis of the song file, and display the avatar accordingly, giving participants a sense of immersion in the scene, increasing interaction between users, and improving the user experience.
A block diagram of a display device according to an embodiment is exemplarily shown in fig. 29. Referring to fig. 29, the present application provides a display device for performing the steps of the singing interface display method shown in fig. 23, the display device comprising: a display 20 configured to display the avatar's actions and the lyric text; a speaker 30 configured to output the sound of the song file; and a controller 10 configured to: display the avatar and the lyric text obtained by parsing the song file when the song file is played; detect a tag on the song file's time axis, where the time axis controls display of the lyric text; in response to detecting the tag, acquire the action instruction corresponding to the tag; and control the avatar's action according to the action instruction.
Further, the controller 10 is further configured to: receiving selection of a song control by a user, and loading a song file corresponding to the song control; loading a preset avatar model and a preset background image file, and analyzing the song file to obtain the lyric text.
Further, the controller 10 is further configured to: determining an avatar based on the preset avatar model, and determining a background image based on the preset background image file, displaying the background image and the avatar at a video layer; and displaying the lyric text on a suspension layer according to the lyric text and the time axis, wherein the suspension layer is positioned above the video layer.
Further, the controller 10 is further configured to: in response to detecting the tag, accessing an action instruction library corresponding to the song style identification according to the tag and the song style identification, wherein the song style identification is preset in the song file, and different action instruction libraries correspond to different song style identifications; and receiving an action instruction corresponding to the label in the action instruction library.
Further, the controller 10 is further configured to: responding to the detection of the label, and accessing an action instruction library corresponding to the rhythm identifier according to the label and the rhythm identifier, wherein the rhythm identifier is preset in the song file, and different action instruction libraries correspond to different rhythm identifiers; and receiving an action instruction corresponding to the label in the action instruction library.
Further, the controller 10 is further configured to: acquiring a default action instruction in response to the tag not being detected; and controlling the action of the virtual image according to the default action instruction.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (8)

1. A display method of a singing interface, characterized by comprising the following steps:
transmitting a request containing a song identification to a server in response to a selection input of a first song, so that the server determines a style identification corresponding to the song identification and determines a song file corresponding to the song identification, wherein the song file comprises a sound file and a lyric text;
receiving the style identification and the song file;
loading a preset avatar model to display the avatar on a video layer of the singing interface, displaying the lyric text in a suspension layer above the video layer according to the lyric text, and controlling the output of the speaker according to the sound file while the singing interface is displayed; detecting a first action tag on a time axis corresponding to the song file;
when the style identification characterizes the song as a first style, determining a first target action instruction in a first action instruction library corresponding to the first style according to the first action tag, and controlling the action of the avatar according to the first target action instruction;
when the style identification characterizes the song as a second style, determining a second target action instruction in a second action instruction library corresponding to the second style according to the first action tag, and controlling the action of the avatar according to the second target action instruction, wherein the action of the avatar under the control of the first target action instruction differs from the action of the avatar under the control of the second target action instruction.
2. The method of claim 1, wherein after the transmitting in response to the selection input of the first song, the method further comprises:
and loading a preset background image file to display a background for the avatar at the video layer.
3. The method of claim 1, wherein, when the style identification characterizes the song as a first style, determining a first target action instruction in a first action instruction library corresponding to the first style according to the first action tag comprises:
determining, when the style identification characterizes the song as a first style and the rhythm identification characterizes the song as a first rhythm, a first target action instruction in a first action instruction library corresponding to the first style and the first rhythm according to the first action tag, wherein the rhythm identification is preset in the song file;
and when the style identification characterizes the song as a second style, determining a second target action instruction in a second action instruction library corresponding to the second style according to the first action tag comprises:
determining, when the style identification characterizes the song as a second style and the rhythm identification characterizes the song as a second rhythm, a second target action instruction in a second action instruction library corresponding to the second style and the second rhythm according to the first action tag.
4. The method according to claim 1, further comprising:
acquiring a default action instruction when no tag is detected;
and controlling the avatar to execute a default action according to the default action instruction.
5. A display device, characterized by comprising:
the display is used for displaying the singing interface;
a speaker;
a controller configured to:
transmitting a request containing a song identification to a server in response to a selection input of a first song, so that the server determines a style identification corresponding to the song identification and determines a song file corresponding to the song identification, wherein the song file comprises a sound file and a lyric text;
Receiving the style identification and the song file;
loading a preset avatar model to display the avatar on a video layer of the singing interface, displaying the lyric text in a suspension layer above the video layer according to the lyric text, and controlling the output of the speaker according to the sound file while the singing interface is displayed;
detecting a first action tag on a time axis corresponding to the song file;
when the style identification characterizes the song as a first style, determining a first target action instruction in a first action instruction library corresponding to the first style according to the first action tag, and controlling the action of the avatar according to the first target action instruction;
when the style identification characterizes the song as a second style, determining a second target action instruction in a second action instruction library corresponding to the second style according to the first action tag, and controlling the action of the avatar according to the second target action instruction, wherein the action of the avatar under the control of the first target action instruction differs from the action of the avatar under the control of the second target action instruction.
6. The display device of claim 5, wherein after the transmitting in response to the selection input of the first song, the controller is further configured to:
And loading a preset background image file to display a background for the avatar at the video layer.
7. The display device of claim 5, wherein the controller determining a first target action instruction in a first action instruction library corresponding to the first style according to the first action tag when the style identification characterizes the song as a first style comprises the controller being configured to:
determine, when the style identification characterizes the song as a first style and the rhythm identification characterizes the song as a first rhythm, a first target action instruction in a first action instruction library corresponding to the first style and the first rhythm according to the first action tag, wherein the rhythm identification is preset in the song file;
and the controller determining a second target action instruction in a second action instruction library corresponding to a second style according to the first action tag when the style identification characterizes the song as the second style comprises the controller being configured to:
determine, when the style identification characterizes the song as a second style and the rhythm identification characterizes the song as a second rhythm, a second target action instruction in a second action instruction library corresponding to the second style and the second rhythm according to the first action tag.
8. The display device of claim 5, wherein the controller is further configured to acquire a default action instruction when no tag is detected;
and to control the avatar to execute a default action according to the default action instruction.
CN202080025019.5A 2019-09-19 2020-08-26 Display method and display device of singing interface Active CN113796091B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
CN2019108868265 2019-09-19
CN201910886826 2019-09-19
CN2020100958124 2020-02-17
CN202010095812.4A CN111343509A (en) 2020-02-17 2020-02-17 Action control method of virtual image and display equipment
CN202010193786.9A CN112533030B (en) 2019-09-19 2020-03-18 Display method, display equipment and server of singing interface
CN2020101937869 2020-03-18
PCT/CN2020/111498 WO2021052133A1 (en) 2019-09-19 2020-08-26 Singing interface display method and display device, and server

Publications (2)

Publication Number Publication Date
CN113796091A CN113796091A (en) 2021-12-14
CN113796091B true CN113796091B (en) 2023-10-24

Family

ID=74883332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080025019.5A Active CN113796091B (en) 2019-09-19 2020-08-26 Display method and display device of singing interface

Country Status (2)

Country Link
CN (1) CN113796091B (en)
WO (1) WO2021052133A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239916A (en) * 2021-04-22 2022-10-25 北京字节跳动网络技术有限公司 Interaction method, device and equipment of virtual image
CN115250360A (en) * 2021-04-27 2022-10-28 北京字节跳动网络技术有限公司 Rhythm interaction method and equipment
CN113345470B (en) * 2021-06-17 2022-10-18 青岛聚看云科技有限公司 Karaoke content auditing method, display device and server
CN114554111B (en) * 2022-02-22 2023-08-01 广州繁星互娱信息科技有限公司 Video generation method and device, storage medium and electronic equipment
CN114625466B (en) * 2022-03-15 2023-12-08 广州歌神信息科技有限公司 Interactive execution and control method and device for online singing hall, equipment, medium and product
CN114928755B (en) * 2022-05-10 2023-10-20 咪咕文化科技有限公司 Video production method, electronic equipment and computer readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2471871B (en) * 2009-07-15 2011-12-14 Sony Comp Entertainment Europe Apparatus and method for a virtual dance floor
AU2011205223C1 (en) * 2011-08-09 2013-03-28 Microsoft Technology Licensing, Llc Physical interaction with virtual objects for DRM
US9577969B2 (en) * 2012-06-11 2017-02-21 The Western Union Company Singing telegram
CN105760479A (en) * 2016-02-15 2016-07-13 广东欧珀移动通信有限公司 Song playing control method and device, mobile terminal, server and system
US20180160077A1 (en) * 2016-04-08 2018-06-07 Maxx Media Group, LLC System, Method and Software for Producing Virtual Three Dimensional Avatars that Actively Respond to Audio Signals While Appearing to Project Forward of or Above an Electronic Display
US10789473B2 (en) * 2017-09-22 2020-09-29 Samsung Electronics Co., Ltd. Method and device for providing augmented reality service
CN109189541A (en) * 2018-09-17 2019-01-11 福建星网视易信息系统有限公司 Interface display method and computer readable storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5279912A (en) * 1992-05-11 1994-01-18 Polaroid Corporation Three-dimensional image, and methods for the production thereof
CN101414322A (en) * 2007-10-16 2009-04-22 盛趣信息技术(上海)有限公司 Exhibition method and system for virtual role
CN103903638A (en) * 2012-12-30 2014-07-02 比亚迪股份有限公司 Mobile terminal, and song playing method and song playing device thereof
JP2015138160A (en) * 2014-01-23 2015-07-30 株式会社マトリ Character musical performance image creation device, character musical performance image creation method, character musical performance system, and character musical performance method
CN104102146A (en) * 2014-07-08 2014-10-15 苏州乐聚一堂电子科技有限公司 Virtual accompanying dance universal control system
CN109478399A (en) * 2016-07-22 2019-03-15 雅马哈株式会社 Performance analysis method, automatic performance method and automatic performance system
CN109789541A (en) * 2016-09-19 2019-05-21 罗伯特·博世有限公司 Hand-held power tool having at least one external AR device
CN106445460A (en) * 2016-10-18 2017-02-22 渡鸦科技(北京)有限责任公司 Control method and device
CN106649586A (en) * 2016-11-18 2017-05-10 腾讯音乐娱乐(深圳)有限公司 Audio file playing method and device
CN107422862A (en) * 2017-08-03 2017-12-01 嗨皮乐镜(北京)科技有限公司 Method for virtual image interaction in a virtual reality scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tao Zhang; Qing Li; Chang-shui Zhang; Hua-wei Liang; Ping Li; Tian-miao Wang; Shuo Li; Yun-long Zhu; Cheng Wu. "Current trends in the development of intelligent unmanned autonomous systems". Frontiers of Information Technology & Electronic Engineering, Issue 01 (full text). *

Also Published As

Publication number Publication date
CN113796091A (en) 2021-12-14
WO2021052133A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
CN113796091B (en) Display method and display device of singing interface
CN111722768B (en) Display device and application program interface display method
US11899907B2 (en) Method, apparatus and device for displaying followed user information, and storage medium
CN111372109B (en) Intelligent television and information interaction method
CN111343509A (en) Action control method of virtual image and display equipment
CN112492371B (en) Display device
CN112533037B (en) Method for generating Lian-Mai chorus works and display equipment
US11425466B2 (en) Data transmission method and device
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
CN112380420A (en) Searching method and display device
CN111083538A (en) Background image display method and device
CN112073787B (en) Display device and home page display method
CN112165642B (en) Display device
CN112399199A (en) Course video playing method, server and display equipment
CN112533030B (en) Display method, display equipment and server of singing interface
CN113490060B (en) Display equipment and method for determining common contact person
CN112533023B (en) Method for generating Lian-Mai chorus works and display equipment
CN113038217A (en) Display device, server and response language generation method
CN111857936A (en) User interface display method and display device of application program
CN112199560A (en) Setting item searching method and display device
CN112839254A (en) Display apparatus and content display method
WO2021052115A1 (en) Method of generating vocal composition, publication method, and display apparatus
CN114339346B (en) Display device and image recognition result display method
CN113691841B (en) Singing label adding method, rapid audition method and display device
CN114866636B (en) Message display method, terminal equipment, intelligent equipment and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant