CN117290539A - Song information identification method and display device

Info

Publication number
CN117290539A
Authority
CN
China
Prior art keywords
server
content data
audio
control
song information
Prior art date
Legal status
Pending
Application number
CN202211666085.8A
Other languages
Chinese (zh)
Inventor
王光强
刘文静
韩辉
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Priority to CN202211666085.8A
Publication of CN117290539A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval of audio data
    • G06F16/63 Querying
    • G06F16/632 Query formulation
    • G06F16/634 Query by example, e.g. query by humming
    • G06F16/638 Presentation of query results
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/685 Retrieval characterised by using metadata automatically derived from the content, e.g. using automatically derived transcript of audio data, e.g. lyrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a song information identification method and a display device. A current user interface is displayed on a display; in response to a received song identification instruction, the sound data being played is collected to generate an audio clip, and content data corresponding to a control in the current user interface is obtained, where the content data represents the media asset corresponding to the control. The audio clip and the content data are sent to a server; after receiving them, the server identifies the song information corresponding to the sound data according to the audio clip and the content data, and feeds the song information back to the display device, which receives and displays it. Because the content data of the control supplements the recorded audio data, song information identification efficiency and accuracy can be improved.

Description

Song information identification method and display device
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a song information identification method and a display device.
Background
The song recognition function identifies the audio data being played to obtain its song information, such as the song title, artist, and album. A user can invoke the song identification function of a display device to obtain song information, and then play or download the corresponding audio data online based on that information.
A display device typically performs song recognition through an installed application program having a song recognition function. When the user is interested in a song being heard, the application program can be started to identify the song being played. A typical identification procedure records the song being played through the microphone of the display device, uploads the recorded audio data to a server, and the server compares the recorded audio data with the audio data in a database to find matching audio data. The song information of the matched audio data is then sent to the display device for display.
This song information identification method compares only the recorded audio data against the audio data in the audio library, so both the efficiency and the accuracy of song information identification are low.
Disclosure of Invention
The application provides a song information identification method and a display device, which are used for solving the problem that the existing song identification method compares only recorded audio data with audio data in an audio library, resulting in low matching efficiency.
In a first aspect, the present embodiment provides a display apparatus, including:
A display for displaying a user interface;
a user interface for receiving an input signal;
a controller coupled to the display and the user interface, respectively, for performing:
displaying a current user interface;
in response to a received music recognition instruction, collecting the sound data being played to generate an audio clip and obtaining content data corresponding to a control in the current user interface, wherein the content data characterizes the media asset corresponding to the control;
transmitting the audio clip and the content data to a server so that the server can identify song information corresponding to the sound data according to the audio clip and the content data;
and receiving and displaying the identified song information fed back by the server.
In a second aspect, the present embodiment provides a song information identification method, including:
displaying a current user interface;
in response to a received music recognition instruction, collecting the sound data being played to generate an audio clip and obtaining content data corresponding to a control in the current user interface, wherein the content data characterizes the media asset corresponding to the control;
transmitting the audio clip and the content data to a server so that the server can identify song information corresponding to the sound data according to the audio clip and the content data;
and receiving and displaying the identified song information fed back by the server.
According to the song information identification method and the display device provided by the application, a current user interface is displayed on the display; in response to a received song identification instruction, the sound data being played is collected to generate an audio clip, and content data corresponding to a control in the current user interface is obtained, where the content data represents the media asset corresponding to the control. The audio clip and the content data are sent to the server; after receiving them, the server identifies the song information corresponding to the sound data according to the audio clip and the content data, and feeds the song information back to the display device, which receives and displays it. Because the content data of the control supplements the recorded audio data, song information identification efficiency and accuracy can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates an operation scenario between a display device and a terminal device according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the terminal device 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device 200 according to some embodiments;
FIG. 5 illustrates a song information recognition system architecture schematic diagram in accordance with some embodiments;
FIG. 6 illustrates a user interface schematic of a display device according to some embodiments;
FIG. 7 illustrates a song information identification method process signaling diagram in accordance with some embodiments;
FIG. 8 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 9 illustrates a user interface schematic diagram of yet another display device in accordance with some embodiments;
FIG. 10 illustrates a schematic diagram of song screening according to a television profile, according to some embodiments;
FIG. 11 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 12 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 13 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 14 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 15 illustrates a user interface schematic of yet another display device in accordance with some embodiments;
FIG. 16 illustrates a signaling diagram of a particular implementation of song information identification in accordance with some embodiments;
FIG. 17 illustrates a flow diagram of a song information identification method in accordance with some embodiments;
fig. 18 illustrates a flow chart of yet another song information identification method in accordance with some embodiments.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of some embodiments of the present application more clear, the technical solutions of some embodiments of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application.
It should be noted that the brief description of the terms in some embodiments of the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the implementation of some embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims, and in the above figures are used to distinguish between similar or analogous objects or entities, and not necessarily to describe a particular sequence or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
The term "remote control" as used herein refers to a component of a display device (such as the display devices disclosed herein) that is typically capable of being controlled wirelessly over a relatively short distance. Typically, the display device is connected with infrared and/or Radio Frequency (RF) signals and/or Bluetooth, and can also comprise functional modules such as WiFi, wireless USB, bluetooth, motion sensors and the like. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in a general remote control device with a touch screen user interface.
The electronic device in the application may be a display device, or may be other electronic devices with a voice assistant function, and the scheme is described below taking the display device as an example. Fig. 1 is a schematic diagram of an operation scenario between a display device and a terminal device provided in some embodiments of the present application. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the terminal device 100.
In some embodiments, the terminal device 100 may be a remote controller. Communication between the remote controller and the display device may include infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 may be controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and so on.
In some embodiments, the mobile terminal 300 may be installed with a software application corresponding to the display device 200, implementing connection communication through a network communication protocol and achieving one-to-one control operation and data communication. The audio/video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, realizing a synchronous display function.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), and other networks.
In addition to the broadcast receiving television function, the display apparatus 200 may additionally provide a smart network television function with computer support, including, but not limited to, a network television, a smart television, an Internet Protocol Television (IPTV), and the like.
Fig. 2 is a block diagram of a hardware configuration of the display device 200 of fig. 1 provided in some embodiments of the present application.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output from the controller and display video content, image content, menu manipulation interface components, a user manipulation UI, and the like.
In some embodiments, communicator 220 is a component for communicating with external devices or servers 400 according to various communication protocol types.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI).
In some embodiments, user interface 280 is an interface that may be used to receive control inputs.
Fig. 2 is a block diagram of the hardware configuration of the terminal device 100 in fig. 1 provided in some embodiments of the present application. As shown in fig. 2, the terminal device 100 includes a controller 111, a communication interface 130, a user input/output interface, a memory, and a power supply.
The terminal device 100 is configured to control the display device 200; it can receive a user's input operation instruction and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200.
In some embodiments, the terminal device 100 may be a smart device. Such as: the terminal device 100 may install various applications for controlling the display device 200 according to user's needs.
In some embodiments, as shown in fig. 1, a mobile terminal 300 or other intelligent display device may serve a similar function as the terminal device 100 after installing an application that manipulates the display device 200.
The controller 111 includes a processor 112, RAM 113, ROM 114, a communication interface 130, and a communication bus. The controller 111 is used to control the running and operation of the terminal device 100, the communication cooperation among the internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display device 200 under the control of the controller 111. The communication interface 130 may include at least one of a WiFi chip 131, a bluetooth module 132, an NFC module 133, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touchpad 142, a sensor 143, keys 144, and other input interfaces.
In some embodiments, the terminal device 100 includes at least one of a communication interface 130 and an input-output interface 140. When the terminal device 100 is configured with a communication interface 130, such as a WiFi, Bluetooth, or NFC module, it may encode the user input instruction and send it to the display device 200 through the WiFi protocol, the Bluetooth protocol, or the NFC protocol.
A memory 190 for storing various operation programs, data and applications for driving and controlling the terminal device 100 under the control of the controller. The memory 190 may store various control signal instructions input by a user.
And a power supply 180 for providing operation power support for the respective elements of the terminal device 100 under the control of the controller.
Fig. 4 is a schematic view of the software configuration in the display device in fig. 1 provided in some embodiments of the present application. In some embodiments, the system is divided into four layers, from top to bottom: an application layer, an application framework layer (framework layer), an Android runtime and system library layer (system runtime layer), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, a camera application, and the like; or may be an application developed by a third party developer.
The framework layer provides an application programming interface (Application Programming Interface, API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions, and acts as a processing center that decides how the applications in the application layer should act.
As shown in fig. 4, the application framework layer in some embodiments of the present application includes a manager (manager), a Content Provider (Content Provider), a View System (View System), and the like.
In some embodiments, the activity manager is to: managing the lifecycle of the individual applications and typically the navigation rollback functionality.
In some embodiments, a window manager is used to manage all window programs.
In some embodiments, the system runtime layer provides support for the upper layer, the framework layer, and when the framework layer is accessed, the android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WiFi driver, a USB driver, an HDMI driver, a sensor driver (e.g., fingerprint sensor, temperature sensor, touch sensor, pressure sensor, etc.), and the like.
In some embodiments, the kernel layer further includes a power driver module for power management.
In some embodiments, the software programs and/or modules corresponding to the software architecture in fig. 4 are stored in the first memory or the second memory shown in fig. 2 or fig. 3.
In some embodiments, the song recognition function identifies the audio data being played to obtain its song information, such as the song title, artist, and album. A user can invoke the song identification function of the display device to obtain song information, and then play or download the corresponding audio data online based on that information.
A display device typically performs song recognition through an installed application program having a song recognition function. When the user is interested in a song being heard, the application program can be started to identify the song being played. A typical identification procedure records the song being played through the microphone of the display device, uploads the recorded audio data to a server, and the server compares the recorded audio data with the audio data in a database to find matching audio data. The song information of the matched audio data is then sent to the display device for display. This method compares only the recorded audio data against the audio data in the audio library, so both the efficiency and the accuracy of song information identification are low.
In order to solve the problems in the above embodiments, the embodiments of the present application provide a song information identification method, and the song information identification method provided in the embodiments of the present application may be applied to the system shown in fig. 5. As shown in fig. 5, the system may include: a server 400 and a display device 200 for use by a user. The server 400 may be, for example, any form of data processing server such as a cloud server, a distributed server, or the like.
In the user interface shown in fig. 6, in addition to the displayed controls, a song-listening and song-identifying button is further provided. By clicking this button, the display device generates a song-identifying instruction in response to the clicking operation; the sound data being played is not paused at this time. After receiving the music recognition instruction, the display device 200 starts recording the currently played sound data.
In some embodiments, the song-listening and song-identifying button may not be provided on the user interface, so as to avoid affecting the user's viewing; the user may trigger the display device to generate the song-identifying instruction by inputting a preset voice command, or may cause the display device to generate the song-identifying instruction through a predetermined key of the remote control device.
In some embodiments, some videos (e.g., television shows or movies) are configured with background music, and the sound of people speaking in the video is typically played together with the background music. To improve the accuracy of music recognition, the speech of the people talking can be removed from the recorded audio. The specific manner of removing such speech audio data is not described in detail herein.
The song-listening and song-identifying button in the embodiment of the application can be a control UI directly displayed on the application interface of the video player or a hidden control UI. If the UI is the hidden control UI, the hidden control UI can be popped up and displayed only when the user calls out the hidden song-listening and song-identifying button through the remote controller. For example, the song-listening and song-identifying button is not displayed on the video player application interface at first, and the user needs to click on the video player application interface first, and the song-listening and song-identifying button is displayed on the video player application interface. In order to avoid the false clicking of the song-listening and song-identifying buttons by the user, after the song-listening and song-identifying buttons are displayed on the application interface of the video player through the operation, the focus does not fall on the song-listening and song-identifying buttons immediately, but the user needs to operate the remote controller to move the focus to the song-listening and song-identifying buttons before clicking the song-listening and song-identifying buttons.
The control UI may be a play UI of the player, and the play UI may include an entry for acquiring the sound data on the video player application interface and a UI for displaying the result of the searched song information. The entry control UI for the sound data may be displayed as a graphic control, and the UI for the searched song information result may add a sidebar to the graphic of the entry control UI to display the searched song information.
There are two implementations of obtaining an audio clip of the sound data:
the first implementation manner is that, after the audio output device (for example, a speaker) of the display device 200 outputs the sound data, the audio receiving device (for example, a microphone) of the display device 200 receives the sound data, and then decodes the received sound data through the audio decoder of the video player on the display device 200 to obtain the audio clip of the sound data;
the second implementation is that the audio output device (e.g., speaker) of the display apparatus 200 has already decoded the audio clip of the sound data before outputting the sound data (e.g., the display apparatus 200 obtains all the sound data included in the television series in advance and stores the audio data of all the sound data in the memory before playing the television series). In such an implementation, the display apparatus 200 does not need to acquire the audio clip in such a way that the audio receiving device receives the sound data and decodes it again. Only the audio clip at the current moment is read from the memory after the music identifying instruction is received. Compared with the first implementation mode, the second implementation mode can avoid the content distortion and redundancy of the received audio fragments caused by external interference or faults of the audio receiving device in the process of secondarily receiving the sound data. The second implementation can achieve that accurate sound data information can be obtained even in a noisy environment. In addition, the second implementation mode can be realized without an audio output device or an audio receiving device, and accurate sound data information can be acquired.
If the audio clip is obtained in the first implementation, the audio receiving device continues to receive the sound data without recording any time point.
If the audio clip is obtained in the second implementation, the specific process may be:
after receiving the instruction for listening to song recognition, the display device 200 generates a search period with the time point of receiving the instruction for listening to song recognition as a starting time point. The specific way of generating the search period may be to obtain the end time point by adding a predetermined length of time to the start time point. The display device 200 performs decoding processing on the sound data of the period according to the start time point and the end time point, and obtains an audio clip of the sound data being played by the video player. The display apparatus 200 then transmits the resulting audio clip to the server 400, so that the server 400 searches for sound data according to the first audio clip.
The above-described scheme may be implemented by the player kernel on the display device 200. The player kernel is responsible for decoding the played sound data, and may also be responsible for video reading (for example, downloading video data from a network), audio/video decoding, audio/video synchronization, and other functions. When the user triggers the song-information search function, after the player kernel decodes the audio clip of the current sound data, it additionally transmits the decoded audio data to an audio fingerprint calculation processing device, which fingerprints the audio data to compute audio fingerprints.
In some embodiments, if song information of sound data is identified only from an audio clip, the detailed procedure of the display apparatus 200 to acquire song information of sound data being played is as follows:
the audio fingerprint calculation processing device is used for carrying out fingerprint processing on the audio fragments of the sound data being played by the player by using an audio fingerprint algorithm, and generating audio fingerprints corresponding to the audio fragments.
The audio fingerprint may be an acoustic fingerprint: a digital digest extracted by a specific algorithm from the audio signal of the background music, serving as a digital signature of the acoustic feature content of the background music.
The information of the audio fingerprint is then sent to a third-party server through a network transmission device, and a query operation is carried out in the audio fingerprint database on the third-party server to obtain the song information of the sound data being played by the player. The database stores an audio fingerprint data table, which records preset audio fingerprint information and the corresponding song information.
The above audio fingerprint is mainly used to screen similar or identical audio from an audio database and thereby obtain the song information corresponding to the audio data. The audio database stores the audio fingerprint of at least one piece of audio data and the corresponding song information (such as the song title, lyricist and composer, lyrics, play address, download address, singer, etc.), and the audio fingerprint may serve as the index of the corresponding song information in the database. The audio fingerprint algorithm employed by the audio fingerprint calculation module must be consistent with the algorithm used for the audio fingerprints stored in the database.
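As a toy illustration of the index structure such a database implies (not the patent's actual fingerprint algorithm, which is unspecified), the sketch below hashes fixed windows of samples and ranks songs by how many hashes they share with the query; a real system would use a robust acoustic fingerprint such as spectral-peak constellation hashing, but the lookup shape would be the same.

```kotlin
// Toy fingerprint: hash each fixed window of samples. Purely illustrative —
// the same (assumed) algorithm must be used on both the query and stored side.
fun fingerprint(samples: ShortArray, window: Int = 4096): Set<Long> =
    samples.toList().chunked(window).map { it.hashCode().toLong() }.toSet()

class FingerprintIndex {
    // hash -> ids of songs whose stored fingerprint contains that hash
    private val index = HashMap<Long, MutableSet<String>>()
    private val songInfo = HashMap<String, String>()  // song id -> display info

    fun add(songId: String, info: String, prints: Set<Long>) {
        songInfo[songId] = info
        prints.forEach { index.getOrPut(it) { mutableSetOf() }.add(songId) }
    }

    /** Songs ranked by how many query hashes they share; several may tie. */
    fun candidates(query: Set<Long>): List<Pair<String, String>> =
        query.flatMap { index[it].orEmpty() }
            .groupingBy { it }.eachCount()
            .entries.sortedByDescending { it.value }
            .map { it.key to songInfo.getValue(it.key) }
}
```

When several candidates tie, the content data described below is what breaks the tie.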
It should be noted that, in the embodiment of the present application, the currently played sound data may be the background music of a video or of an audiobook, and may also be an opening theme, an ending theme, an interlude, a complete song, and the like.
This embodiment identifies song information by combining the audio clip with the content in the control, where the content in the control is specifically the media asset content corresponding to the control. The display device first displays the current user interface; in response to the received music recognition instruction, the display device 200 collects the audio clip of the sound data being played and obtains the content data corresponding to the control in the current user interface. The display device 200 then transmits the collected audio clip and content data to the server 400. The server 400 identifies the song information corresponding to the sound data from the audio clip and the content data, and feeds the song information back to the display device 200, which receives and displays it. Because the content data of the control supplements the recorded audio data, song information identification efficiency and accuracy can be improved.
In the signaling diagram shown in fig. 7, if an active control exists in the current user interface, the controller acquires first content data of the active control, wherein the first content data characterizes media asset information corresponding to the active control, but does not characterize media asset information corresponding to an inactive control. An active control refers to a control in which a dynamic item exists, and an inactive control refers to a control in which a dynamic item does not exist. The activity control may be a video play control (e.g., video player), an audio play control (e.g., audio player), a picture play control (picture player), and the like. The dynamic items in the video playing control can comprise continuously played video frames, progress bars, scrolled speech and the like. Dynamic items in the audio playback control may include items such as sound effects presentation, scrolling lyrics, and the like. The dynamic items of the picture playing control may include items such as scrolling a picture. For the active control described above, the first content data acquired may be plain text and pictures.
In some embodiments, the presence of a dynamic item characterizes whether the audio/video content corresponding to the control is being played in the current interface. Audio/video content refers to audio and/or video content.
In some embodiments, the collected audio clip is the sound data being played by the display device, and that sound data is generally the sound of the audio/video content of a control in the current interface. Therefore, the track corresponding to the audio clip can be determined more accurately and quickly from the control title, the introduction text, the subtitles, and the audio/video media asset title of the control containing the active item.
In some embodiments, the title of the control refers to the text control title outside the drawing area in the control. The audio and video asset title may be a title of an audio and video asset corresponding to an audio and video asset played by the player.
After the first content data of the active control is acquired, the audio clip and the first content data are sent to the server 400, and the server 400 identifies the first song information of the sound data according to the audio clip and the first content data. The server 400 then feeds back the first song information to the display apparatus 200.
If no active control exists in the current user interface, second content data of an inactive control in the current page is acquired. An inactive control refers to a control in which no dynamic item exists; it may be a navigation bar control, a sidebar control, or the like. The items in an inactive control are typically static text (non-scrolling text) and static single pictures (non-scrolling pictures). For the inactive controls described above, the second content data acquired may also be plain text and pictures.
After the second content data of the inactive control is acquired, the audio clip and the second content data are transmitted to the server 400, and the server 400 identifies second song information of the sound data according to the audio clip and the second content data. The server 400 then feeds back the second song information to the display apparatus 200.
The first content data and the second content data are different data, and the first song information and the second song information are different information. The number of active controls in a user interface is usually much smaller than the number of inactive controls, so if active controls exist in the user interface, identifying song information only from the content data of the active controls and the audio clip improves identification efficiency. Moreover, the sound data usually originates from an active control (for example, background music comes from the video player and songs come from the audio player), so identifying song information only from the first content data of the active control and the audio clip also improves accuracy. If no active control exists in the user interface, the second content data of an inactive control can be sent to the server 400 together with the audio clip, and the server 400 can identify the song information by combining them; compared with identification from the audio clip alone, this likewise improves efficiency and accuracy. A sketch of this preference rule follows.
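The selection between first and second content data can be pictured as a simple fallback; the control model below is an illustrative assumption, not the patent's data structure.

```kotlin
// Prefer the active control's content data ("first content data"); fall back
// to inactive controls' content ("second content data") only when no active
// control exists. Types are hypothetical stand-ins.
sealed interface Control { val contentText: String }
data class ActiveControl(override val contentText: String) : Control
data class InactiveControl(override val contentText: String) : Control

fun contentDataForRecognition(controls: List<Control>): String? {
    val active = controls.filterIsInstance<ActiveControl>()
    return if (active.isNotEmpty())
        active.joinToString("\n") { it.contentText }   // first content data
    else
        controls.filterIsInstance<InactiveControl>()
            .joinToString("\n") { it.contentText }     // second content data
            .ifBlank { null }
}
```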
In some embodiments, the process of acquiring the first content data may be as follows: before the music recognition instruction is received, the system determines in advance which controls in the current user interface are active. The system may judge whether a control in the user interface is active as follows: a screenshot of the user interface is taken at a preset time interval, and whether the control has changed is judged from the screenshots; if it has changed, it is an active control, and if not, it is an inactive control.
For example, the user interface of the video playing platform shown in fig. 6 includes a navigation bar, a sidebar, a video recommendation bar, and a video play bar. The navigation bar and the sidebar each comprise a plurality of text controls, the video recommendation bar comprises text controls and picture controls, and the video play bar comprises a plurality of video playing controls. The video being played is in video playing control B, while video playing control A and video playing control C only show profile pictures (disregarding the case of scrolled pictures). The user interface shown in fig. 6 may be captured multiple times within 1 minute; since only the image of video playing control B changes continuously, as in the user interface shown in fig. 8, it may be determined that video playing control B is an active control, while the text controls in the navigation bar, the text controls in the sidebar, video playing control A, and video playing control C are inactive controls.
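The screenshot-comparison rule above can be sketched as follows; the screenshot representation and control rectangles are simplified stand-ins for the platform's real capture APIs.

```kotlin
// A control is classified "active" if its pixels change between any two
// consecutive periodic captures of the interface. Types are assumptions.
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)
typealias Screenshot = Array<IntArray>  // [y][x] packed pixels

fun crop(shot: Screenshot, r: Rect): List<Int> =
    (r.y until r.y + r.h).flatMap { y -> (r.x until r.x + r.w).map { x -> shot[y][x] } }

/** Maps each control name to true (active) or false (inactive). */
fun classifyControls(shots: List<Screenshot>, controls: Map<String, Rect>): Map<String, Boolean> =
    controls.mapValues { (_, rect) ->
        shots.zipWithNext().any { (a, b) -> crop(a, rect) != crop(b, rect) }
    }
```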
Then, when a music recognition instruction input by the user is received, a screenshot of the current user interface is taken and cropped, so that only the active control is retained for content recognition. For example, in the user interface shown in fig. 8, the screenshot of the current user interface is cropped so that only video playing control B is retained for content recognition. The text in the picture may be recognized by the following process:
firstly, binarizing the cut picture to obtain a binarized picture. The binarization processing of the picture can be to set the gray value of the pixel point on the picture to 0 or 255, so that the whole picture presents black and white effect. Binarization is a method for dividing a picture, and when binarizing a picture, a pixel gray level greater than a preset critical gray level in a mobile terminal may be used as a gray level maximum value (the gray level maximum value may be 255), and a pixel gray level less than the critical gray level may be used as a gray level minimum value (the gray level maximum value may be 0), so that the picture binarization may be realized. The binarization algorithm may use a global fixed threshold or a local adaptive threshold, and embodiments of the present invention are not limited. If the target picture is a color picture, the color picture needs to be grayed to obtain a grayed picture before the picture is binarized, and then the grayed picture is binarized to obtain the binarized picture.
The black parts of the binarized picture are then framed. A black part is in fact a cluster of pixels: the distance between all adjacent pixels contained in each black part of the picture can be judged, and the pixels can be frame-selected according to that distance, yielding at least one target frame, where the distance between adjacent pixels in the black part of each target frame is less than or equal to a preset pixel distance. After at least one target frame is obtained by frame-selecting the black parts of the binarized picture, the center point coordinates and the area of each target frame can be obtained, taking the lower left corner of the target picture as the origin of a plane rectangular coordinate system. A noise box set and a text box set are determined from the target frames: the center-point distance between adjacent target frames in the noise box set is smaller than a preset distance, and the center-point distance between adjacent target frames in the text box set is greater than or equal to the preset distance. The total area of the noise box set and the total area of the text box set are acquired, each being the sum of the areas of all the target boxes the set contains. Text recognition is then performed on the text box set to obtain the text information contained in the binarized picture.
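A minimal sketch of the binarization step described above, assuming 32-bit ARGB pixels and a global fixed threshold (the description also allows a local adaptive threshold in its place):

```kotlin
// Grayscale conversion followed by fixed-threshold binarization.
fun toGray(argb: Int): Int {
    val r = (argb shr 16) and 0xFF
    val g = (argb shr 8) and 0xFF
    val b = argb and 0xFF
    return (r * 299 + g * 587 + b * 114) / 1000  // standard luma weights
}

/** Maps every pixel to 0 or 255 around the critical gray level. */
fun binarize(pixels: IntArray, threshold: Int = 128): IntArray =
    IntArray(pixels.size) { i -> if (toGray(pixels[i]) > threshold) 255 else 0 }
```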
According to the above embodiment, the content data of the active control is obtained, the content data of the active control is combined with the audio clip, and the detailed process of identifying the song information of the sound data being played is as follows:
the audio fingerprint calculation processing device is used for carrying out fingerprint processing on the audio fragments of the sound data being played by the player by using an audio fingerprint algorithm, and generating audio fingerprints corresponding to the audio fragments. And then the information of the audio fingerprint is sent to a third party server through a network transmission device, and the query operation is carried out in an audio fingerprint database on the third party server to obtain the background music information of the background music being played by the player. The database is used for storing an audio fingerprint data table, and the audio fingerprint data table is used for recording preset audio fingerprint information and corresponding song information.
Similar or identical audio is screened from the audio database according to the generated audio fingerprint. If multiple similar or identical audio items are found in the audio database, they are further screened according to the first content data (or the second content data) to obtain the audio that finally matches, that is, the finally matched first song information.
For example, suppose the sound data being played is song A sung by Li Si. Searching the audio database according to the audio fingerprint may simultaneously return the song information of song A sung by Li Si, song A sung by Wang Wu, and song A sung by Zhao San; with the general method, the song information corresponding to the sound data cannot be determined precisely. According to the method of the embodiment of the application, if an active control exists in the current user interface, the first content data of the active control is used to further screen among these versions. For example, the lyrics of song A sung by Li Si may differ from the lyrics of the versions sung by Wang Wu and Zhao San, and through text comparison the lyrics of the song being played coincide with the lyrics of Li Si's version. It may therefore finally be determined that the sound data being played is song A sung by Li Si; the server 400 feeds back the corresponding song information to the display device 200, which receives it and may display it on the user interface.
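The screening in this example is essentially a text-overlap comparison between the displayed text and each candidate's lyrics. A minimal sketch, where the token-overlap score is an assumption (any text-similarity measure would serve):

```kotlin
// Pick the candidate version whose lyrics share the most tokens with the
// text shown in the active control. Data shapes are illustrative.
data class Candidate(val songId: String, val lyrics: String)

fun tokens(text: String): Set<String> =
    text.lowercase().split(Regex("\\W+")).filter { it.isNotBlank() }.toSet()

fun bestByDisplayedText(candidates: List<Candidate>, displayed: String): Candidate? {
    val shown = tokens(displayed)
    return candidates.maxByOrNull { (tokens(it.lyrics) intersect shown).size }
}
```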
In some embodiments, if the active control is a video playing control, a video name is displayed on the video playing control, and the sound data is the sound data being played by that control, then the audio clip and the video name are sent to the server 400. The server 400 searches for a first configuration file corresponding to the first video with that name, where the first configuration file at least includes all the audio data configured for the first video and the song information corresponding to that audio data. The display device then receives the first song information fed back by the server, which is the song information of the audio data in the configuration file that matches the audio clip. The first configuration file includes the audio data and song information of the background songs, opening theme, interludes, ending theme, and other songs configured for the first video.
For example, in the user interface shown in fig. 9, video playing control B is playing video and sound data at the same time, and the play window of video playing control B also displays the name CCC of the television series being played. Upon receiving the music recognition instruction input by the user, the display apparatus 200 transmits the series name CCC and the recorded audio clip to the server 400. The server 400 first locates song 1, song 2, and song 3 in the audio database based on the recorded audio clip, and then searches for the configuration file corresponding to the video according to the series name CCC. For example, in the profile schematic shown in fig. 10, the television series CCC is configured with song 2, song 4, song 5 …. Song 1, song 2, and song 3, found from the recorded audio clip, are matched against song 2, song 4, and song 5 … in the configuration file (matching may be by the text of the song name, or further by the song information), so the source of the sound data being played is song 2. Finally the server 400 feeds back the song information of song 2 to the display device 200.
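The narrowing step in this example amounts to intersecting the fingerprint candidates with the songs listed in the video's configuration file. A sketch under assumed data shapes, matching by song title text as the description suggests:

```kotlin
// Intersect fingerprint candidates with the songs configured for the named
// video. SongInfo and title-based matching are illustrative assumptions.
data class SongInfo(val title: String, val artist: String)

fun matchAgainstProfile(
    fingerprintHits: List<SongInfo>,  // e.g. song 1, song 2, song 3
    videoProfile: List<SongInfo>      // e.g. song 2, song 4, song 5 for "CCC"
): SongInfo? =
    fingerprintHits.firstOrNull { hit ->
        videoProfile.any { it.title == hit.title }
    }
```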
In some embodiments, if the active control is a video playing control, a video line (subtitle) is displayed on the video playing control, and the sound data is the sound data being played by that control, then the audio clip and the first video line are sent to the server 400. If the server 400 can find the second configuration file of a second video according to the first video line, where the second configuration file at least includes all the audio data configured for the second video and the corresponding song information, the server 400 feeds back to the display device 200 the song information of the audio data in the second configuration file that matches the audio clip.
For example, in the user interface shown in fig. 11, video playing control B is playing video and sound data at the same time, and the line "enjoy a husband red" of the television series is also displayed on the play window of video playing control B. Upon receiving the music recognition instruction input by the user, the display apparatus 200 transmits the line "enjoy a husband red" and the recorded audio clip to the server 400. The server 400 first locates song 1, song 2, and song 3 in the audio database based on the recorded audio clip. It then finds the television series CCC containing the line "enjoy a husband red", and searches for the configuration file corresponding to the video according to the series name CCC. The television series CCC is configured with song 2, song 4, song 5 …. Song 1, song 2, and song 3, found from the recorded audio clip, are matched against song 2, song 4, and song 5 … in the configuration file, yielding song 2 as the source of the sound data being played; finally the server 400 feeds back the song information of song 2 to the display device 200.
In some embodiments, if the server 400 cannot find the second configuration file of the second video from the first video line (essentially, the server 400 cannot determine a unique video from the first video line, since one line may correspond to multiple videos), that is, the display device 200 does not receive song information fed back by the server 400, the display device 200 continues to send a second video line to the server 400, where the video frame containing the second video line is the next video frame after the one containing the first video line. If the server 400 can find the second configuration file of the second video from the first and second video lines, where the second configuration file at least includes all the audio data configured for the second video and the corresponding song information, it feeds back to the display device 200 the song information of the audio data in the second configuration file that matches the audio clip. If the server 400 still cannot find the second configuration file, the display device 200 continues to send a third video line … to the server 400, until the server 400 finds the corresponding second configuration file or the search times out.
For example, in the user interface shown in fig. 12, video playing control B is playing video and sound data at the same time, and the line "live like a box of chocolate" of the television series is also displayed on the play window of video playing control B. Upon receiving the music recognition instruction input by the user, the display device 200 transmits the line "live like a box of chocolate" and the recorded audio clip to the server 400. The server 400 first locates song 1, song 2, and song 3 in the audio database based on the recorded audio clip. It then cannot determine a unique television series from the line "live like a box of chocolate"; for example, television series A, television series B, and television series C all contain that line. The display device 200 therefore continues to send the line contained in the next video frame, "you never know what you will get", to the server 400.
The server 400 determines the unique television series A based on the lines "live like a box of chocolate" and "you never know what you will get", and then obtains the corresponding configuration file for television series A, which is configured with song 3, song 4, song 5 …. Song 1, song 2, and song 3, found from the recorded audio clip, are matched against song 3, song 4, and song 5 … in the configuration file, yielding song 3 as the source of the sound data being played; finally the server 400 feeds back the song information of song 3 to the display device 200.
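The line-by-line disambiguation in these examples follows a simple retry protocol: keep sending the next subtitle line until the server can pin down a unique video or the attempt times out. A sketch, with the server interface as a hypothetical stand-in for the real network call:

```kotlin
// Accumulate subtitle lines until the server resolves a unique video
// (whose configuration file can then be used), or give up after a bound.
interface RecognitionServer {
    /** Returns the unique video id once the accumulated lines pin one down. */
    fun resolveVideo(lines: List<String>): String?
}

fun resolveByLines(
    server: RecognitionServer,
    subtitleLines: Iterator<String>,  // lines from successive video frames
    maxAttempts: Int = 5              // stands in for the search timeout
): String? {
    val sent = mutableListOf<String>()
    repeat(maxAttempts) {
        if (!subtitleLines.hasNext()) return null
        sent += subtitleLines.next()
        server.resolveVideo(sent)?.let { return it }
    }
    return null  // lookup ended without a unique match
}
```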
In some embodiments, if the active control is a music playing control that at least displays rolling lyrics, the display device 200 sends to the server 400 the audio clip and the first lyric text of the rolling lyrics, where the first lyric text is the lyric being displayed by the rolling lyrics while the audio clip is playing. The server 400 calibrates the audio clip according to the first lyric text and then identifies the first song information of the sound data.
For example, in the user interface shown in fig. 13, the music playing control is playing song M (the song has been adapted into versions M1, M2, and M3, each with different lyrics; e.g., version M1 contains lyric AAAA, version M2 contains lyric BBBB, and version M3 contains lyric CCCC), and the lyric of song M being played, "living like a box of chocolate AAAA you never know what you will get", is also displayed on the play window of the music playing control. Upon receiving the song recognition instruction input by the user, the display apparatus 200 transmits the lyric and the recorded audio clip to the server 400. The server 400 first locates song M1, song M2, and song M3 in the audio database based on the recorded audio clip. Since the lyrics of song M1, song M2, and song M3 differ, a unique song can be determined from the displayed lyric; for example, if the displayed lyric matches the lyrics of song M2, it can be determined that the music playing control is playing song M2. Finally the server 400 feeds back the song information of song M2 to the display device 200.
In some embodiments, if there is no active control in the current user interface, the condition to obtain the second content data for the inactive control in the current page further includes that there is no active control currently running in the foreground.
If there is no active control in the current user interface and there is an active control currently running in the foreground, after displaying the active control on the user interface, first content data of the active control is obtained, and the audio clip and the first content data are sent to the server 400. The server 400 identifies first song information of the sound data according to the audio clip and the first content data, and receives the first song information fed back by the server.
For example, in the user interface shown in fig. 14, a navigation bar, a sidebar, and a plurality of game profile windows (which contain only text and no dynamic items, and are therefore not active controls) are displayed, while a video playing control, which is the source of the sound data, is also running in the foreground but has been dragged outside the current display window. When a music recognition instruction input by the user is received, the video playing control automatically moves back into the current user interface, as in the user interface shown in fig. 15, so that the video playing control is displayed on the user interface. Then, according to the method of the above embodiments, the video name or line displayed on the video playing control and the audio clip are sent to the server 400, and the server 400 searches for the song information corresponding to the sound data according to the video name or line and the audio clip.
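A compact sketch of this fallback is given below; the Window class and its move_into_view method are invented for illustration and stand in for whatever window management the display device actually provides.

class Window:
    def __init__(self, name, dynamic, on_screen):
        self.name = name          # e.g. "video playing control"
        self.dynamic = dynamic    # True if the control has a dynamic item
        self.on_screen = on_screen

    def move_into_view(self):
        self.on_screen = True     # fig. 14 to fig. 15: the control moves back

def find_recognition_source(visible_controls, foreground_control):
    active = [w for w in visible_controls if w.dynamic]
    if active:
        return active[0]          # an active control is already on screen
    if foreground_control is not None and foreground_control.dynamic:
        foreground_control.move_into_view()  # display it on the user interface
        return foreground_control
    return None                   # fall back to the inactive controls' content

player = Window("video playing control", dynamic=True, on_screen=False)
print(find_recognition_source([], player).name)  # video playing control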
The specific implementation of song information identification in this embodiment is illustrated by the signaling diagram in fig. 16:
the user inputs the music recognition instruction by clicking the listen-and-recognize button in the video playing application page, which sends the music recognition instruction to the music recognition application (the user may also open the music recognition application directly, in which case the video player continues to play the current video normally); the music recognition application may be an audio playing application with a listen-and-recognize function.
The music recognition application requests an audio file from the audio output module (the audio output module may be an audio player, and the audio file is an audio clip of the sound data being played). At the same time, the music recognition application requests the display text of the active control in the current interface from the application service system, and the application service system returns that display text to the music recognition application. The audio output module returns the audio file to the music recognition application. The music recognition application assembles the received audio file and display text into one request and sends it to the music recognition cloud. The music recognition cloud first searches for a plurality of matching audio data entries according to the audio file, obtaining a plurality of candidate song information entries, and then further filters the candidates according to the display text to obtain the unique song information corresponding to the sound data. Finally, the music recognition cloud feeds the identified song information back to the music recognition application, which can then display it.
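The client-side assembly step in this flow might look as follows in Python; the endpoint URL and field names are invented, since the disclosure does not define a wire format.

import json
import urllib.request

def assemble_and_send(audio_file, display_text):
    # Bundle the audio clip and the active control's display text into one
    # request, as the music recognition application does before contacting
    # the music recognition cloud.
    payload = {
        "audio": audio_file.hex(),  # audio clip of the sound data being played
        "text": display_text,       # display text of the current active control
    }
    request = urllib.request.Request(
        "https://music-cloud.example.com/identify",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # song information fed back by the cloud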
The application provides a song information identification method. Fig. 17 is a flowchart illustrating a song information identification method according to an exemplary embodiment. The song information identification method is applicable to the display device 200 in the system shown in fig. 5. As shown in fig. 17, the song information identification method may include the steps of:
in step S101, a current user interface is displayed;
in step S102, in response to the received music recognition instruction, the sound data being played is collected to generate an audio clip, and content data corresponding to a control in the current user interface is obtained, wherein the content data characterizes the media corresponding to the control;
in step S103, the audio clip and the content data are sent to a server, so that the server identifies the song information corresponding to the sound data according to the audio clip and the content data;
in step S104, the identified song information fed back by the server is received and displayed.
The application further provides a song information identification method. Fig. 18 is a flowchart illustrating a song information identification method according to an exemplary embodiment. The song information identification method is applicable to the display device 200 in the system shown in fig. 5. As shown in fig. 18, the song information identification method may include the following steps:
In step S201, in response to the received music recognition instruction, an audio clip of the sound data being played is collected.
In step S202, if an active control exists in the current user interface, first content data of the active control is obtained, the audio clip and the first content data are sent to a server so that the server identifies first song information of the sound data according to the audio clip and the first content data, and the first song information fed back by the server is received, wherein the active control is a control with a dynamic item.
In step S203, if no active control exists in the current user interface, second content data of the inactive controls in the current page is obtained, the audio clip and the second content data are sent to the server so that the server identifies second song information of the sound data according to the audio clip and the second content data, and the second song information fed back by the server is received, wherein an inactive control is a control without a dynamic item, the first content data and the second content data are different data, and the first song information and the second song information are different information.
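The branch between steps S202 and S203 can be rendered as a small self-contained sketch; the Control class and the sample page data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Control:
    text: str       # display text: video name, line, lyrics, or plain text
    dynamic: bool   # True if the control has a dynamic item

def pick_content_data(controls):
    active = [c for c in controls if c.dynamic]
    if active:
        # Step S202: first content data of the active control.
        return "first", active[0].text
    # Step S203: second content data of the inactive controls on the page.
    return "second", " ".join(c.text for c in controls)

page = [Control("Game profile text", False), Control("Movie X", True)]
print(pick_content_data(page))  # ('first', 'Movie X')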
In some embodiments, if the active control is a video playing control, a video name is displayed on the video playing control, and the sound data is the sound data being played by the video playing control, then the audio clip and the video name are sent to the server so that the server searches for a first configuration file corresponding to the first video identified by the video name, where the first configuration file includes at least all audio data configured for the first video and the song information corresponding to that audio data. The first song information fed back by the server is then received, the first song information being the song information of the audio data, found in the first configuration file, that matches the audio clip.
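A toy sketch of this configuration-file lookup follows; the profile contents and all names are assumptions for illustration only.

# First configuration file per video name: configured audio data mapped to
# its song information (invented data).
PROFILES = {
    "Movie X": {"song 3": "Song 3 by Artist P", "song 4": "Song 4 by Artist Q"},
}

def match_in_profile(video_name, clip_candidates):
    profile = PROFILES.get(video_name)
    if profile is None:
        return None  # no first configuration file for this video name
    for audio_data, song_info in profile.items():
        if audio_data in clip_candidates:  # audio data matching the audio clip
            return song_info               # the first song information
    return None

print(match_in_profile("Movie X", {"song 1", "song 3"}))  # Song 3 by Artist P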
Those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable categories or circumstances, including any new and useful process, machine, product, or material, or any new and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, and the like), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block", "controller", "engine", "unit", "component", or "system". Furthermore, aspects of the present application may take the form of a computer product embodied in one or more computer-readable media and comprising computer-readable program code.
Furthermore, the order in which processing elements and sequences are presented, and the use of numbers, letters, or other designations in this application, are not intended to limit the order of the processes and methods of this application unless explicitly recited in the claims. While the foregoing disclosure discusses, by way of various examples, certain presently considered useful embodiments of the invention, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, they are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of this application. For example, although the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as by installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.

Claims (10)

1. A display device, characterized by comprising:
a display for displaying a user interface;
a user interface for receiving an input signal;
a controller coupled to the display and the user interface, respectively, for performing:
displaying a current user interface;
in response to a received music recognition instruction, collecting the sound data being played to generate an audio clip and obtaining content data corresponding to a control in a current user interface, wherein the content data characterizes media corresponding to the control;
transmitting the audio clip and the content data to a server so that the server can identify song information corresponding to the sound data according to the audio clip and the content data;
and receiving and displaying the identified song information fed back by the server.
2. The display device of claim 1, wherein the current user interface comprises at least two controls;
the step of obtaining the content data corresponding to the control in the current user interface comprises the following steps:
when an active control exists in a current user interface, first content data of the active control is also acquired, wherein the active control is a control with a dynamic item, the first content data represents media information corresponding to the active control and does not represent media information corresponding to an inactive control, and the inactive control is a control without the dynamic item;
The sending the audio clip and the content data to a server, so that the server identifies song information corresponding to the sound data according to the audio clip and the content data, including:
transmitting the audio clip and the first content data to a server, so that the server identifies first song information of the sound data according to the audio clip and the first content data;
the receiving and displaying the identified song information fed back by the server includes:
and receiving and displaying the first song information fed back by the server.
3. The display device of claim 1, wherein the current user interface comprises at least two controls;
the step of obtaining the content data corresponding to the control in the current user interface comprises the following steps:
when no active control exists in the current user interface, second content data of an inactive control in the current page is also acquired, wherein the inactive control is a control without a dynamic item, and the second content data represents media information corresponding to the inactive control;
the sending the audio clip and the content data to a server, so that the server identifies song information corresponding to the sound data according to the audio clip and the content data, including:
Transmitting the audio clip and the second content data to a server so that the server identifies second song information of the sound data according to the audio clip and the second content data;
the receiving and displaying the identified song information fed back by the server includes:
and receiving and displaying the second song information fed back by the server.
4. The display device of claim 2, wherein the active control is a video play control, the video play control has a video name displayed thereon, the first content data is the video name, and the sound data is sound data being played by the video play control;
the transmitting the audio clip and the first content data to a server, so that the server identifies first song information of the sound data according to the audio clip and the first content data includes:
sending the audio clip and the video name to the server, so that the server searches for a first configuration file corresponding to a first video of the video name, wherein the first configuration file comprises at least all audio data configured for the first video and song information corresponding to the audio data;
And receiving the first song information fed back by the server, wherein the first song information is the song information of the audio data matched with the audio fragment and searched in the first configuration file.
5. The display device of claim 2, wherein the active control is a video play control, a first video line is displayed on the video play control, the first content data is the first video line, and the sound data is sound data being played by the video play control;
the transmitting the audio clip and the first content data to the server, so that the server identifies the first song information of the sound data according to the audio clip and the first content data, includes:
and sending the audio clip and the first video line to the server; and if the server can find a second configuration file of a second video according to the first video line, the second configuration file comprising at least all audio data configured for the second video and song information corresponding to the audio data, receiving the first song information fed back by the server, wherein the first song information is the song information of the audio data, found in the second configuration file, that matches the audio clip.
6. The display device of claim 5, wherein transmitting the audio clip and the first content data to a server to cause the server to identify first song information for the sound data from the audio clip and the first content data further comprises:
if the server cannot find the second configuration file of the second video according to the first video line, continuing to send a second video line to the server, wherein the video frame in which the second video line is located is the video frame immediately following the one in which the first video line is located;
and if the server can find a second configuration file of a second video according to the first video line and the second video line, the second configuration file comprising at least all audio data configured for the second video and song information corresponding to the audio data, receiving the first song information fed back by the server, wherein the first song information is the song information of the audio data, found in the second configuration file, that matches the audio clip.
7. The display device of claim 2, wherein the active control is a music playing control that plays at least scrolling lyrics, the first content data being the scrolling lyrics;
the transmitting the audio clip and the first content data to the server, so that the server identifies the first song information of the sound data according to the audio clip and the first content data, includes:
and sending the audio clip and a first lyric text of the scrolling lyrics to the server, wherein the audio clip is collected while the scrolling lyrics are displaying the first lyric text, so that the server identifies the first song information of the sound data according to the first lyric text and the audio clip.
8. The display device of claim 3, wherein, if no active control exists in the current user interface, the condition for obtaining the second content data of the inactive control in the current page further comprises that no active control is currently running in the foreground;
and if an active control is running in the foreground, after the active control is displayed on the user interface, acquiring first content data of the active control, and sending the audio clip and the first content data to the server, so that the server identifies the first song information of the sound data according to the audio clip and the first content data, and receiving the first song information fed back by the server.
9. A song information identification method, the method comprising:
in response to a received music recognition instruction, collecting the sound data being played to generate an audio clip and obtaining content data corresponding to a control in a current user interface, wherein the content data characterizes media corresponding to the control;
transmitting the audio clip and the content data to a server so that the server can identify song information corresponding to the sound data according to the audio clip and the content data;
and receiving and displaying the identified song information fed back by the server.
10. The song information identification method of claim 9, wherein the current user interface comprises at least two controls;
the step of obtaining the content data corresponding to the control in the current user interface comprises the following steps:
when an active control exists in a current user interface, first content data of the active control is also acquired, wherein the active control is a control with a dynamic item, the first content data represents media information corresponding to the active control and does not represent media information corresponding to an inactive control, and the inactive control is a control without the dynamic item;
The sending the audio clip and the content data to a server, so that the server identifies song information corresponding to the sound data according to the audio clip and the content data, including:
transmitting the audio clip and the first content data to a server, so that the server identifies first song information of the sound data according to the audio clip and the first content data;
the receiving and displaying the identified song information fed back by the server includes:
and receiving and displaying the first song information fed back by the server.
CN202211666085.8A 2022-12-23 2022-12-23 Song information identification method and display device Pending CN117290539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211666085.8A CN117290539A (en) 2022-12-23 2022-12-23 Song information identification method and display device


Publications (1)

Publication Number Publication Date
CN117290539A true CN117290539A (en) 2023-12-26

Family

ID=89257752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211666085.8A Pending CN117290539A (en) 2022-12-23 2022-12-23 Song information identification method and display device

Country Status (1)

Country Link
CN (1) CN117290539A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination