CN108966007B - Method and device for distinguishing video scenes under HDMI - Google Patents

Method and device for distinguishing video scenes under HDMI

Info

Publication number
CN108966007B
CN108966007B (application CN201811019037.3A)
Authority
CN
China
Prior art keywords
keywords
scene
video
backtracking
record
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811019037.3A
Other languages
Chinese (zh)
Other versions
CN108966007A (en)
Inventor
姜俊厚
董娜
李鑫
吴汉勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN201811019037.3A priority Critical patent/CN108966007B/en
Publication of CN108966007A publication Critical patent/CN108966007A/en
Application granted granted Critical
Publication of CN108966007B publication Critical patent/CN108966007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781Games
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Abstract

The invention provides a method and a device for distinguishing video scenes under HDMI. When a smart television determines that the currently played video scene is a scene to be judged, it searches for the backtracking record corresponding to that scene, acquires the keywords in the backtracking record, and obtains a statistical result from those keywords. If the statistical result belongs to video operation, the current video scene is determined to be a video operation scene; if it belongs to video watching, the current video scene is determined to be a video watching scene. The method and device can therefore accurately identify the video scene, so that parameter settings can be adjusted according to the scene and the user's viewing experience improved.

Description

Method and device for distinguishing video scenes under HDMI
Technical Field
The invention relates to the technical field of televisions, and in particular to a method and a device for distinguishing video scenes under HDMI.
Background
At present, a smart television can automatically set parameters such as picture and sound according to the content being watched, so as to improve the user's viewing experience. However, most users watch television content over HDMI (High-Definition Multimedia Interface), with the smart television connected to a cable set-top box. In this viewing mode the smart television serves only as a display: it cannot perceive information about the program being watched, so the watched content cannot be analyzed.
For this reason, several methods have emerged that apply deep learning on the smart television to help it identify what the user is watching. One of them is Caffe: a trained Caffe model runs on the smart television, analyzes the displayed content, and reports information such as the type of program currently being watched. That is, the smart television continuously captures screenshots of the displayed content, compares each captured image with preset images in the Caffe model, and identifies different viewing scenes through this image-recognition process, for example distinguishing a romance-film viewing scene from an action-film viewing scene.
However, when a user operates the television through a set-top box, the Caffe model cannot tell whether the user is operating the content or merely watching a video of such operation. For example, it cannot distinguish a user playing a game from a user watching a game video, or a user working through exercises from a user watching an educational video. The current video scene therefore cannot be determined, parameters such as picture and sound cannot be adjusted for different video scenes, and the user's viewing experience suffers.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for distinguishing video scenes under HDMI to solve the problem in the prior art that video scenes cannot be distinguished.
Specifically, the invention is realized by the following technical scheme:
the invention provides a method for distinguishing video scenes under HDMI, which is applied to a smart television and comprises the following steps:
when the video scene currently played by the smart television is determined to be a scene to be judged, searching for the backtracking record corresponding to the currently played video scene;
acquiring keywords in the backtracking record, and obtaining a statistical result according to the keywords;
if the statistical result belongs to the video operation, determining that the current video scene is a video operation scene;
and if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
Based on the same concept, the invention also provides a device for distinguishing video scenes under the HDMI, wherein the device is applied to the smart television and comprises:
the searching unit is used for searching for the backtracking record corresponding to the currently played video scene when the video scene currently played by the smart television is determined to be a scene to be judged;
the statistical unit is used for acquiring keywords in the backtracking records and obtaining statistical results according to the keywords;
the determining unit is used for determining that the current video scene is a video operation scene if the statistical result belongs to the video operation; and if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
Therefore, with the method and the device, when the smart television determines that the currently played video scene is a scene to be judged, it searches for the backtracking record corresponding to the currently played video scene, acquires the keywords in the backtracking record, and obtains a statistical result from those keywords; if the statistical result belongs to video operation, the current video scene is determined to be a video operation scene, and if it belongs to video watching, the current video scene is determined to be a video watching scene. Compared with the prior art, the method and the device can distinguish video operation from video watching, and can therefore accurately identify the video scene, so that parameter settings can be adjusted according to the scene and the user's viewing experience improved.
Drawings
Fig. 1 is a process flow diagram of a method of distinguishing video scenes under HDMI in an exemplary embodiment of the invention;
FIGS. 2a, 2b, and 2c are diagrams of a game play scenario in an exemplary embodiment of the invention;
FIGS. 3a, 3b, and 3c are diagrams of game video scenes in an exemplary embodiment of the invention;
FIG. 4 is a block diagram of the logic of an apparatus for distinguishing video scenes under HDMI in an exemplary embodiment of the invention;
fig. 5 is a logical block diagram of a smart tv in an exemplary embodiment of the invention.
Detailed Description
In order to solve the problems in the prior art, the invention provides a method and a device for distinguishing video scenes under HDMI (High-Definition Multimedia Interface). When the smart television determines that the currently played video scene is a scene to be judged, it searches for the backtracking record corresponding to the currently played video scene, acquires the keywords in the backtracking record, and obtains a statistical result from those keywords; if the statistical result belongs to video operation, the current video scene is determined to be a video operation scene, and if it belongs to video watching, the current video scene is determined to be a video watching scene. Compared with the prior art, the method and the device can distinguish video operation from video watching, and can therefore accurately identify the video scene, so that parameter settings can be adjusted according to the scene and the user's viewing experience improved.
Referring to fig. 1, a flowchart of a method for distinguishing video scenes under HDMI in an exemplary embodiment of the present invention is shown, where the method is applied to a smart tv, and the method includes:
Step 101, when the video scene currently played by the smart television is determined to be a scene to be judged, searching for the backtracking record corresponding to the currently played video scene;
in this embodiment, the smart television may continuously perform screenshot on a content picture displayed on the smart television by running a trained recognition model, for example, a Caffe model, analyze information such as a type of a program currently played by the smart television after image recognition, and the smart television determines whether a video scene currently played is a scene to be determined according to information input by the recognition model. In this embodiment, the scene to be determined refers to a video scene that cannot be accurately identified by the identification model when the video scene is a video operation scene and a video watching scene of the same type of video, for example, a video scene for playing a game and a scene for watching a game video, a video scene for doing a problem and a scene for watching an education video, and the like, which are video scenes that cannot be accurately identified by the identification model in the prior art.
When, after image recognition on a number of content screenshots, the video scene currently played by the smart television is determined to be a scene to be judged, the smart television can search for the backtracking record corresponding to the currently played video scene. The backtracking record contains the keywords of at least one display scene on the path through which the currently played video scene was opened, and is used to further judge whether the scene to be judged is a video operation scene or a video watching scene.
As an embodiment, the smart television can initialize the backtracking record when it is started, and then continuously acquire keywords of the display scene currently shown on the screen and add them to the backtracking record. The keywords of the current display scene can be acquired by at least one of the following methods:
in the first method, when the user moves the focus, the smart television obtains a keyword corresponding to an option where the focus is located and adds the keyword to a backtracking record, such as a display scene shown in fig. 2a and 2b, and if the user moves the focus in sequence and selects the focus step by step, for example, a focus path specifically includes: my → channel → game → three countries of China for a word → open, the smart television comprises the following keywords according to the option of the focus: my, channel, game, three kingdoms rehearsal, open, thus save the keyword to the backtracking record.
In the second method, the smart television obtains keywords corresponding to the interface content displayed on the uppermost layer and adds them to the backtracking record. Before the user enters the game, the uppermost interface is the one shown in fig. 2b; its keywords include the game name, such as "Romance of the Three Kingdoms", operation buttons such as "Open", and related recommendation information such as "Vitality Courier", "Alloy Soldier", "War Legend" and "Flower Swordsman". The smart television saves these texts as keywords in the backtracking record.
In the third method, the smart television periodically takes a screenshot of the current display scene and adds the keywords extracted from the screenshot to the backtracking record. If the screenshot is the one shown in fig. 2c, the extracted keywords include "Task", "Cumulative Gift", "Wonderful Activity", "Permanent Member" and "Menu"; these keywords are saved to the backtracking record, and the screenshot operation is executed again in the next period.
The above methods for acquiring keywords from the display scenes on the path through which the currently played video scene was opened can be used independently or in combination; using them in combination gives higher accuracy in the later scene judgment. A rough sketch of such keyword collection is given below.
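As a rough illustration of how a backtracking record might be populated from the three sources above, consider the following Python sketch. All names (BacktrackRecord, Node, the three hook functions, extract_text_keywords) are illustrative assumptions rather than the patented implementation, and the OCR step is only a placeholder.

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Node:
    """One node of the backtracking record: keywords sharing a similar attribute."""
    keywords: List[str] = field(default_factory=list)


@dataclass
class BacktrackRecord:
    """Initialized empty when the television starts up."""
    nodes: List[Node] = field(default_factory=list)

    def append(self, keywords: List[str]) -> None:
        """Append keywords to the most recent node (created on demand)."""
        if not self.nodes:
            self.nodes.append(Node())
        self.nodes[-1].keywords.extend(keywords)


def extract_text_keywords(screenshot: Any) -> List[str]:
    """Placeholder for an OCR / text-detection step over a screenshot."""
    return []


# Method 1: keyword of the option the focus lands on as the user navigates.
def on_focus_moved(record: BacktrackRecord, option_label: str) -> None:
    record.append([option_label])


# Method 2: keywords taken from the top-most displayed interface
# (game name, operation buttons, recommendation titles, ...).
def on_interface_shown(record: BacktrackRecord, interface_texts: List[str]) -> None:
    record.append(interface_texts)


# Method 3: keywords extracted from a periodic screenshot.
def on_periodic_screenshot(record: BacktrackRecord, screenshot: Any) -> None:
    record.append(extract_text_keywords(screenshot))
```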
As an embodiment, a user may not stay in one video playing scene: scene switching may occur, for example from a game-playing scene to a game-video scene, and after such a switch the attributes of the displayed keywords also differ noticeably. Here the attribute indicates whether a keyword belongs to the video operation type or the video watching type. To avoid storing keywords with very different attributes together, the smart television can add the first acquired keyword to the first node of the backtracking record; for each subsequently acquired keyword it compares the attribute of the current keyword with that of the previous keyword, and if the two attributes differ greatly, the video scene can be considered to have switched (or to differ greatly), so a second node is added to the backtracking record and the current keyword is placed in that second node. Keywords with very different attributes are thus kept apart, which improves the accuracy of scene judgment.
As an embodiment, the smart television may set a preset number of nodes for the backtracking record. Before adding the Nth node, it judges whether N is greater than the preset number; if so, the earliest node is deleted and the Nth node is added; if not, the Nth node is simply added. For example, with a preset number of 5, the first through fifth nodes can be created directly because the node count does not exceed the preset number, but when the sixth node is to be created the count would exceed the preset number, so the first node is deleted first and the sixth node is then created. By limiting the number of nodes, keywords in the nodes that were created earliest and whose attributes have changed most are discarded, which prevents the backtracking record from holding too many keywords, saves storage space, and improves the efficiency of scene judgment. A sketch of this node management follows.
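Continuing the sketch above, the node management might look roughly as follows. MAX_NODES and the keyword_attribute() mapping are assumptions; a real implementation would use whatever keyword dictionary or classifier the television actually ships with.

```python
MAX_NODES = 5  # assumed preset number of nodes


def keyword_attribute(keyword: str) -> str:
    """Placeholder attribute lookup: map a keyword to 'operation' or 'viewing'."""
    viewing_markers = ("commentary", "full-screen play", "episode list",
                      "favorites", "more highlights")
    kw = keyword.lower()
    return "viewing" if any(marker in kw for marker in viewing_markers) else "operation"


def add_keyword(record: BacktrackRecord, keyword: str) -> None:
    """Add one keyword, opening a new node when the attribute clearly changes."""
    if not record.nodes:
        record.nodes.append(Node())                      # first node
    else:
        previous = record.nodes[-1].keywords
        if previous and keyword_attribute(previous[-1]) != keyword_attribute(keyword):
            # Treat a clear attribute change as a scene switch: start a new node,
            # deleting the earliest node if the preset number would be exceeded.
            if len(record.nodes) >= MAX_NODES:
                record.nodes.pop(0)
            record.nodes.append(Node())
    record.nodes[-1].keywords.append(keyword)
```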
Step 102, acquiring the keywords in the backtracking record and obtaining a statistical result according to the keywords;
In this embodiment, when the smart television determines that the current video scene is a scene to be judged, it acquires the keywords in the backtracking record and performs a statistical calculation on them to obtain the statistical result. Specifically, the smart television sorts the keywords in the backtracking record in chronological order of their recording time and assigns different weights to keywords recorded at different times, with the weight of an earlier keyword smaller than that of a later keyword. In other words, keywords recorded later carry greater weight, because they are the ones the user has just seen or that are still on the viewing interface, and they therefore contribute most to the attribute of the video scene to be judged. After acquiring the keywords in the backtracking record together with their weights, the smart television performs a weighted calculation over all keywords to obtain the statistical result. By weighting the keywords in a way that mimics the user's own judgment, the calculation result becomes more accurate. A minimal sketch of this weighted statistic is given below.
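Again building on the sketch above, the weighted statistic could be approximated as below. The linearly increasing weights and the per-keyword ±weight scores are assumptions; the description only requires that later-recorded keywords carry more weight than earlier ones.

```python
def classify(record: BacktrackRecord) -> str:
    """Weighted vote over all keywords in recording order."""
    keywords = [kw for node in record.nodes for kw in node.keywords]
    if not keywords:
        return "unknown"
    score = 0.0
    for rank, kw in enumerate(keywords, start=1):   # earlier keyword -> smaller weight
        weight = rank / len(keywords)
        score += weight if keyword_attribute(kw) == "operation" else -weight
    return "video operation scene" if score > 0 else "video watching scene"
```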
Step 103, if the statistical result belongs to video operation, determining that the current video scene is a video operation scene;
in this embodiment, if the attribute of the statistical result belongs to a video operation, it may be determined that the current video scene is a video operation scene, so as to adjust the parameter of the playing picture according to the playing quality requirement of the video operation scene.
Step 104, if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
In this embodiment, if the attribute of the statistical result belongs to video viewing, it may be determined that the current video scene is a video viewing scene, so as to adjust the parameter of the playing picture according to the playing quality requirement of the video viewing scene.
In order to make the objects, technical solutions and advantages of the present invention clearer, the solution of the present invention is described in further detail below with reference to figs. 2a, 2b, 2c and figs. 3a, 3b, 3c.
After the smart television is started, the backtracking record is initialized, and the user may then watch video in the following situations:
the scene model A: when the video scenes watched by the user are sequentially shown in fig. 2a, fig. 2b, and fig. 2c, the paths of the video scenes acquired by the smart television are as follows:
the user opens the game label page, enters the three kingdoms rehearsal interface, and starts to really play the game by clicking the game label page.
The keywords in the backtracking record should then be: the first node contains {My, Channel, Game, ...}, the second node contains {Romance of the Three Kingdoms, Open, Vitality Courier, Alloy Soldier, War Legend, Flower Swordsman, ...}, and the third node contains {Task, Cumulative Gift, Wonderful Activity, Permanent Member, Menu, ...}. Weights are assigned to all keywords, with the keywords of the first node weighted least and those of the third node weighted most, and a weighted calculation is carried out over the nodes' keywords. Since the attributes of the third node's keywords belong to the game-playing type, the final statistical result belongs to the game-playing scene, and scene model A is judged to be a game-playing scene.
Scene model B: when the video scenes watched by the user are, in sequence, those shown in fig. 3a, fig. 3b and fig. 3c, the path of the video scene acquired by the smart television is as follows:
the user opens the movie label page, enters the royal and glory comment interface, and watches the game video through hi food (comment platform logo).
The keywords in the backtracking record should then be: the first node contains {My, Channel, Film, ...}, the second node contains {King of Glory commentary, full-screen play, episode list, more highlights, favorites, ...}, and the third node contains {the commentary-platform watermark, the Douyu platform logo, 00:39, Confirm, Switch, ...}. Weights are assigned to all keywords, with the keywords of the first node weighted least and those of the third node weighted most, and a weighted calculation is carried out over the nodes' keywords. Since the attributes of the commentary-platform watermark in the third node, and of "King of Glory commentary", "full-screen play", "episode list", "more highlights" and "favorites", belong to the game-video type and differ greatly from the attributes of playing a game, the final statistical result belongs to the game-video scene, and scene model B is judged to be a game-video-watching scene.
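Running the sketch above on shortened versions of the two keyword paths yields the expected labels. The keyword lists are paraphrased from the figures, the node boundaries here are driven only by the attribute check, and the outcome depends entirely on the placeholder keyword_attribute() mapping.

```python
record_a = BacktrackRecord()
for kw in ["My", "Channel", "Game",                         # fig. 2a
           "Romance of the Three Kingdoms", "Open",         # fig. 2b
           "Task", "Cumulative Gift", "Permanent Member"]:  # fig. 2c
    add_keyword(record_a, kw)
print(classify(record_a))   # -> video operation scene

record_b = BacktrackRecord()
for kw in ["My", "Channel", "Film",                         # fig. 3a
           "King of Glory commentary", "Full-screen play",  # fig. 3b
           "Episode list", "Favorites"]:                    # fig. 3c
    add_keyword(record_b, kw)
print(classify(record_b))   # -> video watching scene
```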
Compared with the prior art, on the basis of analyzing a plurality of screenshots taken along the path that opens the video scene, the invention distinguishes video operation from video watching through the attributes of the keywords extracted from at least one display picture on that path; the video scene can thus be identified accurately, parameter settings can be adjusted according to the scene, and the user's viewing experience is improved.
Based on the same conception, the invention also provides a device for distinguishing video scenes under HDMI, which can be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device for distinguishing video scenes under HDMI is a logical device, formed by the CPU of the equipment where it resides reading the corresponding computer program instructions from memory and running them.
Referring to fig. 4, a device 400 for distinguishing video scenes under HDMI in an exemplary embodiment of the present invention is applied to a smart tv, and from a logical level, the logical structure of the device 400 includes:
the searching unit 401 is configured to search a backtracking record corresponding to a currently played video scene when it is determined that the currently played video scene is a scene to be determined;
a statistical unit 402, configured to obtain keywords in the backtracking record, and obtain a statistical result according to the keywords;
a determining unit 403, configured to determine that a current video scene is a video operation scene if the statistical result belongs to a video operation; and if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
As an embodiment, the apparatus further comprises:
an obtaining unit 404, configured to initialize the backtracking record when the smart television is started, and to acquire keywords of the current video scene and add them to the backtracking record;
the method for acquiring the keywords of the current video scene comprises at least one of the following methods:
the method comprises the steps that when a user moves a focus, keywords corresponding to an option where the focus is located are obtained and added to a backtracking record;
acquiring keywords corresponding to the interface content displayed on the uppermost layer at present and adding the keywords into a backtracking record;
and thirdly, acquiring screenshots of the current video scene periodically, and adding keywords extracted from the screenshots into a backtracking record.
As an embodiment, the obtaining unit 404 is further configured to add the obtained first keyword to the first node of the backtracking record; and if the attribute difference between the current keyword and the previous keyword is large, adding a second node in the backtracking record, and adding the current keyword into the second node.
As an embodiment, the obtaining unit 404 is further configured to determine whether N is greater than the preset number before adding an nth node; if so, deleting the earliest node and newly adding an Nth node; if not, the Nth node is newly added.
As an embodiment, the apparatus further comprises:
an allocating unit 405, configured to sort the keywords in chronological order of their recording time and to assign different weights to keywords recorded at different times, wherein the weight of an earlier keyword is smaller than that of a later keyword;
the statistical unit 402 is specifically configured to perform weighted calculation on the keywords in the backtracking record to obtain a statistical result.
Based on the same concept, the invention further provides a smart television, as shown in fig. 5, which includes a memory 51, a processor 52, a communication interface 53, a display 54 and a communication bus 55; wherein, the memory 51, the processor 52, the communication interface 53 and the display 54 communicate with each other through the communication bus 55;
the memory 51 is used for storing computer programs;
the processor 52 is configured to execute the computer program stored in the memory 51, and when the processor 52 executes the computer program, any step of the method for distinguishing video scenes under HDMI provided by the embodiment of the present invention is implemented.
The present invention further provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any step of the method for distinguishing video scenes under HDMI provided by the embodiment of the present invention.
All the embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the embodiments of the computer device and of the computer-readable storage medium are substantially similar to the method embodiments, so their description is relatively brief; for relevant points, refer to the partial description of the method embodiments.
In summary, with the method and the device, when the smart television determines that the currently played video scene is a scene to be judged, it searches for the backtracking record corresponding to the currently played video scene, acquires the keywords in the backtracking record, and obtains a statistical result from those keywords; if the statistical result belongs to video operation, the current video scene is determined to be a video operation scene, and if it belongs to video watching, the current video scene is determined to be a video watching scene. Compared with the prior art, the method and the device can distinguish video operation from video watching, and can therefore accurately identify the video scene, so that parameter settings can be adjusted according to the scene and the user's viewing experience improved.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method for distinguishing video scenes under HDMI, wherein the method is applied to a smart television, and the method comprises:
when the video scene played by the current smart television is determined to be a scene to be judged, searching a backtracking record corresponding to the video scene played at present, wherein the backtracking record is used for recording keywords in at least one display scene on a path for starting the video scene;
sorting the keywords in chronological order of their recording time; assigning different weights to keywords recorded at different times, wherein the weight of an earlier-ranked keyword is smaller than that of a later-ranked keyword;
weighting and calculating the keywords in the backtracking records to obtain a statistical result;
if the statistical result belongs to the video operation, determining that the current video scene is a video operation scene;
and if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
2. The method of claim 1, further comprising:
when the smart television is started, initializing the backtracking record;
acquiring keywords of a current video scene and adding the keywords into a backtracking record;
the method for acquiring the keywords of the current video scene comprises at least one of the following methods:
the method comprises the steps that when a user moves a focus, keywords corresponding to an option where the focus is located are obtained and added to a backtracking record;
acquiring keywords corresponding to the interface content displayed on the uppermost layer at present and adding the keywords into a backtracking record;
and thirdly, acquiring screenshots of the current video scene periodically, and adding keywords extracted from the screenshots into a backtracking record.
3. The method of claim 2, wherein acquiring the keywords of the current video scene and adding them to the backtracking record further comprises:
adding the acquired first keyword into the first node of the backtracking record;
and if the attribute difference between the current keyword and the previous keyword is large, adding a second node in the backtracking record, and adding the current keyword into the second node.
4. The method of claim 2, wherein acquiring the keywords of the current video scene and adding them to the backtracking record further comprises:
before the Nth node is newly added, judging whether N is larger than a preset number;
if so, deleting the earliest node and newly adding an Nth node;
if not, the Nth node is newly added.
5. An apparatus for distinguishing video scenes under HDMI, wherein the apparatus is applied to a smart TV, the apparatus comprising:
the searching unit is used for searching a backtracking record corresponding to the currently played video scene when the currently played video scene is determined to be a scene to be judged, wherein the backtracking record is used for recording keywords in at least one display scene on a path for starting the video scene;
the distribution unit is used for sorting the keywords in chronological order of their recording time and assigning different weights to keywords recorded at different times, wherein the weight of an earlier-ranked keyword is smaller than that of a later-ranked keyword;
the statistical unit is used for carrying out weighted calculation on the keywords in the backtracking records to obtain a statistical result;
the determining unit is used for determining that the current video scene is a video operation scene if the statistical result belongs to the video operation; and if the statistical result belongs to video watching, determining that the current video scene is a video watching scene.
6. The apparatus of claim 5, further comprising:
the obtaining unit is used for initializing the backtracking record when the smart television is started, and for acquiring keywords of the current video scene and adding them to the backtracking record;
the method for acquiring the keywords of the current video scene comprises at least one of the following methods:
the method comprises the steps that when a user moves a focus, keywords corresponding to an option where the focus is located are obtained and added to a backtracking record;
acquiring keywords corresponding to the interface content displayed on the uppermost layer at present and adding the keywords into a backtracking record;
and thirdly, acquiring screenshots of the current video scene periodically, and adding keywords extracted from the screenshots into a backtracking record.
7. The apparatus of claim 6,
the obtaining unit is further configured to add the obtained first keyword to the first node of the backtracking record; and if the attribute difference between the current keyword and the previous keyword is large, adding a second node in the backtracking record, and adding the current keyword into the second node.
8. The apparatus of claim 6,
the acquiring unit is further configured to determine whether N is greater than a preset number before an nth node is newly added; if so, deleting the earliest node and newly adding an Nth node; if not, the Nth node is newly added.
CN201811019037.3A 2018-09-03 2018-09-03 Method and device for distinguishing video scenes under HDMI Active CN108966007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811019037.3A CN108966007B (en) 2018-09-03 2018-09-03 Method and device for distinguishing video scenes under HDMI

Publications (2)

Publication Number Publication Date
CN108966007A (en) 2018-12-07
CN108966007B (en) 2021-08-31

Family

ID=64475558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811019037.3A Active CN108966007B (en) 2018-09-03 2018-09-03 Method and device for distinguishing video scenes under HDMI

Country Status (1)

Country Link
CN (1) CN108966007B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556604B (en) * 2020-04-24 2023-07-18 深圳市万普拉斯科技有限公司 Sound effect adjusting method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101554047A (en) * 2006-08-04 2009-10-07 先进微装置公司 Video display mode control
CN103596044A (en) * 2013-11-22 2014-02-19 深圳创维数字技术股份有限公司 Method, device and system for processing and displaying video file
CN104834686A (en) * 2015-04-17 2015-08-12 中国科学院信息工程研究所 Video recommendation method based on hybrid semantic matrix
CN105072483A (en) * 2015-08-28 2015-11-18 深圳创维-Rgb电子有限公司 Smart home equipment interaction method and system based on smart television video scene
CN107451148A (en) * 2016-05-31 2017-12-08 北京金山安全软件有限公司 Video classification method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227988B (en) * 2014-06-03 2018-10-26 Tcl集团股份有限公司 A kind of method and device that smart television is arranged according to scene display system


Also Published As

Publication number Publication date
CN108966007A (en) 2018-12-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant after: Hisense Video Technology Co., Ltd

Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218

Applicant before: HISENSE ELECTRIC Co.,Ltd.

GR01 Patent grant