CN115776592A - Display method, display device, electronic equipment and storage medium - Google Patents

Display method, display device, electronic equipment and storage medium

Info

Publication number: CN115776592A (application CN202211389802.7A)
Authority: CN (China)
Prior art keywords: target, video data, target video, display, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211389802.7A
Other languages: Chinese (zh)
Inventor: 田园
Current assignee: Shenzhen Skyworth RGB Electronics Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Skyworth RGB Electronics Co Ltd
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd
Priority to: CN202211389802.7A
Publication of: CN115776592A

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the disclosure relate to a display method, a display device, an electronic device and a storage medium, wherein the method includes the following steps: acquiring target video data; determining a display area of a target video represented by the target video data on a screen of a television; determining target display parameters of the target video; and displaying the target video in the display area according to a display mode indicated by the target display parameters. With this method, the display parameters of a video can be determined automatically and the display mode of the video's display area adjusted accordingly, which improves the degree of matching between the television picture quality and the displayed video.

Description

Display method, display device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display method and apparatus, an electronic device, and a storage medium.
Background
A television (TV) is a device that delivers image pictures and audio signals by means of electronic technology; it is an important broadcast and video communication tool.
Users' requirements on the image quality of the pictures presented by a television keep increasing. At present, however, television image quality parameters are generally set for the television screen as a whole, so the degree of matching between the television image quality and the displayed video is low.
Disclosure of Invention
In view of the above, in order to solve some or all of the above technical problems, embodiments of the present disclosure provide a display method, an apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a display method, where the method includes:
acquiring target video data;
determining a display area of a target video represented by the target video data on a screen of the television;
determining target display parameters of the target video;
and displaying the target video in the display area according to the display mode indicated by the target display parameter.
In one possible embodiment, the determining the target display parameters of the target video includes:
extracting audio data from the target video data;
determining a target category of the target video from a predetermined set of categories based on the audio data;
determining display parameters associated with the target category from a predetermined set of display parameters;
and determining the determined display parameters as target display parameters of the target video.
In one possible embodiment, the determining the target category of the target video from a predetermined category set based on the audio data includes:
identifying the audio data to obtain a first identification result;
determining whether a category matching the first recognition result is included in a predetermined category set;
and determining the category matched with the first recognition result as the target category of the target video under the condition that the category matched with the first recognition result is included in the category set.
In one possible embodiment, the determining a target category of the target video from a predetermined set of categories based on the audio data further includes:
extracting image data from the target video data in the case that a category matching the first recognition result is not included in the category set;
identifying the extracted image data to obtain a second identification result;
determining a category matching the second recognition result from the category set;
and determining the category matched with the second recognition result as the target category of the target video.
In one possible embodiment, the target video data includes a plurality of channels of video data; and
the determining a display area of a target video represented by the target video data on a screen of the television comprises:
aiming at each path of video data in the multi-path video data, determining a display area of a target video represented by the path of video data on a screen of the television to obtain the display area of the target video represented by the path of video data; and
the determining the target display parameters of the target video comprises:
aiming at each path of video data in the multi-path video data, determining target display parameters of the target video represented by the path of video data; and
the displaying the target video in the display area according to the display mode indicated by the target display parameter includes:
and aiming at each path of video data in the multi-path video data, displaying the target video represented by the path of video data in the display area of the target video represented by the path of video data according to the display mode indicated by the target display parameter of the target video represented by the path of video data.
In one possible embodiment, the method further comprises:
detecting image quality adjustment operation of a target video represented by single-channel video data in the multi-channel video data;
determining a display area of a target video represented by the single-channel video data under the condition that the image quality adjustment operation is detected;
during the operation of the image quality adjustment operation, the image quality adjustment operation is only performed on the display area of the target video represented by the single-channel video data.
In one possible embodiment, the acquiring target video data includes at least one of:
determining the video data of the television as target video data;
and determining the screen projection video data sent to the television as target video data.
In a second aspect, an embodiment of the present disclosure provides a display device, where the display device includes:
an acquisition unit configured to acquire target video data;
a first determining unit, configured to determine a display area of a target video represented by the target video data on a screen of the television;
a second determining unit, configured to determine a target display parameter of the target video;
and the display unit is used for displaying the target video in the display area according to the display mode indicated by the target display parameter.
In one possible embodiment, the determining the target display parameters of the target video includes:
extracting audio data from the target video data;
determining a target category of the target video from a predetermined set of categories based on the audio data;
determining display parameters associated with the target category from a predetermined set of display parameters;
and determining the determined display parameters as target display parameters of the target video.
In one possible embodiment, the determining the target category of the target video from a predetermined category set based on the audio data includes:
identifying the audio data to obtain a first identification result;
determining whether a category matching the first recognition result is included in a predetermined set of categories;
and determining the category matched with the first recognition result as the target category of the target video under the condition that the category matched with the first recognition result is included in the category set.
In one possible embodiment, the determining the target category of the target video from a predetermined category set based on the audio data further includes:
extracting image data from the target video data in the case that a category matching the first recognition result is not included in the category set;
identifying the extracted image data to obtain a second identification result;
determining a category matching the second recognition result from the category set;
and determining the category matched with the second recognition result as the target category of the target video.
In one possible embodiment, the target video data includes multiple channels of video data; and
the determining a display area of a target video represented by the target video data on a screen of the television comprises:
aiming at each path of video data in the multi-path video data, determining a display area of a target video represented by the path of video data on a screen of the television to obtain the display area of the target video represented by the path of video data; and
the determining the target display parameters of the target video comprises:
aiming at each path of video data in the multi-path video data, determining target display parameters of the target video represented by the path of video data; and
the displaying the target video in the display area according to the display mode indicated by the target display parameter includes:
and aiming at each path of video data in the multi-path video data, displaying the target video represented by the path of video data in the display area of the target video represented by the path of video data according to the display mode indicated by the target display parameter of the target video represented by the path of video data.
In one possible embodiment, the apparatus further comprises:
the detection unit is used for detecting the image quality adjustment operation of a target video represented by single-channel video data in the multi-channel video data;
a third determining unit, configured to determine a display area of a target video represented by the single-channel video data when the image quality adjustment operation is detected;
and the adjusting unit is used for performing the image quality adjusting operation only on the display area of the target video represented by the single-channel video data during the operation period of the image quality adjusting operation.
In one possible embodiment, the acquiring of the target video data includes at least one of:
determining the video data of the television as target video data;
and determining the screen projection video data sent to the television as target video data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, and when the computer program is executed, implement the method of any embodiment of the display method of the first aspect of the present disclosure.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the method of any embodiment of the display method of the first aspect described above.
In a fifth aspect, the disclosed embodiments provide a computer program comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the steps in the method as described in any of the embodiments of the display method of the first aspect.
According to the display method provided by the embodiment of the disclosure, target video data is acquired, then, a display area of a target video represented by the target video data on a screen of a television is determined, then, target display parameters of the target video are determined, and then, the target video is displayed in the display area according to a display mode indicated by the target display parameters. By the method, the display parameters of the video can be automatically determined, the display mode of the display area of the video is adjusted according to the display parameters, the matching degree between the television image quality and the displayed video can be improved, and the pertinence of image quality display is improved.
Drawings
Fig. 1 is a schematic flow chart of a display method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart diagram of another display method provided in the embodiment of the present disclosure;
fig. 3A is a schematic flowchart of another display method provided by the embodiment of the disclosure;
FIG. 3B is a schematic diagram of one application scenario for FIG. 3A;
FIG. 3C is a schematic illustration of another application scenario for FIG. 3A;
fig. 4 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions, and values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those within the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one object, step, device, or module from another object, and do not denote any particular technical meaning or logical order therebetween.
It is also understood that in the present embodiment, "a plurality" may mean two or more, and "at least one" may mean one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the present disclosure may be generally understood as one or more, unless explicitly defined otherwise or indicated to the contrary hereinafter.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. For the purpose of facilitating an understanding of the embodiments of the present disclosure, the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic flow chart of a display method according to an embodiment of the present disclosure. As shown in fig. 1, the method specifically includes:
101. target video data is acquired.
In the present embodiment, the target video data may be data of any one or more videos. In other words, the target video data may include one or more paths of video data, each path of video data may be streaming media data of one video, and the video data may represent the video.
102. And determining a display area of a target video represented by the target video data on the screen of the television.
In this embodiment, the target video may be a video represented by target video data.
In a case where the target video data includes only one path of video data, the display area of the target video on the screen of the television may be the display area occupied by the whole television screen, a local display area of the television screen set by the user, or the display area of one split screen after the television screen is split.
Under the condition that the target video data comprises at least two paths of video data, the display area of each target video on the screen of the television can be a local display area of the television screen set by a user, or a display area where one split screen is located after the television screen is split.
In some cases, the size of the display area of each target video on the television screen may be determined by:
first, size information of a television screen, a distance between a user and the television screen (which may be measured by a distance sensor), historical display information of the user, the number of split screens of the television screen, and a network speed of the television are acquired. And the historical display information represents the size of a display area of the displayed video on the television screen during the historical time for displaying the video for the user.
And then, determining the size of a display area of the target video on the television screen based on the size information, the distance between the user and the television screen, the historical display information of the user, the split screen number and the network speed of the television.
As an example, the size information, the distance between the user and the television screen, the historical display information of the user, the number of split screens, and the network speed of the television may be substituted into a preset formula, so as to calculate the size of the display area of the target video on the television screen. The preset formula can represent the corresponding relation among size information, the distance between a user and a television screen, historical display information of the user, the split screen number, the network speed of the television and the size of a display area of a video on the television screen.
It can be understood that, by determining the size of the display area of the target video on the television screen from the above size information, the distance between the user and the television screen, the historical display information of the user, the number of split screens and the network speed of the television, the determined size of the display area can better suit the actual scene and the user's habits.
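For illustration only, a minimal sketch of such a preset formula is given below; the weighting coefficients, normalizations and the function name estimate_display_area are assumptions made for this sketch and are not taken from the present disclosure.

```python
# Hypothetical "preset formula" for the display-area size; the coefficients
# and normalizations below are illustrative assumptions only.
def estimate_display_area(screen_w, screen_h, viewing_distance_m,
                          historical_ratio, split_count, net_speed_mbps):
    """Return (width, height) in pixels for one target video's display area."""
    base_ratio = 1.0 / max(split_count, 1)             # start from an even split
    ratio = 0.5 * base_ratio + 0.5 * historical_ratio  # lean on the user's history
    ratio *= min(1.0 + 0.1 * viewing_distance_m, 1.5)  # farther viewers get larger areas
    if net_speed_mbps < 10:                            # slow networks favour smaller areas
        ratio *= 0.8
    ratio = min(ratio, 1.0)
    area = screen_w * screen_h * ratio
    aspect = screen_w / screen_h
    h = int((area / aspect) ** 0.5)
    return int(h * aspect), h

# Example: 4K screen, 3 m viewing distance, 2 split screens, 50 Mbps network.
print(estimate_display_area(3840, 2160, 3.0, 0.45, 2, 50))
```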
103. And determining target display parameters of the target video.
In this embodiment, the target display parameter may be a display parameter of a target video. As an example, the target display parameters may include at least one of: sharpness, lens distortion, dispersion, resolution, gamut range, color purity (brilliance), color balance, etc.
Here, please refer to the following description for details, which are not repeated herein.
104. And displaying the target video in the display area according to the display mode indicated by the target display parameter.
In some optional implementations of this embodiment, step 101 may be performed in the following manner to obtain the target video data: the video data of the television is determined as the target video data.
It can be understood that, in the above alternative implementation manner, the display parameter of the television video (that is, the video represented by the video data of the television) may be automatically determined, and the display manner of the display area of the television video is adjusted according to the display parameter, so that the matching degree between the television display manner and the displayed television video may be improved.
In some optional implementations of this embodiment, step 101 may also be performed in the following manner to obtain the target video data: the screen projection video data sent to the television is determined as the target video data.
It can be understood that, in the above alternative implementation manner, the display parameter of the screen projection video (that is, the video represented by the screen projection video data sent to the television) may be automatically determined, and the display manner of the display area of the screen projection video is adjusted according to the display parameter, so that the matching degree between the television display manner and the displayed screen projection video may be improved.
In some optional implementations of this embodiment, step 101 may further be performed in the following manner to obtain the target video data:
and determining the video data of the television and the screen projection video data sent to the television as target video data so as to obtain multi-channel video data.
It can be understood that, in the above optional implementation manner, the display parameters of the television video and the screen projection video may be automatically determined, and the display manners of the display areas of the television video and the screen projection video are respectively adjusted according to the display parameters, so that the matching degree between the television display manner and the displayed television video and the screen projection video may be improved.
According to the display method provided by the embodiment of the disclosure, target video data is acquired; then, a display area of a target video represented by the target video data on a screen of a television is determined; then, target display parameters of the target video are determined; and then, the target video is displayed in the display area according to the display mode indicated by the target display parameters. With this method, the display parameters of a video can be determined automatically and the display mode of the video's display area adjusted according to those parameters, so that the degree of matching between the television image quality and the displayed video can be improved and the image quality display becomes more targeted.
Fig. 2 is a schematic flow chart of another display method provided in the embodiment of the present disclosure. As shown in fig. 2, the method specifically includes:
201. target video data is acquired.
In this embodiment, step 201 is substantially the same as step 101 in the embodiment corresponding to fig. 1, and is not described herein again.
202. And determining a display area of a target video represented by the target video data on the screen of the television.
In this embodiment, step 202 is substantially the same as step 102 in the corresponding embodiment of fig. 1, and is not described herein again.
203. And extracting audio data from the target video data.
In this embodiment, the Python package moviepy may be used, for example, to extract the audio data from the target video data.
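A minimal sketch of this extraction step is shown below; it assumes the moviepy 1.x API and illustrative file paths.

```python
# Extract the audio track of the target video with moviepy (1.x API assumed).
from moviepy.editor import VideoFileClip

clip = VideoFileClip("target_video.mp4")        # hypothetical input path
if clip.audio is not None:
    # Write the audio track to a WAV file for later feature extraction.
    clip.audio.write_audiofile("target_audio.wav")
clip.close()
```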
204. Determining a target category of the target video from a predetermined set of categories based on the audio data.
In this embodiment, the target category may be a category of a target video.
The category set may include the following categories: sports, movies, music, standards, reading, etc.
As an example, the above step 204 may be performed in the following manner:
first, from the audio data, audio feature information is extracted. Wherein the audio feature information comprises at least one of: features extracted directly from audio waveform signals (such as zero-crossing rate), features extracted after audio signals are transformed from time domain to Frequency domain (such as spectrum centroid), features obtained through a specific model (features obtained after audio is separated into music and noise and then based on any part), and features obtained after quantization scale is changed by inspiring of human auditory perception (such as Mel Frequency Cepstrum Coefficient (MFCC)), and the like.
And then, inputting the audio characteristic information into a first classification model trained in advance, thereby obtaining the target class of the target video. The first classification model may be a convolutional neural network trained by using a predetermined training sample set. The training samples in the training sample set may include audio feature information and corresponding categories.
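A sketch of this feature-extraction and classification step is given below; it assumes the librosa package for MFCC computation and a pre-trained first classification model exported as a TorchScript file. The file names, category list and input layout are assumptions, not details of the present disclosure.

```python
# Sketch: MFCC features from the extracted audio, fed to a pre-trained classifier.
import librosa
import torch

CATEGORIES = ["sports", "movie", "music", "standard", "reading"]  # assumed category set

# Load the audio extracted earlier and compute 13 MFCCs per frame.
y, sr = librosa.load("target_audio.wav", sr=16000, mono=True)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # shape: (13, n_frames)
features = torch.tensor(mfcc.mean(axis=1), dtype=torch.float32).unsqueeze(0)

# "first_classifier.pt" stands in for the pre-trained first classification model.
model = torch.jit.load("first_classifier.pt")
model.eval()
with torch.no_grad():
    logits = model(features)
target_category = CATEGORIES[int(logits.argmax(dim=1))]
print(target_category)
```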
205. Determining a display parameter associated with the target category from a predetermined set of display parameters.
In this embodiment, each target category may be associated with one or more display parameters in the display parameter set in advance, and thus, the display parameters associated with the target category may be determined from a predetermined display parameter set.
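A minimal sketch of such an association is shown below; the parameter names and numeric values are placeholders and are not taken from the present disclosure.

```python
# Hypothetical display-parameter set keyed by content category.
DISPLAY_PARAMETER_SET = {
    "sports":  {"sharpness": 70, "color_saturation": 60, "motion_smoothing": True},
    "movie":   {"sharpness": 50, "color_saturation": 55, "motion_smoothing": False},
    "music":   {"sharpness": 55, "color_saturation": 65, "motion_smoothing": False},
    "reading": {"sharpness": 40, "color_saturation": 0,  "motion_smoothing": False},
}

def display_parameters_for(target_category, default="movie"):
    """Return the display parameters associated with the target category."""
    return DISPLAY_PARAMETER_SET.get(target_category, DISPLAY_PARAMETER_SET[default])
```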
206. And determining the determined display parameters as target display parameters of the target video.
207. And displaying the target video in the display area according to the display mode indicated by the target display parameter.
In this embodiment, step 207 is substantially the same as step 104 in the embodiment corresponding to fig. 1, and is not described here again.
In some optional implementations of this embodiment, the step 204 may be performed in a manner as follows, so as to determine the target category of the target video from a predetermined category set based on the audio data:
firstly, the audio data is identified to obtain a first identification result. The first recognition result may be a result of recognizing the audio data. The first recognition result may be a text corresponding to the audio data, or may be audio feature information of the audio data. Wherein the audio feature information comprises at least one of: the features extracted directly from the audio waveform signal (such as the zero-crossing rate), the features extracted after the audio signal is transformed from the time domain to the frequency domain (such as the spectrum centroid), the features obtained through a specific model (the features obtained after the audio is separated into musical tones and noises and then based on any part), the features obtained after the quantization scale is changed by the inspiration of human auditory perception (such as the Mel cepstrum coefficient), and the like.
Then, it is determined whether a category matching the first recognition result is included in a predetermined set of categories.
As an example, the first recognition result may be input to a second classification model trained in advance, so as to obtain a class matching the first recognition result. The second classification model may be a convolutional neural network trained by using a predetermined training sample set. The training samples in the training sample set may include the first recognition result and a category to which the first recognition result matches.
Then, in a case where a category matching the first recognition result is included in the category set, the category matching the first recognition result is determined as a target category of the target video.
It is to be understood that, in the above alternative implementation manner, the target category of the target video may be determined by identifying the audio data, and thus, the accuracy and speed of determining the target category may be further improved.
In some optional implementations of this embodiment, the step 204 may be performed in a manner as follows, so as to determine the target category of the target video from a predetermined category set based on the audio data:
first, in a case where a category matching the first recognition result is not included in the category set, image data is extracted from the target video data.
Here, OpenCV (an open-source computer vision library) may be employed to extract image data from the target video data, for example as sketched below.
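A minimal sketch of this step with OpenCV follows; the input path and the one-frame-per-second sampling rate are illustrative assumptions.

```python
# Sample frames from the target video with OpenCV.
import cv2

cap = cv2.VideoCapture("target_video.mp4")      # hypothetical input path
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frames = []
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % int(fps) == 0:                   # keep roughly one frame per second
        frames.append(frame)
    index += 1
cap.release()
```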
And then, identifying the extracted image data to obtain a second identification result.
The second recognition result may be a result of recognizing the extracted image data. As an example, the second recognition result may be feature data (e.g., texture feature) of the image data or may be character information included in the image data.
Then, from the set of categories, a category matching the second recognition result is determined.
As an example, the second recognition result may be input to a third classification model trained in advance, so as to obtain a class matching the second recognition result. The third classification model may be a convolutional neural network trained by using a predetermined training sample set. The training samples in the training sample set may include the second recognition result and a category to which the second recognition result matches.
And then, determining the category matched with the second recognition result as the target category of the target video.
It is to be understood that, in the above alternative implementation, the target category of the target video may be determined based on the image data in the target video data in a case where the category matching the first recognition result is not included in the category set, and thus, the target category may be determined using the image data in a scenario where the target category cannot be determined based on the audio data.
It should be noted that, in addition to the above-mentioned contents, the present embodiment may further include the technical features described in the embodiment corresponding to fig. 1, so as to achieve the technical effect of the display method shown in fig. 1.
According to the display method provided by the embodiment of the disclosure, the target category of the target video can be determined by identifying the audio data, so that the accuracy and the speed of determining the target category can be improved.
Fig. 3A is a schematic flow chart of another display method provided in an embodiment of the present disclosure. The method can be applied to one or more electronic devices such as smart phones, notebook computers, desktop computers, portable computers and servers. The execution subject of the method may be hardware or software. When the execution subject is hardware, it may be one or more of the above electronic devices; for example, a single electronic device may perform the method, or multiple electronic devices may cooperate with each other to perform it. When the execution subject is software, the method may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
Specifically, as shown in fig. 3A, the method specifically includes:
301. and acquiring multi-channel video data.
In this embodiment, each of the multiple paths of video data may be streaming media data of one video, and the video data may represent the video.
The multi-channel video data may include at least one of video data of a television and screen projection video data transmitted to the television.
302. And aiming at each path of video data in the multi-path video data, determining the display area of the target video represented by the path of video data on the screen of the television to obtain the display area of the target video represented by the path of video data.
In this embodiment, the display area of each target video on the screen of the television may be a local display area of a television screen set by a user, or a display area where one split screen is located after the television screen is split.
303. And determining target display parameters of a target video represented by the video data of each path aiming at the video data of the multiple paths of video data.
In this embodiment, the target display parameter may be a display parameter of a target video. As an example, the target display parameters may include at least one of: sharpness, lens distortion, dispersion, resolution, gamut, color purity (brilliance), color balance, etc.
304. And aiming at each path of video data in the multi-path video data, displaying the target video represented by the path of video data in the display area of the target video represented by the path of video data according to the display mode indicated by the target display parameter of the target video represented by the path of video data.
In this embodiment, after the television screen is split, the target video represented by one path of video data may be displayed in each split-screen area, for example as sketched below.
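As a rough illustration of how each path's decoded frame might be placed into its own split-screen area, a sketch using numpy and OpenCV follows; the 4K canvas size and the left/right two-way layout are assumptions.

```python
# Compose one frame per video path into its split-screen area on a shared canvas.
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 3840, 2160                 # assumed 4K screen
canvas = np.zeros((SCREEN_H, SCREEN_W, 3), dtype=np.uint8)

# display_area = (x, y, width, height); a simple left/right split is assumed.
display_areas = [(0, 0, SCREEN_W // 2, SCREEN_H),
                 (SCREEN_W // 2, 0, SCREEN_W // 2, SCREEN_H)]

# Placeholder frames standing in for one decoded frame per video path.
frames = [np.full((1080, 1920, 3), 80, np.uint8),
          np.full((1080, 1920, 3), 160, np.uint8)]

for frame, (x, y, w, h) in zip(frames, display_areas):
    canvas[y:y + h, x:x + w] = cv2.resize(frame, (w, h))
```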
In some optional implementations of this embodiment, the following steps may also be performed:
firstly, detecting the image quality adjustment operation of a target video represented by single-channel video data in the multi-channel video data.
The image quality adjustment operation may be used to adjust the image quality of the target video by adjusting a target display parameter of the target video.
And then, when the image quality adjustment operation is detected, determining a display area of a target video represented by the single-channel video data.
Then, during the operation of the image quality adjustment operation, the image quality adjustment operation is performed only on the display area of the target video represented by the single-channel video data. In other words, during the image quality adjustment operation, the adjustment does not need to be performed on the display areas of the other videos displayed on the television screen, that is, videos other than the target video represented by the single-channel video data.
It is to be understood that, in the above alternative implementation manner, the image quality adjustment operation may be performed only on the display area of the target video represented by the single channel video data, thereby implementing independent adjustment of the image quality of the single video represented by the single channel video data.
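A sketch of restricting an adjustment to a single display area is shown below; a brightness/contrast tweak via cv2.convertScaleAbs is used only as an illustrative stand-in for the image quality adjustment operation.

```python
# Apply an image-quality adjustment to one display area only.
import cv2

def adjust_region_only(canvas, area, alpha=1.1, beta=10):
    """Adjust only the pixels inside area = (x, y, width, height)."""
    x, y, w, h = area
    roi = canvas[y:y + h, x:x + w]
    # Contrast (alpha) and brightness (beta) stand in for the user's adjustment.
    canvas[y:y + h, x:x + w] = cv2.convertScaleAbs(roi, alpha=alpha, beta=beta)
    return canvas
```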
In some optional implementations of this embodiment, the following steps may also be performed:
firstly, detecting the image quality adjustment operation of a target video represented by single-channel video data in the multi-channel video data.
The image quality adjustment operation may be used to adjust the image quality of the target video by adjusting the target display parameter of the target video.
Then, in a case where the image quality adjustment operation is detected, the display area of the target video represented by the single-channel video data is determined, together with the display areas of the same-category videos displayed on the television screen.
Here, the same-category videos are the videos displayed on the television screen whose target category is the same as that of the target video represented by the single-channel video data.
Then, during the operation of the image quality adjustment operation, the image quality adjustment operation is performed on the display area of the target video represented by the single-channel video data and the display area of the videos of the same category.
It can be understood that, in the above alternative implementation, the image quality of a plurality of videos belonging to the same category may be automatically adjusted in synchronization, thereby increasing the speed of image quality adjustment.
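A short sketch of this same-category synchronization follows, reusing the same brightness/contrast stand-in; the (display_area, category) region layout is an assumption.

```python
# Apply the same adjustment to every display area whose category matches
# the category of the video the user is adjusting.
import cv2

def adjust_same_category(canvas, regions, adjusted_category, alpha=1.1, beta=10):
    """regions: list of ((x, y, width, height), category) tuples."""
    for (x, y, w, h), category in regions:
        if category == adjusted_category:
            roi = canvas[y:y + h, x:x + w]
            canvas[y:y + h, x:x + w] = cv2.convertScaleAbs(roi, alpha=alpha, beta=beta)
    return canvas
```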
The following description illustrates the embodiments of the present disclosure. It should be noted that embodiments of the present disclosure may have the features described below, but the following description should not be construed as limiting the scope of the embodiments of the present disclosure.
Step one, the mobile phone is connected with the television.
The mobile phone projects its screen to the television through Miracast (a wireless display standard based on Wi-Fi (Wireless Fidelity) Direct) or another standard screen-projection protocol. After projection, the television screen can be divided into split screens (for example, 2, 3 or 4 split screens can be supported). Each split screen may present the television's own picture (corresponding to the video data of the television) or a projected picture (corresponding to the screen projection video data). As an example, please refer to fig. 3B, which is a schematic diagram of an application scenario for fig. 3A. In fig. 3B, the television screen is divided into 2 split screens, presenting a television picture and projected picture 1. As yet another example, please refer to fig. 3C, which is a schematic diagram of another application scenario for fig. 3A. In fig. 3C, the television screen is divided into 4 split screens, presenting a television picture, projected picture 1, projected picture 2 and projected picture 3.
And step two, the television automatically identifies display content.
According to the displayed content, the television can automatically identify the content type (corresponding to the target category) and automatically adapt according to the content types in its database. By way of example, the content types may include: sports, novels, caricatures, movies, etc.
And step three, automatic picture quality control.
The television screen can be divided into different areas (2, 3 or 4 split screens), and the image quality of each area supports independent adjustment.
For example, when the projected content (corresponding to the target video) is movie content (corresponding to the target category), the image quality mode may be switched automatically among modes such as sports, movie, music and standard. When the projected content is text, cartoons or novels, a paper-reading mode can be switched to automatically, which reduces blue-light stimulation; specifically, the RGB (red, green, blue) colors can be adjusted directly to black and white, as sketched below. The content of the non-projected portion (for example, the video data of the television) can be matched automatically with a picture quality mode according to its position.
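A minimal sketch of the black-and-white "paper reading" treatment for one display area, using OpenCV, follows; the region layout mirrors the earlier sketches and is an assumption.

```python
# Render one display area in black and white for the paper-reading mode.
import cv2

def to_paper_reading_mode(canvas, area):
    """Convert area = (x, y, width, height) of the canvas to black and white."""
    x, y, w, h = area
    gray = cv2.cvtColor(canvas[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    canvas[y:y + h, x:x + w] = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    return canvas
```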
Therefore, in this embodiment, the television screen can be divided into different modules, the image quality of each module can be adjusted independently, the display content of the television screen can be judged automatically, and the image quality of multiple pictures can be adjusted automatically, providing eye protection and an excellent visual experience for users.
It should be noted that, in addition to the above-mentioned contents, the present embodiment may further include technical features described in the embodiment corresponding to fig. 1 and/or fig. 2, so as to further achieve the technical effects of the display method shown in fig. 1 and/or fig. 2, and for brevity, the description related to fig. 1 and/or fig. 2 is specifically referred to, and is not repeated herein.
According to the display method provided by the embodiment of the disclosure, the display parameters of each video displayed on the television screen can be automatically determined, the display parameters of each video are respectively and independently determined, and then the corresponding video can be displayed according to the determined display parameters, so that independent image quality setting of a specific area of the television screen is realized.
Fig. 4 is a schematic structural diagram of a display device according to an embodiment of the disclosure. The method specifically comprises the following steps:
an acquisition unit 401 configured to acquire target video data;
a first determining unit 402, configured to determine a display area of a target video represented by the target video data on a screen of the television;
a second determining unit 403, configured to determine a target display parameter of the target video;
and a display unit 404, configured to display the target video in the display area according to the display mode indicated by the target display parameter.
In one possible embodiment, the determining the target display parameter of the target video includes:
extracting audio data from the target video data;
determining a target category of the target video from a predetermined set of categories based on the audio data;
determining display parameters associated with the target category from a predetermined set of display parameters;
and determining the determined display parameters as target display parameters of the target video.
In one possible embodiment, the determining the target category of the target video from a predetermined category set based on the audio data includes:
identifying the audio data to obtain a first identification result;
determining whether a category matching the first recognition result is included in a predetermined category set;
and under the condition that the category matched with the first recognition result is included in the category set, determining the category matched with the first recognition result as a target category of the target video.
In one possible embodiment, the determining the target category of the target video from a predetermined category set based on the audio data further includes:
extracting image data from the target video data in the case that a category matching the first recognition result is not included in the category set;
identifying the extracted image data to obtain a second identification result;
determining a category matching the second recognition result from the category set;
and determining the category matched with the second recognition result as the target category of the target video.
In one possible embodiment, the target video data includes multiple channels of video data; and
the determining a display area of a target video represented by the target video data on a screen of the television comprises:
aiming at each path of video data in the multi-path video data, determining a display area of a target video represented by the path of video data on a screen of the television to obtain the display area of the target video represented by the path of video data; and
the determining target display parameters of the target video comprises:
aiming at each path of video data in the multi-path video data, determining target display parameters of the target video represented by the path of video data; and
the displaying the target video in the display area according to the display mode indicated by the target display parameter includes:
and aiming at each path of video data in the multi-path video data, displaying the target video represented by the path of video data in the display area of the target video represented by the path of video data according to the display mode indicated by the target display parameter of the target video represented by the path of video data.
In one possible embodiment, the apparatus further comprises:
a detection unit (not shown in the figure), configured to detect an image quality adjustment operation of a target video represented by single-channel video data in the multi-channel video data;
a third determining unit (not shown in the figure) configured to determine a display area of a target video represented by the single-channel video data when the image quality adjustment operation is detected;
an adjusting unit (not shown in the figure) is configured to, during an operation of the image quality adjustment operation, perform the image quality adjustment operation only on a display area of a target video represented by the single-channel video data.
In one possible embodiment, the acquiring of the target video data includes at least one of:
determining the video data of the television as target video data;
and determining the screen projection video data sent to the television as target video data.
The display device provided in this embodiment may be the display device shown in fig. 4, and may perform all the steps of the display method shown in fig. 1-3B, so as to achieve the technical effect of the display method shown in fig. 1-3B.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and other user interfaces 503. The various components in the electronic device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, among others).
It is to be understood that the memory 502 in embodiments of the present disclosure may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 502 stores elements, executable units or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The applications 5022 include various applications, such as a media player, a browser, and the like, for implementing various application services. A program implementing the method of the embodiments of the present disclosure may be included in the applications 5022.
In this embodiment, by calling a program or an instruction stored in the memory 502, specifically, a program or an instruction stored in the application 5022, the processor 501 is configured to execute the method steps provided by the method embodiments, for example, including:
acquiring target video data;
determining a display area of a target video represented by the target video data on a screen of the television;
determining target display parameters of the target video;
and displaying the target video in the display area according to the display mode indicated by the target display parameter.
The method disclosed by the embodiments of the present disclosure can be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 501. The processor 501 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in RAM, flash memory, ROM, PROM or EEPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the above-described functions of the present disclosure, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 5, and may perform all the steps of the display method shown in fig. 1-3B, so as to achieve the technical effect of the display method shown in fig. 1-3B.
The disclosed embodiments also provide a storage medium (computer-readable storage medium). The storage medium stores one or more programs. The storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state disk; it may also comprise a combination of the above kinds of memory.
When the one or more programs in the storage medium are executed by one or more processors, the display method performed on the electronic device side as described above is implemented.
The processor is configured to execute the display program stored in the memory to implement the following steps of the display method executed on the electronic device side:
acquiring target video data;
determining a display area of a target video represented by the target video data on a screen of the television;
determining target display parameters of the target video;
and displaying the target video in the display area according to the display mode indicated by the target display parameter.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments, objects, technical solutions and advantages of the present disclosure are described in further detail, it should be understood that the above-mentioned embodiments are merely illustrative of the present disclosure and are not intended to limit the scope of the present disclosure, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A display method, wherein the method is applied to a television, and the method comprises:
acquiring target video data;
determining a display area of a target video represented by the target video data on a screen of the television;
determining target display parameters of the target video;
and displaying the target video in the display area according to the display mode indicated by the target display parameter.
2. The method of claim 1, wherein determining the target display parameters of the target video comprises:
extracting audio data from the target video data;
determining a target category of the target video from a predetermined set of categories based on the audio data;
determining display parameters associated with the target category from a predetermined set of display parameters;
and determining the determined display parameters as target display parameters of the target video.
3. The method of claim 2, wherein determining the target category of the target video from a predetermined set of categories based on the audio data comprises:
identifying the audio data to obtain a first identification result;
determining whether a category matching the first recognition result is included in a predetermined set of categories;
and determining the category matched with the first recognition result as the target category of the target video under the condition that the category matched with the first recognition result is included in the category set.
4. The method of claim 3, wherein determining the target category of the target video from a predetermined set of categories based on the audio data further comprises:
extracting image data from the target video data in the case that a category matching the first recognition result is not included in the category set;
recognizing the extracted image data to obtain a second recognition result;
determining a category matching the second recognition result from the category set;
and determining the category matched with the second recognition result as the target category of the target video.
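
The following Python sketch is a non-limiting illustration of the audio-first, image-fallback category selection described in claims 2-4; it is not part of the claims. The category set, the parameter values, and the assumption that recognition results arrive as label lists are all hypothetical choices made only for the example.

CATEGORY_SET = {"sports", "movie", "news", "game"}        # example predetermined category set
DISPLAY_PARAMETERS = {                                    # example predetermined display parameters
    "sports": {"sharpness": 70, "motion_compensation": "high"},
    "movie": {"contrast": "cinema", "color_gamut": "wide"},
    "news": {"sharpness": 50},
    "game": {"low_latency": True},
}

def determine_target_display_parameters(audio_labels, image_labels):
    # First recognition result: labels produced by some audio recognizer (hypothetical input).
    target_category = next((label for label in audio_labels if label in CATEGORY_SET), None)
    if target_category is None:
        # No match in the category set, so fall back to the image-based recognition result.
        target_category = next((label for label in image_labels if label in CATEGORY_SET), None)
    # Display parameters associated with the target category.
    return DISPLAY_PARAMETERS.get(target_category)

# Example: the audio result matches no category, while the image result suggests "sports".
print(determine_target_display_parameters(["speech"], ["stadium", "sports"]))
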
5. The method of claim 1, wherein the target video data comprises multiple channels of video data; and
the determining a display area of a target video represented by the target video data on a screen of the television comprises:
for each channel of video data in the multiple channels of video data, determining a display area, on the screen of the television, of the target video represented by that channel of video data; and
the determining target display parameters of the target video comprises:
for each channel of video data in the multiple channels of video data, determining target display parameters of the target video represented by that channel of video data; and
the displaying the target video in the display area according to the display mode indicated by the target display parameter comprises:
and for each channel of video data in the multiple channels of video data, displaying the target video represented by that channel of video data in the display area determined for that target video, according to the display mode indicated by the target display parameters determined for that target video.
6. The method of claim 5, further comprising:
detecting an image quality adjustment operation on a target video represented by a single channel of video data among the multiple channels of video data;
determining the display area of the target video represented by that single channel of video data in a case that the image quality adjustment operation is detected;
and during execution of the image quality adjustment operation, performing the image quality adjustment only on the display area of the target video represented by that single channel of video data.
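
As a rough, non-authoritative illustration of claims 5 and 6, the Python sketch below gives each channel of video data its own on-screen region and its own parameters, and applies a picture-quality adjustment only to the region of the channel it targets. The Region and Channel structures, the side-by-side layout rule, and the channel names are assumptions made purely for the example.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int

@dataclass
class Channel:
    name: str
    params: dict = field(default_factory=dict)   # target display parameters for this channel
    region: Optional[Region] = None               # display area of this channel's target video

def layout_channels(channels, screen_width=3840, screen_height=2160):
    # Each channel of video data gets its own display area (here, simple side-by-side strips).
    count = max(len(channels), 1)
    for index, channel in enumerate(channels):
        channel.region = Region(x=index * screen_width // count, y=0,
                                width=screen_width // count, height=screen_height)

def adjust_image_quality(channels, target_name, adjustment):
    # An adjustment aimed at one channel only affects that channel's display area.
    for channel in channels:
        if channel.name == target_name:
            channel.params.update(adjustment)     # applied only within channel.region
            return channel.region

channels = [Channel("hdmi"), Channel("cast")]
layout_channels(channels)
print(adjust_image_quality(channels, "cast", {"brightness": 55}))
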
7. The method according to any one of claims 1-6, wherein the acquiring target video data comprises at least one of:
determining the video data of the television as target video data;
and determining the screen projection video data transmitted to the television as target video data.
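
Purely as an illustrative sketch of claim 7 (the parameter names and the dictionary payload are invented for the example and are not defined by the disclosure):

def acquire_target_video_data(tv_playback=None, cast_stream=None):
    # The television's own video data and/or screen-projection (cast) data transmitted to it.
    sources = []
    if tv_playback is not None:
        sources.append(tv_playback)
    if cast_stream is not None:
        sources.append(cast_stream)
    return sources

# Example: only a screen-projection stream is present.
print(acquire_target_video_data(cast_stream={"name": "phone mirror"}))
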
8. A display device, characterized in that the device comprises:
an acquisition unit configured to acquire target video data;
a first determining unit, configured to determine a display area of a target video represented by the target video data on a screen of the television;
a second determining unit, configured to determine a target display parameter of the target video;
and a display unit, configured to display the target video in the display area according to the display mode indicated by the target display parameter.
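
For illustration only, the device of claim 8 can be pictured as four cooperating units; the class below is a hypothetical Python sketch of that decomposition, not an implementation defined by this disclosure.

class DisplayDevice:
    # Acquisition unit: obtains the target video data.
    def acquire(self):
        raise NotImplementedError

    # First determining unit: display area of the target video on the screen.
    def determine_display_area(self, video_data, screen):
        raise NotImplementedError

    # Second determining unit: target display parameters of the target video.
    def determine_display_parameters(self, video_data):
        raise NotImplementedError

    # Display unit: shows the target video in the area using the indicated display mode.
    def display(self, video_data, area, params):
        raise NotImplementedError
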
9. An electronic device, comprising:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, wherein the computer program, when executed, implements the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of the preceding claims 1 to 7.
CN202211389802.7A 2022-11-03 2022-11-03 Display method, display device, electronic equipment and storage medium Pending CN115776592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211389802.7A CN115776592A (en) 2022-11-03 2022-11-03 Display method, display device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211389802.7A CN115776592A (en) 2022-11-03 2022-11-03 Display method, display device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115776592A (en) 2023-03-10

Family

ID=85388868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211389802.7A Pending CN115776592A (en) 2022-11-03 2022-11-03 Display method, display device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115776592A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102630383A (en) * 2009-10-27 2012-08-08 夏普株式会社 Display device, control method for said display device, program, and computer-readable recording medium having program stored thereon
CN105975228A (en) * 2016-04-27 2016-09-28 努比亚技术有限公司 Control method and electronic device
CN109359636A (en) * 2018-12-14 2019-02-19 腾讯科技(深圳)有限公司 Video classification methods, device and server
CN110147711A (en) * 2019-02-27 2019-08-20 腾讯科技(深圳)有限公司 Video scene recognition methods, device, storage medium and electronic device
CN111263188A (en) * 2020-02-17 2020-06-09 腾讯科技(深圳)有限公司 Video image quality adjusting method and device, electronic equipment and storage medium
CN113590061A (en) * 2021-07-01 2021-11-02 深圳康佳电子科技有限公司 Screen projection control method and device, intelligent terminal and computer readable storage medium

Similar Documents

Publication Publication Date Title
WO2019109801A1 (en) Method and device for adjusting photographing parameter, storage medium, and mobile terminal
CN109729420B (en) Picture processing method and device, mobile terminal and computer readable storage medium
US9628837B2 (en) Systems and methods for providing synchronized content
US8331735B2 (en) Image display apparatus and method
US11330342B2 (en) Method and apparatus for generating caption
US10762332B2 (en) Image optimization during facial recognition
US8976191B1 (en) Creating a realistic color for a virtual object in an augmented reality environment
US9171352B1 (en) Automatic processing of images
CN110619350B (en) Image detection method, device and storage medium
US10354124B2 (en) Electronic apparatus and controlling method for improve the image quality preference of skin area
US10706512B2 (en) Preserving color in image brightness adjustment for exposure fusion
EP3070959A1 (en) Methods and systems for content presentation optimization
US10027878B2 (en) Detection of object in digital image
US9799099B2 (en) Systems and methods for automatic image editing
US10192473B2 (en) Display apparatus and method for image processing
US11416974B2 (en) Image processing method and electronic device supporting the same
US9786076B2 (en) Image combining apparatus, image combining method and non-transitory computer readable medium for storing image combining program
KR20160014513A (en) Mobile device and method for pairing with electric device
US20220318964A1 (en) Display device
CN107801282B (en) Desk lamp and desk lamp control method and device
CN116057574A (en) Color blindness assisting technical system and method
CN110767229B (en) Voiceprint-based audio output method, device and equipment and readable storage medium
CN115776592A (en) Display method, display device, electronic equipment and storage medium
US20220375430A1 (en) Adjusting Signal Settings for a Display Using a Light Sensor
US20180288297A1 (en) Information processing device, information processing method, program, and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination