WO2023045281A1 - Broadcast receiving device - Google Patents

Broadcast receiving device

Info

Publication number
WO2023045281A1
WO2023045281A1 · PCT/CN2022/081779 · CN2022081779W
Authority
WO
WIPO (PCT)
Prior art keywords: scene, adjustment, unit, current, displayed
Application number
PCT/CN2022/081779
Other languages
English (en)
French (fr)
Inventor
石丸大
山内日美生
木村忠良
徳永将之
Original Assignee
海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
东芝视频解决方案株式会社 (Toshiba Visual Solutions Corporation)
Application filed by Hisense Visual Technology Co., Ltd. and Toshiba Visual Solutions Corporation
Priority to CN202280007592.2A priority Critical patent/CN116671116A/zh
Publication of WO2023045281A1 publication Critical patent/WO2023045281A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback

Definitions

  • Embodiments of the present application relate to a broadcast receiving device.
  • a broadcast receiving device for viewing broadcast content.
  • a broadcast receiving device adjusts parameters related to display, such as image quality, and performs various settings for such adjustment.
  • the mainstream method of parameter adjustment described above applies settings, chosen according to the type of content, to the content as a whole.
  • a method of performing adjustment according to the characteristics of each scene included in the content has also been adopted.
  • Patent Document 1 Japanese Patent Laid-Open No. 2012-015877
  • Patent Document 2 Japanese Patent Laid-Open No. 2004-336317
  • Patent Document 3 Japanese Patent Laid-Open No. 2008-028871
  • the problem to be solved by this application is to provide a broadcast receiving device that, when adjusting the image quality of each scene during image display, can apply an appropriate adjustment while avoiding both the influence of scene-specific characteristics and the delay before the adjustment starts being applied.
  • the broadcast receiving device includes: a storage unit that stores the type of a scene in association with settings related to adjustment of the image quality of that scene, and also stores, for each scene type, the type of scene likely to be displayed next;
  • a current scene recognition unit that identifies the type of the scene being displayed, that is, the current scene;
  • a next scene prediction unit that refers to the storage unit and predicts the type of the scene to be displayed after the current scene, that is, the next scene;
  • a setting preparation unit for the next scene that refers to the storage unit and prepares settings related to the adjustment of the image quality of the next scene; and
  • an adjustment execution unit that, at a predetermined timing, adjusts the next scene based on the settings prepared by the setting preparation unit for the next scene.
  • FIG. 1 is a diagram showing an example of the configuration of an image quality adjustment system according to a first embodiment
  • FIG. 2 is a block diagram showing an example of the hardware configuration of the television device according to the first embodiment
  • FIG. 3 is a diagram illustrating the functional structure of the television device and the relationship between the respective functions
  • FIG. 4 is a diagram showing an example of information stored in a scene type DB
  • FIG. 5 is a diagram showing an example of information stored in an associated scene DB
  • FIG. 6 is a diagram showing an example of information stored in the setting DB according to scene characteristics
  • FIG. 7 is a diagram showing an example of information stored in a scene tag storage unit according to the second embodiment.
  • FIG. 8 is a diagram showing an example of information stored in a related scene DB according to the second embodiment.
  • FIG. 9 is a diagram showing information stored in a scene type DB of the second embodiment.
  • FIG. 10 is a diagram illustrating the functional configuration of the television device according to the third embodiment and the relationship between the respective functions
  • FIG. 11 is a diagram showing information stored in the scene type DB 2071 according to the fourth embodiment.
  • FIG. 12 is a diagram showing information stored in the associated scene DB of the fourth embodiment.
  • Fig. 13 is a diagram illustrating a functional configuration of a television set according to a fifth embodiment and a relationship between respective functions.
  • FIG. 1 is a diagram showing an example of the configuration of an image quality adjustment system 1 according to the first embodiment.
  • the image quality adjustment system 1 includes a management server 10 and a television device 20 .
  • the management server 10 and the television device 20 are communicably connected via the network 30 .
  • the image quality adjustment system 1 illustrated in FIG. 1 includes one management server 10 and one television device 20, but during implementation the number of each device included in the image quality adjustment system 1 is not limited to one; there may be multiple.
  • the management server 10 is, for example, a server device such as a personal computer (PC) or cloud computing.
  • the management server 10 is not limited to one server device, but may be constituted by a plurality of server devices.
  • the television device 20 is a broadcast receiving device that displays broadcast content in a viewable manner.
  • the management server 10 provides the television device 20 with various data for adjusting the image quality and the like when displaying content.
  • the television device 20 uses various data provided from the management server 10 to adjust image quality when displaying content.
  • the above-mentioned various data provided from the management server 10 to the television device 20 are updated as necessary.
  • information stored in various DBs 2051 to 2053 described later is an example of the various data described above.
  • FIG. 2 is a block diagram showing an example of the hardware configuration of the television device 20 according to the first embodiment.
  • the television device 20 includes an antenna 201, a first input terminal 202a, a second input terminal 202b, a third input terminal 202c, a tuner 203, a demodulator 204, a demultiplexer 205, an A/D (analog/digital) converter 206, a selector 207, a signal processing unit 208, a speaker 209, a display panel 210, an operation unit 211, a light receiving unit 212, an IP communication unit 213, a CPU (Central Processing Unit) 214, a memory 215, a storage 216, a microphone 217, and an audio I/F (interface) 218.
  • the antenna 201 receives broadcasting signals of digital broadcasting, and supplies the received broadcasting signals to the tuner 203 via the first input terminal 202a.
  • the tuner 203 selects a broadcast signal of a desired channel from the broadcast signal supplied from the antenna 201 , and supplies the selected broadcast signal to the demodulator 204 .
  • Broadcast signals may also be referred to as broadcast waves.
  • the demodulator 204 demodulates the broadcast signal supplied from the tuner 203 and supplies the demodulated broadcast signal to the demultiplexer 205 .
  • the demultiplexer 205 separates the broadcast signal supplied from the demodulator 204 , generates metadata, image signals, and audio signals, and supplies the generated metadata, image signals, and audio signals to the selector 207 .
  • the selector 207 selects one of the signals supplied from the demultiplexer 205 , the A/D converter 206 , and the third input terminal 202 c, and supplies the selected one signal to the signal processing unit 208 .
  • the signal processing unit 208 performs predetermined signal processing on the image signal supplied from the selector 207 , and supplies the processed image signal to the display panel 210 . Also, the signal processing unit 208 performs predetermined signal processing on the audio signal supplied from the selector 207 , and supplies the processed audio signal to the speaker 209 .
  • the speaker 209 outputs voice or various sounds based on the audio signal supplied from the signal processing unit 208 .
  • the speaker 209 changes the volume of output voice or various sounds based on the control of the CPU 214 .
  • the display panel 210 displays images such as still images and moving images based on image signals supplied from the signal processing unit 208 or under the control of the CPU 214.
  • the display panel 210 is an example of a display unit.
  • the second input terminal 202b receives analog signals (metadata, image signals, and audio signals) input from the outside.
  • the third input terminal 202c receives digital signals (metadata, image signals, and audio signals) input from the outside.
  • the third input terminal 202c can receive a digital signal from, for example, a video recorder (BD recorder) equipped with a drive device that records to and plays back from a recording medium such as a BD (Blu-ray Disc) (registered trademark).
  • the A/D converter 206 supplies the selector 207 with a digital signal generated by performing A/D conversion on the analog signal supplied from the second input terminal 202b.
  • the operation unit 211 receives user's operation input.
  • the light receiving unit 212 receives infrared rays from the remote controller 119 .
  • the IP communication unit 213 is a communication interface for performing IP (Internet Protocol) communication via the network 30 .
  • the CPU 214 is a control unit that controls the television device 20 as a whole.
  • the memory 215 is, for example, a ROM (Read Only Memory) that stores the various computer programs executed by the CPU 214, a RAM (Random Access Memory) that provides the CPU 214 with a work area for executing the programs, or the like.
  • the storage 216 is an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like.
  • the storage 216 stores, for example, the signal selected by the selector 207 as recording data.
  • the microphone 217 captures the voice uttered by the user and sends it to the audio I/F 218 .
  • the audio I/F 218 performs analog/digital conversion on the sound obtained by the microphone 217, and sends it to the CPU 214 as a sound signal.
  • FIG. 3 is a diagram illustrating the functional configuration of the television device 20 and the relationship between the respective functions.
  • the television device 20 includes various functional units such as an image input unit 2001, an image playback unit 2002, a scene change determination unit 2003, an adjustment execution unit 2004, an image output unit 2005, a metadata acquisition unit 2011, a current scene recognition unit 2012, a next scene prediction unit 2013, and a setting preparation unit 2014 for the next scene.
  • the television device 20 holds a scene type DB 2051, a related scene DB 2052, and a scene-by-scene characteristic setting DB 2053 in the storage 216, for example.
  • the storage 216 is an example of a storage unit.
  • the image input unit 2001 receives information input from the various input terminals 202a, 202b, and 202c in the form of signals.
  • the image playback unit 2002 plays back the image signal among the signals received by the image input unit 2001 , and outputs it to the current scene recognition unit 2012 and the scene change determination unit 2003 .
  • of the signals received by the image input unit 2001, information serving as metadata is output to the metadata acquisition unit 2011.
  • the metadata acquisition unit 2011 acquires metadata input via the image input unit 2001 from the various input terminals 202a, 202b, and 202c, or acquires metadata from an external device communicable via the network 30 and the IP communication unit 213, and outputs the metadata to the current scene recognition unit 2012.
  • genre information indicates what kind of image the content is, for example, "golf program", "travel program", or "music program".
  • the current scene recognition unit 2012 recognizes what kind of scene the image output from the image playback unit 2002 is. More specifically, the current scene recognition unit 2012 determines which of the scene types stored in the scene type DB 2051 the image output by the image playback unit 2002 corresponds to. In addition, existing techniques can also be used as a specific method of identification here. Then, the current scene recognition unit 2012 outputs the recognition result to the next scene prediction unit 2013 .
  • FIG. 4 is a diagram showing an example of information stored in the scene type DB 2051.
  • the scene type DB 2051 stores the scene ID, the scene type, and the applicable PQ in an associated manner.
  • the scene type DB 2051 may include a plurality of tables for each type of image, for example.
  • the table in the example shown in FIG. 4 is for the genre "golf program".
  • the scene type is the type of a scene; for example, there are a course introduction scene, a tee shot scene, a fly ball scene, and the like.
  • the scene ID is information specific to the scene type, such as a unique number or the like.
  • scenes showing player actions include tee shots, putts, other shots, green-reading actions, and movement on the course (scene IDs: 11 to 13, 22, 23).
  • scenes following the ball include a scene of chasing a flying ball (fly ball), a scene of the ball landing and bouncing, and a scene of chasing a ball rolling on the green (scene IDs: 31 to 33).
  • the next scene prediction unit 2013 predicts the type of the scene that follows the scene recognized by the current scene recognition unit 2012. More specifically, the next scene prediction unit 2013 refers to the related scene DB 2052 and acquires, as the prediction result, the "predicted next scene" associated with the scene of the type recognized by the current scene recognition unit 2012.
  • FIG. 5 is a diagram showing an example of information stored in the related scene DB 2052 .
  • the associated scene DB 2052 is a table that associates and stores information corresponding to items such as scene ID, current scene, predicted next scene, next scene ID, probability, and the like.
  • the scene ID and the current scene in the related scene DB 2052 are the same as the scene ID and the scene type stored in the scene type DB 2051 .
  • the predicted next scene is a type of scene likely to be played immediately after the current scene. A predicted next scene is associated with at least one current scene, and multiple associations may also be established.
  • the next scene ID is the scene ID corresponding to the predicted next scene (associated with the scene type DB 2051).
  • the probability is the probability that each predicted next scene is played immediately after the associated current scene.
  • the related scene DB 2052 can also store prediction information for multiple scenes ahead, such as the scene after the next scene.
  • the next scene prediction unit 2013 uses the predicted next scene with the highest probability as the prediction result for the current scene. In the example of FIG. 5, when the current scene is a tee shot, the predicted next scene is a fly ball. Further, the next scene prediction unit 2013 outputs the prediction result (the predicted next scene and the next scene ID) to the next scene setting preparation unit 2014 and the scene change determination unit 2003.
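The highest-probability selection described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the table rows stand in for the related scene DB 2052, and all scene names, IDs, and probabilities are hypothetical.

```python
# Illustrative stand-in for the related scene DB 2052; all rows hypothetical.
RELATED_SCENE_DB = [
    # (current scene, predicted next scene, next scene ID, probability)
    ("tee shot", "fly ball", 31, 0.7),
    ("tee shot", "course move", 23, 0.3),
    ("fly ball", "ball landing", 32, 0.6),
    ("fly ball", "green roll", 33, 0.4),
]

def predict_next_scene(current_scene):
    """Return the (predicted next scene, next scene ID) pair with the
    highest probability, or None when the current scene has no entry."""
    candidates = [row for row in RELATED_SCENE_DB if row[0] == current_scene]
    if not candidates:
        return None  # unknown scene: no prediction, fall back to common settings
    best = max(candidates, key=lambda row: row[3])
    return best[1], best[2]
```

In this sketch, `predict_next_scene("tee shot")` yields the fly ball scene, mirroring the example of FIG. 5.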
  • the next scene setting preparation unit 2014 prepares settings suitable for the predicted next scene so that they can be applied on request. More specifically, the next scene setting preparation unit 2014 refers to the scene type DB 2051 and the scene-by-scene characteristic setting DB 2053, and acquires the PQ suitable for the predicted next scene (the applicable PQ) and the values of the various adjustment items set for that PQ.
  • the above-mentioned PQ is an abbreviation of "Picture Quality” indicating image quality and the like, and in the present embodiment, it refers to characteristics such as image quality (image quality characteristics) suitable for the characteristics of the scene.
  • FIG. 6 is a diagram showing an example of information stored in the scene-by-scene characteristic setting DB 2053 .
  • the scene-by-scene characteristic setting DB 2053 stores the image quality characteristic code, the scene characteristic, and the setting in association with one another.
  • the settings in the scene-by-scene characteristic setting DB 2053 are combinations of values of various items to be adjusted, such as image quality.
  • the items include, for example, the presence or absence of the double-speed mode, the value of RGB, the brightness of the backlight, the contrast, and the like.
  • the values of these respective items are specified to suit the characteristics of the scene.
  • the scene characteristic is information representing the characteristic of the scene, for example, “nature, landscape, green”, “human face”, “fast moving object”, and the like.
  • the image quality characteristic code is a code (name) that specifies a combination of a scene characteristic and a setting applicable thereto, and is information such as "PQ1", "PQ2", . . . , "PQ9", for example.
  • the applicable PQ of the scene type DB 2051 is the same as the image quality characteristic code stored in the scene-by-scene characteristic setting DB 2053.
  • the scene type DB 2051 stores the scene ID and the scene in association with the applicable PQ, thereby defining various settings of the applicable scene via the applicable PQ.
  • the setting preparation unit 2014 for the next scene outputs the acquired settings (values of various items to be adjusted) to the scene change determination unit 2003 and the adjustment execution unit 2004 .
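The two-step lookup used by the setting preparation unit (scene type DB 2051 to applicable PQ, then PQ to item values in DB 2053) can be sketched as follows; the dictionaries and their contents are hypothetical examples, not values from the publication.

```python
SCENE_TYPE_DB = {  # scene type -> applicable PQ code (stand-in for DB 2051)
    "tee shot": "PQ2",
    "fly ball": "PQ3",
}

SETTINGS_BY_PQ = {  # PQ code -> adjustment item values (stand-in for DB 2053)
    "PQ2": {"double_speed": False, "backlight": 70, "contrast": 55},
    "PQ3": {"double_speed": True, "backlight": 85, "contrast": 60},
}

def prepare_settings(predicted_next_scene):
    """Fetch, ahead of time, the adjustment values for the predicted
    next scene by chaining the two lookups."""
    pq = SCENE_TYPE_DB.get(predicted_next_scene)
    return SETTINGS_BY_PQ.get(pq)  # None when the scene or PQ is unknown
```

Preparing the settings before the scene change is what lets the adjustment execution unit apply them immediately when instructed.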
  • based on the image output from the image playback unit 2002, the scene change determination unit 2003 detects that the image has changed greatly and determines that the scene has been switched. More specifically, for example, when there is no or almost no common portion between the current frame and the previous frame, or when the ratio of the common portion to the entire frame is lower than a predetermined threshold, the scene change determination unit 2003 determines that the image has changed drastically, that is, that the scene has been switched.
  • the scene change determination unit 2003 refers to the predicted next scene output by the next scene prediction unit 2013 , and determines whether the predicted scene matches the scene after the switch.
  • when the scene change determination unit 2003 detects a scene change, it instructs the adjustment execution unit 2004 to execute the adjustment using the prepared settings (the values of the various adjustment items). Then, the adjustment execution unit 2004 executes the adjustment based on the settings received from the setting preparation unit 2014 for the next scene.
  • the aforementioned predetermined timing is, for example, the time when the instruction is received from the scene change determination unit 2003.
  • when the scene after the switch deviates from the prediction, the scene change determination unit 2003 outputs information indicating the deviation to the adjustment execution unit 2004.
  • the adjustment execution unit 2004, on receiving the information indicating the prediction deviation, does not use the settings output by the next scene setting preparation unit 2014 for the next scene; instead, it uses the common settings for the next scene and adjusts the values of the various items accordingly.
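The change detection and the match-or-fallback decision described above can be sketched together. This is a toy model under stated assumptions: frames are flat lists of pixel values, the "common portion" is the fraction of identical pixels, and the threshold value is arbitrary.

```python
def common_ratio(prev_frame, cur_frame):
    """Fraction of pixel positions shared between consecutive frames (toy metric)."""
    same = sum(1 for a, b in zip(prev_frame, cur_frame) if a == b)
    return same / len(cur_frame)

def on_frame(prev_frame, cur_frame, predicted_scene, actual_scene,
             prepared_settings, common_settings, threshold=0.2):
    """Return the settings to apply on a scene change, or None if no change.
    Prepared settings are used only when the prediction matched."""
    if common_ratio(prev_frame, cur_frame) >= threshold:
        return None  # frames still share enough content: no scene change
    if actual_scene == predicted_scene:
        return prepared_settings  # prediction hit: apply prepared values
    return common_settings  # prediction deviated: fall back to common settings
```

A real implementation would compute the common portion on decoded video frames; the decision structure, however, follows the text above.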
  • the adjustment execution unit 2004 outputs the adjusted image to the image output unit 2005 .
  • the image output unit 2005 outputs the image adjusted by the adjustment execution unit 2004 to the display panel 210 .
  • the display panel 210 displays the input image.
  • when displaying an image, the television device 20 of the present embodiment, having such a structure, recognizes what kind of scene the image being displayed is, investigates what kind of scene follows the recognized scene, and prepares setting values suitable for the next scene. Next, the television device 20 detects the switching of the scene; if the switched scene matches the expected scene, the prepared settings are used to adjust the image quality and the like, and if it deviates from the expectation, the common settings are used for the adjustment.
  • in the present embodiment, by referring to the DBs stored in the storage unit (storage 216), it is possible to obtain the scene with a high possibility of being displayed after the current scene (the predicted next scene). Also, by referring to the DBs, the PQ suitable for the predicted next scene and its setting values can be obtained. Furthermore, the setting values can be applied in step with the start of the next scene.
  • setting values suitable for a scene can thus be prepared while the current scene is still displayed, without being affected by the time required to identify the scene and begin applying the adjustment. As described above, according to the present embodiment, it is possible to apply appropriate setting values when performing image quality adjustment for each scene while displaying an image.
  • in the second embodiment, a related scene DB 2062 (see FIG. 8) and a scene type DB 2061 (see FIG. 9) described later are used instead of the related scene DB 2052 and the scene type DB 2051 of the first embodiment.
  • the related scene DB 2052 is constructed manually, while the related scene DB 2062 is constructed by AI (artificial intelligence). For this construction, the AI takes a plurality of images of a specific genre as input and classifies the scenes contained in the images through unsupervised, autonomous learning, thereby obtaining the records shown in FIG. 7.
  • FIG. 7 is a diagram showing an example of information stored in the scene tag storage unit 2060 of the second embodiment.
  • the scene tag storage unit 2060 stores the display order of the scenes in association with the tags attached to the scenes.
  • the AI plays back the images used for modeling, detects drastic changes in the image, and extracts as a scene the interval between one detection point and the next. Next, it attaches a label to the scene.
  • the labels are, for example, A, B, C, and so on.
  • the AI performs the following operations. First, since the scene whose display order is No. 1 is a new scene type that does not match any existing scene, label A is attached. Since the next scene, No. 2 in the display order, is of a different type from the existing scenes (here, the scene at display order 1 to which label A is attached), the AI attaches a new label B. Similarly, since the next scene, No. 3 in the display order, is of a different type from the existing scenes (the scenes at display orders 1 and 2 with labels A and B), the AI attaches a new label C.
  • the AI attaches the existing label A to scene No. 4 in the display order. Since scene No. 5 is of a different type from any scene so far, the AI attaches a new label D. By repeating the operations described above, the AI obtains the records shown in FIG. 7.
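The labeling loop described above (reuse a label for a known scene type, otherwise mint the next letter) can be sketched as follows. The similarity test `is_same_type` is left abstract, since the patent does not specify it; here it is supplied by the caller.

```python
import string

def label_scenes(scenes, is_same_type):
    """Label scenes in display order: reuse the label of a matching known
    type, otherwise attach the next unused label (A, B, C, ...)."""
    labels = []                              # label per scene, in display order
    known = []                               # (representative scene, label) pairs
    new_labels = iter(string.ascii_uppercase)
    for scene in scenes:
        for representative, label in known:
            if is_same_type(scene, representative):
                labels.append(label)         # existing type: reuse its label
                break
        else:
            label = next(new_labels)         # new type: mint the next label
            known.append((scene, label))
            labels.append(label)
    return labels
```

With a toy comparator (plain equality of scene descriptors), the five-scene sequence from the text yields the labels A, B, C, A, D.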
  • FIG. 8 is a diagram showing an example of information stored in the related scene DB 2062 according to the second embodiment.
  • the related scene DB 2062 stores, in association, the label of the current scene, the label of the predicted next scene with its probability, and the label of the scene predicted after that with its probability.
  • the labels of the current scene, the predicted next scene, and the scene after that are the labels contained in the scene tag storage unit 2060 of FIG. 7.
  • the AI first extracts the labels attached to the scenes that follow a scene labeled A, and calculates how often each extracted label is displayed immediately after a scene labeled A. For example, according to the records of the related scene DB 2062 in FIG. 8, for images of a specific genre, a scene labeled D follows a scene labeled A with a frequency of 75%, and a scene labeled B follows with a frequency of 25%.
  • the label and probability of the scene after the predicted next scene are extracted, calculated, and recorded in the same manner as the label and probability of the scene predicted to follow the current scene.
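The frequency calculation described above amounts to estimating label transition probabilities from the labeled display order; a minimal sketch, with a made-up label sequence chosen so the A-row frequencies match the 75%/25% example:

```python
from collections import Counter, defaultdict

def transition_probabilities(label_sequence):
    """For each label, estimate the probability of each label that follows
    it in the display order (relative frequency of adjacent pairs)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(label_sequence, label_sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for cur, counter in counts.items():
        total = sum(counter.values())
        probs[cur] = {nxt: n / total for nxt, n in counter.items()}
    return probs
```

The same pair-counting, applied to pairs two steps apart, would yield the probabilities for the scene after the predicted next scene.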
  • FIG. 9 is a diagram showing information stored in the scene type DB 2061 according to the second embodiment.
  • the scene type DB 2061 stores the scene tag in association with the applicable PQ.
  • the scene tag replaces the scene ID and scene type of the scene type DB 2051.
  • the next scene setting preparation unit 2014 of this embodiment refers to the scene type DB 2061 in FIG. 9 to obtain the PQ suitable for the predicted next scene (the applicable PQ). It then refers to the scene-by-scene characteristic setting DB 2053 and acquires the values of the various adjustment items set for the applicable PQ. Subsequent processing is the same as in the first embodiment.
  • FIG. 10 is a diagram illustrating the functional configuration of the television device 20 according to the third embodiment and the relationship between the respective functions.
  • the television device 20 of this embodiment includes a scene end determination unit 2023 instead of the scene change determination unit 2003 of the first embodiment.
  • the scene end determination unit 2023 determines that a specific scene is approaching its end and predicts the timing of the end. Specifically, for example, in a golf program, when the current scene is a tee shot scene, the scene end determination unit 2023 can predict that the scene ends at the moment the club is swung and shifts to the fly ball scene. Therefore, when the scene end determination unit 2023 detects the club swing during display of the tee shot scene, it determines that the current scene has ended and instructs the adjustment execution unit 2004 to perform the adjustment. On receiving this instruction, the adjustment execution unit 2004 immediately executes the adjustment.
  • the scene end determination unit 2023 determines the end of the current scene by detecting a specific action in a specific scene, such as the club swing in the tee shot scene.
  • a specific action refers to an action associated with the end of a scene.
  • the adjustment by the adjustment execution unit 2004 can be started without waiting for detection of a scene change.
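The end-action trigger of this embodiment can be sketched as a simple lookup; the scene-to-action table is a hypothetical illustration, and detecting the action itself (e.g. recognizing a club swing in video) is assumed to be handled elsewhere.

```python
END_ACTIONS = {  # scene type -> action that signals the scene is ending (hypothetical)
    "tee shot": "club swing",
    "putt": "ball enters cup",
}

def should_apply_now(current_scene, detected_action):
    """True when the detected action signals that the current scene is
    ending, so prepared settings can be applied without waiting for a cut."""
    return END_ACTIONS.get(current_scene) == detected_action
```

This is what lets the adjustment start ahead of the frame-difference detection used in the first embodiment.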
  • for other scenes, the scene change can be detected by the scene change determination unit 2003 of the first embodiment, and the processing can be performed in the same manner as in the first embodiment. That is, during implementation, the scene end determination unit 2023 of this embodiment and the scene change determination unit 2003 of the first embodiment may be used together.
  • the current scene recognition unit 2012 of the fourth embodiment does not recognize all the scenes in a program, but recognizes only specific scenes.
  • FIG. 11 is a diagram showing information stored in the scene type DB 2071 according to the fourth embodiment.
  • the scene type DB 2071 associates and stores the type of the scene recognized by the current scene recognition unit 2012 according to the present embodiment with metadata.
  • Metadata is, for example, genre, specifically "golf program” and the like.
  • the type of scene to be recognized is, for example, "tee shot".
  • the current scene recognition unit 2012 of the first embodiment must determine which of the scene types stored in the scene type DB 2051 matches the scene being displayed (the current scene), but the scene type DB 2071 of the present embodiment restricts and reduces the number of candidate scene types (more preferably, one scene type per genre). Furthermore, for scenes of types other than those stored in the scene type DB 2071, it is sufficient to determine the mismatch and terminate the process, regardless of the type. Therefore, according to the present embodiment, the load required for the processing of the current scene recognition unit 2012 can be suppressed.
  • the next scene prediction unit 2013 of the present embodiment predicts the scene following the current scene recognized by the current scene recognition unit 2012 (the predicted next scene). More specifically, it refers to the related scene DB 2072 and acquires, as the prediction result, the "predicted next scene" associated with the current scene of the type recognized by the current scene recognition unit 2012.
  • FIG. 12 is a diagram showing information stored in the related scene DB 2072 according to the fourth embodiment.
  • the related scene DB 2072 stores, in association, information corresponding to items such as scene ID, current scene, predicted next scene, and next scene ID. That is, the related scene DB 2072 does not have the "probability" item included in the related scene DB 2052 of the first embodiment.
  • the relevant scene DB 2072 stores the scene type of the recognition target stored in the scene type DB 2071 of FIG. 11 as the current scene, and stores the type of the scene predicted as the subsequent scene as the predicted next scene.
  • next scene prediction unit 2013 in the first embodiment takes the predicted next scene with the highest probability in the related scene DB 2052 as the predicted result, but in this embodiment, the predicted next scene associated with the current scene is linked to the related scene DB 2072.
  • the scene is one. Therefore, according to the present embodiment, the load required for the next scene prediction unit 2013 to predict and extract the next scene can be suppressed.
  • The control unit of the television device 20 performs the adjustment suited to the predicted next scene associated via the related scene DB 2072 only when it recognizes a scene of a type stored in the scene type DB 2071.
  • Concretely, for example, when the current scene is recognized during image playback as a "tee shot" scene, the prediction that the next scene will be a "ball flight" scene is established; accordingly, at the moment the next scene switch is detected, the PQ used for adjustment is switched to the ball-flight PQ.
  • At the moment the scene switch after that is detected, use of the ball-flight PQ ends (the adjustment switches back to the general-purpose PQ).
  • When no scene of a type stored in the scene type DB 2071 is recognized, the adjustment execution unit 2004 performs image quality adjustment using the general-purpose PQ.
  • As described above, the load required for the processing of the current scene recognition unit 2012 can be suppressed, and so can the load required for the next scene prediction unit 2013 to extract the predicted next scene.
  • FIG. 13 is a diagram illustrating the functional configuration of the television device 20 according to the fifth embodiment and the relationship between the respective functions.
  • This embodiment is a further simplification of the fourth embodiment.
  • The fourth embodiment, like the first, assumes that the scene change determination unit 2003 exists, but the control unit of the television device 20 of this embodiment is configured without the scene change determination unit 2003.
  • The output of the image playback unit 2002 of this embodiment is input to the adjustment execution unit 2004.
  • The output of the next scene prediction unit 2013 is input only to the next scene setting preparation unit 2014.
  • The output of the next scene setting preparation unit 2014 is input only to the adjustment execution unit 2004.
  • When information is input from the next scene setting preparation unit 2014, the adjustment execution unit 2004 of this embodiment immediately switches to the PQ adjustment based on the input information, and after a fixed period of time switches back to the general-purpose PQ.
  • As a result, the load required for the processing of the scene change determination unit 2003 of the first embodiment can be eliminated. By obtaining the effect of the first embodiment in a simpler way, appropriate settings can be applied when performing per-scene image quality adjustment during image display, while avoiding the influence of the time required to identify the scene and to begin applying the adjustment.
  • The programs executed in each device (the management server 10 and the television device 20) of the embodiments above may be provided as installable or executable files stored on a computer-readable storage medium such as a CD-ROM (Compact Disc Read Only Memory), flexible disk (FD), CD-R (Compact Disc Recordable), or DVD (Digital Versatile Disk).
  • The programs may also be provided or distributed via a network such as the Internet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)
  • Circuits Of Receivers In General (AREA)

Abstract

The present application relates to a broadcast receiving device. When performing per-scene image quality adjustment during image display, the device applies appropriate adjustments while avoiding the influence of the time required to identify a scene and to begin applying an adjustment. The broadcast receiving device comprises: a storage unit that stores scene types in association with settings relating to image quality adjustment for those scenes, and also stores, for each scene type, the type of scene likely to be displayed next; a current scene recognition unit that recognizes the type of the scene being displayed (the current scene); a next scene prediction unit that refers to the storage unit and predicts the type of the scene to be displayed after the current scene (the next scene); a next scene setting preparation unit that refers to the storage unit and prepares the settings relating to image quality adjustment for the next scene; and an adjustment execution unit that, at a prescribed time, performs on the scene being displayed an adjustment based on the settings prepared by the next scene setting preparation unit.

Description

Broadcast receiving device
This application claims priority to Japanese Patent Application No. 2021-156827, entitled "Broadcast receiving device" and filed with the Japan Patent Office on September 27, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to a broadcast receiving device.
Background Art
Broadcast receiving devices for viewing broadcast content have long existed. To display image content in an appropriate state, such a broadcast receiving device adjusts parameters that determine image quality and the like, or performs various settings used for such adjustment.
Conventionally, the mainstream approach to such parameter adjustment has been to apply settings matching the content's genre to the content as a whole. In recent years, however, a method has also been adopted in which adjustment matching the characteristics of each scene contained in the content is performed scene by scene.
Prior Art Documents
Patent Documents
Patent Document 1: Japanese Patent Laid-Open No. 2012-015877
Patent Document 2: Japanese Patent Laid-Open No. 2004-336317
Patent Document 3: Japanese Patent Laid-Open No. 2008-028871
Summary
However, when adjusting each scene as described above, if the kind of scene is identified during image display and the adjustment is made using that identified information, some time is needed from displaying the image used for identification until the adjustment begins to be applied. Consequently, by the time the adjustment is applied, the target scene may have ended and a different scene may be being displayed.
The problem to be solved by the present application is to obtain a broadcast receiving device that, when performing per-scene image quality adjustment during image display, can apply appropriate adjustments while avoiding the influence of the time required to identify a scene and to begin applying an adjustment.
A broadcast receiving device of an embodiment comprises: a storage unit that stores scene types in association with settings relating to image quality adjustment for those scenes, and also stores, for each scene type, the type of scene likely to be displayed next; a current scene recognition unit that recognizes the type of the scene being displayed (the current scene); a next scene prediction unit that refers to the storage unit and predicts the type of the scene to be displayed after the current scene (the next scene); a next scene setting preparation unit that refers to the storage unit and prepares the settings relating to image quality adjustment for the next scene; and an adjustment execution unit that, at a prescribed time, performs on the scene being displayed an adjustment based on the settings prepared by the next scene setting preparation unit.
Brief Description of the Drawings
FIG. 1 is a diagram showing an example of the configuration of the image quality adjustment system according to the first embodiment;
FIG. 2 is a block diagram showing an example of the hardware configuration of the television device according to the first embodiment;
FIG. 3 is a diagram illustrating the functional configuration of the television device and the relationships among the functions;
FIG. 4 is a diagram showing an example of the information stored in the scene type DB;
FIG. 5 is a diagram showing an example of the information stored in the related scene DB;
FIG. 6 is a diagram showing an example of the information stored in the per-scene-characteristic setting DB;
FIG. 7 is a diagram showing an example of the information stored in the scene label storage unit according to the second embodiment;
FIG. 8 is a diagram showing an example of the information stored in the related scene DB according to the second embodiment;
FIG. 9 is a diagram showing the information stored in the scene type DB according to the second embodiment;
FIG. 10 is a diagram illustrating the functional configuration of the television device according to the third embodiment and the relationships among the functions;
FIG. 11 is a diagram showing the information stored in the scene type DB 2071 according to the fourth embodiment;
FIG. 12 is a diagram showing the information stored in the related scene DB according to the fourth embodiment;
FIG. 13 is a diagram illustrating the functional configuration of the television device according to the fifth embodiment and the relationships among the functions.
Description of Reference Numerals
1: image quality adjustment system,
10: management server,
119: remote controller,
20: television device (an example of a broadcast receiving device),
201: antenna, 202a, 202b, 202c: input terminals,
203: tuner, 204: demodulator, 205: demultiplexer,
206: A/D converter, 207: selector, 208: signal processing unit,
209: speaker, 210: display panel, 211: operation unit,
212: light receiving unit, 213: IP communication unit,
214: CPU, 215: memory, 216: storage (an example of a storage unit), 217: microphone, 218: audio I/F,
30: network,
2001: image input unit, 2002: image playback unit,
2003: scene change determination unit, 2004: adjustment execution unit, 2005: image output unit,
2011: metadata acquisition unit, 2012: current scene recognition unit,
2013: next scene prediction unit, 2014: next scene setting preparation unit,
2023: scene end determination unit,
2051: scene type DB, 2052: related scene DB,
2053: per-scene-characteristic setting DB,
2060: scene label storage unit,
2061: scene type DB, 2062: related scene DB,
2071: scene type DB, 2072: related scene DB.
Detailed Description
Embodiments will now be described in detail with reference to the drawings.
(First Embodiment)
FIG. 1 is a diagram showing an example of the configuration of the image quality adjustment system 1 according to the first embodiment. The image quality adjustment system 1 comprises a management server 10 and a television device 20, which are communicably connected via a network 30. Although the image quality adjustment system 1 illustrated in FIG. 1 has one management server 10 and one television device 20, in practice the system is not limited to one of each device; there may be more.
The management server 10 is a server device such as a personal computer (PC) or a cloud computing service. The management server 10 is not limited to a single server device and may be composed of multiple server devices.
The television device 20 is a broadcast receiving device that displays broadcast content for viewing.
In this image quality adjustment system 1, the management server 10 provides the television device 20 with various data used to adjust image quality and the like when content is displayed. The television device 20 uses the data provided by the management server 10 to adjust image quality when displaying content. The data provided from the management server 10 to the television device 20 is updated as needed. The information stored in the DBs 2051 to 2053 described later is an example of this data.
FIG. 2 is a block diagram showing an example of the hardware configuration of the television device 20 according to the first embodiment. The television device 20 comprises an antenna 201, a first input terminal 202a, a second input terminal 202b, a third input terminal 202c, a tuner 203, a demodulator 204, a demultiplexer 205, an A/D (analog/digital) converter 206, a selector 207, a signal processing unit 208, a speaker 209, a display panel 210, an operation unit 211, a light receiving unit 212, an IP communication unit 213, a CPU (Central Processing Unit) 214, a memory 215, a storage 216, a microphone 217, and an audio I/F (interface) 218.
The antenna 201 receives digital broadcast signals and supplies the received broadcast signals to the tuner 203 via the first input terminal 202a. The tuner 203 selects the broadcast signal of the desired channel from the broadcast signals supplied by the antenna 201 and supplies the selected broadcast signal to the demodulator 204. A broadcast signal may also be called a broadcast wave.
The demodulator 204 demodulates the broadcast signal supplied from the tuner 203 and supplies the demodulated broadcast signal to the demultiplexer 205. The demultiplexer 205 separates the broadcast signal supplied from the demodulator 204 to generate metadata, an image signal, and an audio signal, and supplies the generated metadata, image signal, and audio signal to the selector 207.
The selector 207 selects one of the multiple signals supplied from the demultiplexer 205, the A/D converter 206, and the third input terminal 202c, and supplies the selected signal to the signal processing unit 208.
The signal processing unit 208 applies prescribed signal processing to the image signal supplied by the selector 207 and supplies the processed image signal to the display panel 210. The signal processing unit 208 also applies prescribed signal processing to the audio signal supplied by the selector 207 and supplies the processed audio signal to the speaker 209.
The speaker 209 outputs voice or various sounds based on the audio signal supplied by the signal processing unit 208, and changes the volume of the output voice or sounds under the control of the CPU 214.
The display panel 210 displays images such as still images and moving images based on the image signal supplied by the signal processing unit 208 or under the control of the CPU 214. The display panel 210 is an example of a display unit.
The second input terminal 202b receives analog signals (metadata, image signals, and audio signals) input from outside. The third input terminal 202c receives digital signals (metadata, image signals, and audio signals) input from outside. For example, digital signals may be input to the third input terminal 202c from a recorder (a BD recorder) equipped with a drive device that records and plays back by driving a recording/playback storage medium such as a BD (Blu-ray Disc) (registered trademark). The A/D converter 206 supplies the selector 207 with a digital signal generated by A/D-converting the analog signal supplied from the second input terminal 202b.
The operation unit 211 receives the user's operation input. The light receiving unit 212 receives infrared light from the remote controller 119. The IP communication unit 213 is a communication interface for IP (Internet Protocol) communication over the network 30.
The CPU 214 is a control unit that controls the entire television device 20. The memory 215 includes a ROM (Read Only Memory) holding the various computer programs executed by the CPU 214, a RAM (Random Access Memory) providing the CPU 214 with a work area for executing the programs, and the like. The storage 216 is an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like. The storage 216 stores, for example, the signal selected by the selector 207 as recorded data.
The microphone 217 picks up the voice uttered by the user and sends it to the audio I/F 218.
The audio I/F 218 performs analog/digital conversion on the sound picked up by the microphone 217 and sends it to the CPU 214 as an audio signal.
By executing programs, the control unit (CPU 214) of the television device 20 functions as the various functional units shown in FIG. 3. FIG. 3 is a diagram illustrating the functional configuration of the television device 20 and the relationships among the functions.
The television device 20 comprises an image input unit 2001, an image playback unit 2002, a scene change determination unit 2003, an adjustment execution unit 2004, an image output unit 2005, a metadata acquisition unit 2011, a current scene recognition unit 2012, a next scene prediction unit 2013, a next scene setting preparation unit 2014, and other functional units.
The television device 20 also holds the scene type DB 2051, the related scene DB 2052, and the per-scene-characteristic setting DB 2053 in, for example, the storage 216. Here, the storage 216 is an example of a storage unit. These databases (DBs) are referred to as appropriate by the functional units described above.
First, the image input unit 2001 receives information from the input terminals 202a, 202b, and 202c as signals. The image playback unit 2002 plays back the image signal among the signals received by the image input unit 2001 and outputs it to the current scene recognition unit 2012 and the scene change determination unit 2003. Of the information received by the image input unit 2001, the information constituting metadata is output to the metadata acquisition unit 2011.
The metadata acquisition unit 2011 acquires metadata input through the input terminals 202a, 202b, and 202c via the image input unit 2001, or from an external device communicable via the network 30 and the IP communication unit 213, and outputs it to the current scene recognition unit 2012. The metadata includes genre information and the like. Genre information indicates what kind of image the content is, for example "golf program", "travel program", or "music program".
The current scene recognition unit 2012 recognizes what type of scene the image output by the image playback unit 2002 is. More specifically, the current scene recognition unit 2012 determines which of the scene types stored in the scene type DB 2051 the image output by the image playback unit 2002 corresponds to. Existing techniques may be used as the concrete method of recognition here. The current scene recognition unit 2012 then outputs the recognition result to the next scene prediction unit 2013.
FIG. 4 is a diagram showing an example of the information stored in the scene type DB 2051. The scene type DB 2051 stores scene IDs, scene types, and applicable PQs in association with one another. The scene type DB 2051 may also have multiple tables, for example one per image genre. The table in the example shown in FIG. 4 is for the genre "golf program".
The scene type is the kind of scene, for example a course introduction scene, a tee shot scene, or a ball flight scene. The scene ID is information that facilitates identifying the scene type, for example a unique number.
As shown in FIG. 4, a golf program contains not only scenes showing the movements of players or the ball during play, but also scenes introducing the course from a bird's-eye view (scene ID: 1), close-ups of players (scene ID: 2), distant shots of the gallery (scene ID: 3), and so on.
Scenes showing player movements include, for example, tee shots, putts, other shots, reading the green, and moving around the course (scene IDs: 11-13, 22, 23). Scenes showing the ball's movements include following the ball in flight (ball flight), the ball bouncing as it lands on the ground, and following the ball rolling on the green (scene IDs: 31-33).
The applicable PQ is described later.
Returning to FIG. 3, the next scene prediction unit 2013 predicts the type of the scene that will follow the scene whose type was recognized by the current scene recognition unit 2012. More specifically, the next scene prediction unit 2013 refers to the related scene DB 2052 and acquires, as the prediction result, the "predicted next scene" associated with the scene whose type was recognized by the current scene recognition unit 2012.
FIG. 5 is a diagram showing an example of the information stored in the related scene DB 2052. The related scene DB 2052 is a table that stores, in association with one another, information corresponding to items such as scene ID, current scene, predicted next scene, next scene ID, and probability.
The scene IDs and current scenes in the related scene DB 2052 are the same as the scene IDs and scene types stored in the scene type DB 2051. The predicted next scene is a scene type that may be played continuously following the current scene. At least one predicted next scene is associated with each current scene, and multiple ones may be. The next scene ID is the scene ID corresponding to the predicted next scene (linked through the scene type DB 2051). The probability is the probability that each predicted next scene will be played following its associated current scene.
The related scene DB 2052 may also store prediction information for multiple scenes, such as the scene after the one following the current scene.
The next scene prediction unit 2013 takes the predicted next scene with the highest probability as the prediction result for the current scene. In the example of FIG. 5, when the current scene is a tee shot, the predicted next scene is a ball flight. The next scene prediction unit 2013 then outputs the prediction result (the predicted next scene and the next scene ID) to the next scene setting preparation unit 2014 and the scene change determination unit 2003.
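The highest-probability lookup performed by the next scene prediction unit 2013 can be sketched as follows. The table contents mirror the golf-program example of FIG. 5; the function name and data layout are illustrative assumptions, not the patent's actual implementation.

```python
# Related scene DB (cf. FIG. 5): current scene -> [(predicted next scene, probability), ...]
# The entries below are illustrative, following the golf-program example.
RELATED_SCENE_DB = {
    "tee shot": [("ball flight", 0.8), ("player close-up", 0.2)],
    "ball flight": [("ball bouncing", 0.6), ("gallery", 0.4)],
}

def predict_next_scene(current_scene):
    """Return the predicted next scene with the highest probability, or None."""
    candidates = RELATED_SCENE_DB.get(current_scene)
    if not candidates:
        return None  # unknown current scene: no prediction is made
    return max(candidates, key=lambda c: c[1])[0]
```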
The next scene setting preparation unit 2014 prepares the settings suited to the predicted next scene so that they can be applied on demand. More specifically, the next scene setting preparation unit 2014 refers to the scene type DB 2051 and the per-scene-characteristic setting DB 2053 and acquires the PQ suited to the predicted next scene (the applicable PQ) and the values of the adjustment items set for that PQ.
Here, "PQ" is an abbreviation of "Picture Quality"; in the present embodiment it denotes characteristics, such as image quality, that suit the characteristics of a scene (picture quality characteristics).
FIG. 6 is a diagram showing an example of the information stored in the per-scene-characteristic setting DB 2053. The per-scene-characteristic setting DB 2053 stores picture quality characteristic codes, scene characteristics, and settings (an example of settings relating to image quality adjustment) in association with one another.
A setting in the per-scene-characteristic setting DB 2053 is a combination of values for the adjustment items that define image quality and the like. The items include, for example, whether double-speed mode is enabled, RGB values, backlight brightness, and contrast. The values of these items are defined to suit the scene characteristic.
A scene characteristic is information indicating the characteristics of a scene, for example "nature, landscape, green", "human face", or "fast-moving object".
The picture quality characteristic code is a code (name) identifying the combination of a scene characteristic and the settings applied to it, for example "PQ1", "PQ2", ..., "PQ9".
Here, the applicable PQ in the scene type DB 2051 is the same as the picture quality characteristic code stored in the per-scene-characteristic setting DB 2053. Thus, by storing scene IDs and scenes in association with applicable PQs, the scene type DB 2051 defines, via the applicable PQ, the settings applied to each scene.
The next scene setting preparation unit 2014 outputs the acquired settings (the values of the adjustment items) to the scene change determination unit 2003 and the adjustment execution unit 2004.
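The preparation chain (scene type, then applicable PQ, then adjustment-item values) can be sketched as follows. The item names and values are illustrative assumptions modeled on FIG. 4 and FIG. 6, not the actual DB contents.

```python
# Scene type DB (cf. FIG. 4): scene type -> applicable PQ code.
SCENE_TYPE_DB = {"tee shot": "PQ2", "ball flight": "PQ3"}

# Per-scene-characteristic setting DB (cf. FIG. 6): PQ code -> adjustment-item values.
# Item names and values here are illustrative assumptions.
PQ_SETTINGS_DB = {
    "PQ2": {"double_speed_mode": False, "backlight": 70, "contrast": 50},
    "PQ3": {"double_speed_mode": True, "backlight": 90, "contrast": 60},
}

def prepare_settings(predicted_next_scene):
    """Resolve scene type -> applicable PQ -> adjustment-item values."""
    pq = SCENE_TYPE_DB.get(predicted_next_scene)
    return PQ_SETTINGS_DB.get(pq)  # None when no applicable PQ is registered
```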
The scene change determination unit 2003 detects, based on the images output by the image playback unit 2002, that the image has changed drastically, and determines that the scene has been switched. More specifically, when there is no or almost no common part between the current frame and the previous frame, or when the ratio of the common part within the whole frame falls below a prescribed threshold, the scene change determination unit 2003 determines that the image has changed drastically, that is, that the scene has been switched.
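A minimal sketch of this determination, under the assumption that frames are flat sequences of pixel values and that the "common part" is measured as the fraction of matching pixels:

```python
def scene_changed(prev_frame, cur_frame, threshold=0.3):
    """Judge a scene switch when the ratio of the common part (matching
    pixels) between consecutive frames falls below a prescribed threshold.
    The threshold value and pixel-wise comparison are illustrative assumptions."""
    common = sum(1 for p, c in zip(prev_frame, cur_frame) if p == c)
    return common / len(cur_frame) < threshold
```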
The scene change determination unit 2003 also refers to the predicted next scene output by the next scene prediction unit 2013 and determines whether the prediction matches the scene after the switch.
When the scene change determination unit 2003 detects a scene switch and the scene after the switch matches the predicted scene, it instructs the adjustment execution unit 2004 to perform the adjustment based on the settings received from the next scene setting preparation unit 2014 (the values of the adjustment items). The adjustment execution unit 2004 then performs the adjustment based on the settings received from the next scene setting preparation unit 2014. The prescribed time mentioned above is, for example, the time at which the instruction is received from the scene change determination unit 2003.
When the scene after the switch does not match the predicted scene, that is, when the prediction misses, the scene change determination unit 2003 outputs information to that effect to the adjustment execution unit 2004. On receiving the information that the prediction missed, the adjustment execution unit 2004 does not use the settings output by the next scene setting preparation unit 2014 as the settings for the next scene; instead, it uses general-purpose settings for the next scene and adjusts the values of the items accordingly.
The adjustment execution unit 2004 outputs the adjusted image to the image output unit 2005. The image output unit 2005 outputs the image adjusted by the adjustment execution unit 2004 to the display panel 210. The display panel 210 displays the input image.
When displaying images, the television device 20 of the present embodiment configured in this way recognizes what kind of scene is being displayed, looks up what kind of scene is likely to follow the recognized scene, and prepares the setting values suited to that next scene. The television device 20 then detects the scene switch; if the scene after the switch matches the expected scene, it adjusts the image quality and the like using the prepared settings, and if the prediction misses, it adjusts using general-purpose settings.
As described above, according to the present embodiment, by referring to the DBs stored in the storage unit (storage 216), the scene likely to be displayed after the current scene (the predicted next scene) can be obtained. By referring to the DBs, the PQ suited to the predicted next scene and its setting values can also be obtained, and the setting values can thus be applied as the next scene begins. Accordingly, the present embodiment can have the setting values suited to a scene ready before that scene is displayed, avoiding the influence of the time required to identify the scene and to begin applying the adjustment. In this way, according to the present embodiment, appropriate setting values can be applied when performing per-scene image quality adjustment during image display.
(Second Embodiment)
The second embodiment will now be described. In the following description, the same reference numerals are used for the same configurations as in the first embodiment, and detailed description is omitted.
In the present embodiment, the related scene DB 2062 (see FIG. 8) and the scene type DB 2061 (see FIG. 9) described below are used in place of the related scene DB 2052 and the scene type DB 2051 of the first embodiment. Whereas the related scene DB 2052 is built manually, the related scene DB 2062 is built with AI (artificial intelligence). For this construction, the AI is fed multiple images of a specific genre and classifies the scenes contained in those images by autonomous, unsupervised learning, yielding the records shown in FIG. 7.
FIG. 7 is a diagram showing an example of the information stored in the scene label storage unit 2060 according to the second embodiment. The scene label storage unit 2060 stores the display order of the scenes in association with the labels attached to the scenes.
To obtain the information stored in the scene label storage unit 2060, the AI plays back the images serving as the model, detects drastic changes in the image, and extracts the interval between one detection point and the next as one scene. It then labels the scenes. The labels are, for example, A, B, C, and so on.
To obtain the records of FIG. 7, the AI operates as follows. First, since the scene with display order 1 is a new scene type not present among the existing scenes, label A is attached. Since the next scene, with display order 2, is of a different type from the existing scenes (here, the scene with display order 1 labeled A), the AI attaches a new label B. Likewise, since the next scene, with display order 3, is of a different type from the existing scenes (here, the scenes with display orders 1 and 2, labeled A and B), the AI attaches a new label C.
Since the scene with display order 4 is similar to the scene with display order 1, the AI attaches the existing label A to the scene with display order 4. Since the scene with display order 5 is of a type different from every scene so far, the AI attaches a new label D. By repeating these operations, the AI obtains the records shown in FIG. 7.
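The labeling procedure described above can be sketched as follows, with the AI's similarity judgment abstracted into a caller-supplied predicate (the function name and this abstraction are illustrative assumptions):

```python
def label_scenes(scenes, is_similar):
    """Attach an existing label when a scene resembles one already seen,
    otherwise mint a new label A, B, C, ... (cf. FIG. 7)."""
    labels = []
    exemplars = []  # exemplars[i] is a representative scene for the i-th label
    for scene in scenes:
        for i, exemplar in enumerate(exemplars):
            if is_similar(scene, exemplar):
                labels.append(chr(ord("A") + i))  # reuse the existing label
                break
        else:
            exemplars.append(scene)               # new scene type: new label
            labels.append(chr(ord("A") + len(exemplars) - 1))
    return labels
```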
Next, based on the information stored in the scene label storage unit 2060 of FIG. 7, the AI generates the related scene DB 2062 of FIG. 8. FIG. 8 is a diagram showing an example of the information stored in the related scene DB 2062 according to the second embodiment. The related scene DB 2062 stores, in association with one another, the label of the current scene, the label of the predicted next scene with its probability, and the label of the scene after next (the prediction two scenes ahead) with its probability.
The labels of the current scene, the predicted next scene, and the scene after next are the labels contained in the scene label storage unit 2060 of FIG. 7. The AI first extracts the labels attached to the scenes following the scenes labeled A, and computes the probability with which the scene of each extracted label can appear immediately after a scene labeled A. For example, according to the records of the related scene DB 2062 of FIG. 8, for this genre of images, a scene labeled D follows a scene labeled A with a frequency of 75%, and a scene labeled B with a frequency of 25%.
The labels and probabilities of the scene after next, relative to the predicted next scene, are extracted, computed, and recorded in the same way as the labels and probabilities of the predicted next scene relative to the current scene.
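Estimating these transition probabilities from the labeled sequence can be sketched as a simple frequency count (the actual learning method is not limited to this; the function name is an illustrative assumption):

```python
from collections import Counter, defaultdict

def next_scene_probabilities(labels):
    """Estimate P(next label | current label) from an observed label sequence."""
    transition_counts = defaultdict(Counter)
    for current, nxt in zip(labels, labels[1:]):
        transition_counts[current][nxt] += 1
    return {
        current: {nxt: count / sum(counts.values()) for nxt, count in counts.items()}
        for current, counts in transition_counts.items()
    }
```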
FIG. 9 is a diagram showing the information stored in the scene type DB 2061 according to the second embodiment. The scene type DB 2061 stores scene labels in association with applicable PQs; the scene label takes the place of the scene ID and scene type of the scene type DB 2051.
The next scene setting preparation unit 2014 of the present embodiment refers to the scene type DB 2061 of FIG. 9 to obtain the PQ suited to the predicted next scene (the applicable PQ). It then refers to the per-scene-characteristic setting DB 2053 to acquire the values of the adjustment items set for the applicable PQ. Subsequent processing is the same as in the first embodiment.
As described above, even when the DBs of the second embodiment (the related scene DB 2062 and the scene type DB 2061) are used, the same effects as in the first embodiment can be obtained.
(Third Embodiment)
The third embodiment will now be described. In the following description, the same reference numerals are used for the same configurations as in the first embodiment, and detailed description is omitted.
FIG. 10 is a diagram illustrating the functional configuration of the television device 20 according to the third embodiment and the relationships among the functions. The television device 20 of the present embodiment comprises a scene end determination unit 2023 in place of the scene change determination unit 2003 of the first embodiment.
The scene end determination unit 2023 determines, for a specific scene, that the scene is approaching its end, and predicts the moment it will end. Concretely, in a golf program, for example, when the current scene is a tee shot scene, the scene end determination unit 2023 can predict that the scene ends at the moment the club is swung and transitions to a ball flight scene. Therefore, when the scene end determination unit 2023 detects a club swing during the display of the tee shot scene, it determines that the current scene is ending and instructs the adjustment execution unit 2004 to perform the adjustment. On receiving this instruction, the adjustment execution unit 2004 performs the adjustment immediately.
As in the example above, the scene end determination unit 2023 determines the end of the current scene by detecting a specific action of a specific scene, such as the club swing in a tee shot scene. Here, a specific action means an action associated with the end of a scene.
Thus, according to the present embodiment, by using the scene end determination unit 2023 in place of the scene change determination unit 2003 of the first embodiment, the adjustment execution unit 2004 can begin its adjustment without waiting for a scene switch to be detected.
Depending on the scene, there may be no action, like the club swing in a tee shot scene, from which the moment of the scene's end can be predicted. Such scenes can be handled in the same way as in the first embodiment by having the scene change determination unit 2003 of the first embodiment detect the scene switch. That is, in practice, the scene end determination unit 2023 of the present embodiment and the scene change determination unit 2003 of the first embodiment may be used together.
(Fourth Embodiment)
The fourth embodiment will now be described. In the following description, the same reference numerals are used for the same configurations as in the first embodiment, and detailed description is omitted.
To reduce the load on the control unit (CPU 214), the current scene recognition unit 2012 of the present embodiment does not recognize every scene in a program, but only specific scenes.
FIG. 11 is a diagram showing the information stored in the scene type DB 2071 according to the fourth embodiment. The scene type DB 2071 stores, in association with metadata, the scene types to be recognized by the current scene recognition unit 2012 of the present embodiment. The metadata is, for example, a genre, concretely "golf program" or the like. The scene type to be recognized is, for example, "tee shot".
Here, the current scene recognition unit 2012 of the first embodiment must determine which of the many scene types stored in the scene type DB 2051 matches the scene being displayed (the current scene), whereas the scene type DB 2071 of the present embodiment is kept small so that few scene types become candidates (preferably, one scene type per genre). Furthermore, for scenes of types other than those stored in the scene type DB 2071, processing can end as soon as a mismatch is determined, without identifying which type the scene is. Therefore, according to the present embodiment, the load required for the processing of the current scene recognition unit 2012 can be suppressed.
The next scene prediction unit 2013 of the present embodiment predicts the scene expected to follow the current scene recognized by the current scene recognition unit 2012 (the predicted next scene). More specifically, it refers to the related scene DB 2072 and acquires, as the prediction result, the "predicted next scene" associated with the current scene whose type was recognized by the current scene recognition unit 2012.
FIG. 12 is a diagram showing the information stored in the related scene DB 2072 according to the fourth embodiment. The related scene DB 2072 stores, in association with one another, information corresponding to items such as scene ID, current scene, predicted next scene, and next scene ID. That is, the related scene DB 2072 does not have the "probability" item included in the related scene DB 2052 of the first embodiment. The related scene DB 2072 stores, as the current scene, the recognition-target scene types stored in the scene type DB 2071 of FIG. 11, and stores, as the predicted next scene, the type of the scene predicted to follow.
Here, the next scene prediction unit 2013 of the first embodiment takes the predicted next scene with the highest probability in the related scene DB 2052 as the prediction result, whereas in the present embodiment only one predicted next scene is associated with each current scene via the related scene DB 2072. Therefore, according to the present embodiment, the load required for the next scene prediction unit 2013 to extract the predicted next scene can be suppressed.
Subsequent processing is the same as in the first embodiment.
The control unit of the television device 20 of the present embodiment performs the adjustment suited to the predicted next scene associated via the related scene DB 2072 only when it recognizes a scene of a type stored in the scene type DB 2071. Concretely, for example, when the current scene is recognized during image playback as a "tee shot" scene, the prediction that the next scene will be a "ball flight" scene is established; accordingly, at the moment the next scene switch is detected, the PQ used for adjustment is switched to the ball-flight PQ. At the moment the scene switch after that is detected, use of the ball-flight PQ ends (the adjustment switches back to the general-purpose PQ).
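The switching behavior described above can be sketched as a small state machine. The class and method names, and the DB layouts, are illustrative assumptions, not the patent's actual implementation.

```python
class AdjustmentController:
    """Apply the predicted-scene PQ at the next detected scene switch and
    revert to the general-purpose PQ at the switch after that."""

    def __init__(self, scene_type_db, related_scene_db):
        self.scene_type_db = scene_type_db        # recognition targets, e.g. {"tee shot"}
        self.related_scene_db = related_scene_db  # e.g. {"tee shot": "ball flight"}
        self.pending_pq = None   # PQ prepared for the predicted next scene
        self.active_pq = None    # scene-specific PQ currently in use
        self.current_pq = "general"

    def on_scene_recognized(self, scene):
        # Only recognition targets stored in the scene type DB trigger preparation.
        if scene in self.scene_type_db:
            self.pending_pq = self.related_scene_db[scene]

    def on_scene_switch(self):
        if self.active_pq is not None:
            # Second switch: end use of the scene-specific PQ.
            self.active_pq = None
            self.current_pq = "general"
        elif self.pending_pq is not None:
            # First switch: the predicted scene begins, so apply its PQ.
            self.active_pq = self.pending_pq
            self.pending_pq = None
            self.current_pq = self.active_pq
```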
In the television device 20 of the present embodiment, when no scene of a type stored in the scene type DB 2071 is recognized, the adjustment execution unit 2004 performs image quality adjustment using the general-purpose PQ.
As described above, according to the fourth embodiment, the load required for the processing of the current scene recognition unit 2012 can be suppressed, and so can the load required for the next scene prediction unit 2013 to extract the predicted next scene.
(Fifth Embodiment)
The fifth embodiment will now be described. In the following description, the same reference numerals are used for the same configurations as in the first embodiment, and detailed description is omitted.
FIG. 13 is a diagram illustrating the functional configuration of the television device 20 according to the fifth embodiment and the relationships among the functions. This embodiment is a further simplification of the fourth embodiment. The fourth embodiment, like the first, assumes that the scene change determination unit 2003 exists, but the control unit of the television device 20 of this embodiment is configured without the scene change determination unit 2003. The output of the image playback unit 2002 of this embodiment is input to the adjustment execution unit 2004. The output of the next scene prediction unit 2013 is input only to the next scene setting preparation unit 2014, and the output of the next scene setting preparation unit 2014 is input only to the adjustment execution unit 2004.
When information is input from the next scene setting preparation unit 2014, the adjustment execution unit 2004 of this embodiment immediately switches to the PQ adjustment based on the input information, and after a fixed period of time switches back to the general-purpose PQ.
As described above, according to this embodiment, the load required for the processing of the scene change determination unit 2003 of the first embodiment can be eliminated. By obtaining the effect of the first embodiment in a simpler way, appropriate setting values can be applied when performing per-scene image quality adjustment during image display, while avoiding the influence of the time required to identify the scene and to begin applying the adjustment.
The programs executed in each device (the management server 10 and the television device 20) of the embodiments above may be provided as installable or executable files stored on a computer-readable storage medium such as a CD-ROM (Compact Disc Read Only Memory), flexible disk (FD), CD-R (Compact Disc Recordable), or DVD (Digital Versatile Disk). The programs may also be provided or distributed via a network such as the Internet.
While embodiments of the present application have been described, these embodiments are presented as examples and are not intended to limit the scope of the application. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the application. These embodiments and their variations are included in the scope and gist of the application, and are included in the scope of the inventions described in the claims and their equivalents.

Claims (4)

  1. A broadcast receiving device, comprising:
    a storage unit that stores scene types in association with settings relating to image quality adjustment for those scenes, and also stores, for each scene type, the type of scene likely to be displayed next,
    a current scene recognition unit that recognizes the type of the scene being displayed, namely the current scene,
    a next scene prediction unit that refers to the storage unit and predicts the type of the scene to be displayed after the current scene, namely the next scene,
    a next scene setting preparation unit that refers to the storage unit and prepares the settings relating to image quality adjustment for the next scene, and
    an adjustment execution unit that, at a prescribed time, performs on the scene being displayed an adjustment based on the settings prepared by the next scene setting preparation unit.
  2. The broadcast receiving device according to claim 1, wherein
    the broadcast receiving device further comprises a scene change determination unit that determines that the scene has been switched by detecting a drastic change in the image being displayed, and
    the adjustment execution unit performs the adjustment at the time the scene change determination unit detects the scene switch.
  3. The broadcast receiving device according to claim 2, wherein
    the current scene recognition unit performs processing to determine whether the current scene matches a specific type, and recognizes the type of the current scene when the current scene matches the specific type.
  4. The broadcast receiving device according to any one of claims 1 to 3, wherein
    the broadcast receiving device further comprises a scene end determination unit that determines that a specific scene has ended when a specific action associated with the end of that scene is detected during its display, and
    the adjustment execution unit performs the adjustment at the time the scene end determination unit determines the end of the scene.
PCT/CN2022/081779 2021-09-27 2022-03-18 Broadcast receiving device WO2023045281A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280007592.2A 2021-09-27 2022-03-18 Broadcast receiving device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021156827A JP7453948B2 (ja) 2021-09-27 2021-09-27 Broadcast receiving device
JP2021-156827 2021-09-27

Publications (1)

Publication Number Publication Date
WO2023045281A1 true WO2023045281A1 (zh) 2023-03-30

Family

ID=85719967

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081779 WO2023045281A1 (zh) 2021-09-27 2022-03-18 广播接收装置

Country Status (3)

Country Link
JP (1) JP7453948B2 (zh)
CN (1) CN116671116A (zh)
WO (1) WO2023045281A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493639A (zh) * 2019-10-21 2019-11-22 Nanjing Skyworth Information Technology Research Institute Co., Ltd. Method and system for automatically adjusting sound and image modes based on scene recognition
CN110933490A (zh) * 2019-11-20 2020-03-27 Shenzhen Skyworth-RGB Electronics Co., Ltd. Automatic picture quality and sound quality adjustment method, smart television, and storage medium
US20200134316A1 (en) * 2018-10-31 2020-04-30 Sony Interactive Entertainment Inc. Scene annotation using machine learning
CN111131889A (zh) * 2019-12-31 2020-05-08 Shenzhen Skyworth-RGB Electronics Co., Ltd. Method, system, and readable storage medium for scene-adaptive adjustment of image and sound
CN112543359A (zh) * 2020-11-12 2021-03-23 Hisense Visual Technology Co., Ltd. Display device and method for automatically configuring video parameters
CN112887778A (zh) * 2021-01-15 2021-06-01 Vidaa USA Inc. Method for switching video resource playback mode on a display device, and display device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008236603A (ja) 2007-03-23 2008-10-02 Pioneer Electronic Corp Moving image content discrimination device, video signal processing unit, and moving image content discrimination method
JP2008294630A (ja) 2007-05-23 2008-12-04 Sharp Corp Recording/playback device and control method therefor
JP6024110B2 (ja) 2012-01-26 2016-11-09 Sony Corporation Image processing device, image processing method, program, terminal device, and image processing system
CN105828149A (zh) 2016-04-28 2016-08-03 合智能科技(深圳)有限公司 Playback optimization method and device


Also Published As

Publication number Publication date
JP2023047732A (ja) 2023-04-06
JP7453948B2 (ja) 2024-03-21
CN116671116A (zh) 2023-08-29

Similar Documents

Publication Publication Date Title
US11503345B2 (en) Apparatus, systems and methods for control of sporting event presentation based on viewer engagement
JP4584250B2 (ja) Video processing device, integrated circuit of video processing device, video processing method, and video processing program
US8009232B2 (en) Display control device, and associated method of identifying content
EP1827018B1 (en) Video content reproduction supporting method, video content reproduction supporting system, and information delivery program
JP2004513547A (ja) System and method for detecting highlights in a video program using audio characteristics
WO2006016590A1 (ja) Information signal processing method, information signal processing device, and computer program recording medium
US20070071406A1 (en) Video recording and reproducing apparatus and video reproducing apparatus
US20080298767A1 (en) Method, medium and apparatus summarizing moving pictures of sports games
US20090196575A1 (en) Information playback apparatus and playback speed control method
JP2007325246A (ja) Method for dynamically adjusting recording time and related device
US6496647B2 (en) Video signal recording apparatus and method, video signal reproduction apparatus and method, video signal recording and reproduction apparatus and method, and recording medium
WO2023045281A1 (zh) Broadcast receiving device
US20080118233A1 (en) Video player
US10200764B2 (en) Determination method and device
CN112135159A (zh) Public screen presentation method and device, intelligent terminal, and storage medium
JP2016010102A (ja) Information presentation system
WO2022127228A1 (zh) Broadcast receiving device, server device, information recording and playback device, and display system
JP4293105B2 (ja) Information processing device and method, and program
CN101098439A (zh) Recording/playback device and recording/playback method
US20140150043A1 (en) Scene fragment transmitting system, scene fragment transmitting method and recording medium
JPH11213167A (ja) Target position detection device and method, and form analysis device
US20050114392A1 (en) Image processing apparatus, image processing method, image processing program, and information record medium storing program
JP2022040665A (ja) Video processing device, video processing method, and model generation device
JP2011076551A (ja) Electronic device, access control method, and program
CN115943625A (zh) Broadcast receiving device, server device, information recording and playback device, and display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871346

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280007592.2

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE