WO2017104416A1 - Audiovisual quality estimation device, audiovisual quality estimation method, and program - Google Patents
Audiovisual quality estimation device, audiovisual quality estimation method, and program
- Publication number
- WO2017104416A1 (PCT/JP2016/085553; JP2016085553W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- quality
- audiovisual
- content
- audio
- video
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/2407—Monitoring of transmitted content, e.g. distribution time, number of downloads
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/60—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for measuring the quality of voice signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/004—Diagnosis, testing or measuring for television systems or their details for digital television systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/02—Diagnosis, testing or measuring for television systems or their details for colour television signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44209—Monitoring of downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/647—Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
- H04N21/64723—Monitoring of network processes or resources, e.g. monitoring of network load
- H04N21/64738—Monitoring network characteristics, e.g. bandwidth, congestion level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Definitions
- the present invention relates to an audiovisual quality estimation device, an audiovisual quality estimation method, and a program.
- Video communication services that transfer video media (hereinafter, "video media" includes both video and audio) between terminals, or between a server and a terminal, via the Internet have become widespread.
- The Internet is a network whose communication quality is not always guaranteed. Therefore, when communicating with audio or video media, a narrow line bandwidth between the viewer terminal and the network forces the bit rate down, and line congestion causes packet loss, packet transfer delay, and packet retransmission, so the quality the viewer perceives for the audio and video media deteriorates.
- The original video must be encoded, since video cannot be distributed over the network at an unlimited bit rate. Encoding processes the video signal within a frame in block units, which introduces block distortion, and discards high-frequency components of the video signal, which reduces the fineness of the entire picture.
- When the delivery bit rate cannot be secured, lowering the resolution reduces the sense of detail, and lowering the frame rate breaks the continuity of the video, producing discontinuous playback.
- When the encoded video data is transmitted to the viewer terminal as packets over the network, packet loss or discard degrades the affected frames; when throughput drops, packets may not arrive by their playback deadline, the data buffered at the viewer terminal runs out, and video playback stops.
- The original audio is likewise encoded. Encoding discards high-frequency components of the sound, so its intelligibility is lost.
- When the encoded audio data is transmitted to the viewer terminal as packets over the network, packet loss or discard distorts the audio; when throughput drops, packets may not arrive by their playback deadline, the data buffered at the viewer terminal runs out, and audio playback stops.
- In order for the service provider to confirm that the service is being delivered to viewers at high quality, it is important to measure, during service provision, the audiovisual quality that the viewer experiences, and to be able to monitor the quality of the audiovisual content actually provided.
- Methods for evaluating audiovisual quality include a subjective quality evaluation method (for example, see Non-Patent Document 1) and an objective quality evaluation method (for example, see Non-Patent Document 2).
- In the subjective quality evaluation method, a number of viewers actually watch a video and rate the quality they experienced on a quality scale (for example, excellent, good, fair, poor, bad) or a degradation scale (degradation is imperceptible; perceptible but not annoying; slightly annoying; annoying; very annoying).
- The quality ratings for each video (for example, a video with a packet loss rate of 0% and a bit rate of 2 Mbps) are averaged over viewers, and quality is quantified as a MOS (Mean Opinion Score) or DMOS (Degradation Mean Opinion Score) value.
- The objective quality evaluation method, in contrast, outputs an audiovisual quality evaluation value from feature quantities that affect video quality and audio quality (for example, bit rate and packet loss information).
- One conventional objective quality evaluation method takes as input the transmitted packets and setting values obtained from the service provider or the like, and accounts for how far the degradation caused by the loss of video frames due to packet loss propagates.
- As described above, such conventional objective quality evaluation methods estimate an audiovisual quality evaluation value over a short time from packets.
- However, Non-Patent Document 2 assumes audiovisual content of about 10 seconds and targets video communication services distributed at a constant bit rate. It is therefore difficult to apply to quality estimation for services in which the audiovisual quality varies over time (for example, HLS (HTTP Live Streaming) and MPEG-DASH). Specifically, because Non-Patent Document 2 estimates audiovisual quality over a short time, it does not assume quality that fluctuates greatly over time as shown in FIG., and it is difficult to estimate the audiovisual quality of such content (Problem 1).
- Also, Non-Patent Document 2 aims to estimate audiovisual quality over a short time, so it is difficult to apply to estimating the audiovisual quality, as felt when the viewer finally finishes viewing, of long content (for example, videos of several minutes, 30-minute animations, or two-hour movies). Specifically, when a viewer watches content for a long time, the impression of the earlier part of the content tends to fade, and conversely the impression of the final part tends to remain (forgetting effect / recency effect). Non-Patent Document 2, however, does not take into account that the temporal weight on quality increases with time (Problem 2).
- Further, Non-Patent Document 2 does not consider that periods of low audiovisual quality have a stronger influence on the final audiovisual quality than periods of high audiovisual quality (Problem 3).
- In addition, even when audiovisual quality is estimated taking into account the coding degradation of the audiovisual content and the number, duration, and interval of playback stops, the relationship between the content length and the playback stop time cannot be considered. For example, a 10-second playback stop affects a 10-second audiovisual content and a 1-hour audiovisual content very differently: the former becomes very low in audiovisual quality, because the stop is as long as the content itself, while the latter remains high in audiovisual quality, because the stop is only 10 seconds within a long content. This effect has not been taken into account, and likewise, for the number of playback stops and the playback stop interval, the content length cannot be taken into consideration (Problem 4).
- The present invention has been made in view of the above points, and an object thereof is to enable quality evaluation even when the audiovisual quality changes over time.
- To this end, the audiovisual quality estimation device includes: an audio quality estimation unit that estimates the audio quality per unit time from the start of playback of the content, based on the parameters that affect audio quality among the parameters related to the audiovisual content; a video quality estimation unit that estimates the video quality per unit time, based on the parameters that affect video quality among the parameters related to the content; a unit time quality estimation unit that estimates the audiovisual quality per unit time by integrating the audio quality and the video quality for each unit time; a coding quality estimation unit that integrates the audiovisual quality per unit time into a single value and estimates the audiovisual coding quality with respect to coding degradation, taking temporal quality fluctuations into account; and an audiovisual quality estimation unit that, based on the audiovisual coding quality, estimates the audiovisual quality felt by the viewer after the content ends.
- Quality evaluation can thus be performed even when the audiovisual quality changes over time.
- FIG. 2 is a diagram illustrating a hardware configuration example of the audiovisual quality estimation apparatus according to the embodiment of the present invention.
- the audio visual quality estimation apparatus 10 in FIG. 2 includes a drive device 100, an auxiliary storage device 102, a memory device 103, a CPU 104, an interface device 105, and the like that are mutually connected by a bus B.
- a program that realizes processing in the audiovisual quality estimation apparatus 10 is provided by a recording medium 101 such as a flexible disk or a CD-ROM.
- the program is installed from the recording medium 101 to the auxiliary storage device 102 via the drive device 100.
- the program need not be installed from the recording medium 101 and may be downloaded from another computer via a network.
- the program may be installed as a part of another program.
- the auxiliary storage device 102 stores the installed program and also stores necessary files and data.
- the memory device 103 reads the program from the auxiliary storage device 102 and stores it when there is an instruction to start the program.
- the CPU 104 executes functions related to the audiovisual quality estimation device 10 according to a program stored in the memory device 103.
- the interface device 105 is used as an interface for connecting to a network.
- FIG. 3 is a diagram illustrating a functional configuration example of the audiovisual quality estimation apparatus according to the embodiment of the present invention.
- The audiovisual quality estimation device 10 is configured to estimate the audiovisual quality finally felt by the viewer for audiovisual content (hereinafter simply referred to as "content"). It includes a sound quality estimation unit 11, a video quality estimation unit 12, a unit time quality estimation unit 13, a coding quality estimation unit 14, an audiovisual quality estimation unit 15, and the like. Each of these units is realized by processing that one or more programs installed in the audiovisual quality estimation device 10 cause the CPU 104 to execute. That is, these units are realized by cooperation between the hardware resources of the audiovisual quality estimation device 10 and the programs (software) installed in it.
- The sound quality estimation unit 11 calculates an estimate of the sound quality per unit time (hereinafter simply referred to as "acoustic quality") for the content, based on the acoustic parameters (for example, audio bit rate and sampling rate) that affect the sound quality of the content among the parameters related to the content.
- the unit time is a relatively short time with respect to the content length (content time length) such as 1 second, 5 seconds, or 10 seconds.
- The video quality estimation unit 12 calculates an estimate of the video quality per unit time (hereinafter simply referred to as "video quality") for the content, based on the video parameters (for example, video bit rate, resolution, and frame rate) that affect the video quality of the content among the parameters related to the content.
- The unit time quality estimation unit 13 calculates an estimate of the audiovisual quality per unit time for the content, based on the acoustic quality per unit time output from the sound quality estimation unit 11 and the video quality per unit time output from the video quality estimation unit 12.
- The coding quality estimation unit 14 calculates an estimate of the audiovisual coding quality of the content (hereinafter simply referred to as "audiovisual coding quality"), which reflects coding degradation, taking temporal quality fluctuations into account.
- The audiovisual quality estimation unit 15 calculates the average value of the audiovisual quality that the viewer finally feels (hereinafter simply referred to as "audiovisual quality"), based on the audiovisual coding quality output from the coding quality estimation unit 14 and buffering parameters related to playback stops of the audiovisual content (for example, the total playback stop time, the number of playback stops, and the average playback stop interval).
- A playback stop here is not an intentional stop by the viewer, but one that occurs due to degraded delivery of the content.
- the audio visual quality estimation unit 15 may further calculate the audio visual quality based on the time length of the audio visual content (however, the pure content length not including the stop time).
- The input parameters shown in FIG. 3, such as the audio parameters, video parameters, and buffering parameters, may be extracted automatically, for example from the packets used to transfer the content over the network or from the viewer terminal (the terminal used for viewing the content), or they may be extracted from information other than packets.
- FIG. 4 is a diagram showing an example of a functional configuration in which the input parameters are extracted from content packets or the like. In FIG. 4, the same parts as in FIG. 3 are given the same reference numerals, and their description is omitted.
- the parameter extraction unit 20 may be realized by the audiovisual quality estimation apparatus 10 or may be realized by an apparatus (computer) other than the audiovisual quality estimation apparatus 10. In any case, the parameter extraction unit 20 is realized by a process in which a program installed in a computer (the audiovisual quality estimation apparatus 10 or another apparatus) is executed by a CPU of an installation destination apparatus.
- The parameter extraction unit 20 extracts the media parameters (acoustic parameters and video parameters) and the buffering parameters using information that can be extracted from any of: information provided by the service provider offering the video communication service, the packets used to transfer the content, and the viewer terminal.
- the parameter extraction unit 20 includes a media parameter extraction unit 21 and a buffering parameter extraction unit 22.
- the media parameter extraction unit 21 extracts an audio bit rate as an audio parameter, and extracts a video bit rate, a resolution, and a frame rate as video parameters.
- The media parameters may be extracted from the segment format or the MPD (Media Presentation Description) received by the viewer terminal, or from the bit stream in which the encoding information is described.
- FIG. 5 is a diagram for explaining a method of extracting media parameters for each unit time from MPD.
- (1) in FIG. 5 shows that the length of a content chunk (Chunk) is 5 seconds, and that the audio bit rate abr, video bit rate vbr, resolution rs, frame rate fr, and so on can be extracted from the MPD attached to each chunk.
- the media parameter of the first chunk (Chunk1) can be assigned to each second up to the 5th second.
- the media parameters of the second chunk (Chunk 2) can be assigned to each second from the 6th to the 10th second.
- Thereafter, in the same way, each second can be assigned the media parameters extracted for the chunk to which that second belongs.
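As an illustration of the chunk-to-second assignment described above, the following sketch expands chunk-level media parameters into per-second values, assuming 5-second chunks as in FIG. 5. The field names and values are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: expand chunk-level media parameters (as extracted from
# an MPD) into per-second values, assuming 5-second chunks as in FIG. 5.
def per_second_params(chunks, chunk_len=5):
    """chunks: list of dicts holding 'abr', 'vbr', 'rs', 'fr' per chunk."""
    per_sec = []
    for chunk in chunks:
        # Every second covered by a chunk inherits that chunk's parameters.
        per_sec.extend([chunk] * chunk_len)
    return per_sec

chunks = [
    {"abr": 128, "vbr": 2000, "rs": 1920 * 1080, "fr": 30},  # Chunk 1
    {"abr": 128, "vbr": 1000, "rs": 1280 * 720, "fr": 30},   # Chunk 2
]
params = per_second_params(chunks)
# Seconds 1-5 carry Chunk 1's parameters; seconds 6-10 carry Chunk 2's.
```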
- Media parameters such as the audio bit rate, video bit rate, resolution, and frame rate can be considered as parameters that affect audio quality and video quality.
- The service provider sets these values when encoding the content; therefore, the audiovisual quality estimation device 10 may use these set values directly.
- the buffering parameter extraction unit 22 extracts the total playback stop time, the number of playback stops, and the average value of the playback stop time intervals as buffering parameters.
- FIG. 6 is a diagram for explaining buffering parameters.
- FIG. 6 shows a rectangle indicating the time required to play content A. According to the lower rectangle, a playback stop (b1) of 5 seconds occurred 10 seconds after the start of playback (while the 10th second of content A was playing); a playback stop (b2) of 10 seconds occurred 25 seconds after the start of playback; and a further playback stop (b3) occurred 65 seconds after the start of playback (while the 50th second of content A was playing).
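A minimal sketch of deriving the three buffering parameters used later (number of stops, total stall time, average stop interval) from the stall events of FIG. 6. The representation as (content position, stall length) pairs is an assumption for illustration; b3's length is not stated in the text, so 0 is used purely as a placeholder.

```python
# Hypothetical sketch: compute buffering parameters from stall events,
# each given as (content position when the stall began, stall length).
def buffering_params(stalls):
    count = len(stalls)
    total = sum(length for _, length in stalls)
    positions = [pos for pos, _ in stalls]
    if count >= 2:
        # Average interval between the start positions of consecutive stalls.
        gaps = [b - a for a, b in zip(positions, positions[1:])]
        avg_interval = sum(gaps) / len(gaps)
    else:
        avg_interval = 0.0
    return count, total, avg_interval

# b1: 5 s stall at the 10th second of content A; b2: 10 s stall at the 20th
# second (25 s elapsed); b3 at the 50th second, length unknown (0 = placeholder).
count, total, avg = buffering_params([(10, 5), (20, 10), (50, 0)])
```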
- The buffering parameters may also be calculated by detecting the times at which the player actually stops the content. Since the player tracks the playback position while playing, the playback stop start time and the stop duration can be obtained by, for example, comparing PTS (Presentation Time Stamp) information with the current time.
- FIG. 7 is a flowchart for explaining an example of a processing procedure executed by the audiovisual quality estimation apparatus.
- When information related to the content subject to quality evaluation (hereinafter referred to as "target content"), such as packets (for example, all packets used to transfer the target content), is input to the parameter extraction unit 20, the media parameter extraction unit 21 calculates the audio parameter that affects audio quality (audio bit rate) and the video parameters that affect video quality (video bit rate, resolution, frame rate), and the buffering parameter extraction unit 22 calculates the buffering parameters related to buffering (the number of playback stops, the total playback stop time, and the average playback stop interval) (S101).
- the audio parameters are output to the audio quality estimation unit 11
- the video parameters are output to the video quality estimation unit 12
- the buffering parameters are output to the audio visual quality estimation unit 15.
- the acoustic quality estimation unit 11 calculates the acoustic quality per unit time for the target content based on the input acoustic parameters, and outputs the calculated acoustic quality to the unit time quality estimation unit 13 ( S102).
- For example, the acoustic quality estimation unit 11 calculates the acoustic quality AQ(t) per unit time from the acoustic bit rate abr(t) per unit time of the target content. Specifically, it is calculated using formula (1), which reflects the characteristic that the acoustic quality AQ(t) decreases as the acoustic bit rate abr(t) decreases.
- the acoustic quality estimation unit 11 may calculate the acoustic quality AQ (t) using a mathematical formula different from the mathematical formula (1).
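The text describes formula (1) only by its monotonicity (AQ(t) falls as abr(t) falls) and does not reproduce the formula itself. The following is a hypothetical stand-in with that property; the saturating exponential form and the coefficients a1–a3 are illustrative assumptions, not the patent's actual formula.

```python
import math

# Illustrative stand-in for formula (1): acoustic quality AQ(t) on a 1-5
# MOS-like scale, decreasing as the audio bit rate abr(t) decreases.
# Coefficients a1, a2, a3 are hypothetical, not the patent's constants.
def acoustic_quality(abr, a1=1.0, a2=4.0, a3=0.01):
    # Low bit rate -> quality near a1; high bit rate -> approaching a1 + a2.
    return a1 + a2 * (1.0 - math.exp(-a3 * abr))
```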
- the video quality estimation unit 12 calculates the video quality per unit time for the target content based on the input video parameters, and outputs the calculated video quality to the unit time quality estimation unit 13 ( S103).
- For example, the video quality estimation unit 12 calculates the video quality VQ(t) per unit time from the video bit rate vbr(t), resolution rs(t), and frame rate fr(t) per unit time of the target content.
- Specifically, a theoretical maximum video quality X(t), determined for each combination of resolution and frame rate, is considered. X(t) decreases as the resolution rs(t) or the frame rate fr(t) decreases, and the video quality VQ(t) falls below X(t) as the video bit rate vbr(t) decreases. VQ(t) is calculated using formulas (2) and (3), which reflect these characteristics.
- Here, vbr(t) is the video bit rate t seconds after the start of content playback, rs(t) is the resolution (the number of lines and pixels in the vertical and horizontal directions) t seconds after the start of playback, and fr(t) is the frame rate t seconds after the start of playback; these values are calculated by the media parameter extraction unit 21. The coefficients v1, v2, ..., v7 are preset constants.
- the video quality estimation unit 12 may calculate the video quality VQ (t) using a mathematical formula different from the mathematical formulas (2) and (3).
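Formulas (2) and (3) are likewise described only by their monotonic behavior. The sketch below is an illustrative stand-in: a ceiling X set by resolution and frame rate, approached as the bit rate grows. The functional forms and the coefficients v1–v5 are assumptions, not the patent's actual formulas.

```python
import math

# Illustrative stand-in for formulas (2)-(3): X is a quality ceiling that
# falls with resolution rs (pixels) or frame rate fr; VQ approaches X as the
# video bit rate vbr (kbps) grows. Coefficients v1..v5 are hypothetical.
def video_quality(vbr, rs, fr,
                  v1=1.0, v2=4.0, v3=2.0e6, v4=30.0, v5=0.002):
    # Ceiling X(t): decreases when resolution or frame rate decreases.
    x = v1 + v2 * (rs / (rs + v3)) * min(fr / v4, 1.0)
    # VQ(t): rises from 1 toward the ceiling as bit rate grows.
    return 1.0 + (x - 1.0) * (1.0 - math.exp(-v5 * vbr))

hd = video_quality(vbr=4000, rs=1920 * 1080, fr=30)
sd = video_quality(vbr=1000, rs=640 * 480, fr=15)
```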
- the unit time quality estimation unit 13 integrates the input audio quality AQ (t) and video quality VQ (t) for each unit time, and calculates the audio visual quality for each unit time.
- the audio visual quality for each unit time is output to the encoding quality estimation unit 14 (S104).
- For example, the unit time quality estimation unit 13 calculates the audiovisual quality TAVQ(t) for each unit time using formula (4), which weights the influences of the sound quality AQ(t) and the video quality VQ(t) in each unit time.
- Here, av1, av2, av3, and av4 are preset constants.
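Formula (4) itself is not reproduced in the text. A common hypothetical form consistent with the description (a weighted combination of AQ(t) and VQ(t) with constants av1–av4, including an interaction term) might look like the following; the constants are illustrative, not the patent's.

```python
# Illustrative stand-in for formula (4): per-unit-time audiovisual quality
# TAVQ(t) as a weighted combination of audio quality aq and video quality vq.
# Constants av1..av4 are hypothetical placeholders.
def unit_time_quality(aq, vq, av1=-0.5, av2=0.2, av3=0.5, av4=0.1):
    # Affine combination plus an audio-video interaction term.
    return av1 + av2 * aq + av3 * vq + av4 * aq * vq
```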
- Next, the coding quality estimation unit 14 integrates the input audiovisual quality TAVQ(t) for each unit time, calculates the audiovisual coding quality that considers only coding degradation, and outputs the calculated audiovisual coding quality to the audiovisual quality estimation unit 15 (S105).
- the encoding quality estimation unit 14 calculates the audiovisual encoding quality CAVQ using the following formula (5).
- duration is the time length (in seconds) of the audiovisual content (the pure content length, excluding playback stop time), and may be set in advance, for example.
- The audiovisual encoding quality CAVQ is derived as a weighted average of the audiovisual quality TAVQ (t) for each unit time from the start to the end of the content, in which a larger weight is assigned to unit times relatively close to the end of the content, and a larger weight is assigned where TAVQ (t) is small (that is, where the quality is low).
- In formula (5), w1 (u) is expressed by an exponential function, but w1 (u) may be formulated by any function, such as a linear or quadratic function, whose weight increases for the audiovisual quality TAVQ of unit times relatively close to the end of the content. Therefore, w1 (u) is not limited to an exponential function.
- Similarly, w2 (TAVQ (t)) is expressed by a linear function, but w2 (TAVQ (t)) may be formulated by any function, such as an exponential function, whose weight increases when the quality is low. Therefore, w2 (TAVQ (t)) is not limited to a linear function.
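Since formula (5) itself is not reproduced in this excerpt, the following sketch only implements the described structure: a weighted mean of per-unit-time scores whose weight w1 grows exponentially toward the end of the content and whose weight w2 grows linearly as quality drops. The time constant tau and the linear slope are illustrative assumptions.

```python
import math

def coding_quality(tavq, tau=60.0, floor=5.0):
    """Hypothetical audiovisual encoding quality CAVQ.

    tavq: list of per-unit-time quality scores TAVQ(t), one per second.
    w1(u): exponential weight, largest at the last unit time.
    w2(q): linear weight, larger for lower quality q (on a 1..floor scale).
    The exact shapes in the patent's formula (5) are not given here.
    """
    n = len(tavq)
    num = den = 0.0
    for u, q in enumerate(tavq):
        w1 = math.exp((u - (n - 1)) / tau)  # grows toward the content end
        w2 = floor - q + 1.0                # grows as quality decreases
        w = w1 * w2
        num += w * q
        den += w
    return num / den
```

With these weights, a session that degrades near the end scores lower than one with the same samples in reverse order, and any low-quality stretch pulls the average down more than the plain mean would.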
- The audiovisual quality estimation unit 15 calculates the audiovisual quality based on the input buffering parameters and the audiovisual encoding quality CAVQ (S106).
- Specifically, the audiovisual quality estimation unit 15 calculates the audiovisual quality AVQ finally experienced by the viewer using the following formula (6), based on the audiovisual encoding quality CAVQ, the buffering parameters (the total length of playback stop time, the number of playback stops, and the average interval between playback stops), and the time length of the audiovisual content (the pure content length, excluding stop time).
- duration is the time length of the audiovisual content (the pure content length, excluding stop time)
- numofBuff is the number of playback stops
- totalBuffLen is the total length of playback stop time
- avgBuffInterval is the average interval between playback stops
- The parameters related to playback stops are normalized by dividing them by the time length of the content, and an exponential function is applied to the buffering parameters to formulate their degree of influence.
- In this embodiment, the number of playback stops (numofBuff), the total length of playback stop time (totalBuffLen), and the average interval between playback stops (avgBuffInterval) are all used in the formulation, but the formulation may use only one or two of them.
- Furthermore, although this embodiment uses numofBuff, totalBuffLen, and avgBuffInterval as the buffering parameters, statistics such as the average playback stop time (avgBuffLen, obtained by dividing totalBuffLen by numofBuff), the variance of the playback stop time (varBuffLen), the maximum/minimum playback stop time (maxBuffLen / minBuffLen), and the maximum/minimum/variance of the playback stop interval (maxBuffInterval / minBuffInterval / varBuffInterval) may instead be calculated and used to compute the audiovisual quality AVQ finally experienced by the viewer.
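Formula (6) is not reproduced in this excerpt; the sketch below only follows the described structure: the stop-related parameters are divided by the content duration, exponential functions shape their influence, and larger normalized values yield a lower final quality. The constants s1 to s3 and the scaling factors are illustrative placeholders, not the patent's preset values.

```python
import math

def audiovisual_quality(cavq, duration, num_buff, total_buff_len,
                        avg_buff_interval, s1=1.5, s2=2.5, s3=0.8):
    """Hypothetical final audiovisual quality AVQ.

    cavq: audiovisual encoding quality on a 1..5 scale.
    duration: pure content length in seconds (excluding stop time).
    The stop parameters are normalized by duration; exponentials
    shape their influence, as the description states. All constants
    are assumptions for this sketch.
    """
    if num_buff == 0:
        return cavq  # no playback stops: encoding quality carries over
    stop_rate = num_buff / duration
    stall_ratio = total_buff_len / duration
    interval_ratio = avg_buff_interval / duration
    degradation = (
        s1 * (1.0 - math.exp(-stop_rate * 30.0))
        + s2 * (1.0 - math.exp(-stall_ratio * 5.0))
        + s3 * math.exp(-interval_ratio * 3.0)  # closely spaced stops hurt more
    )
    return max(1.0, cavq - degradation)
```

Under these assumptions, more frequent, longer, or more closely spaced stalls relative to the content length all push the final score further below CAVQ.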
- As described above, according to the present embodiment, quality can be evaluated, or the accuracy of the evaluation can be improved, even when the audiovisual quality changes over time, based on the media parameters and buffering parameters obtained from information such as packets.
- Each coefficient (a1, a2, a3, v1, ..., v7, av1, ..., av4, t1, ..., t5, s1, s2, s3) can be derived, for example, by conducting a subjective quality evaluation experiment and fitting the obtained quality evaluation values with an optimization method such as the least squares method.
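As a minimal illustration of that fitting step, the closed-form least squares solution for a two-coefficient linear model is shown below; fitting the full models above (v1 to v7, av1 to av4, and so on) would use a general-purpose optimizer instead, and the function name and data here are hypothetical.

```python
def fit_two_coefficients(x, y):
    """Ordinary least squares for q = c0 + c1 * x.

    x: list of objective parameter values (e.g. bit rates).
    y: list of subjective quality scores (e.g. MOS) for those values.
    Returns (c0, c1) minimizing the squared error, via the standard
    closed-form solution for simple linear regression.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    c1 = sxy / sxx
    c0 = my - c1 * mx
    return c0, c1
```

On noise-free data lying exactly on a line, this recovers the generating coefficients; with real subjective scores it returns the best linear fit in the squared-error sense.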
- In the present embodiment, an audiovisual quality value is estimated for each unit time (for example, a short period such as 1 second, 5 seconds, or 10 seconds) from the per-unit-time audio quality and video quality, and the per-unit-time audiovisual quality values are weighted and integrated to estimate the audiovisual quality over a long period (for example, several minutes to several hours).
- In this integration, the quality near the end of playback is weighted more heavily than the quality at the start of content playback, and weighting is performed so that low quality strongly influences the final quality.
- In addition, the audiovisual quality can be estimated in consideration of the influence of playback stops and playback stop times relative to the time length of the audiovisual content.
- Therefore, by monitoring the audiovisual quality value of the video communication service actually viewed by viewers (that is, the audiovisual quality AVQ output from the audiovisual quality estimation apparatus 10), it is possible to easily determine whether the service being provided maintains a certain level of quality for viewers, and to grasp and manage the actual quality of the service being provided in real time.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Quality & Reliability (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Security & Cryptography (AREA)
- Databases & Information Systems (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
11 audio quality estimation unit
12 video quality estimation unit
13 unit time quality estimation unit
14 encoding quality estimation unit
15 audiovisual quality estimation unit
20 parameter extraction unit
21 media parameter extraction unit
22 buffering parameter extraction unit
100 drive device
101 recording medium
102 auxiliary storage device
103 memory device
104 CPU
105 interface device
B bus
Claims (9)
- An audiovisual quality estimation apparatus comprising:
an audio quality estimation unit that estimates, based on parameters affecting audio quality among parameters related to audiovisual content, the audio quality for each unit time from the start of playback of the content;
a video quality estimation unit that estimates, based on parameters affecting video quality among the parameters related to the content, the video quality for each unit time;
a unit time quality estimation unit that integrates the audio quality and the video quality for each unit time to estimate the audiovisual quality for each unit time;
an encoding quality estimation unit that integrates the audiovisual quality for each unit time into a single value to estimate the audiovisual encoding quality with respect to encoding degradation, taking temporal quality fluctuation into account; and
an audiovisual quality estimation unit that estimates, based on the audiovisual encoding quality, the audiovisual quality experienced by a viewer after the end of the content.
- The audiovisual quality estimation apparatus according to claim 1, wherein the encoding quality estimation unit estimates the audiovisual encoding quality by calculating a weighted average of the audiovisual quality for each unit time, assigning a larger weight to the audiovisual quality of unit times relatively closer to the end of the content.
- The audiovisual quality estimation apparatus according to claim 1 or 2, wherein the encoding quality estimation unit estimates the audiovisual encoding quality by calculating a weighted average of the audiovisual quality for each unit time, assigning a larger weight to lower audiovisual quality.
- An audiovisual quality estimation apparatus comprising:
an audio quality estimation unit that estimates audio quality based on parameters affecting audio quality among parameters related to audiovisual content;
a video quality estimation unit that estimates video quality based on parameters affecting video quality among the parameters related to the content; and
an audiovisual quality estimation unit that estimates the audiovisual quality experienced by a viewer after the end of the content, based on the audiovisual quality obtained by integrating the audio quality and the video quality and on parameters related to playback stops of the content,
wherein the audiovisual quality estimation unit estimates the audiovisual quality such that the audiovisual quality becomes lower as the parameters related to playback stops become relatively larger with respect to the time length of the content.
- An audiovisual quality estimation method in which a computer executes:
an audio quality estimation procedure of estimating, based on parameters affecting audio quality among parameters related to audiovisual content, the audio quality for each unit time from the start of playback of the content;
a video quality estimation procedure of estimating, based on parameters affecting video quality among the parameters related to the content, the video quality for each unit time;
a unit time quality estimation procedure of integrating the audio quality and the video quality for each unit time to estimate the audiovisual quality for each unit time;
an encoding quality estimation procedure of integrating the audiovisual quality for each unit time into a single value to estimate the audiovisual encoding quality with respect to encoding degradation, taking temporal quality fluctuation into account; and
an audiovisual quality estimation procedure of estimating, based on the audiovisual encoding quality, the audiovisual quality experienced by a viewer after the end of the content.
- The audiovisual quality estimation method according to claim 5, wherein the encoding quality estimation procedure estimates the audiovisual encoding quality by calculating a weighted average of the audiovisual quality for each unit time, assigning a larger weight to the audiovisual quality of unit times relatively closer to the end of the content.
- The audiovisual quality estimation method according to claim 5 or 6, wherein the encoding quality estimation procedure estimates the audiovisual encoding quality by calculating a weighted average of the audiovisual quality for each unit time, assigning a larger weight to lower audiovisual quality.
- An audiovisual quality estimation method in which a computer executes:
an audio quality estimation procedure of estimating audio quality based on parameters affecting audio quality among parameters related to audiovisual content;
a video quality estimation procedure of estimating video quality based on parameters affecting video quality among the parameters related to the content; and
an audiovisual quality estimation procedure of estimating the audiovisual quality experienced by a viewer after the end of the content, based on the audiovisual quality obtained by integrating the audio quality and the video quality and on parameters related to playback stops of the content,
wherein the audiovisual quality estimation procedure estimates the audiovisual quality such that the audiovisual quality becomes lower as the parameters related to playback stops become relatively larger with respect to the time length of the content.
- A program that causes a computer to function as each unit according to any one of claims 1 to 4.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/776,425 US10869072B2 (en) | 2015-12-16 | 2016-11-30 | Audio-visual quality estimation device, method for estimating audio-visual quality, and program |
KR1020187011969A KR102000590B1 (ko) | 2015-12-16 | 2016-11-30 | 오디오 비주얼 품질 추정 장치, 오디오 비주얼 품질 추정 방법, 및 프로그램 |
JP2017555964A JP6662905B2 (ja) | 2015-12-16 | 2016-11-30 | オーディオビジュアル品質推定装置、オーディオビジュアル品質推定方法、及びプログラム |
RU2018118746A RU2693027C1 (ru) | 2015-12-16 | 2016-11-30 | Устройство оценки качества аудиовизуального сигнала и способ оценки качества аудиовизуального сигнала |
EP16875400.0A EP3393125B1 (en) | 2015-12-16 | 2016-11-30 | Audio/visual quality estimation device, method for estimating audio/visual quality, and program |
CN201680073259.6A CN108476317B (zh) | 2015-12-16 | 2016-11-30 | 音频视频质量推测装置、音频视频质量推测方法以及程序 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-244983 | 2015-12-16 | ||
JP2015244983 | 2015-12-16 | ||
JP2016160182 | 2016-08-17 | ||
JP2016-160182 | 2016-08-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017104416A1 true WO2017104416A1 (ja) | 2017-06-22 |
Family
ID=59056339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/085553 WO2017104416A1 (ja) | 2015-12-16 | 2016-11-30 | オーディオビジュアル品質推定装置、オーディオビジュアル品質推定方法、及びプログラム |
Country Status (7)
Country | Link |
---|---|
US (1) | US10869072B2 (ja) |
EP (1) | EP3393125B1 (ja) |
JP (1) | JP6662905B2 (ja) |
KR (1) | KR102000590B1 (ja) |
CN (1) | CN108476317B (ja) |
RU (1) | RU2693027C1 (ja) |
WO (1) | WO2017104416A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019526190A (ja) * | 2016-06-29 | 2019-09-12 | テレフオンアクチーボラゲット エルエム エリクソン(パブル) | 適応マルチメディアストリーミングの品質推定 |
WO2019216197A1 (ja) * | 2018-05-09 | 2019-11-14 | 日本電信電話株式会社 | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム |
US20220343485A1 (en) * | 2019-10-02 | 2022-10-27 | Nippon Telegraph And Telephone Corporation | Video quality estimation apparatus, video quality estimation method and program |
WO2023233631A1 (ja) * | 2022-06-02 | 2023-12-07 | 日本電信電話株式会社 | 映像品質推定装置、映像品質推定方法及びプログラム |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11277461B2 (en) * | 2019-12-18 | 2022-03-15 | The Nielsen Company (Us), Llc | Methods and apparatus to monitor streaming media |
US20230262277A1 (en) * | 2020-07-02 | 2023-08-17 | Nippon Telegraph And Telephone Corporation | Viewing completion rate estimation apparatus, viewing completion rate estimation method and program |
WO2022016406A1 (zh) * | 2020-07-22 | 2022-01-27 | 北京小米移动软件有限公司 | 信息传输方法、装置及通信设备 |
US11570228B2 (en) | 2020-10-15 | 2023-01-31 | Sandvine Corporation | System and method for managing video streaming quality of experience |
US11558668B2 (en) | 2021-06-03 | 2023-01-17 | Microsoft Technology Licensing, Llc | Measuring video quality of experience based on decoded frame rate |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004172753A (ja) * | 2002-11-18 | 2004-06-17 | Nippon Telegr & Teleph Corp <Ntt> | 映像・音声品質客観評価方法及び装置 |
JP2007194893A (ja) * | 2006-01-19 | 2007-08-02 | Nippon Telegr & Teleph Corp <Ntt> | 映像品質評価装置および方法 |
JP2015122638A (ja) * | 2013-12-24 | 2015-07-02 | 日本電信電話株式会社 | 品質推定装置、方法及びプログラム |
JP2015520548A (ja) * | 2012-04-23 | 2015-07-16 | 華為技術有限公司Huawei Technologies Co.,Ltd. | マルチメディア品質を評価する方法及び装置 |
JP2015154234A (ja) * | 2014-02-14 | 2015-08-24 | 日本電信電話株式会社 | ユーザ体感品質推定装置、ユーザ体感品質推定方法及びプログラム |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2617893C (en) * | 2005-09-06 | 2011-05-03 | Nippon Telegraph And Telephone Corporation | Video communication quality estimation device, method, and program |
RU2420022C2 (ru) | 2006-10-19 | 2011-05-27 | Телефонактиеболагет Лм Эрикссон (Пабл) | Способ определения качества видео |
EP2106154A1 (en) * | 2008-03-28 | 2009-09-30 | Deutsche Telekom AG | Audio-visual quality estimation |
US9191284B2 (en) * | 2010-10-28 | 2015-11-17 | Avvasi Inc. | Methods and apparatus for providing a media stream quality signal |
GB2533878B (en) | 2013-10-16 | 2020-11-11 | Intel Corp | Method, apparatus and system to select audio-video data for streaming |
-
2016
- 2016-11-30 RU RU2018118746A patent/RU2693027C1/ru active
- 2016-11-30 JP JP2017555964A patent/JP6662905B2/ja active Active
- 2016-11-30 CN CN201680073259.6A patent/CN108476317B/zh active Active
- 2016-11-30 WO PCT/JP2016/085553 patent/WO2017104416A1/ja active Application Filing
- 2016-11-30 KR KR1020187011969A patent/KR102000590B1/ko active IP Right Grant
- 2016-11-30 US US15/776,425 patent/US10869072B2/en active Active
- 2016-11-30 EP EP16875400.0A patent/EP3393125B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004172753A (ja) * | 2002-11-18 | 2004-06-17 | Nippon Telegr & Teleph Corp <Ntt> | 映像・音声品質客観評価方法及び装置 |
JP2007194893A (ja) * | 2006-01-19 | 2007-08-02 | Nippon Telegr & Teleph Corp <Ntt> | 映像品質評価装置および方法 |
JP2015520548A (ja) * | 2012-04-23 | 2015-07-16 | 華為技術有限公司Huawei Technologies Co.,Ltd. | マルチメディア品質を評価する方法及び装置 |
JP2015122638A (ja) * | 2013-12-24 | 2015-07-02 | 日本電信電話株式会社 | 品質推定装置、方法及びプログラム |
JP2015154234A (ja) * | 2014-02-14 | 2015-08-24 | 日本電信電話株式会社 | ユーザ体感品質推定装置、ユーザ体感品質推定方法及びプログラム |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019526190A (ja) * | 2016-06-29 | 2019-09-12 | テレフオンアクチーボラゲット エルエム エリクソン(パブル) | 適応マルチメディアストリーミングの品質推定 |
US11463742B2 (en) | 2016-06-29 | 2022-10-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Quality estimation of adaptive multimedia streaming |
WO2019216197A1 (ja) * | 2018-05-09 | 2019-11-14 | 日本電信電話株式会社 | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム |
JP2019197996A (ja) * | 2018-05-09 | 2019-11-14 | 日本電信電話株式会社 | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム |
JP7073894B2 (ja) | 2018-05-09 | 2022-05-24 | 日本電信電話株式会社 | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム |
US11425457B2 (en) | 2018-05-09 | 2022-08-23 | Nippon Telegraph And Telephone Corporation | Engagement estimation apparatus, engagement estimation method and program |
US20220343485A1 (en) * | 2019-10-02 | 2022-10-27 | Nippon Telegraph And Telephone Corporation | Video quality estimation apparatus, video quality estimation method and program |
WO2023233631A1 (ja) * | 2022-06-02 | 2023-12-07 | 日本電信電話株式会社 | 映像品質推定装置、映像品質推定方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
CN108476317A (zh) | 2018-08-31 |
KR20180059890A (ko) | 2018-06-05 |
RU2693027C1 (ru) | 2019-07-01 |
CN108476317B (zh) | 2021-07-09 |
EP3393125A1 (en) | 2018-10-24 |
JP6662905B2 (ja) | 2020-03-11 |
US20180332326A1 (en) | 2018-11-15 |
US10869072B2 (en) | 2020-12-15 |
KR102000590B1 (ko) | 2019-07-16 |
JPWO2017104416A1 (ja) | 2018-08-30 |
EP3393125A4 (en) | 2019-08-21 |
EP3393125B1 (en) | 2021-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017104416A1 (ja) | オーディオビジュアル品質推定装置、オーディオビジュアル品質推定方法、及びプログラム | |
KR101465927B1 (ko) | 비디오 데이터 품질 평가 방법 및 장치 | |
Yang et al. | Content-adaptive packet-layer model for quality assessment of networked video services | |
JP4490374B2 (ja) | 映像品質評価装置および方法 | |
JP4802209B2 (ja) | 映像品質推定方法、装置およびプログラム | |
JP4861371B2 (ja) | 映像品質推定装置、方法、およびプログラム | |
WO2021181724A1 (ja) | 数理モデル導出装置、数理モデル導出方法及びプログラム | |
JP6162596B2 (ja) | 品質推定装置、方法及びプログラム | |
JP4787303B2 (ja) | 映像品質推定装置、方法、およびプログラム | |
WO2020170869A1 (ja) | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム | |
US20230048428A1 (en) | A method for estimating bandwidth between a video server and a video client | |
JP5405915B2 (ja) | 映像品質推定装置、映像品質推定方法および映像品質推定装置の制御プログラム | |
JP2013046113A (ja) | 基本GoP長を用いた映像品質推定装置及び方法及びプログラム | |
JP2017204700A (ja) | 映像再生装置、映像再生方法および映像再生プログラム | |
JP7215209B2 (ja) | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム | |
Chen et al. | Impact of packet loss distribution on the perceived IPTV video quality | |
JP7255704B2 (ja) | エンゲージメント推定装置、エンゲージメント推定方法及びプログラム | |
JP6660357B2 (ja) | 品質推定装置、品質推定方法及びプログラム | |
WO2022003902A1 (ja) | 視聴完了率推定装置、視聴完了率推定方法及びプログラム | |
JP2009194609A (ja) | 映像品質推定装置、方法、およびプログラム | |
JP2019121847A (ja) | 品質推定装置、品質推定方法及びプログラム | |
Moltchanov | Problems arising in evaluating perceived quality of media applications in packet networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16875400 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017555964 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187011969 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15776425 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016875400 Country of ref document: EP |