EP2394246A1 - Procède de fusion de segments de programmes audiovisuels, dispositif, et produit programme d'ordinateur correspondant - Google Patents
- Publication number
- EP2394246A1 (application EP10707578A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- segment
- descriptors
- segments
- program
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
Definitions
- the present invention relates to the field of audiovisual content analysis.
- the present invention relates more particularly to a method for fusing previously segmented audiovisual contents.
- the archive collection of the National Audiovisual Institute (INA), responsible for archiving French broadcasts, grows by five hundred and forty thousand hours each year, and more than four million hours of programs are available in total.
- INA: National Audiovisual Institute
- a French viewer can currently choose between more than four hundred hours of content per day on digital terrestrial television channels alone.
- new needs and services have emerged, such as the archiving of these data, carried out in France by the INA, the monitoring of broadcasts, in particular for the Conseil Supérieur de l'Audiovisuel, advertising monitoring, or non-linear access to the desired content, that is to say without any constraint of broadcast time.
- All these services are based on an indexing of audiovisual streams, consisting of a segmentation of the streams to extract programs and inter-programs (advertising sequences in particular) broadcast continuously. These operations are extremely costly when performed manually. Automatic techniques are needed to exploit the large number of audiovisual streams available. These automatic segmentation techniques use an analysis of the contents of the audiovisual streams or use information on the programs provided by the television channels, which may take the form of electronic program guides. Many different methods have been proposed for segmenting audiovisual streams. The invention uses segmented audiovisual streams.
- an audiovisual stream represents audio and video content broadcast continuously by a television channel or broadcaster of this type;
- a program is a broadcast item within the audiovisual stream. It may consist of several parts separated by advertising breaks.
- a program can be a movie, an episode of a series, a game show, a news broadcast, a weather report, a clip, a magazine show or another category.
- an inter-program is an element broadcast between two programs or within an advertising break. It can be an advertisement, a trailer for an upcoming program, an advertising "jingle" (opening and closing sequence of commercial breaks), a channel or broadcaster logo, or a sponsorship message preceding the beginning or following the end of a program.
- Segmentation techniques have the particularity of segmenting a program into several segments. This poses a problem when one wishes to reconstitute the program in question for the needs of the aforementioned services.
- Segmentation techniques are generally based on the detection (step 101) of the inter-program areas 13 because the inter-programs are short sequences that share many common properties.
- the inter-programs are broadcast several times in the stream. These properties make inter-programs much easier to detect than long programs (A, B, and C), which are heterogeneous (series, films, shows, etc.) and do not generally share common properties.
- the portions of the stream (A, B, C) that separate the detected inter-program areas thus form segments that correspond to parts of programs, also referred to hereinafter as program segments.
- the audiovisual stream is then segmented (step 102) into three segments (A, B and C).
- the invention does not have these disadvantages of the prior art. Indeed, the invention relates to a method for merging segments of an audiovisual stream previously cut into a plurality of program segments to be merged. According to the invention, such a method comprises, for at least one first and at least one second segment of said plurality of segments, a step of calculating a set of descriptors and a step of obtaining at least one item of information representative of the membership of said at least one first and at least one second segment in the same audiovisual program, based on data representative of said previously calculated descriptors.
- the invention makes it possible to solve the problems that are not solved by the solutions of the prior art.
- the invention does not use the data provided by the electronic program guide to decide on the merger of two segments belonging to the audiovisual stream.
- the method of the invention calculates descriptors of the segments. From these descriptors, extracted from the two segments, the method of the invention obtains the information representative of membership.
- the method of the invention thus comprises a step of obtaining this information representative of membership.
- said at least one first and at least one second segment are consecutive segments.
- said set of descriptors comprises: a first subset of at least one descriptor specific to said at least one first segment; a second subset of at least one descriptor specific to said at least one second segment.
- the invention makes it possible to take into account the similarities of the segments.
- the method of the invention makes it possible to maximize the probabilities of fusion between two segments of the same program.
- the invention makes it possible to somehow determine particular characteristics of these segments. These particular features can then be used to determine a difference between segments.
- a subset contains a defined number of descriptors that correspond to a determined number of characteristic measures of a segment.
- said set of descriptors comprises a subset of descriptors calculated using data belonging both to said at least one first segment and to said at least one second segment, referred to as common descriptors.
- the invention makes it possible to take into account the similarities of the segments.
- the invention introduces specific descriptors, called common descriptors, which result from a calculation carried out on the data of the first and second segment.
- a common descriptor is, for example, the number of images or shots common to the two segments.
- said method comprises at least one step of calculating a distance between a descriptor of said first subset of specific descriptors and a corresponding descriptor of the same type of said second subset of specific descriptors, delivering a vector of at least one distance.
- the invention makes it possible to create a set of distances between the descriptors of the same types of the first and second segments. These distances constitute a vector of distances. The smaller the distance between two descriptors, the more the characteristics of the two segments relating to this descriptor will be similar.
- said descriptors are of different types, said types belonging to the group comprising: the ratio between the number of key images of a segment and the duration of that segment; a three-dimensional color histogram, in the RGB color space, of the average color over all the key images of a segment; a three-dimensional color histogram, in the RGB color space, of the intersection of the colors over all the key images of a segment; the ratio between the number of faces detected in a segment and the duration of that segment; the mean and standard deviation of the number of faces detected per key image of a segment; the maximum size of the faces detected over all the key images of a segment; the mean and standard deviation of the size of the faces detected per key image of a segment; the number of groups of similar key images in a segment; the number of groups of similar key images containing key images belonging both to said at least one first segment and to said at least one second segment; the mean and standard deviation of the number of similar images in the groups of similar images.
- said distances separating said descriptors belong to the group comprising: the absolute value of the difference; the Euclidean distance; the correlation distance based on the Pearson correlation coefficient; the Chi-Square distance; the intersection distance, which is the sum of the respective minimums between the respective values of two distributions; the Bhattacharyya distance.
- said method comprises, prior to the merger, a learning phase during which a classifier learns to differentiate different membership classes of audiovisual programs.
- said obtaining step comprises: a step of transmitting said distance vector and/or said common descriptors to a previously trained classifier; a supervised classification step of said at least one first and at least one second segment as a function of said distances of said distance vector and/or said common descriptors.
- the invention makes it possible to merge the segments in an automated and simple manner while ensuring that the segments are correctly merged.
- the classifier can be a binary classifier of the SVM type, providing a decision on the membership of said segments in the same audiovisual program.
- the invention also relates to a device for merging segments of an audiovisual stream previously cut into a plurality of program segments to be merged.
- such a device comprises, for at least one first and at least one second segment of said plurality of segments, means for calculating a set of descriptors and means for obtaining at least one item of information representative of the membership of said at least one first and at least one second segment in the same audiovisual program, based on data representative of said previously calculated descriptors.
- the invention also relates to a computer program product downloadable from a communication network and/or stored on a computer-readable medium and/or executable by a microprocessor, and comprising program code instructions for the execution of the merging method as described above. 4. LIST OF FIGURES
- FIG. 1 presents a block diagram of general techniques for segmenting an audiovisual stream
- Figure 2 generally illustrates the method of fusion of the invention
- FIG. 3 illustrates a mode of implementation of the fusion method of the invention for three consecutive segments
- FIG. 4 illustrates another mode of implementation of the fusion method according to the invention
- FIG. 5 illustrates another embodiment of the fusion process according to the invention
- FIG. 6 describes a fusion device according to the invention. 5.
- the invention proposes to merge the different segments forming a program using descriptors of these segments.
- these descriptors do not depend on data external to the stream or stream metadata, but on audiovisual data comprising the stream.
- the descriptors can therefore relate to both the video content of the stream and the audio content thereof.
- the invention does not exclude the use of metadata provided by the EPG or ETI when such data exist.
- the invention can, however, be fully combined with these techniques using EPG or ETI data in order to significantly improve the accuracy of the merging and to reduce the time required for it.
- the general principle of the invention thus relies on the calculation of descriptors for the segments that compose the stream, on the calculation of data associated with these descriptors, and on the provision of these data and descriptors to a particular component that provides a response as to the membership of two segments in the same program.
- the steps of the method of the invention are now presented. It is considered that the audiovisual stream has been segmented beforehand according to a suitable approach for detecting inter-program areas.
- the method of the invention uses a stream segmented into a plurality of program segments 20, consisting for example of segments A, B, and so on.
- the method of the invention then performs a merging of the segments by: calculating 201 a set of descriptors 21.
- descriptors 21 are calculated for at least two segments of the audiovisual stream, said first and second segments. As is explained later, the calculated descriptors are of different types; - estimating 203 the belonging of the first and second segments to the same program using the data from these descriptors 21. This estimation step 203 can be performed using automatic classification means, such as classifiers. Other appropriate means can also be used to obtain an estimate of this membership.
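- As a purely illustrative sketch (not part of the patent text), the merging loop over consecutive segments described above could be organised as follows in Python; `compute_descriptors` and `classify_same_program` are hypothetical helpers standing in for the calculation step 201 and the estimation step 203.

```python
def merge_segments(segments, compute_descriptors, classify_same_program):
    """Greedily merge consecutive program segments estimated to belong to the
    same audiovisual program. Segments are represented here as lists of key
    images, so '+' simply concatenates them (an assumed representation)."""
    merged = [segments[0]]
    for candidate in segments[1:]:
        descriptors = compute_descriptors(merged[-1], candidate)  # step 201
        if classify_same_program(descriptors):                     # step 203
            merged[-1] = merged[-1] + candidate    # fuse into one program
        else:
            merged.append(candidate)               # keep as a separate program
    return merged
```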
- the descriptors that are implemented in the context of the invention are of two kinds: specific descriptors and common descriptors.
- a specific descriptor is a value, or a data structure comprising several values, representing the result of a calculation carried out on a single segment: it can for example be the duration of the segment, the number of images of this segment, the sound volume of the segment, a number of shots, a spectral analysis of this segment, etc. This is segment-specific data.
- the specific descriptors are therefore of different types. According to the invention, a determined number of specific descriptors is calculated per segment, each specific descriptor being of a particular type.
- a common descriptor is a value, or a data structure comprising several values, representing the result of a calculation carried out on the two (or more) segments whose membership in the same program is to be determined. This is for example the number of identical images between the two segments, an estimate of the identity of a background sound, etc.
- the common descriptors are therefore also of different types. According to the invention, a determined number of common descriptors is calculated on the two (or more) segments whose membership in the same program is to be determined, each common descriptor being of a particular type. In at least one embodiment of the invention, the specific descriptors of each of the two segments whose membership in the same program is to be tested are then used to determine distances. These are distances between two descriptors belonging to two given segments, for example consecutive ones. These distances make it possible to establish the proximity of the two segments with respect to a given type of descriptor, such as for example a color distribution. These distances can be expressed in the form of integer values, real values or vectors comprising several dimensions.
- a certain number of distances are calculated.
- the number of distances calculated between two segments may be greater or less than the number of descriptors calculated for these two segments.
- Distances separating the descriptors include: the absolute value of the difference; the Euclidean distance; the correlation distance based on the Pearson correlation coefficient (used for example between two color histograms); the Chi-Square distance (used for example between two color histograms); the intersection distance, which is the sum of the respective minimums between the respective values of two distributions (used for example between two color histograms); the Bhattacharyya distance (used for example between two color histograms).
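- The following Python sketch shows one possible implementation of the listed distances, for a scalar descriptor and for histogram descriptors; the exact formulas and normalisations are assumptions made for illustration, not those of the patent.

```python
import numpy as np

def scalar_distance(x, y):
    """Absolute value of the difference, used for scalar descriptors."""
    return abs(x - y)

def hist_distances(h1, h2, eps=1e-10):
    """Distances between two non-negative histograms of identical shape."""
    h1 = np.asarray(h1, dtype=float).ravel()
    h2 = np.asarray(h2, dtype=float).ravel()
    p, q = h1 / (h1.sum() + eps), h2 / (h2.sum() + eps)
    return {
        "euclidean": float(np.linalg.norm(h1 - h2)),
        "correlation": float(1.0 - np.corrcoef(h1, h2)[0, 1]),   # Pearson-based
        "chi_square": float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))),
        "intersection": float(np.minimum(p, q).sum()),           # 1.0 means identical
        "bhattacharyya": float(np.sqrt(max(0.0, 1.0 - np.sqrt(p * q).sum()))),
    }
```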
- FIG. 3 shows the implementation of the method of the invention for three segments of an audiovisual stream: segments A, B and C are extracted from the audiovisual stream by a segmentation method. Descriptors (Ds ⁇ A, B ⁇ , Ds ⁇ B, C ⁇ ) are then calculated (steps 201, 202) for the segments: they can be descriptors specific to the segment (for example descriptors of A, B or C) or common descriptors (i.e., descriptors that use both A and B or A and C data).
- the descriptors (Ds ⁇ A, B ⁇ , Ds ⁇ B, C ⁇ ) are then provided to a classifier C1 which estimates (steps 203 and 204) the membership of the segments in the same program and decides on the separation (N) or the fusion (Y) of the two segments.
- the segments are consecutive and are compared in pairs, that is to say that the segment A is compared with the segment B (step 203) and the segment B with the segment C (step 204).
- it is also possible to provide the classifier C1 with descriptors of non-consecutive segments. For example, it can be quite relevant to provide the classifier C1 with data from the descriptors of A and C. If the classifier C1 concludes that A and C belong to the same program, then it is easy to conclude that B also belongs to the same program as A and C. This reduces the calculation time needed to determine the membership of the segments in the programs.
- the classifier C1 uses the data from the descriptors to estimate the membership of the two segments in the same program and to decide on the separation (N) or the merger (Y) of the two segments to which these data belong.
- N separation
- Y merger
- it is also possible to calculate (step 201) descriptors for these three or four segments (Ds {A, B, C}) and to provide them together to the classifier.
- the classifier then uses (step 203') the data from the descriptors to decide on the separation (N) or the merger (Y) of the segments to which these data belong.
- the two segments are not necessarily consecutive.
- the method of the invention is implemented in the same manner as above.
- Descriptors for segments A and C (Ds(A, C)) are calculated (step 201") and the classifier then uses (step 203") the data from these descriptors to decide on the separation (N) or the merger (Y) of the two segments to which these data belong. If, in the case of FIG. 5, classifier C1 decides to merge segments A and C, then it can be concluded that segments A, B and C belong to the same program.
- Such an approach makes it possible, in certain cases, to reduce the number of calculations required and therefore to increase the processing speed.
- the invention proposes a method for deciding whether two program segments, for example consecutive segments of an audiovisual stream, must or must not merge to form the same program.
- the method chooses to merge the segments by analyzing only the audiovisual content and the properties of the segments.
- an implementation of the method of the invention is presented by using several descriptors that make it possible to determine whether two consecutive segments of the same audiovisual stream belong to the same program.
- a binary classifier of the SVM ("Support Vector Machine") type is used. Any other type of classifier can however be used.
- the binary classifier has the advantage of being simple and of being adapted to decision-making in the context of the invention since it renders a binary type response.
- a classifier is a mathematical function that associates a membership class with input data. Training a classifier is a method of estimating this mathematical function from a sample of examples of class-membership associations. A classifier is said to be binary when it delivers a binary result (of the yes/no type).
- the binary classifier makes it possible, from the data derived from the descriptors, to determine whether the two segments whose descriptor data are analyzed belong to the same audiovisual program. This determination is possible because, in a prior phase, using a set of segments for which the merging decision was taken manually, the binary classifier was trained to determine, on the basis of the descriptors, whether two consecutive segments should or should not be merged to form the same program. In one embodiment of the invention, it is also possible to use several classifiers. This type of approach may be of interest when a wide variety of program types requires differentiated analysis by classifiers with different training.
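- As an illustration of such a supervised learning phase, the sketch below trains a binary SVM with scikit-learn on manually labelled segment pairs; the feature layout (one row per pair, concatenating the distance vector and the common descriptors) and the file names are assumptions made for this example.

```python
import numpy as np
from sklearn.svm import SVC   # binary SVM classifier, as suggested above

# Hypothetical training data: one row per pair of consecutive segments,
# label 1 if the pair was manually merged into the same program, else 0.
X_train = np.load("pair_features_train.npy")
y_train = np.load("pair_labels_train.npy")

classifier = SVC(kernel="rbf")        # learning phase, prior to merging
classifier.fit(X_train, y_train)

def same_program(pair_features, clf=classifier):
    """True when the classifier estimates that the two segments described by
    `pair_features` belong to the same audiovisual program."""
    return bool(clf.predict(np.asarray(pair_features).reshape(1, -1))[0])
```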
- the descriptors considered for each segment are selected from their ability to characterize an audiovisual stream segment.
- the following specific descriptors are used.
- keyframes are identified for each segment using a keyframe detection method.
- a first descriptor is used for each segment: it is the number of key images of a segment divided by the duration of the segment.
- the main colors of the video segments make it possible to roughly differentiate the video segments. For example, parts of a dark film will be differentiated from sporting events such as football matches, in which the green color of the lawn predominates.
- two color histograms are used to characterize the segments: a histogram of the average colors is calculated by accumulating all the colors of each key image of a segment and is then normalized by the duration of the segment. This is the second specific descriptor. A color intersection histogram is calculated from the colors common to all the key images of a segment. It is also normalized by the duration of the segment. This is the third specific descriptor.
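- A minimal sketch of these two histogram descriptors, assuming the key images are available as RGB arrays and using a coarse 8-bins-per-channel quantisation (the quantisation actually used is not specified by the patent):

```python
import numpy as np

def color_histograms(keyframes, duration_s, bins=8):
    """Average and intersection 3-D RGB histograms over the key images of a
    segment, both normalised by the segment duration. `keyframes` is a list
    of HxWx3 uint8 RGB images."""
    edges = [np.linspace(0, 256, bins + 1)] * 3
    hists = []
    for frame in keyframes:
        pixels = frame.reshape(-1, 3).astype(float)
        h, _ = np.histogramdd(pixels, bins=edges)    # 3-D colour histogram
        hists.append(h)
    average_hist = np.sum(hists, axis=0) / duration_s           # 2nd specific descriptor
    intersection_hist = np.minimum.reduce(hists) / duration_s   # 3rd specific descriptor
    return average_hist, intersection_hist
```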
- to compare these histograms, the histogram correlation distance, the "Chi-Square" distance and the histogram intersection distance are used.
- the size and number of faces in a segment also make it possible to distinguish short segments, such as a weather report containing only one person, from longer segments, such as a news broadcast involving many people.
- This detection is performed on key images of the segment.
- the result of this detection provides, for a keyframe of a segment, enclosing rectangles for each detected face.
- An enclosing rectangle is a part of an image. For a given image, the number, position and size of the enclosing rectangles present in this image indicate the number, position and size of the detected faces.
- the segments are then described by the following four descriptors (a sketch of their computation is given below): the total number of faces detected divided by the duration of the segment; the mean and standard deviation of the number of faces detected per key image of the segment; the maximum size of a face detected over all the key images of the segment, i.e. the largest face size in the key images of the segment; the mean and standard deviation of the maximum face size detected per key image of the segment.
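- The sketch below computes these four face descriptors from the enclosing rectangles returned by any face detector; using the rectangle area as the "face size" is an assumption made for the example.

```python
import numpy as np

def face_descriptors(face_boxes_per_keyframe, duration_s):
    """`face_boxes_per_keyframe`: one list of (x, y, w, h) enclosing
    rectangles per key image of the segment."""
    counts = np.array([len(boxes) for boxes in face_boxes_per_keyframe], dtype=float)
    areas = [w * h for boxes in face_boxes_per_keyframe for (_, _, w, h) in boxes]
    max_per_kf = [max((w * h for (_, _, w, h) in boxes), default=0)
                  for boxes in face_boxes_per_keyframe]
    return {
        "faces_per_second": counts.sum() / duration_s,                   # descriptor 1
        "faces_per_keyframe_mean_std": (counts.mean(), counts.std()),    # descriptor 2
        "max_face_size": max(areas) if areas else 0,                     # descriptor 3
        "max_size_per_keyframe_mean_std": (float(np.mean(max_per_kf)),   # descriptor 4
                                           float(np.std(max_per_kf))),
    }
```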
- an identification of common points between two segments is carried out. For example, the repetition of many nearly identical shots of one segment in another segment characterizes important common points between two segments. For example, the repetition of shots showing the presenter characterizes game shows. This embodiment of the invention uses the identification of these repetitions to provide additional data to the classifier.
- the segments are described by the following values relating to the specific and common descriptors: - the total number of groups calculated on a segment; the average number of keyframes per group on a segment; the total number of groups containing images of both a first and a second segment; the average number of keyframes per group containing images of a first segment and a second segment.
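- A minimal sketch of these repetition descriptors, grouping near-identical key images by a simple greedy clustering on their colour histograms; the near-identity criterion, the threshold, and the pooling of both segments' key images into one set of groups are assumptions made for the example (the patent computes the first two values per segment).

```python
import numpy as np

def repetition_descriptors(hists_a, hists_b, threshold=0.1):
    """`hists_a` / `hists_b`: one colour histogram per key image of segments
    A and B. Two key images are considered near-identical when the Euclidean
    distance between their normalised histograms is below `threshold`."""
    frames = []
    for seg, hs in (("A", hists_a), ("B", hists_b)):
        for h in hs:
            h = np.asarray(h, dtype=float)
            frames.append((h / (h.sum() + 1e-10), seg))
    groups = []                                   # each group: list of (hist, segment)
    for hist, seg in frames:                      # greedy single-pass clustering
        for group in groups:
            if np.linalg.norm(hist - group[0][0]) < threshold:
                group.append((hist, seg))
                break
        else:
            groups.append([(hist, seg)])
    mixed = [g for g in groups if {s for _, s in g} == {"A", "B"}]
    return {
        "n_groups": len(groups),
        "mean_keyframes_per_group": float(np.mean([len(g) for g in groups])),
        "n_groups_spanning_both_segments": len(mixed),
        "mean_keyframes_per_spanning_group":
            float(np.mean([len(g) for g in mixed])) if mixed else 0.0,
    }
```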
- the method of the invention has been presented in the context of the implementation of a single binary classifier which makes it possible to determine whether segments belong to the same program.
- Other approaches are of course possible. They can be based on a general implementation of perceptron, of which the classifiers are part. They can also be based on any other approach that makes it possible to obtain information relating to the membership of the segments in the same audiovisual program according to the data of the previously calculated descriptors.
- Other optional features and benefits are possible.
- In FIG. 6, an embodiment of a merging device according to the invention is presented.
- Such a merging device comprises a memory 61, a processing unit 62 equipped for example with a microprocessor and driven by the computer program 63 implementing the method according to the invention.
- the code instructions of the computer program 63 are for example loaded into a RAM memory before being executed by the processor of the processing unit 62.
- the processing unit 62 receives as input the audiovisual stream cut into several segments.
- the microprocessor of the processing unit 62 implements the steps of the merging method, according to the instructions of the computer program 63, to decide on the membership of the different segments in the same program.
- the merging device comprises, in addition to the memory 61, for at least one first and at least one second segment of the plurality of segments, means for calculating a set of descriptors of different types and means for obtaining information representative of the membership of the segments in the same audiovisual program, based on data representative of said previously calculated descriptors. These means are controlled by the microprocessor of the processing unit 62.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0950772 | 2009-02-06 | ||
PCT/FR2010/050104 WO2010089488A1 (fr) | 2009-02-06 | 2010-01-25 | Procède de fusion de segments de programmes audiovisuels, dispositif, et produit programme d'ordinateur correspondant |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2394246A1 true EP2394246A1 (fr) | 2011-12-14 |
Family
ID=41078147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10707578A Ceased EP2394246A1 (fr) | 2009-02-06 | 2010-01-25 | Procède de fusion de segments de programmes audiovisuels, dispositif, et produit programme d'ordinateur correspondant |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2394246A1 (fr) |
WO (1) | WO2010089488A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3188019B1 (fr) * | 2015-12-30 | 2019-09-18 | InterDigital CE Patent Holdings | Procédé pour sélectionner un contenu comprenant des données audiovisuelles et dispositif électronique correspondant, système, produit de programme lisible par ordinateur et support de stockage lisible par ordinateur |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6473095B1 (en) * | 1998-07-16 | 2002-10-29 | Koninklijke Philips Electronics N.V. | Histogram method for characterizing video content |
US6711587B1 (en) * | 2000-09-05 | 2004-03-23 | Hewlett-Packard Development Company, L.P. | Keyframe selection to represent a video |
JP4215681B2 (ja) * | 2004-05-26 | 2009-01-28 | 株式会社東芝 | 動画像処理装置及びその方法 |
US7756338B2 (en) * | 2007-02-14 | 2010-07-13 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting scene boundaries in genre independent videos |
-
2010
- 2010-01-25 EP EP10707578A patent/EP2394246A1/fr not_active Ceased
- 2010-01-25 WO PCT/FR2010/050104 patent/WO2010089488A1/fr active Application Filing
Non-Patent Citations (1)
Title |
---|
See references of WO2010089488A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2010089488A1 (fr) | 2010-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9510044B1 (en) | TV content segmentation, categorization and identification and time-aligned applications | |
US9414128B2 (en) | System and method for providing content-aware persistent advertisements | |
US7565016B2 (en) | Learning-based automatic commercial content detection | |
CN101395607B (zh) | 用于自动生成多个图像的概要的方法和设备 | |
US8706675B1 (en) | Video content claiming classifier | |
US20160014440A1 (en) | Video content analysis for automatic demographics recognition of users and videos | |
EP2104937B1 (fr) | Procede de creation d'un nouveau sommaire d'un document audiovisuel comportant deja un sommaire et des reportages et recepteur mettant en oeuvre le procede | |
EP1556794B1 (fr) | Procede de selection de germes pour le regroupement d'images-cles | |
Li et al. | Efficient video copy detection using multi-modality and dynamic path search | |
WO2010089488A1 (fr) | Procède de fusion de segments de programmes audiovisuels, dispositif, et produit programme d'ordinateur correspondant | |
Narwal et al. | A novel multi-modal neural network approach for dynamic and generic sports video summarization | |
Wang et al. | Visual saliency based aerial video summarization by online scene classification | |
WO2018114108A1 (fr) | Procede d'enregistrement d'un programme telediffuse a venir | |
Broilo et al. | Unsupervised event segmentation of news content with multimodal cues | |
EP2401700B1 (fr) | Traitement d'un flux de données numériques | |
Koźbiał et al. | Collection, analysis and summarization of video content | |
Zlitni et al. | A visual grammar approach for TV program identification | |
Glasberg et al. | Cartoon-recognition using visual-descriptors and a multilayer-perceptron | |
Min et al. | Near-duplicate video detection using temporal patterns of semantic concepts | |
US10713496B2 (en) | Method and system for hardware, channel, language and ad length agnostic detection of televised advertisements | |
Barbieri | Automatic summarization of narrative video | |
EP2097837B1 (fr) | Structuration d'un flux de données numeriques | |
Petit | Context-aware person recognition in TV programs | |
Brezeale et al. | Learning video preferences from video content | |
Manson et al. | Content-based video segment reunification for TV program extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20110824 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20120706 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ORANGE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20140116 |