EP2316082A1 - Procede d'identification d'un objet dans une archive video (Method for identifying an object in a video archive) - Google Patents
Procede d'identification d'un objet dans une archive video (Method for identifying an object in a video archive)
- Publication number
- EP2316082A1 (application EP09809332A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- identified
- semantic feature
- archive
- images
- video archive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
Definitions
- the invention relates to the search for information in a video archive and relates more specifically to a method and a device for identifying an object in a video archive comprising a plurality of images acquired by a network of cameras.
- the invention also relates to a computer program stored on a recording medium and adapted, when executed on a computer, to implement the method according to the invention.
- the main drawback of this representation is that there is a single access point to the information, namely the root of the hierarchical tree, which makes information retrieval problematic.
- the data can also be organized according to a network model in the form of a graph where the archived entities are connected to each other by means of logical pointers.
- object-oriented databases are also known, which are capable of storing a multitude of information in objects such as, for example, an individual record, a machine or a resource, to which values and attributes are associated.
- a fundamental problem is that it is particularly difficult to quickly identify an object in a video archive of a database containing a large number of images, particularly when very little information is available on the object sought. Such a situation arises, for example, when searching for an unidentified individual in a video-surveillance archive containing thousands of hours of recording on the basis of a simple witness report. In this context, it is currently necessary to view all the recorded video archives manually.
- Another object of the invention is to enable a human operator to have access to structured visual summaries of the objects present in a heterogeneous video database.
- Another object of the invention is to provide the human operator with optimized tools for navigating the database through an interactive search strategy.
- a method of identifying an object in a video archive comprising a plurality of images acquired by a camera network, including a characterization phase of the object to be identified and a search phase for said object in said archive, said characterization phase consisting in defining, for said object, at least one semantic feature that can be extracted from said video archive, even on low-resolution images, and that is directly interpretable by an operator, said search phase consisting in filtering the images of said video archive according to the previously defined semantic feature, in automatically extracting from said archive the images containing an object having said semantic feature, in defining a group of objects comprising all the objects present in the video archive having said semantic feature, and in measuring the similarity of the object to be identified with any other object of the previously defined group as a function of visual characteristics and of spatio-temporal constraints on the path of the object to be identified in the space covered by the camera network.
- the step of measuring similarity comprises the following steps: estimating the compatibility of the semantic feature of the object to be identified with the semantic feature extracted from the images of the other objects of the previously defined group, and/or estimating the spatio-temporal compatibility of the path of the object to be identified with the path of another object of that group having a similar semantic characteristic.
- the method according to the invention further comprises a step of assigning to each similarity measure a likelihood coefficient.
- the method according to the invention comprises a step of merging the results of the similarity-measurement steps so as to define a single unified similarity measure, used to define a distance in the space of the objects to be identified.
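As an illustration of this fusion, the sketch below combines several similarity scores, each weighted by its likelihood coefficient, into a single unified distance. It is a minimal Python sketch; the weighted average and the conversion of similarity into distance are editorial assumptions, not details taken from the patent.

```python
import numpy as np

def unified_distance(similarities, likelihoods):
    """Fuse several similarity measures into one distance.

    similarities : list of scores in [0, 1], one per measure
                   (e.g. semantic, appearance, spatio-temporal).
    likelihoods  : list of likelihood coefficients in [0, 1]
                   qualifying how reliable each measure is.

    The weighted combination and the 1 - s conversion to a
    distance are illustrative choices, not taken from the patent.
    """
    s = np.asarray(similarities, dtype=float)
    w = np.asarray(likelihoods, dtype=float)
    if w.sum() == 0:
        return 1.0  # no reliable measure: maximal distance
    fused_similarity = float(np.average(s, weights=w))
    return 1.0 - fused_similarity

# Example: semantic match 0.8 (reliable), appearance 0.4 (less reliable),
# spatio-temporal compatibility 0.9 (very reliable).
print(unified_distance([0.8, 0.4, 0.9], [0.9, 0.5, 1.0]))
```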
- the method according to the invention comprises a dynamic structuring of this space of the objects of interest, by means of the distance defined above, so as to allow interactive browsing of the video archive according to a hierarchical tree.
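Given such a unified distance, one plausible way to build the hierarchical tree used for interactive browsing is agglomerative clustering. The sketch below uses SciPy as an assumed implementation; the distance values and the cut threshold are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric matrix of unified distances between the
# objects retained after semantic filtering (0 on the diagonal).
D = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])

# Agglomerative clustering on the condensed distance matrix gives
# a hierarchical tree that an operator can browse level by level.
tree = linkage(squareform(D), method="average")

# Cutting the tree at a chosen distance yields groups of candidates.
print(fcluster(tree, t=0.5, criterion="distance"))
```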
- the invention applies in particular to the search for a human person, in which case said object to be identified is a person for whom only a summary witness report is available.
- the semantic feature of said human person consists of a visible physical characteristic and/or a visible accessory.
- the method according to the invention is implemented by a device for identifying an object in a video archive comprising a plurality of images acquired by a network of cameras, characterized in that it comprises a characterization module for the object to be identified and a search module for said object in said archive, said characterization module comprising means for defining, for said object, at least one semantic feature that can be extracted from said video archive, even on low-resolution images, and that is directly interpretable by an operator, said search module comprising means for filtering the images of said video archive according to the previously defined semantic feature, means for automatically extracting from said archive the images containing an object having said semantic feature, means for defining a group of objects comprising all the objects present in the video archive having said semantic feature, and means for measuring the similarity of the object to be identified with any other object of the previously defined group according to visual characteristics and spatio-temporal constraints on the path of the object to be identified in the space covered by the camera network.
- said similarity measuring means comprise: a first calculation module configured to estimate the compatibility of the semantic feature of the object to be identified with the semantic feature extracted from the images of the other objects of the group defined above, and/or a second calculation module configured to estimate the spatio-temporal compatibility of the path of the object to be identified with the path of another object of the previously defined group having a semantic characteristic similar to that of the object to be identified.
- the method according to the invention is implemented in said device by a computer program stored on a recording medium and adapted, when executed on a computer, to identify an object in a video archive comprising a plurality of images acquired by a network of cameras, said computer program comprising instructions for performing a characterization phase of the object to be identified and instructions for carrying out a search phase for said object in said archive, said characterization phase consisting in defining, for said object, at least one semantic feature that can be extracted from said video archive, even on low-resolution images, and that is directly interpretable by an operator, said search phase consisting in filtering the images of said video archive according to the previously defined semantic feature, in automatically extracting from said archive the images containing an object having said semantic feature, in defining a group of objects comprising all the objects present in the video archive having said semantic feature, and in measuring the similarity of the object to be identified with any other object of the previously defined group as a function of visual characteristics and spatio-temporal constraints on the path of the object to be identified in the space covered by the camera network.
- the goal is to quickly find images of the incident, if they exist, and to reconstruct the complete route of the suspect in the area covered by the camera network, so as to determine the suspect's spatio-temporal path and identify the suspect.
- the conventional approach is to view the images taken by the cameras near the indicated location of the incident, at times close to those indicated by the witnesses, in order to locate the incident in the recorded video archive.
- the approach proposed by the present invention consists in exploiting the reports given by the witnesses to systematize the search for the suspect and to filter the data, thereby optimizing the search for images in the CCTV archive.
- the description of the suspect provided by the witnesses is used to define semantic information about the suspect.
- the latter may, for example, be tall and very thin, wear a long black coat and sunglasses, and have a beard and long hair.
- some of these characteristics can be exploited by the method according to the invention and are programmed directly into the system.
- this pre-processing of the archive images comprises the following steps:
- the detection of movements is carried out by modeling the scene with Gaussian mixtures (the background being fixed), and the tracking is performed by means of a Kalman filter, then refined by local analysis using local descriptors of the SIFT or SURF type, for example, or even simpler, more localized models, in order to resolve ambiguities due to occlusions.
- the detection of persons is obtained, for example, by detecting faces using techniques based on cascades of classifiers such as AdaBoost® and Haar filters, then possibly inferring the complete body envelope by shape analysis, possibly with assumptions about physiognomic ratios, or by person detectors based on learning techniques (an illustrative sketch of these detection steps is given after this group of steps).
- the specialized algorithms used to characterize each of the persons are, for example, classifiers capable of indicating whether an individual has long or short hair, a beard or not, a very rounded or rather elongated face, is overweight or has a slender figure, etc.
- a measure of the reliability of the response is provided for each piece of extracted information.
- these characterizations (or descriptors extracted from the images) are directly interpretable by a human operator and can be directly related to the semantic information collected during a testimony. In addition, they can be computed even on low-resolution images: it is not necessary to have a face hundreds of pixels wide to determine whether a person is wearing glasses.
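A minimal sketch of the motion and face detection steps described above, assuming OpenCV as the implementation library: a Gaussian-mixture background model flags moving regions and an AdaBoost-trained Haar cascade detects faces in each frame. The Kalman tracking and SIFT/SURF refinement mentioned above are omitted for brevity, and the file name and thresholds are illustrative.

```python
import cv2

# Gaussian-mixture background model: the background is assumed fixed,
# moving pixels are flagged as foreground (cf. the motion detection step).
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

# Haar cascade trained with AdaBoost for frontal faces (person detection step).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("archive_camera_01.avi")  # illustrative file name
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask: non-zero where motion was detected.
    fg_mask = bg_model.apply(frame)

    # Face detection on the grayscale frame; coarse attributes can be
    # extracted afterwards even when the faces are small.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Keep only detections that overlap moving regions.
        if fg_mask[y:y + h, x:x + w].mean() > 10:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```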
- the classifiers are obtained according to the method described below:
- image descriptors are extracted locally on the extracted thumbnails (for example, to determine whether a person wears a beard, one considers the lower half of the face-detection mask); these descriptors can be, for example, color histograms, gradients, spatial-distribution properties characterizing textures, responses to filters (Gabor filters, for example), etc.; classifiers are then constructed by machine learning to indicate which faces have the characteristic "beard"; an alternative approach is to learn distance measures specific to these characteristics and then use these specific distances to determine the proximity or difference between two faces with respect to certain semantic aspects.
- the reliability measure can be provided directly by the classifier; it can also be modeled a posteriori, for example by converting the classifier outputs into probabilities.
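As a hedged illustration of the classifier construction just described: a local descriptor (here a color histogram of the lower half of each face thumbnail) feeds a supervised model whose probability output serves as the reliability measure. The choice of scikit-learn, of a color histogram rather than gradient or Gabor features, and the synthetic training data are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lower_face_histogram(face_crop, bins=16):
    """Color histogram of the lower half of a face thumbnail,
    the region of interest for the 'beard' attribute."""
    lower = face_crop[face_crop.shape[0] // 2:, :, :]
    hist, _ = np.histogramdd(
        lower.reshape(-1, 3), bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

# Hypothetical training data: face thumbnails labelled beard / no beard.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(40, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)  # 1 = beard, 0 = no beard

X = np.stack([lower_face_histogram(f) for f in faces])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The predicted probability doubles as the reliability measure
# attached to the extracted semantic attribute.
new_face = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
p_beard = clf.predict_proba([lower_face_histogram(new_face)])[0, 1]
print(f"beard probability (reliability): {p_beard:.2f}")
```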
- spatio-temporal compatibility constraints may be binary (a person cannot be in two places at once) or fuzzy (a floating confidence value, i.e. more or less likely). Thanks to these constraints, observations from several cameras can be matched, and relationships of varying complexity and reliability are established between all the entities of the database.
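The spatio-temporal compatibility constraint can be sketched as follows: given two observations (camera position and timestamp) and assumed probable and maximum speeds, compatibility is binary when the implied speed is impossible and fuzzy in between. The speed values and the linear fall-off are illustrative assumptions.

```python
import math

def spatio_temporal_compatibility(pos_a, t_a, pos_b, t_b,
                                  probable_speed=1.5, max_speed=7.0):
    """Return a confidence in [0, 1] that the same person produced
    both observations.

    pos_a, pos_b : (x, y) camera positions in metres.
    t_a, t_b     : timestamps in seconds.
    probable_speed, max_speed : assumed walking / absolute limits (m/s).
    """
    distance = math.dist(pos_a, pos_b)
    dt = abs(t_b - t_a)
    if dt == 0:
        # Binary constraint: a person cannot be in two places at once.
        return 1.0 if distance == 0 else 0.0
    speed = distance / dt
    if speed > max_speed:
        return 0.0                      # physically impossible
    if speed <= probable_speed:
        return 1.0                      # fully plausible
    # Fuzzy zone: confidence decreases linearly between the two limits.
    return 1.0 - (speed - probable_speed) / (max_speed - probable_speed)

# Two cameras 200 m apart, observations 60 s apart: plausible but not certain.
print(spatio_temporal_compatibility((0, 0), 0, (200, 0), 60))
```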
- the video surveillance archive is represented by a semantic database associated with each individual seen in at least one of the videos.
- the structuring of the semantic database consists of the following steps:
- This report transmitted by the witnesses is exploited.
- This report includes semantic characterizations, possibly with associated confidence measures based on the witnesses' memories and the consistency of the statements;
- the database is filtered according to said semantic features, retaining only the individuals having these characteristic features and removing all those that do not;
- the structuring of the database can be dynamic: it suffices to add, delete or adapt semantic criteria for the hierarchical structuring to be updated to reflect the operator's expectations. Thus, it is possible to qualify the reliability of body-related information or to add new information on the shape of the face or the wearing of a cap. It is also possible to automatically propose new structurings to the user.
- the user can navigate the database efficiently according to the individuals and their characteristics, and no longer according to the cameras and the passage of time.
- when an individual is designated, the corresponding video sequence can be viewed; this designation makes it possible to specify the visual appearance more precisely, which completes the similarity measurements.
- it also provides spatio-temporal information on the location of the individual.
- the already-filtered database is filtered again to remove all individuals whose positions and acquisition dates are not consistent with the spatio-temporal constraints on the normal movement of the designated individual; the remaining individuals are then ordered according to a combination of semantic factors, appearance characteristics, and the probability that each is the designated individual, derived from the spatio-temporal constraints on movement (a distance that can be estimated, a probable speed that can be calculated and a defined maximum speed); a sketch of this filtering and ordering is given after this list of steps.
- the user can then browse this ordered list and perform the tracking and back-tracking of the designated individual very effectively and very quickly, browsing the archive via the spatio-temporal constraints, semantic properties and appearance criteria, without having to worry about camera selection or data timestamps.
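The semantic filtering and spatio-temporal ordering steps above can be sketched on a toy record structure. The field names, the attribute threshold and the way the appearance and spatio-temporal factors are combined are assumptions made for the illustration.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    camera: str
    position: tuple               # (x, y) of the camera, in metres
    timestamp: float              # seconds
    attributes: dict              # e.g. {"beard": 0.9, "long_hair": 0.7}
    appearance_similarity: float  # similarity to the designated individual

def semantic_filter(observations, required, threshold=0.5):
    """Keep only observations whose attribute scores match the report."""
    return [o for o in observations
            if all(o.attributes.get(a, 0.0) >= threshold for a in required)]

def rank_candidates(observations, reference, st_compat):
    """Order candidates by a combined score: appearance similarity
    weighted by spatio-temporal plausibility w.r.t. the reference."""
    scored = []
    for o in observations:
        plaus = st_compat(reference.position, reference.timestamp,
                          o.position, o.timestamp)
        if plaus > 0.0:          # discard physically impossible paths
            scored.append((o.appearance_similarity * plaus, o))
    return [o for _, o in sorted(scored, key=lambda x: x[0], reverse=True)]
```

In practice, a spatio-temporal compatibility function such as the one sketched earlier would be passed as `st_compat`, with the designated individual's observation as `reference`.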
- images of a scene are recorded (step 2) by a camera network 4 comprising several cameras distributed geographically over a monitored area.
- in step 6, a time range is selected during which the recorded images will be analyzed.
- phase T2 describes the use, by an operator 20, of the database built up during phases T1 to T3.
- in step 22, the operator designates the time range of the filmed event.
- in step 24, the operator provides, via a user interface, the attributes of the sought individual.
- the system displays (step 26) the filtered images from the structured database generated in the previous steps.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0855737A FR2935498B1 (fr) | 2008-08-27 | 2008-08-27 | Procede d'identification d'un objet dans une archive video. |
PCT/EP2009/060960 WO2010023213A1 (fr) | 2008-08-27 | 2009-08-26 | Procede d'identification d'un objet dans une archive video |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2316082A1 (fr) | 2011-05-04 |
Family
ID=40467086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP09809332A Ceased EP2316082A1 (fr) | 2008-08-27 | 2009-08-26 | Procede d'identification d'un objet dans une archive video |
Country Status (6)
Country | Link |
---|---|
US (1) | US8594373B2 (zh) |
EP (1) | EP2316082A1 (zh) |
CN (1) | CN102187336B (zh) |
FR (1) | FR2935498B1 (zh) |
IL (1) | IL211129A0 (zh) |
WO (1) | WO2010023213A1 (zh) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8239359B2 (en) * | 2008-09-23 | 2012-08-07 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
JP5476955B2 (ja) * | 2009-12-04 | 2014-04-23 | ソニー株式会社 | 画像処理装置および画像処理方法、並びにプログラム |
US8532390B2 (en) * | 2010-07-28 | 2013-09-10 | International Business Machines Corporation | Semantic parsing of objects in video |
US8515127B2 (en) | 2010-07-28 | 2013-08-20 | International Business Machines Corporation | Multispectral detection of personal attributes for video surveillance |
US10424342B2 (en) * | 2010-07-28 | 2019-09-24 | International Business Machines Corporation | Facilitating people search in video surveillance |
US9134399B2 (en) | 2010-07-28 | 2015-09-15 | International Business Machines Corporation | Attribute-based person tracking across multiple cameras |
GB2492450B (en) * | 2011-06-27 | 2015-03-04 | Ibm | A method for identifying pairs of derivative and original images |
US10242099B1 (en) * | 2012-04-16 | 2019-03-26 | Oath Inc. | Cascaded multi-tier visual search system |
GB2519348B (en) | 2013-10-18 | 2021-04-14 | Vision Semantics Ltd | Visual data mining |
CN104866538A (zh) * | 2015-04-30 | 2015-08-26 | 北京海尔广科数字技术有限公司 | 一种动态更新语义告警库的方法、网络及系统 |
US9912838B2 (en) * | 2015-08-17 | 2018-03-06 | Itx-M2M Co., Ltd. | Video surveillance system for preventing exposure of uninteresting object |
US11294949B2 (en) | 2018-09-04 | 2022-04-05 | Toyota Connected North America, Inc. | Systems and methods for querying a distributed inventory of visual data |
CN110647804A (zh) * | 2019-08-09 | 2020-01-03 | 中国传媒大学 | 一种暴力视频识别方法、计算机系统和存储介质 |
US20220147743A1 (en) * | 2020-11-09 | 2022-05-12 | Nvidia Corporation | Scalable semantic image retrieval with deep template matching |
CN112449249A (zh) * | 2020-11-23 | 2021-03-05 | 深圳市慧鲤科技有限公司 | 视频流处理方法及装置、电子设备及存储介质 |
FR3140725A1 (fr) * | 2022-10-10 | 2024-04-12 | Two - I | système de surveillance |
CN116303549A (zh) * | 2023-04-14 | 2023-06-23 | 北京合思信息技术有限公司 | 电子会计档案的查询方法、装置、服务器及存储介质 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020188602A1 (en) * | 2001-05-07 | 2002-12-12 | Eastman Kodak Company | Method for associating semantic information with multiple images in an image database environment |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69628282T2 (de) * | 1995-09-15 | 2004-03-11 | Interval Research Corp., Palo Alto | Verfahren zur kompression mehrerer videobilder |
US7127087B2 (en) * | 2000-03-27 | 2006-10-24 | Microsoft Corporation | Pose-invariant face recognition system and process |
EP1260934A3 (en) * | 2001-05-22 | 2004-04-14 | Matsushita Electric Industrial Co., Ltd. | Surveillance recording device and method |
US7683929B2 (en) * | 2002-02-06 | 2010-03-23 | Nice Systems, Ltd. | System and method for video content analysis-based detection, surveillance and alarm management |
CN100446558C (zh) * | 2002-07-02 | 2008-12-24 | 松下电器产业株式会社 | 视频产生处理装置和视频产生处理方法 |
JP4013684B2 (ja) * | 2002-07-23 | 2007-11-28 | オムロン株式会社 | 個人認証システムにおける不正登録防止装置 |
US20040095377A1 (en) * | 2002-11-18 | 2004-05-20 | Iris Technologies, Inc. | Video information analyzer |
US7606425B2 (en) * | 2004-09-09 | 2009-10-20 | Honeywell International Inc. | Unsupervised learning of events in a video sequence |
US20060274949A1 (en) * | 2005-06-02 | 2006-12-07 | Eastman Kodak Company | Using photographer identity to classify images |
US7519588B2 (en) * | 2005-06-20 | 2009-04-14 | Efficient Frontier | Keyword characterization and application |
WO2007140609A1 (en) * | 2006-06-06 | 2007-12-13 | Moreideas Inc. | Method and system for image and video analysis, enhancement and display for communication |
EP2062197A4 (en) * | 2006-09-15 | 2010-10-06 | Retica Systems Inc | MULTIMODAL BIOMETRIC SYSTEM AND METHOD FOR LARGE DISTANCES |
US20080140523A1 (en) * | 2006-12-06 | 2008-06-12 | Sherpa Techologies, Llc | Association of media interaction with complementary data |
CN101201822B (zh) * | 2006-12-11 | 2010-06-23 | 南京理工大学 | 基于内容的视频镜头检索方法 |
AU2007345938B2 (en) * | 2007-02-01 | 2011-11-10 | Briefcam, Ltd. | Method and system for video indexing and video synopsis |
US8229227B2 (en) * | 2007-06-18 | 2012-07-24 | Zeitera, Llc | Methods and apparatus for providing a scalable identification of digital video sequences |
JP4982410B2 (ja) * | 2008-03-10 | 2012-07-25 | 株式会社東芝 | 空間移動量算出装置及びその方法 |
US8804005B2 (en) * | 2008-04-29 | 2014-08-12 | Microsoft Corporation | Video concept detection using multi-layer multi-instance learning |
JP5476955B2 (ja) * | 2009-12-04 | 2014-04-23 | ソニー株式会社 | 画像処理装置および画像処理方法、並びにプログラム |
JP5505723B2 (ja) * | 2010-03-31 | 2014-05-28 | アイシン・エィ・ダブリュ株式会社 | 画像処理システム及び位置測位システム |
-
2008
- 2008-08-27 FR FR0855737A patent/FR2935498B1/fr active Active
-
2009
- 2009-08-26 CN CN200980133643.0A patent/CN102187336B/zh not_active Expired - Fee Related
- 2009-08-26 EP EP09809332A patent/EP2316082A1/fr not_active Ceased
- 2009-08-26 WO PCT/EP2009/060960 patent/WO2010023213A1/fr active Application Filing
- 2009-08-26 US US13/059,962 patent/US8594373B2/en active Active
-
2011
- 2011-02-08 IL IL211129A patent/IL211129A0/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
US20120039506A1 (en) | 2012-02-16 |
IL211129A0 (en) | 2011-04-28 |
FR2935498A1 (fr) | 2010-03-05 |
CN102187336B (zh) | 2014-06-11 |
US8594373B2 (en) | 2013-11-26 |
WO2010023213A1 (fr) | 2010-03-04 |
FR2935498B1 (fr) | 2010-10-15 |
CN102187336A (zh) | 2011-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2316082A1 (fr) | Procede d'identification d'un objet dans une archive video | |
US20120148149A1 (en) | Video key frame extraction using sparse representation | |
EP3707676A1 (fr) | Procédé d'estimation de pose d'une caméra dans le référentiel d'une scène tridimensionnelle, dispositif, système de réalite augmentée et programme d'ordinateur associé | |
KR100956159B1 (ko) | 라이프로그 장치 및 정보 자동 태그 입력 방법 | |
EP2659672A2 (en) | Searching recorded video | |
EP3857512A1 (fr) | Procede, programme d'ordinateur et systeme de detection et localisation d'objet dans une scene tridimensionnelle | |
WO2019110914A1 (fr) | Extraction automatique d'attributs d'un objet au sein d'un ensemble d'images numeriques | |
CA2825506A1 (en) | Spectral scene simplification through background subtraction | |
FR3011960A1 (fr) | Procede d'identification a partir d'un modele spatial et spectral d'objet | |
Gandhimathi Alias Usha et al. | A novel method for segmentation and change detection of satellite images using proximal splitting algorithm and multiclass SVM | |
EP1543444A2 (fr) | Procede et dispositif de mesure de similarite entre images | |
EP0863488A1 (fr) | Procédé de détection de contours de relief dans une paire d'images stéréoscopiques | |
CN105930459B (zh) | 一种有效的基于内容的人体皮肤图像分类检索方法 | |
Möller et al. | Tracking sponge size and behaviour with fixed underwater observatories | |
WO2006032799A1 (fr) | Système d'indexation de vidéo de surveillance | |
FR2936627A1 (fr) | Procede d'optimisation de la recherche d'une scene a partir d'un flux d'images archivees dans une base de donnees video. | |
US8891870B2 (en) | Substance subtraction in a scene based on hyperspectral characteristics | |
EP0550101A1 (fr) | Procédé de recalage d'images | |
EP2149099B1 (fr) | Dispositif et methode de traitement d'images pour determiner une signature d'un film | |
FR3094815A1 (fr) | Procédé, programme d’ordinateur et système pour l’identification d’une instance d’objet dans une scène tridimensionnelle | |
EP4439484A1 (fr) | Procédé de classification de données multidimensionnelles fortement résolues | |
WO2024079119A1 (fr) | Système de surveillance | |
Chu et al. | Travel video scene detection by search | |
FR2872326A1 (fr) | Procede de detection d'evenements par videosurveillance | |
FR3142026A1 (fr) | Détection d’objets dans une image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20110218 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA RS |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: JURIE, FREDERIC Inventor name: STURZEL, MARC |
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20161125 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: AIRBUS (SAS) |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20180323 |