EP3729236A1 - Assistant vocal - Google Patents
Info
- Publication number
- EP3729236A1 (application EP18833272.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- processor
- video data
- input
- output
- human gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- the invention relates to the field of service provision, in particular by voice command.
- a mobile phone can serve as an interface to control a wireless speaker or a TV from another manufacturer / designer.
- voice-activated interfaces tend to replace touch screens, which in turn replace physical button remote controls.
- voice assistants such as systems known as “Google Home” (Google), “Siri” (Apple) or “Alexa” (Amazon).
- voice assistants are usually provided to activate only when a keyword or key phrase is spoken by the user. It is also theoretically possible to limit activation by recognizing only the voices of presumed legitimate users. However, such precautions are imperfect, especially when the perceived sound quality does not allow a good analysis of sounds, for example in a noisy environment. The keyword or key phrase may not be picked up by the microphone or may not be recognized among all the sounds picked up. In such cases, the triggering is impossible or erratic.
- the invention improves the situation.
- an assistance device comprising:
- At least one processor operatively coupled with a memory
- At least a first input connected to the processor and able to receive video data from at least one video sensor
- At least one second input connected to the processor and able to receive audio data originating from at least one microphone
- the processor being arranged for:
- an assistance system comprising such a device and at least one of the following elements:
- a video sensor connected or connectable to the first input
- a speaker connected or connectable to an output of the device.
- a method of assistance implemented by computer means, comprising:
- a computer program comprising instructions for implementing the method as defined herein when this program is executed by a processor.
- a non-transitory recording medium readable by a computer, on which is recorded such a program.
- Such objects allow a user to trigger the implementation of a voice command process by performing a gesture, for example by hand.
- the spurious triggering, and the failures to trigger, that usually result from a malfunction of the voice recognition process are avoided.
- the triggering of the voice command process is insensitive to ambient noise and involuntary voice commands.
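To make the mechanism concrete, here is a minimal sketch of gesture-gated activation. It is illustrative only, not the patented implementation: `read_frame`, `detect_reference_gesture` and `run_voice_assistant` are hypothetical placeholders for the video-capture, video-analysis and audio-analysis stages described in this document.

```python
# Minimal sketch of gesture-gated activation (illustrative only).
# The three helpers are hypothetical stand-ins for the stages described
# in the text; a real device would plug in its own capture, gesture
# detection and voice pipeline.
import time

def read_frame():
    """Hypothetical: grab one frame from the video sensor (input 10)."""
    raise NotImplementedError

def detect_reference_gesture(frame) -> bool:
    """Hypothetical: True when a stored reference gesture is recognized."""
    raise NotImplementedError

def run_voice_assistant() -> None:
    """Hypothetical: record audio (input 20), recognize and execute a command."""
    raise NotImplementedError

def standby_loop() -> None:
    # While idle, only video is analyzed; the audio path stays inactive,
    # so ambient noise and stray speech cannot trigger anything.
    while True:
        if detect_reference_gesture(read_frame()):
            run_voice_assistant()  # the audio path is enabled only here
        time.sleep(0.05)           # poll at roughly 20 frames per second
```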
- Gesture-based interfaces are less common than voice-activated interfaces, especially since it is considered less natural or less instinctive to speak to a machine through gestures than by voice. Consequently, the use of gestural commands is reserved for particular contexts rather than so-called "general public" and "domestic" uses.
- Such objects are particularly advantageous when combined with voice assistants.
- Gesture recognition for triggering speech recognition can be combined with voice recognition triggering (pronunciation of keywords).
- the user can either make a gesture or pronounce a word or words to activate the voice assistant.
- the gesture recognition trigger replaces the voice recognition trigger.
- the efficiency is further improved. This also makes it possible to neutralize the microphones outside the activation periods of the assistants, either by switching them off or by disconnecting them. The risk of the microphones being used for unintended purposes, for example a third party taking undue control of such voice assistants, is reduced (a sketch of this policy follows).
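The sketch below illustrates one way such a combined trigger policy could be arranged, assuming the gesture and keyword detectors expose simple boolean checks; `gesture_seen`, `keyword_heard` and `mic_power` are all hypothetical hooks, not part of the patent text.

```python
# Sketch of the combined trigger policy (illustrative only).
from enum import Enum, auto

class Trigger(Enum):
    GESTURE = auto()
    KEYWORD = auto()

def gesture_seen() -> bool:
    raise NotImplementedError  # hypothetical video-side check

def keyword_heard() -> bool:
    raise NotImplementedError  # hypothetical audio-side check

def mic_power(on: bool) -> None:
    raise NotImplementedError  # hypothetical: cut or restore the microphone

def wait_for_trigger(allow_keyword: bool) -> Trigger:
    # In gesture-only mode the microphone stays off outside activation
    # periods, which also prevents it being misused by a third party.
    mic_power(allow_keyword)
    while True:
        if gesture_seen():
            mic_power(True)        # open the audio path for this session
            return Trigger.GESTURE
        if allow_keyword and keyword_heard():
            return Trigger.KEYWORD
```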
- the device may furthermore comprise an output driven by the processor and capable of transmitting commands to a sound broadcasting system.
- the processor may, in addition, be arranged to transmit a command to reduce the sound volume or to interrupt the sound broadcast upon detection of said at least one reference human gesture in the video data. This reduces the ambient noise, facilitates subsequent audio analysis operations, including voice recognition, and thus improves the relevance and operation of services based on audio analysis (a volume-ducking sketch follows).
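A volume-ducking step might look like the following sketch, where `get_volume` and `send_command` are assumed helpers over whatever link (HDMI-CEC, Wi-Fi, Bluetooth, etc.) actually reaches the sound system; nothing here is prescribed by the patent.

```python
# Sketch of ducking the sound system (output 30) before listening.
_saved_volume: dict = {}

def get_volume(system_id: str) -> int:
    raise NotImplementedError  # assumed: query the current volume

def send_command(system_id: str, command: str, value: int | None = None) -> None:
    raise NotImplementedError  # assumed: transmit a command on output 30

def duck(system_id: str, duck_level: int = 10) -> None:
    # Remember the current level, then lower (or mute) the output so the
    # microphones pick up the user's voice instead of the programme audio.
    _saved_volume[system_id] = get_volume(system_id)
    send_command(system_id, "set_volume", duck_level)

def restore(system_id: str) -> None:
    send_command(system_id, "set_volume", _saved_volume.pop(system_id))
```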
- Audio data analysis can include voice command recognition. This makes it possible to provide interactive services to the user, in particular of the voice assistance type.
- the device may further comprise an output driven by the processor and adapted to transmit commands to a third party device.
- the processor may further be arranged to transmit a command to said output, the command being selected based on the results of voice command recognition.
- the processor may further be arranged to trigger the emission of a visual and/or audible indicator perceptible by a user upon detection of said at least one reference human gesture in the video data. This allows the user to speak words or phrases only when he or she knows that the audio analysis is active, which avoids needless repetition of commands.
- Triggering the emission of an indicator may include, for example, the lighting of a light on the device (a sketch follows).
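One way to tie the indicator to the detection event is sketched below; `set_led` and `play_chime` are hypothetical device hooks, and `recognize` is the same audio-analysis placeholder as in the earlier sketches.

```python
# Sketch of the "listening" indicator around the audio-analysis session.
def set_led(on: bool) -> None:
    raise NotImplementedError  # assumed: drive the device's LED

def play_chime() -> None:
    raise NotImplementedError  # assumed: short audible cue

def run_voice_session(recognize) -> None:
    set_led(True)       # tell the user the microphone is now live
    play_chime()
    try:
        recognize()     # the actual audio analysis (placeholder)
    finally:
        set_led(False)  # indicator off: the device is idle again
```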
- the aforementioned optional features may be transposed, independently of each other or in combination with each other, to the devices, systems, methods, computer programs and/or non-transitory computer-readable recording media described herein.
- FIG. 1 shows a nonlimiting example of a device proposed according to one or more embodiments
- FIG. 2 shows a nonlimiting example of interactions implemented according to one or more embodiments.
- many specific details are presented to provide a more complete understanding. Nevertheless, those skilled in the art may realize that embodiments can be put into practice without these specific details. In other cases, well-known features are not described in detail to avoid unnecessarily complicating the description.
- FIG. 1 represents a device 1 of assistance available to a user 100.
- the device 1 comprises:
- At least one processor 3 operably coupled to a memory 5,
- the first input 10 is able to receive video data from at least one video sensor 11, for example a camera or a webcam.
- the first input 10 forms an interface between the video sensor and the device 1 and takes, for example, the form of an HDMI ("High-Definition Multimedia Interface") connector.
- other types of video input may be provided, in addition to or instead of the HDMI connector.
- the device 1 may comprise a plurality of first inputs 10, in the form of several connectors of the same type or of different types.
- the processor 3 can receive several video streams as input. This makes it possible, for example, to capture images in different rooms of a building or from different angles.
- the device 1 can, in addition, be made compatible with a variety of video sensors 11.
- the second input 20 is able to receive audio data coming from at least one microphone 21.
- the second input 20 forms an interface between the microphone and the device 1 and takes, for example, the form of a coaxial connector (for example, a "jack").
- other types of audio input may be provided, in addition to or instead of the coaxial connector.
- the first input 10 and the second input 20 may have a common connector, able to receive both a video stream and an audio stream.
- HDMI connectors are, for example, connectors with this capability.
- HDMI connectors also have the advantage of being widespread on existing devices, including televisions. Thus, a single HDMI connector may allow the device 1 to be connected to a TV equipped with both a microphone and a camera.
- These devices can then be used to supply, respectively, a first input 10 and a second input 20 of the device 1.
- the device 1 may also comprise a plurality of second inputs 20, in the form of several connectors of the same type or of different types.
- the processor 3 can receive several audio streams as input, for example from several microphones distributed in a room, which improves subsequent speech recognition through signal processing methods known per se (one such method is sketched below).
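As an illustration of the kind of known signal processing meant here, the sketch below applies simple delay-and-sum beamforming to the microphone channels. The per-channel delays are assumed to have been estimated beforehand; the patent does not prescribe any particular method.

```python
# Delay-and-sum beamforming over several microphone channels
# (a classic technique, shown only as an illustration).
import numpy as np

def delay_and_sum(channels: list[np.ndarray], delays: list[int]) -> np.ndarray:
    """Align each channel by its known delay (in samples), then average.
    Speech arriving from the steered direction adds coherently while
    uncorrelated room noise partially cancels, easing recognition."""
    usable = min(len(c) - d for c, d in zip(channels, delays))
    aligned = np.stack([c[d:d + usable] for c, d in zip(channels, delays)])
    return aligned.mean(axis=0)

# Example with two synthetic channels offset by 3 samples:
# mic2 = np.concatenate([np.zeros(3), mic1[:-3]])
# enhanced = delay_and_sum([mic1, mic2], [0, 3])
```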
- the device 1 can, in addition, be made compatible with a variety of microphones 21.
- the device 1 furthermore comprises:
- the output 30 is able to transmit commands to a sound broadcasting system 50, for example a connected speaker, a high-fidelity ("Hi-Fi") installation, a television, a smartphone, a tablet or a computer.
- the sound broadcasting system 50 comprises at least one loudspeaker 51.
- the device 1 furthermore comprises:
- the output 40 is capable of transmitting commands to at least one third-party device 60, for example a connected speaker, a Hi-Fi installation, a television set, a smartphone, a tablet or a computer.
- the outputs 30, 40 may, for example, take the form of connectors of various types preferably selected to be compatible with third-party equipment.
- the connector of one of the outputs 30, 40 may, for example, be common with the connector of one of the inputs.
- HDMI connectors allow the implementation of two-way audio transmission (technology known under the acronym "ARC", for "Audio Return Channel").
- a second input 20 and an output 30 may have a common connector connected to equipment, such as a television, including both a microphone 21 and loudspeakers 51.
- the device 1 may also comprise a single output or more than two outputs in the form of several connectors of the same type or of different types.
- the processor 3 can output several commands, for example to control several third-party devices separately.
- the inputs 10, 20 and the outputs 30, 40 have been shown as taking the form of one or more mechanical connectors.
- the device 1 can thus be connected to third-party devices by cables.
- at least some of the inputs / outputs may take the form of a wireless communication module.
- the device 1 further comprises at least one wireless communication module, so that the device 1 can be wirelessly connected to remote third-party devices, including devices such as those exemplified above.
- the wireless communication modules are then connected to the processor 3 and controlled by the processor 3.
- the communication modules may, for example, include a short-distance communication module, for example based on radio waves such as Wi-Fi.
- Wireless local networks, especially domestic networks, are often implemented via a Wi-Fi network.
- the device 1 can integrate into an existing environment, including so-called "home automation" networks.
- the communication modules may, for example, include a short-distance communication module, for example of the Bluetooth® type.
- Communication means compatible with Bluetooth® technology equip much recent equipment, especially smartphones and so-called "portable" speakers.
- the communication modules may, for example, include a Near Field Communication (NFC) module.
- since such communication is effective only at distances of a few centimeters, the device 1 must be placed in the immediate vicinity of the relays or third-party equipment to which a connection is desired.
- the video sensor 11, the microphone 21 and the loudspeaker 51 of the sound diffusion system 50 are third-party devices (not integrated with the device 1). These devices can be connected to the processor 3 of the device 1 while being integrated with other devices, together or separately from each other.
- Such third-party devices include, for example, a television, a smartphone, a tablet or a computer.
- This equipment can also be connected to the processor 3 of the device 1 while being equipment independent of any other device.
- the device 1 can be considered as a multimedia box, or auxiliary device, intended to be connected or paired with at least one third-party device, for example a television.
- such a multimedia box is operational once connected to such a third-party device. Such a multimedia box can be included in a TV decoder (designated by the acronym STB, for "Set-Top Box") or even in a game console.
- the device 1 furthermore comprises:
- At least one video sensor 11 connected to a first input 10; at least one microphone 21 connected to a second input 20; and/or
- At least one loudspeaker 51 connected to an output 30 of the device 1.
- the device 1 comprises a combination of integrated equipment and inputs/outputs intended to connect to third-party devices without corresponding integrated equipment.
- the device 1 further comprises at least one visual indicator, for example one or more LEDs.
- Such an indicator, driven by the processor 3, can be activated so as to inform the user 100 of a state of the device 1.
- the state of such an indicator may vary, for example during pairing operations with third-party equipment and/or upon activation or deactivation of the device 1, as described in more detail below.
- the device 1 can be considered as an at least partly autonomous device.
- the method described below and with reference to FIG. 2 can be implemented by the device 1 without it being necessary to connect it or to pair it with third-party devices.
- the device 1 further comprises a power source not shown, for example a power cord for mains connection and / or a battery.
- the device 1 comprises a single processor 3. In a variant, several processors can cooperate to implement the operations described herein.
- the processor 3, or central processing unit (CPU), is associated with the memory 5.
- the memory 5 comprises, for example, a random access memory (RAM), a read-only memory (ROM), a cache memory and/or a flash memory, or any other storage medium capable of storing software code in the form of processor-executable instructions or processor-accessible data structures.
- the processor 3 is arranged for:
- the reference gesture or gestures can, for example, be stored in the memory 5 in the form of identification criteria which the processor 3 uses during the analysis of the video data.
- Such criteria can be set by default.
- such criteria can be modified by software updates and / or by training with the user 100 himself.
- the user 100 can select the key gestures or reference gestures that trigger the analysis of the audio data (a sketch of such a gesture store follows).
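The sketch below shows one way reference gestures could be stored as identification criteria in the memory 5, with factory defaults plus user training. `extract_features` is a hypothetical front end (for example hand-pose keypoints), and cosine similarity is one arbitrary choice of matching criterion; neither is specified by the patent.

```python
# Sketch of a store of reference gestures (illustrative only).
import math

def extract_features(frame) -> list[float]:
    raise NotImplementedError  # hypothetical feature extractor

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class GestureStore:
    def __init__(self, defaults: dict[str, list[float]]):
        self.criteria = dict(defaults)  # default reference gestures

    def enroll(self, name: str, sample_frames) -> None:
        # Training with the user: average features over several samples.
        feats = [extract_features(f) for f in sample_frames]
        self.criteria[name] = [sum(col) / len(col) for col in zip(*feats)]

    def matches(self, frame, threshold: float = 0.9) -> bool:
        f = extract_features(frame)
        return any(cosine(f, ref) >= threshold
                   for ref in self.criteria.values())
```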
- both the triggering of the audio data analysis and the audio analysis itself are carried out by the device 1 (via a second input 20 and the processor 3).
- the triggering is implemented by the device 1 while the audio analysis is implemented by a third-party device to which the device 1 is connected.
- the device 1 can operate in a so-called "autonomous" mode, in the sense that the device 1 itself performs the audio analysis and, optionally, the subsequent operations.
- Such a device 1 can advantageously replace a voice assistant.
- the device 1 can also operate in a "backup" mode, in the sense that the device 1 triggers the audio analysis by a third-party device, for example by transmitting an activation signal to that device, such as the equipment referenced 60 connected to the output 40.
- the processor 3 may, optionally, be arranged to implement the analysis of the audio data in addition to its triggering.
- the triggering of the audio analysis by detection of a gesture can be combined with a triggering of the audio analysis by voice (pronunciation of one or more keywords).
- the audio analysis and the services derived from it can remain activatable, in parallel, by voice alone regardless of gestures (detected by a third-party device) as well as by gestures independently of voice (detected by the device 1).
- the trigger can also be conditioned on detecting a combination of a voice input and a reference gesture, simultaneously or in succession (a sketch follows).
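A minimal sketch of such a combined condition: both events must occur within a short window, in either order. The window length is an arbitrary illustrative value, and the event timestamps are assumed to come from the two detectors.

```python
# Sketch of a combined gesture + voice trigger condition.
import time

WINDOW_S = 2.0  # assumed tolerance between the two events

class CombinedTrigger:
    def __init__(self):
        self.last_gesture: float | None = None
        self.last_keyword: float | None = None

    def on_gesture(self) -> bool:
        self.last_gesture = time.monotonic()
        return self._armed()

    def on_keyword(self) -> bool:
        self.last_keyword = time.monotonic()
        return self._armed()

    def _armed(self) -> bool:
        # True only when both events happened within the window.
        if self.last_gesture is None or self.last_keyword is None:
            return False
        return abs(self.last_gesture - self.last_keyword) <= WINDOW_S
```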
- the triggering of the audio analysis by detection of a gesture can be exclusive of a triggering of the audio analysis by the voice.
- the device 1 can be arranged to render voices, including that of the user 100, inoperative before the audio analysis is triggered by a gesture.
- a device 1 in autonomous mode, or a system combining a device 1 in backup mode with a third-party device, can prohibit the triggering of the audio analysis by voice.
- Audio data analysis can include voice command recognition.
- Voice command recognition techniques are known as such, especially in the context of voice assistants.
- Figure 2 shows the interactions between different elements during the implementation of a method according to one embodiment.
- the user 100 performs a gesture (static or dynamic).
- the gesture is captured by a video sensor 11 connected to a first input 10 of a device 1.
- the processor 3 of the device 1 receives a video stream (or video data) including the capture of the reference gesture.
- the processor 3 may receive a substantially continuous video stream or, for example, only when motion is detected.
- the processor 3 implements an analysis operation on the received video data. The operations include attempting to identify one or more reference human gestures. If no reference gesture is detected, the rest of the process is not triggered and the device 1 remains in standby (sketched below).
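The standby loop with motion gating could look like the sketch below, which analyzes frames for reference gestures only when the scene changes, using plain frame differencing with OpenCV. The gesture classifier and the activation hook remain hypothetical placeholders.

```python
# Sketch of the motion-gated standby loop (requires opencv-python).
import cv2
import numpy as np

def detect_reference_gesture(frame) -> bool:
    raise NotImplementedError  # hypothetical classifier

def activate_audio_analysis() -> None:
    raise NotImplementedError  # hypothetical: arm the audio path

def has_motion(prev_gray, gray, pixel_thresh=25, min_changed=500) -> bool:
    # Count pixels that changed noticeably between consecutive frames.
    diff = cv2.absdiff(prev_gray, gray)
    return int(np.count_nonzero(diff > pixel_thresh)) > min_changed

def standby_loop(source=0) -> None:
    cap = cv2.VideoCapture(source)        # the video sensor on input 10
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("no video signal on the first input")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break                          # sensor lost: leave the loop
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if has_motion(prev, gray) and detect_reference_gesture(frame):
            activate_audio_analysis()      # otherwise: remain in standby
        prev = gray
    cap.release()
```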
- the processor 3 is therefore furthermore arranged to transmit a command to reduce the sound volume or to interrupt the sound broadcast if at least one reference human gesture is detected in the video data.
- the command is, for example, transmitted via the output 30 to the sound broadcasting system 50 including a loudspeaker 51, as shown in Figure 2.
- the transmission of such a command can be performed, instead or in addition, via other outputs of the device 1, such as the output 40, to third-party equipment 60.
- the processor 3 is, furthermore, arranged to trigger the emission of a visual and/or audible indicator perceptible by the user 100 upon detection of at least one reference human gesture in the video data.
- the sending of the indicator is represented by the sending of an "OK" in FIG. 2.
- the triggering of the emission of an indicator may include:
- the processor 3 is arranged to receive the audio data to be analyzed, in particular via a second input 20 and the microphone 21.
- the audio data comprise, for example, a voice command pronounced by the user 100.
- the processor 3 may, in addition, be arranged to implement an audio analysis including recognition of voice commands, then to transmit a command selected according to the results of that recognition, in particular via the outputs 30 and/or 40, to the sound broadcasting system 50 and/or a third-party device 60, respectively (a dispatch sketch follows).
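The final step could be as simple as a lookup table from the recognized text to an (output, command) pair; the phrases, command names and `send_on_output` helper below are illustrative assumptions, not part of the patent.

```python
# Sketch of selecting a command from the recognition result and routing
# it to output 30 (sound system 50) or output 40 (third-party device 60).
def send_on_output(output: str, command: str) -> None:
    raise NotImplementedError  # assumed transport helper

COMMAND_TABLE = {
    "volume up":  ("output_30", "volume_up"),
    "next track": ("output_30", "next_track"),
    "lights off": ("output_40", "lights_off"),
}

def dispatch(recognized_text: str) -> None:
    entry = COMMAND_TABLE.get(recognized_text.lower().strip())
    if entry is None:
        return                       # unrecognized command: do nothing
    output, command = entry
    send_on_output(output, command)
```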
- Device 1 has been presented in a functional state. Those skilled in the art will further understand that, in practice, the device 1 may take a temporarily inactive form, such as a system including various parts intended to cooperate with each other. Such a system may, for example, comprise a device 1 and at least one of a video sensor connectable to the first input 10, a microphone connectable to the second input 20 and a speaker 51 connectable to an output 30 of the device 1.
- the device 1 can be provided with a processing device including an operating system and programs, components, modules and/or applications in the form of software executed by the processor 3, which can be stored in a non-volatile memory such as the memory 5.
- the proposed methods, systems and devices for implementing the methods include various alternatives, modifications and enhancements that will be apparent to the skilled person, it being understood that these different variants, modifications and improvements are part of the scope of the invention, as defined by the protection sought.
- various aspects and features described above may be implemented together, or separately, or substituted for each other, and all of the various combinations and sub-combinations of aspects and features are within the scope of the invention.
- some of the systems and equipment described above may not incorporate all of the modules and features described for the preferred embodiments.
- the invention is not limited to the examples of devices, systems, methods, recording media and programs described above solely by way of example, but encompasses all variants that the person skilled in the art might consider within the framework of the protection sought.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1762353A FR3075427A1 (fr) | 2017-12-18 | 2017-12-18 | Assistant vocal |
PCT/FR2018/053158 WO2019122578A1 (fr) | 2017-12-18 | 2018-12-07 | Assistant vocal |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3729236A1 true EP3729236A1 (fr) | 2020-10-28 |
Family
ID=61521657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18833272.0A Withdrawn EP3729236A1 (fr) | 2017-12-18 | 2018-12-07 | Assistant vocal |
Country Status (4)
Country | Link |
---|---|
US (1) | US20200379731A1 (fr) |
EP (1) | EP3729236A1 (fr) |
FR (1) | FR3075427A1 (fr) |
WO (1) | WO2019122578A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7302200B2 (ja) * | 2019-02-26 | 2023-07-04 | Fujifilm Business Innovation Corp. | Information processing apparatus and program |
CN113038873A (zh) * | 2019-05-17 | 2021-06-25 | Panasonic Intellectual Property Management Co., Ltd. | Information processing method, information processing system, and information processing program |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6243683B1 (en) * | 1998-12-29 | 2001-06-05 | Intel Corporation | Video control of speech recognition |
US8532871B2 (en) * | 2007-06-05 | 2013-09-10 | Mitsubishi Electric Company | Multi-modal vehicle operating device |
EP2555536A1 (fr) * | 2011-08-05 | 2013-02-06 | Samsung Electronics Co., Ltd. | Method for controlling an electronic apparatus based on motion recognition and voice recognition, and electronic apparatus applying the same |
DE102012013503B4 (de) * | 2012-07-06 | 2014-10-09 | Audi Ag | Method and control system for operating a motor vehicle |
KR20140086302A (ko) * | 2012-12-28 | 2014-07-08 | Hyundai Motor Company | Apparatus for recognizing commands using voice and gesture, and method therefor |
JP2014153663A (ja) * | 2013-02-13 | 2014-08-25 | Sony Corp | Speech recognition device, speech recognition method, and program |
KR102160767B1 (ko) * | 2013-06-20 | 2020-09-29 | Samsung Electronics Co., Ltd. | Portable terminal and method for controlling a function by detecting a gesture |
US10431211B2 (en) * | 2016-07-29 | 2019-10-01 | Qualcomm Incorporated | Directional processing of far-field audio |
KR102399809B1 (ko) * | 2017-10-31 | 2022-05-19 | LG Electronics Inc. | Electronic device and control method thereof |
-
2017
- 2017-12-18 FR FR1762353A patent/FR3075427A1/fr active Pending
-
2018
- 2018-12-07 WO PCT/FR2018/053158 patent/WO2019122578A1/fr unknown
- 2018-12-07 US US16/954,947 patent/US20200379731A1/en not_active Abandoned
- 2018-12-07 EP EP18833272.0A patent/EP3729236A1/fr not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
FR3075427A1 (fr) | 2019-06-21 |
WO2019122578A1 (fr) | 2019-06-27 |
US20200379731A1 (en) | 2020-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11310765B2 (en) | System and method to silence other devices in response to an incoming audible communication | |
US10121465B1 (en) | Providing content on multiple devices | |
US10209951B2 (en) | Language-based muting during multiuser communications | |
US10516776B2 (en) | Volume adjusting method, system, apparatus and computer storage medium | |
CN105323648B (zh) | 字幕隐藏方法和电子装置 | |
EP2990943B1 (fr) | Procédé et système de commande de dispositif terminal intelligent | |
CN105814909B (zh) | 用于反馈检测的系统和方法 | |
US9799329B1 (en) | Removing recurring environmental sounds | |
EP2973543B1 (fr) | Fourniture de contenu sur plusieurs dispositifs | |
US10178185B2 (en) | Load-balanced, persistent connection techniques | |
US20130332168A1 (en) | Voice activated search and control for applications | |
KR102147329B1 (ko) | 영상 표시 기기 및 그의 동작 방법 | |
US20150149169A1 (en) | Method and apparatus for providing mobile multimodal speech hearing aid | |
KR102265931B1 (ko) | 음성 인식을 이용하는 통화 수행 방법 및 사용자 단말 | |
FR2997599A3 (fr) | Image processing apparatus, control method thereof, and image processing system | |
KR101874888B1 (ko) | 휴대 단말기의 이어폰 인식 방법 및 장치 | |
KR20110054609A (ko) | 블루투스 디바이스의 원격 제어 방법 및 장치 | |
US20150163610A1 (en) | Audio keyword based control of media output | |
CA3041198A1 (fr) | Beamforming control of a microphone array | |
US20230186938A1 (en) | Audio signal processing device and operating method therefor | |
WO2019122578A1 (fr) | Assistant vocal | |
US9521235B2 (en) | Two-way mirroring system for sound data | |
US10062386B1 (en) | Signaling voice-controlled devices | |
US20200043486A1 (en) | Natural language processing while sound sensor is muted | |
US10212476B2 (en) | Image display apparatus and image displaying method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20200519 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ORANGE |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20220930 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20230211 |