CN111007972A - Intelligent glasses and control method thereof - Google Patents


Info

Publication number
CN111007972A
CN111007972A
Authority
CN
China
Prior art keywords
user
menu
options
touch
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911222481.XA
Other languages
Chinese (zh)
Inventor
徐超
陈希
向文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yutou Technology Hangzhou Co Ltd
Original Assignee
Yutou Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yutou Technology Hangzhou Co Ltd filed Critical Yutou Technology Hangzhou Co Ltd
Priority to CN201911222481.XA priority Critical patent/CN111007972A/en
Publication of CN111007972A publication Critical patent/CN111007972A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a control method for smart glasses, comprising: presenting a menu containing two or more options to a user through a display device on the smart glasses, receiving a first touch instruction from the user through a touch panel on the smart glasses, and executing one option in the menu based on one or more parameters of the first touch instruction. According to the invention, the user can perform complex interactions through the touch device built into the body of the smart glasses, improving the user experience.

Description

Intelligent glasses and control method thereof
Technical Field
The invention relates to the field of interaction methods for smart glasses, and in particular to smart glasses and a control method thereof.
Background
With the development of augmented reality (AR) and virtual reality (VR) technologies, AR- and VR-based smart glasses are gradually entering the market. Unlike a smartphone, smart glasses have no large liquid-crystal screen for user interaction, so the user often needs an external controller to interact with the glasses, which is inconvenient and degrades the user experience.
For example, the main problems with operating traditional menu schemes, such as "confirm/cancel" selection pop-ups, on smart glasses are as follows:
1. Physical keys: more operation steps are required, and the physical buttons detract from the appearance of the smart glasses.
2. External touch screen: the external device adds cost, and because the page is not touched directly, auxiliary elements such as a cursor must be added, which demands a high degree of hand-eye coordination.
3. Gesture recognition: the motions are large and tiring, the computational demands on the smart glasses are high, and the recognition accuracy is low.
4. Speech recognition: it is heavily constrained by the environment and cannot recognize commands accurately in noisy surroundings.
Disclosure of Invention
The invention aims to provide novel smart glasses and a control method thereof, with which a user can perform complex interactions through a touch device built into the body of the smart glasses.
In one aspect, one or more embodiments of the present invention provide a method for controlling smart glasses, the method including: presenting a menu containing two or more options to a user through a display device on the smart glasses, receiving a first touch instruction from the user through a touch panel on the smart glasses, and executing one option in the menu based on one or more parameters of the first touch instruction.
In some embodiments, before the menu containing two or more options is presented to the user through the display device on the smart glasses, the method further includes receiving a second touch instruction from the user through the touch panel and presenting the menu according to one or more parameters of the second touch instruction. The user can thus call up the menu through the touch panel on the smart glasses.
In some other embodiments, when the menu contains two options, the parameters of the first touch instruction include a pressing time, and one option in the menu is executed according to whether the pressing time exceeds a preset time. The user can thus interact with the smart glasses conveniently through the touch menu, improving the user experience.
In some embodiments, when the menu contains more than two options and the touch panel includes a pressure sensor, the parameters of the first touch instruction include a pressing time and a pressure value; the options are mapped to different pressure ranges, and one option in the menu is executed according to whether the pressing time within a pressure range exceeds a preset time. The user can thus operate among three or more options through the pressure sensor, improving the user experience and reducing the number of operation steps.
Furthermore, the pressure value and/or pressing time of the current press is displayed to the user through the display device, and the option corresponding to the current pressure value is highlighted. This prompts the user with the current state of the press, helping the user perform the interaction correctly and improving the user experience.
In another aspect, the present invention provides smart glasses comprising a display device, a touch panel, a processor, and a memory for storing computer instructions that, when executed by the processor, cause the processor to: present a menu containing two or more options to a user through the display device, receive a first touch instruction from the user through the touch panel, and execute one option in the menu based on one or more parameters of the first touch instruction.
Compared with other interaction modes, the method of the invention requires fewer operation steps and smaller motions, places lower demands on hand-eye coordination, and is less constrained by the environment.
Drawings
Fig. 1 is a perspective view of smart glasses according to one or more embodiments of the present invention;
FIG. 2 is a flow diagram of a control method according to one or more embodiments of the invention;
FIG. 3 is a system framework diagram of a control method according to one or more embodiments of the invention;
FIG. 4 is a schematic diagram of a first example of the invention;
FIG. 5 is a schematic diagram of a second example of the invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its objects and their effects, the smart glasses and control method of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
As shown in fig. 1, fig. 1 is a diagram of smart glasses 10 according to one or more embodiments of the present invention. The smart glasses 10 may be augmented reality or virtual reality smart glasses, or any smart glasses that can display information and exchange information with the user. According to one embodiment of the present invention, the smart glasses 10 have one or more display devices 11. The display device 11 may take different forms or configurations depending on the function of the smart glasses. For example, for virtual reality smart glasses, the display device 11 may be an ordinary LED screen; for augmented reality smart glasses, it may be a transflective display, for example one composed of a Micro OLED panel and a free-form surface. The smart glasses 10 may further have a touch panel 12, which may be capacitive or resistive, for receiving the user's touch instructions, which may include pressing, sliding, clicking, and so on. In one embodiment, the touch panel 12 may incorporate a pressure sensor for detecting the amount of pressure applied by the user. The smart glasses 10 also typically contain a processor and memory (not shown) for data processing and storage.
As shown in fig. 2, fig. 2 is a flowchart of a control method according to one or more embodiments of the present invention, the control method including: S1, presenting a menu including two or more options to the user; S2, receiving a first touch instruction from the user; S3, executing one option in the menu based on one or more parameters of the first touch instruction.
In step S1, a menu including two or more options is presented to the user through the display device of the smart glasses. The menu may be presented in the form of a pop-up window. In some embodiments, the menu may include two options, such as "yes" and "no" or "cancel" and "confirm", for decision operations during user interaction. In other embodiments, the menu may include three or more options, allowing the user to select among multiple parallel items.
In step S2, a first touch instruction from the user is received through the touch panel on the smart glasses. The first touch instruction comprises the user's operations on the touch panel, such as pressing, sliding, and clicking. In the present invention, a "press" is an operation in which the user keeps pressing the touch panel until the finger leaves it, and the pressing time is usually longer than a certain threshold, for example 0.1 second. A "click" may be a single click or multiple clicks; in a single click, the user touches the touch panel once and releases within a short time, usually below a certain threshold, so that a contact shorter than 0.1 second, for example, is treated as a click rather than a press. A "slide" is a movement of the contact position across the touch panel while the user maintains contact with it.
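The press/click distinction described above can be sketched in a few lines (a hypothetical illustration; the 0.1-second threshold is the example value from the text, not a fixed part of the invention):

```python
# Classify a touch contact as a "click" or a "press" by its duration,
# using the 0.1 s example threshold from the description above.
CLICK_THRESHOLD_S = 0.1  # contacts shorter than this count as clicks

def classify_touch(touch_down_s, touch_up_s):
    """Return 'click' or 'press' based on how long the finger stayed down."""
    duration = touch_up_s - touch_down_s
    return "click" if duration < CLICK_THRESHOLD_S else "press"
```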
In step S3, one option in the menu is executed based on one or more parameters of the first touch instruction. The parameters depend on the type of touch instruction: for a "press" instruction they include the pressing time, the pressure value, and so on; for a "click" instruction, the number of clicks, the click position, and so on; and for a "slide" instruction, the sliding trajectory, sliding time, sliding rate, and so on. The user can match the parameters of a touch instruction to a specific interactive instruction in advance, so that when the system receives the user's touch instruction, the corresponding option in the menu is executed. The mapping between touch instructions and interactive instructions can be defined through an external device of the smart glasses, such as a smartphone; once defined, the mapping can be transferred back to the smart glasses through a data cable or OTA (over-the-air).
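As a rough sketch of the parameter-to-instruction matching described above (all names are hypothetical; the patent does not specify a binding format):

```python
# A user-editable table mapping touch parameters to interactive
# instructions; per the text, such a table could be defined on a paired
# smartphone and synced back to the glasses over a data cable or OTA.
bindings = {
    ("click", 2): "call_up_menu",   # two consecutive clicks call up the menu
    ("press", 1): "select_option",  # a press selects the current option
}

def dispatch(gesture, count):
    """Look up the interactive instruction bound to a touch gesture, if any."""
    return bindings.get((gesture, count))
```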
Before step S1, a menu containing two or more options may be called up by receiving a second touch instruction from the user. Like the first touch instruction, the second touch instruction also includes pressing, sliding, clicking, and so on. The interactive instruction represented by the parameters of the second touch instruction may also be predefined by the user; for example, in some common cases two consecutive "clicks" may be defined as the instruction that calls up the menu, while in other scenarios a "press" may be defined for this purpose.
When the menu presented to the user contains two options, the pressing-time parameter of a press instruction can be used for the interaction. A threshold for the pressing time may be preset, for example 1 second, 2 seconds, or 3 seconds, and whether the pressing time exceeds the threshold distinguishes the two options. Generally speaking, when the two options are "yes" and "no", a pressing time exceeding the threshold is defined as the user selecting "yes", which better matches users' interaction habits. Of course, in some special scenarios, a pressing time exceeding the threshold may instead be defined as the user selecting "no".
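The two-option case reduces to a single threshold comparison, for example (a minimal sketch; the 2-second threshold is one of the example values above):

```python
PRESS_THRESHOLD_S = 2.0  # preset threshold; the text suggests 1, 2, or 3 seconds

def choose_binary_option(press_time_s):
    """Select "yes" when the press exceeds the threshold, "no" otherwise."""
    return "yes" if press_time_s >= PRESS_THRESHOLD_S else "no"
```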
When the menu presented to the user contains three or more options, the pressure-value parameter of a press instruction may be used for the interaction. In this case the touch panel needs to include a pressure sensor, that is, a device that senses a pressure signal and converts it into a usable electrical output according to a certain rule; an existing capacitive or piezoelectric pressure sensor may be used, and the details are not repeated here. Different pressure ranges can be assigned to different options, and when the user maintains the pressure within a range for a certain preset time, the touch instruction is converted into an action that executes the corresponding option.
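The pressure-range lookup might be sketched as follows (a hypothetical illustration; the range boundaries, option names, and hold time are placeholders):

```python
# Map a normalized pressure reading to the menu option whose assigned
# range contains it; an option fires only after the pressure has been
# held in its range for the preset hold time.
RANGES = [
    ((0.1, 0.4), "option_a"),
    ((0.4, 0.7), "option_b"),
    ((0.7, 1.0), "option_c"),
]
HOLD_TIME_S = 2.0  # preset time the pressure must stay within a range

def option_for(pressure, held_s):
    """Return the option to execute, or None if no option is triggered yet."""
    if held_s < HOLD_TIME_S:
        return None
    for (lo, hi), name in RANGES:
        if lo <= pressure < hi:
            return name
    return None
```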
In some embodiments, the pressure value and/or pressing time of the user's current press can be shown on the display device for the user's reference, and the option corresponding to the current pressure value can be highlighted or indicated in some other way, helping the user perceive the change in pressure and make the correct selection. In addition, the different menus and operations corresponding to different pressure values can be shown to the user in advance, so that the user knows which menu will be entered and which operation will be performed, improving the efficiency of the interaction.
Based on this control method for smart glasses, the user can complete complex interactions through the touch panel alone, without the help of an external controller. Calling up a menu on the smart glasses and selecting from it requires only a single action (a press), which reduces the user's operations and improves the user experience.
Fig. 3 is a system block diagram of a control method according to one or more embodiments of the present invention. The system comprises a touch panel 301, a display device 302, a processor 303, and a memory 304. The memory 304 is configured to store computer instructions that, when executed by the processor, present a menu comprising two or more options to the user via the display device, receive a first touch instruction from the user via the touch panel, and execute one of the options in the menu based on one or more parameters of the first touch instruction. Other features of the system are similar to those described in relation to figures 1 and 2 and are not repeated here.
Two examples of the control method for smart glasses disclosed in the present invention are further described below with reference to figs. 4 and 5.
Example one: "delete photo" menu function description
On the photo display interface, when the user presses the touch panel for more than 2 seconds, the "delete photo" function menu is called up; as shown in fig. 4, it contains the two options "yes" and "no". If, after the menu appears, the user continues to press the touch panel for a preset time, such as 2, 3, or 4 seconds, the system determines that the user has selected "yes", deletes the photo, and moves to the display interface of the next photo. If the pressing time does not reach the preset time, the system determines that the user has selected "no", exits the function menu, and returns to the display interface of the original photo.
Thus the user can complete the complex interaction of deleting a photo with a single press, improving the user experience.
Example two: "share photos" menu function description
On the photo display interface, the user taps the touch panel to call up the "share photos" function menu; as shown in fig. 5, it includes the three options "Weibo", "WeChat", and "QQ". The total pressure range of the touch panel can be normalized to 0-1.0, with 0.1-0.4 assigned to the Weibo option, 0.4-0.7 to the WeChat option, and 0.7-1.0 to the QQ option. When the menu appears, the option currently selected by the user is highlighted according to the pressure with which the user presses the touch panel, and the share action is executed once the user holds the pressure within that range for a preset time (for example, 2 seconds). If the user's pressure is detected in the 0-0.1 range for a preset time (for example, 1 second), the sharing interface is exited and the original photo display is restored.
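Example two can be expressed directly with the numbers given above (a sketch only; the share-target identifiers are hypothetical):

```python
# Pressure ranges for the "share photos" menu of example two: 0.1-0.4
# selects Weibo, 0.4-0.7 WeChat, 0.7-1.0 QQ; holding 0-0.1 for 1 s exits.
SHARE_RANGES = {"weibo": (0.1, 0.4), "wechat": (0.4, 0.7), "qq": (0.7, 1.0)}
SHARE_HOLD_S = 2.0  # hold time to confirm a share target
EXIT_HOLD_S = 1.0   # hold time in the 0-0.1 range to exit the menu

def resolve_share(pressure, held_s):
    """Return the share target, "exit", or None if nothing is triggered yet."""
    if pressure < 0.1:
        return "exit" if held_s >= EXIT_HOLD_S else None
    for target, (lo, hi) in SHARE_RANGES.items():
        if lo <= pressure < hi:
            return target if held_s >= SHARE_HOLD_S else None
    return None
```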
Thus the user can complete the complex interaction of sharing a photo with a single press, improving the user experience.
It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Furthermore, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". The term "some" means one or more unless specifically stated otherwise. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed in this disclosure is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means-plus-function element unless the element is expressly recited using the phrase "means for".
Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, the phrase "X employs A or B" is intended to mean any of the natural inclusive permutations; it is satisfied by any of the following: X employs A; X employs B; or X employs both A and B. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. The terms "first", "second", and the like are used solely to distinguish one element from another, and do not denote any order, whether temporal or spatial.

Claims (10)

1. A method of controlling smart glasses, comprising:
presenting, through a display device on the smart glasses, a menu including two or more options to a user,
receiving a first touch instruction from the user through a touch panel on the smart glasses,
and executing one option in the menu based on one or more parameters of the first touch instruction.
2. The method of claim 1, further comprising, before presenting the menu including two or more options to the user through the display device on the smart glasses:
receiving a second touch instruction from the user through the touch panel,
and presenting the menu containing two or more options to the user according to one or more parameters of the second touch instruction.
3. The method of claim 1, wherein, when the menu contains two options,
the parameters of the first touch instruction include: a pressing time,
and one option in the menu is executed according to whether the pressing time exceeds a preset time.
4. The method of claim 1, wherein, when the menu includes more than two options,
the touch panel comprises a pressure sensor, and the parameters of the first touch instruction include: a pressing time and a pressure value,
the more than two options are mapped to different pressure ranges,
and one option in the menu is executed according to whether the pressing time within a pressure range exceeds a preset time.
5. The method of claim 4, further comprising:
displaying the pressure value and/or pressing time of the current press to the user through the display device,
and highlighting the option corresponding to the current pressure value.
6. Smart glasses, comprising:
a display device,
a touch panel,
a processor, and
a memory for storing computer instructions that, when executed by the processor, cause the processor to:
present, through the display device, a menu including two or more options to a user,
receive a first touch instruction from the user through the touch panel,
and execute one option in the menu based on one or more parameters of the first touch instruction.
7. The smart glasses of claim 6, wherein the computer instructions, when executed by the processor, cause the processor to, before presenting a menu comprising two or more options to the user through the display device:
receive a second touch instruction from the user through the touch panel,
and present the menu containing two or more options to the user according to one or more parameters of the second touch instruction.
8. The smart glasses of claim 6, wherein the computer instructions, when executed by the processor, further cause the processor to,
when the menu contains two options
and the parameters of the first touch instruction include a pressing time,
execute one option in the menu according to whether the pressing time exceeds a preset time.
9. The smart glasses of claim 6, wherein the touch panel includes a pressure sensor, and the computer instructions, when executed by the processor, further cause the processor to,
when the menu contains more than two options
and the parameters of the first touch instruction include a pressing time and a pressure value,
map the more than two options to different pressure ranges,
and execute one option in the menu according to whether the pressing time within a pressure range exceeds a preset time.
10. The smart glasses of claim 9, wherein the computer instructions, when executed by the processor, further cause the processor to:
display the pressure value and/or pressing time of the current press to the user through the display device,
and highlight the option corresponding to the current pressure value.
CN201911222481.XA 2019-12-03 2019-12-03 Intelligent glasses and control method thereof Pending CN111007972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911222481.XA CN111007972A (en) 2019-12-03 2019-12-03 Intelligent glasses and control method thereof


Publications (1)

Publication Number Publication Date
CN111007972A true CN111007972A (en) 2020-04-14

Family

ID=70114945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911222481.XA Pending CN111007972A (en) 2019-12-03 2019-12-03 Intelligent glasses and control method thereof

Country Status (1)

Country Link
CN (1) CN111007972A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022227464A1 (en) * 2021-04-27 2022-11-03 歌尔股份有限公司 Temple structure, preparation method therefor, and head-mounted display device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662573A (en) * 2012-03-24 2012-09-12 上海量明科技发展有限公司 Method and terminal for obtaining options by pressing
CN105068721A (en) * 2015-08-27 2015-11-18 广东欧珀移动通信有限公司 Operation menu display method and terminal
CN105224186A (en) * 2015-07-09 2016-01-06 北京君正集成电路股份有限公司 A kind of screen display method of intelligent glasses and intelligent glasses
US20160004306A1 (en) * 2010-07-23 2016-01-07 Telepatheye Inc. Eye-wearable device user interface and augmented reality method
CN107479691A (en) * 2017-07-06 2017-12-15 捷开通讯(深圳)有限公司 A kind of exchange method and its intelligent glasses and storage device


Similar Documents

Publication Publication Date Title
US20220100368A1 (en) User interfaces for improving single-handed operation of devices
US11941243B2 (en) Handwriting keyboard for screens
US9965074B2 (en) Device, method, and graphical user interface for transitioning between touch input to display output relationships
US11922518B2 (en) Managing contact information for communication applications
US10037138B2 (en) Device, method, and graphical user interface for switching between user interfaces
US10466861B2 (en) Adaptive user interfaces
US10318525B2 (en) Content browsing user interface
US9612697B2 (en) Touch control method of capacitive and electromagnetic dual-mode touch screen and handheld electronic device
US20120179963A1 (en) Multi-touch electronic device, graphic display interface thereof and object selection method of multi-touch display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination