WO2019148904A1 - Method for zooming a smart glasses screen, and smart glasses - Google Patents

Method for zooming a smart glasses screen, and smart glasses

Info

Publication number
WO2019148904A1
WO2019148904A1 (application PCT/CN2018/111803, CN2018111803W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
display content
hand
screen display
screen
Prior art date
Application number
PCT/CN2018/111803
Other languages
English (en)
French (fr)
Inventor
刘天一
文凯
Original Assignee
北京亮亮视野科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京亮亮视野科技有限公司 filed Critical 北京亮亮视野科技有限公司
Publication of WO2019148904A1 publication Critical patent/WO2019148904A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • The invention relates to the field of smart glasses, and in particular to a method for zooming the screen content of smart glasses, and to smart glasses.
  • Smart devices include smart wearable devices such as smart watches and smart glasses, which not only provide the basic functions of traditional watches and glasses but also offer functions such as making calls, browsing the Internet and taking photos.
  • In current smart glasses, because the glasses screen is limited in size, the glasses usually provide a screen magnification function.
  • At present, zooming the screen in or out mostly relies on the user's eyeballs or on buttons. Because the screen is small and the glasses themselves are small, the user cannot achieve precise control with eye movement and misoperations often occur, and the buttons are small and inconvenient to operate. Relying only on eye movement or button control therefore makes enlarging or reducing the screen very inconvenient for the user.
  • To address this, an embodiment of the invention provides a method for zooming the screen of smart glasses, and smart glasses.
  • An embodiment of the present invention provides a method for zooming a smart glasses screen, the method including:
  • the screen of the smart glasses acquires display content;
  • sensing the user's limb motion includes collecting and recognizing the user's hand motion in front of the screen.
  • Recognizing the user's hand motion in front of the screen includes, among other steps,
  • selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
  • Enlarging or reducing the screen display content includes
  • restoring the screen display content to the state before it was enlarged or expanded, according to a recognized second user hand motion or when the user's hand motion disappears.
  • Expanding the information of the screen area corresponding to the hand position includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • Sensing the user's limb motion further includes sensing a virtual distance of the user's head relative to the screen display content.
  • Enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
  • restoring the screen display content to the state before it was enlarged or expanded.
  • Expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • An embodiment of the present invention further provides smart glasses, where the smart glasses include a controller and an image collector.
  • The image collector is configured to collect an image of the user's hand in front of the smart glasses screen and send the hand image to the controller;
  • the controller is configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion.
  • An embodiment of the invention further provides smart glasses, the smart glasses including a controller and a depth detection module,
  • the depth detection module is configured to sense a virtual distance of the user's head relative to a reference position in space and send the virtual distance to the controller;
  • the controller is configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance.
  • The depth detection module includes, but is not limited to: a combination of a single camera and a gyroscope, a dual camera, a time-of-flight camera, a structured-light camera, or an all-pixel dual-core (dual-pixel) autofocus camera.
  • By sensing the virtual distance of the user relative to a reference position in space, the invention enlarges or reduces the screen display content of the smart glasses, thereby avoiding misoperation by the user and achieving the purpose of conveniently zooming the display content of the smart glasses screen.
  • FIG. 1 is a flowchart of a method for zooming a smart glasses screen according to an embodiment of the present invention;
  • FIG. 2 is a feature image of a user's hand according to an embodiment of the present invention;
  • FIG. 3A, FIG. 3B and FIG. 3C are schematic diagrams of a screen zooming method according to an embodiment of the present invention;
  • FIG. 4A, FIG. 4B and FIG. 4C are schematic diagrams of another screen zooming method according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of smart glasses according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of other smart glasses according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for zooming a smart glasses screen, and smart glasses.
  • FIG. 1 is a flowchart of a method for zooming a smart glasses screen according to an embodiment of the present invention, where the method includes:
  • Step S1: the screen of the smart glasses acquires display content;
  • Step S2: the user's limb motion is sensed, and the screen display content is enlarged or reduced.
  • In this embodiment, the smart glasses present the display content to the user through the screen; when the user wants to enlarge or reduce the screen display content, the smart glasses zoom the screen display content by sensing the user's limb motion.
  • As an embodiment of the invention, sensing the user's limb motion includes collecting and recognizing the user's hand motion in front of the screen.
  • The smart glasses collect and recognize the gesture made by the user in front of the screen of the smart glasses, and enlarge or reduce the screen display content.
  • Recognizing the user's hand motion in front of the screen includes, among other steps,
  • selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
  • Preset feature images of the gestures related to the enlarging or reducing operation may be set and stored in the smart glasses; that is, multiple groups of preset relative-position information for hand key points are set, each group of preset relative-position information corresponding to one gesture.
  • The smart glasses collect a feature image of the user's hand in front of the screen, first determine from the hand contour information whether the image shows a hand, and then determine from the hand position information where the hand corresponds to on the smart glasses screen.
  • The key points of the hand are then determined in the hand feature image. As shown in FIG. 2, the circles inside the hand contour in FIG. 2 represent key points at different positions of the hand.
  • The relative-position information of the key points in the user's hand feature image is thus obtained and matched against the multiple groups of preset relative-position information; the hand motion corresponding to the preset relative-position information with the smallest error is taken as the user's hand motion represented by the current hand image.
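As an illustration of this matching step, here is a minimal sketch in Python. The key-point layout, the gesture templates and the mean-squared-error metric are assumptions made for the example; the patent does not specify the exact error measure or the number of key points.

```python
import numpy as np

# Hypothetical preset templates: for each gesture, relative positions of a few
# hand key points (wrist, finger tips, ...) normalised to the wrist.
GESTURE_TEMPLATES = {
    "open_palm":  np.array([[0.0, 0.0], [0.0, 1.0], [-0.6, 0.9], [0.6, 0.9], [0.3, 1.1]]),
    "pinch_open": np.array([[0.0, 0.0], [0.4, 0.8], [-0.4, 0.8], [0.1, 0.3], [-0.1, 0.3]]),
    "fist":       np.array([[0.0, 0.0], [0.1, 0.4], [-0.1, 0.4], [0.0, 0.5], [0.2, 0.4]]),
}

def normalise(keypoints: np.ndarray) -> np.ndarray:
    """Express key points relative to the first point (wrist), at unit scale."""
    rel = keypoints - keypoints[0]
    scale = float(np.linalg.norm(rel, axis=1).max()) or 1.0
    return rel / scale

def match_gesture(keypoints: np.ndarray) -> tuple:
    """Return the preset gesture whose relative-position template has the smallest error."""
    rel = normalise(keypoints)
    best_name, best_err = "", float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        err = float(np.mean((rel - normalise(template)) ** 2))
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err

# Key points detected from one hand feature image (hypothetical pixel coordinates).
detected = np.array([[10.0, 20.0], [10.0, 80.0], [-25.0, 75.0], [45.0, 75.0], [28.0, 86.0]])
print(match_gesture(detected))  # e.g. ('open_palm', 0.0...)
```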
  • the enlarging or reducing the screen display content includes,
  • the screen display content is restored to a state before being enlarged or expanded according to the recognized second user hand motion or the user's hand motion disappearing.
  • the smart glasses can collect a plurality of user hand feature images for comparison, thereby determining the user's hand motion.
  • the screen display content is enlarged or expanded according to the first user's hand motion, and the first user's hand motion may be, for example, opening the palm or opening the index finger and the thumb; waiting for the user's hand movement Disappearing from the front of the screen, or collecting the second user's hand motion, restoring the screen display content to the state before zooming in or unfolding, the second user's hand motion may be, for example, a fist punch, the screen display content is restored to the original size, or expanded. The screen displays the content back.
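A minimal sketch of this control flow follows (Python). The gesture names and the zoom factor are illustrative assumptions rather than values taken from the patent; the logic simply enlarges the content on a first recognized gesture and restores it when a second gesture is recognized or the hand leaves the frame.

```python
from typing import Optional

class ScreenZoomController:
    """Toy state machine: enlarge on a first gesture, restore on a second or when the hand disappears."""

    def __init__(self, zoom_factor: float = 1.5):
        self.zoom_factor = zoom_factor   # illustrative value; the patent does not fix a factor
        self.scale = 1.0
        self.enlarged = False

    def on_frame(self, gesture: Optional[str]) -> float:
        """gesture is e.g. 'open_palm', 'pinch_open', 'fist', or None when no hand is visible."""
        if not self.enlarged and gesture in ("open_palm", "pinch_open"):
            self.scale = self.zoom_factor     # first user hand motion: enlarge/expand
            self.enlarged = True
        elif self.enlarged and (gesture == "fist" or gesture is None):
            self.scale = 1.0                  # second motion, or hand gone: restore
            self.enlarged = False
        return self.scale

controller = ScreenZoomController()
for g in ["open_palm", "open_palm", None]:
    print(controller.on_frame(g))             # 1.5, 1.5, 1.0
```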
  • Expanding the information of the screen area corresponding to the hand position includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • After the position of the user's hand relative to the screen is determined from the hand position information, the screen display content is enlarged or expanded according to the user's hand motion; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
  • Sensing the user's limb motion further includes sensing a virtual distance of the user's head relative to the screen display content.
  • The smart glasses enlarge or reduce the screen display content by sensing the virtual distance between the user's head and the screen display content.
  • Enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
  • restoring the screen display content to the state before it was enlarged or expanded.
  • The initial value of the virtual distance is set according to the user's personal preference or a default value of the smart glasses.
  • When the smart glasses sense that the user's head moves toward the screen display content, that is, when the virtual distance of the user's head relative to the screen display content becomes smaller, the screen display content is enlarged or expanded; when the smart glasses sense that the user's head moves away from the screen display content, that is, when the virtual distance returns to the initial value, the screen display content is reduced to its size before enlargement, or the expanded screen display content is retracted.
  • To prevent the smart glasses from misjudging the user's head movement (for example, when the user is walking the head also moves toward the screen, but in that case the whole body is moving, not just the head), the glasses determine whether the head movement occurs only along the direction of the user's line of sight, and how long that movement lasts; a deliberate movement of the head toward the screen usually lasts no more than one second. When walking, by contrast, the head necessarily also moves perpendicular to the line of sight, and that movement lasts longer than one second. In such cases the display content of the smart glasses screen does not change when the user's head moves, so misjudgment by the smart glasses is avoided.
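One way such a check could be implemented is sketched below (Python). The one-second limit reflects the figure given in the description, while the perpendicular-motion tolerance, the sampling format and the function name are assumptions made for the example.

```python
import numpy as np

def is_deliberate_approach(positions: np.ndarray,
                           timestamps: np.ndarray,
                           gaze_dir: np.ndarray,
                           max_duration_s: float = 1.0,
                           lateral_tolerance_m: float = 0.03) -> bool:
    """Decide whether a head trajectory is a deliberate move toward the screen.

    positions:  (N, 3) head positions over the movement, in metres.
    timestamps: (N,)   matching timestamps, in seconds.
    gaze_dir:   (3,)   unit vector along the user's line of sight.
    A deliberate approach moves (almost) only along the line of sight and lasts
    no longer than about one second; walking adds perpendicular motion and lasts
    longer, and should not trigger zooming.
    """
    displacement = positions[-1] - positions[0]
    along = float(np.dot(displacement, gaze_dir))               # motion toward the screen
    lateral = float(np.linalg.norm(displacement - along * gaze_dir))  # perpendicular motion
    duration = float(timestamps[-1] - timestamps[0])
    return along > 0 and lateral <= lateral_tolerance_m and duration <= max_duration_s

# Example (hypothetical samples): a short, straight-ahead head movement.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.05], [0.0, 0.0, 0.10]])
ts = np.array([0.0, 0.3, 0.6])
print(is_deliberate_approach(pos, ts, np.array([0.0, 0.0, 1.0])))  # True
```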
  • Expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • When it is determined that the virtual distance between the user's head and the screen display content has become smaller, the screen display content is enlarged or expanded.
  • The expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
  • With the method in this embodiment, the screen display content is enlarged or reduced after the user's limb motion is sensed, which prevents misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
  • FIG. 3A, FIG. 3B and FIG. 3C are schematic diagrams of a screen zooming method according to an embodiment of the present invention.
  • FIG. 3A shows the screen display content presented in front of the user when wearing the smart glasses.
  • FIG. 3B shows the user placing a hand in front of the smart glasses screen and opening the palm; the smart glasses collect and recognize the user's hand motion and enlarge the screen display content.
  • FIG. 3C shows that after the user puts the hand down, the screen display content of the smart glasses returns to its size before enlargement.
  • The user can also enlarge the screen display content by spreading the thumb and index finger, and can restore the screen display content to its original size by closing the open hand into a fist again.
  • With the method in this embodiment, the smart glasses enlarge or reduce the screen display content after sensing the user's hand motion, which prevents misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
  • FIG. 4A, FIG. 4B and FIG. 4C are schematic diagrams of another screen zooming method according to an embodiment of the present invention.
  • FIG. 4A shows the screen display content presented in front of the user when the user wears the smart glasses.
  • FIG. 4B shows the user's head moving closer to the screen display content; the smart glasses sense that the virtual distance between the user's head and the screen display content has become smaller and expand the screen display content, the expanded content being detailed information of the original screen display content, including its dialog box.
  • FIG. 4C shows the user's head returning to its original position, that is, the virtual distance between the user's head and the screen display content returns to the initial value, and the screen display content is restored to the state before expansion.
  • The smart glasses need to determine that the direction of the user's head movement is parallel to the user's line of sight, and also need to determine the duration of the head movement. This ensures that no misjudgment occurs; that is, when the purpose of the head movement is not to move closer to the screen in order to see the display content clearly, the screen display content of the smart glasses does not change.
  • With the method in this embodiment, the smart glasses enlarge or reduce the screen display content after sensing the user's head motion, which prevents misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
  • FIG. 5 is a schematic structural diagram of smart glasses according to an embodiment of the present invention.
  • The smart glasses 10 in the figure include a controller 30 and an image collector 20.
  • The image collector 20 is configured to collect an image of the user's hand in front of the screen of the smart glasses 10 and send the hand image to the controller 30;
  • the controller 30 is configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion.
  • The smart glasses 10 present the display content to the user through the screen.
  • When the user wants to enlarge or reduce the screen display content, the smart glasses 10 collect the user's hand motion through the image collector 20, and the controller 30 recognizes the hand motion so as to zoom the screen display content.
  • The image collector 20 collects the user's hand motion in front of the screen, and the controller 30 is used to recognize the hand motion.
  • Recognizing the user's hand motion in front of the screen includes, among other steps,
  • selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
  • Preset feature images of the gestures related to the enlarging or reducing operation may be set and stored in the controller 30; that is, multiple groups of preset relative-position information for hand key points are set, each group of preset relative-position information corresponding to one gesture.
  • The image collector 20 collects a feature image of the user's hand in front of the screen; the controller 30 first determines from the hand contour information whether the image shows a hand, and then determines from the hand position information where the hand corresponds to on the smart glasses screen.
  • The key points of the hand are then determined in the hand feature image. As shown in FIG. 2, the circles inside the hand contour in FIG. 2 represent key points at different positions of the hand.
  • The relative-position information of the key points in the user's hand feature image is thus obtained and matched against the multiple groups of preset relative-position information; the hand motion corresponding to the preset relative-position information with the smallest error is taken as the user's hand motion represented by the current hand image.
  • Enlarging or reducing the screen display content includes
  • restoring the screen display content to the state before it was enlarged or expanded, according to a recognized second user hand motion or when the user's hand motion disappears.
  • Since a hand motion is a dynamic process, the image collector 20 collects several hand feature images of the user for comparison, thereby determining the user's hand motion.
  • After the controller 30 recognizes the user's hand motion, the screen display content is enlarged or expanded according to the first user hand motion, which may be, for example, opening the palm or spreading the index finger and thumb. When the user's hand motion disappears from in front of the screen, or when a second user hand motion is collected and recognized, the screen display content is restored to the state before it was enlarged or expanded; the second user hand motion may be, for example, making a fist, upon which the screen display content returns to its original size or the expanded content is retracted.
  • Expanding the information of the screen area corresponding to the hand position includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • After the position of the user's hand relative to the screen is determined from the hand position information, the screen display content is enlarged or expanded according to the user's hand motion; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
  • With the smart glasses in this embodiment, the screen display content is enlarged or reduced after the user's hand motion is sensed, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
  • FIG. 6 is a schematic structural diagram of other smart glasses according to an embodiment of the present invention.
  • The smart glasses 10 shown in the figure include a controller 30 and a depth detection module 40.
  • The depth detection module 40 uses SLAM (Simultaneous Localization and Mapping) technology to sense a virtual distance of the user's head relative to a reference position in space, and sends the virtual distance to the controller 30.
  • The controller 30 is configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance.
  • The smart glasses 10 use the depth detection module 40 to select a target plane in space as the reference position, and present the display content to the user by forming a virtual screen between the smart glasses 10 and the target plane.
  • The distance of the virtual screen relative to the target plane is set to be fixed.
  • When the user wants to enlarge or reduce the screen display content, the smart glasses 10 determine the virtual distance between the user's head and the target plane in space through the depth detection module 40; since the distance of the virtual screen relative to the target plane is fixed, the controller 30 can determine, from changes in the virtual distance, how the distance between the user's head and the virtual screen changes, and thus zoom the screen display content.
  • The depth detection module 40 may use a combination of a single camera and a gyroscope, or any camera supporting depth detection, such as a dual camera, a time-of-flight camera, a structured-light camera or an all-pixel dual-core (dual-pixel) autofocus camera.
  • The virtual distance between the depth detection module 40 and the target plane in space is detected and updated in real time, so that the real-time relative position of the user's head in space is obtained.
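As a rough illustration of this geometry (Python; the fixed offset between the target plane and the virtual screen, and the example depth readings, are assumed values, and real distances would come from whatever SLAM or depth camera the module uses), the distance from the user's head to the virtual screen can be derived from the measured head-to-plane distance:

```python
VIRTUAL_SCREEN_OFFSET_M = 0.8   # assumed fixed distance from the target plane to the virtual screen

def head_to_virtual_screen(head_to_plane_m: float) -> float:
    """Distance from the user's head to the virtual screen.

    The virtual screen sits at a fixed offset in front of the target plane,
    so the head-to-screen distance is the measured head-to-plane distance
    minus that constant offset.
    """
    return head_to_plane_m - VIRTUAL_SCREEN_OFFSET_M

# Example with hypothetical depth readings (metres) arriving from the module:
for depth in (2.0, 1.8, 1.6):
    print(round(head_to_virtual_screen(depth), 2))   # 1.2, 1.0, 0.8
```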
  • The smart glasses 10 sense the virtual distance between the user's head and the target plane in space through the depth detection module 40 and send the virtual distance to the controller 30; the controller 30 determines, from changes in the virtual distance, how the distance between the user's head and the virtual screen changes, and enlarges or reduces the screen display content.
  • Enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
  • restoring the screen display content to the state before it was enlarged or expanded.
  • The initial value of the virtual distance is set through the controller 30 according to the user's personal preference or a default value of the smart glasses.
  • When the depth detection module 40 senses that the user's head moves toward the screen display content, that is, when the virtual distance becomes smaller, the controller 30 enlarges or expands the screen display content; when the depth detection module 40 senses that the user's head moves away from the screen display content, that is, when the virtual distance returns to the initial value, the controller 30 reduces the screen display content to its size before enlargement, or retracts the expanded screen display content.
  • To prevent the smart glasses 10 from misjudging the user's head movement (for example, when the user is walking the head also moves toward the screen, but in that case the whole body is moving, not just the head), it is determined whether the head movement occurs only along the direction of the user's line of sight, and how long that movement lasts; a deliberate movement of the head toward the screen usually lasts no more than one second. When walking, by contrast, the head necessarily also moves perpendicular to the line of sight, and that movement lasts longer than one second. In such cases the display content of the smart glasses screen does not change when the user's head moves, so misjudgment by the smart glasses is avoided. The one second mentioned here is only an example.
  • Expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  • When it is determined that the virtual distance between the user's head and the screen display content has become smaller, the screen display content is enlarged or expanded.
  • The expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
  • With the smart glasses in this embodiment, the screen display content is enlarged or reduced after the change in the virtual distance between the user's head and the screen display content is sensed, which prevents misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a method for zooming the screen of smart glasses, and smart glasses. The method includes: the screen of the smart glasses acquires display content; a user's limb motion is sensed, and the screen display content is enlarged or reduced. The present invention further provides smart glasses including a controller and an image collector. The image collector is configured to collect an image of the user's hand in front of the screen of the smart glasses and send the hand image to the controller; the controller is configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion. The present invention further provides smart glasses including a controller and a depth detection module. The depth detection module is configured to sense a virtual distance of the user's head relative to a reference position in space; the controller is configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance.

Description

Method for Zooming a Smart Glasses Screen, and Smart Glasses
Technical Field
The present invention relates to the field of smart glasses, and in particular to a method for zooming the screen of smart glasses, and to smart glasses.
Background
With the development of technology, smart devices have gradually entered people's lives; their convenience, ease of operation and wide applicability bring great benefits to daily life. Smart devices include smart wearable devices such as smart watches and smart glasses, which not only provide the basic functions of traditional watches and glasses but also offer functions such as making calls, browsing the Internet and taking photos.
In current smart glasses, because the glasses screen is limited in size, the glasses usually provide a screen magnification function. However, enlarging or reducing the screen currently relies mostly on the user's eyeballs or on buttons. Because the screen is small and the glasses themselves are small, the user cannot achieve precise control with eye movement and misoperations often occur, and the buttons are small and inconvenient to operate. Relying only on eye movement or button control therefore makes enlarging or reducing the screen very inconvenient for the user.
Summary of the Invention
In order to solve the problem that zooming the screen of smart glasses is currently inconvenient, embodiments of the present invention provide a method for zooming a smart glasses screen, and smart glasses.
To achieve the above object, an embodiment of the present invention provides a method for zooming a smart glasses screen, the method including:
the screen of the smart glasses acquiring display content;
sensing a user's limb motion, and enlarging or reducing the screen display content.
Sensing the user's limb motion includes collecting and recognizing the user's hand motion in front of the screen.
Recognizing the user's hand motion in front of the screen includes:
setting multiple groups of preset relative-position information for hand key points, and setting the hand motion corresponding to each group of preset relative-position information;
collecting a feature image of the user's hand, and determining hand position information and hand contour information from the hand feature image;
determining key points of the fingers, palm and wrist from the hand feature image;
generating relative-position information of the key points and comparing it with the multiple groups of preset relative-position information;
selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
Enlarging or reducing the screen display content includes:
according to a recognized first user hand motion, enlarging the screen display content or expanding the information of the screen area corresponding to the hand position information;
according to a recognized second user hand motion, or when the user's hand motion disappears, restoring the screen display content to the state before it was enlarged or expanded.
Expanding the information of the screen area corresponding to the hand position information includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
As a further improvement, sensing the user's limb motion further includes sensing a virtual distance of the user's head relative to the screen display content.
Enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
when the virtual distance of the user's head relative to the screen display content becomes smaller, enlarging the screen display content or expanding the information of the screen display content;
when the virtual distance of the user's head relative to the screen display content returns to the initial value, restoring the screen display content to the state before it was enlarged or expanded.
Expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
An embodiment of the present invention further provides smart glasses, the smart glasses including a controller and an image collector,
the image collector being configured to collect an image of the user's hand in front of the screen of the smart glasses and send the hand image to the controller;
the controller being configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion.
An embodiment of the present invention further provides smart glasses, the smart glasses including a controller and a depth detection module,
the depth detection module using SLAM technology and being configured to sense a virtual distance of the user's head relative to a reference position in space and send the virtual distance to the controller;
the controller being configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance.
The depth detection module includes, but is not limited to:
a single camera combined with a gyroscope, a dual camera, a time-of-flight camera, a structured-light camera, or an all-pixel dual-core (dual-pixel) autofocus camera.
By sensing the virtual distance of the user relative to a reference position in space, the present invention enlarges or reduces the screen display content of the smart glasses, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for zooming a smart glasses screen according to an embodiment of the present invention;
FIG. 2 is a feature image of a user's hand according to an embodiment of the present invention;
FIG. 3A, FIG. 3B and FIG. 3C are schematic diagrams of a screen zooming method according to an embodiment of the present invention;
FIG. 4A, FIG. 4B and FIG. 4C are schematic diagrams of another screen zooming method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of smart glasses according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of other smart glasses according to an embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention provide a method for zooming a smart glasses screen, and smart glasses.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 is a flowchart of a method for zooming a smart glasses screen according to an embodiment of the present invention; the method shown in the figure includes:
Step S1: the screen of the smart glasses acquires display content;
Step S2: the user's limb motion is sensed, and the screen display content is enlarged or reduced.
In this embodiment, the smart glasses present the display content to the user through the screen; when the user wants to enlarge or reduce the screen display content, the smart glasses zoom the screen display content by sensing the user's limb motion.
As an embodiment of the present invention, sensing the user's limb motion includes collecting and recognizing the user's hand motion in front of the screen.
The smart glasses collect and recognize the gesture made by the user in front of the screen of the smart glasses, and enlarge or reduce the screen display content.
In this embodiment, recognizing the user's hand motion in front of the screen includes:
setting multiple groups of preset relative-position information for hand key points, and setting the hand motion corresponding to each group of preset relative-position information;
collecting a feature image of the user's hand, and determining hand position information and hand contour information from the hand feature image;
determining key points of the fingers, palm and wrist from the hand feature image;
generating relative-position information of the key points and comparing it with the multiple groups of preset relative-position information;
selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
Preset feature images of the gestures related to the enlarging or reducing operation may be set and stored in the smart glasses; that is, multiple groups of preset relative-position information for hand key points are set, each group corresponding to one gesture. The smart glasses collect a feature image of the user's hand in front of the screen, first determine from the hand contour information whether the image shows a hand, and then determine from the hand position information where the hand corresponds to on the smart glasses screen. The key points of the hand are then determined in the hand feature image; as shown in FIG. 2, the circles inside the hand contour in FIG. 2 represent key points at different positions of the hand. The relative-position information of the key points in the user's hand feature image is thus obtained and matched against the multiple groups of preset relative-position information; the hand motion corresponding to the preset relative-position information with the smallest error is taken as the user's hand motion represented by the current hand feature image.
In this embodiment, enlarging or reducing the screen display content includes:
according to a recognized first user hand motion, enlarging the screen display content or expanding the information of the screen area corresponding to the hand position information;
according to a recognized second user hand motion, or when the user's hand motion disappears, restoring the screen display content to the state before it was enlarged or expanded.
Since a hand motion is a dynamic process, the smart glasses may collect several hand feature images of the user and compare them in order to determine the user's hand motion. After the smart glasses recognize the user's hand motion, the screen display content is enlarged or expanded according to the first user hand motion, which may be, for example, opening the palm or spreading the index finger and thumb. When the user's hand motion disappears from in front of the screen, or when a second user hand motion is collected, the screen display content is restored to the state before it was enlarged or expanded; the second user hand motion may be, for example, making a fist, upon which the screen display content returns to its original size or the expanded screen display content is retracted.
In this embodiment, expanding the information of the screen area corresponding to the hand position information includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
After the position of the user's hand relative to the screen is determined from the hand position information, the screen display content is enlarged or expanded according to the user's hand motion; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
As an embodiment of the present invention, sensing the user's limb motion further includes sensing a virtual distance of the user's head relative to the screen display content.
The smart glasses enlarge or reduce the screen display content by sensing the virtual distance between the user's head and the screen display content.
In this embodiment, enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
when the virtual distance of the user's head relative to the screen display content becomes smaller, enlarging the screen display content or expanding the information of the screen display content;
when the virtual distance of the user's head relative to the screen display content returns to the initial value, restoring the screen display content to the state before it was enlarged or expanded.
The initial value of the virtual distance is set according to the user's personal preference or a default value of the smart glasses. When the smart glasses sense that the user's head moves toward the screen display content, that is, when the virtual distance of the user's head relative to the screen display content becomes smaller, the screen display content is enlarged or expanded; when the smart glasses sense that the user's head moves away from the screen display content, that is, when the virtual distance of the user's head relative to the screen display content returns to the initial value, the screen display content is reduced to its size before enlargement, or the expanded screen display content is retracted.
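As a simple illustration of this behaviour (Python; the initial distance, the trigger threshold and the zoom factor are assumed values chosen for the example, not figures from the patent), the sketch below enlarges the content when the virtual distance drops below the preset initial value and restores it once the distance returns to that value:

```python
class HeadDistanceZoom:
    """Toy controller: enlarge when the head gets closer than the preset
    initial virtual distance, restore when it returns to the initial value."""

    def __init__(self, initial_distance_m: float = 1.0,
                 approach_threshold_m: float = 0.15,
                 zoom_factor: float = 1.5):
        self.initial = initial_distance_m        # preset initial virtual distance
        self.threshold = approach_threshold_m    # how much closer counts as "approaching"
        self.zoom_factor = zoom_factor
        self.scale = 1.0

    def update(self, virtual_distance_m: float) -> float:
        if virtual_distance_m <= self.initial - self.threshold:
            self.scale = self.zoom_factor        # head moved closer: enlarge/expand
        elif virtual_distance_m >= self.initial:
            self.scale = 1.0                     # back at the initial value: restore
        return self.scale

zoom = HeadDistanceZoom()
for d in (1.0, 0.9, 0.8, 1.0):                   # hypothetical distance readings in metres
    print(zoom.update(d))                        # 1.0, 1.0, 1.5, 1.0
```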
To prevent the smart glasses from misjudging the user's head movement (for example, when the user is walking the head also moves toward the screen, but in that case the whole body, not just the head, is moving), it is necessary to determine whether the head movement occurs only along the direction of the user's line of sight, and how long the movement in that direction lasts; a deliberate movement of the head toward the screen usually lasts no more than one second. When walking, by contrast, the head necessarily also moves perpendicular to the line of sight, and that movement lasts longer than one second. In such cases the display content of the smart glasses screen does not change when the user's head moves, and misjudgment by the smart glasses is thereby avoided.
In this embodiment, expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
When it is determined that the virtual distance between the user's head and the screen display content has become smaller, the screen display content is enlarged or expanded; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
With the method in the embodiments of the present invention, the smart glasses enlarge or reduce the screen display content after sensing the user's limb motion, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
FIG. 3A, FIG. 3B and FIG. 3C are schematic diagrams of a screen zooming method according to an embodiment of the present invention. FIG. 3A shows the screen display content presented in front of the user when wearing the smart glasses; FIG. 3B shows the user placing a hand in front of the smart glasses screen and opening the palm, whereupon the smart glasses collect and recognize the user's hand motion and enlarge the screen display content; FIG. 3C shows that after the user puts the hand down, the screen display content of the smart glasses returns to its size before enlargement.
In this embodiment, the user can also enlarge the screen display content by spreading the thumb and index finger, and can restore the screen display content to its original size by closing the open hand into a fist again.
With the method in the embodiments of the present invention, the smart glasses enlarge or reduce the screen display content after sensing the user's hand motion, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
FIG. 4A, FIG. 4B and FIG. 4C are schematic diagrams of another screen zooming method according to an embodiment of the present invention. FIG. 4A shows the screen display content presented in front of the user when wearing the smart glasses; FIG. 4B shows the user's head moving closer to the screen display content, whereupon the smart glasses sense that the virtual distance between the user's head and the screen display content has become smaller and expand the screen display content, the expanded content being detailed information of the original screen display content, including its dialog box; FIG. 4C shows the user's head returning to its original position, that is, the virtual distance between the user's head and the screen display content returns to the initial value, and the screen display content is restored to the state before expansion.
In this embodiment, the smart glasses must determine that the direction of the user's head movement is parallel to the direction of the user's line of sight, and must also determine the duration of the head movement. This ensures that no misjudgment occurs; that is, when the purpose of the head movement is not to move closer to the screen in order to see the display content clearly, the screen display content of the smart glasses does not change.
With the method in the embodiments of the present invention, the smart glasses enlarge or reduce the screen display content after sensing the user's head motion, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
FIG. 5 is a schematic structural diagram of smart glasses according to an embodiment of the present invention. The smart glasses 10 in the figure include a controller 30 and an image collector 20,
the image collector 20 being configured to collect an image of the user's hand in front of the screen of the smart glasses 10 and send the hand image to the controller 30;
the controller 30 being configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion.
In this embodiment, the smart glasses 10 present the display content to the user through the screen; when the user wants to enlarge or reduce the screen display content, the smart glasses 10 collect the user's hand motion through the image collector 20, and the controller 30 recognizes the hand motion so as to zoom the screen display content.
As an embodiment of the present invention, the image collector 20 collects the user's hand motion in front of the screen, and the controller 30 is used to recognize the hand motion.
In this embodiment, recognizing the user's hand motion in front of the screen includes:
setting multiple groups of preset relative-position information for hand key points, and setting the hand motion corresponding to each group of preset relative-position information;
collecting a feature image of the user's hand, and determining hand position information and hand contour information from the hand feature image;
determining key points of the fingers, palm and wrist from the hand feature image;
generating relative-position information of the key points and comparing it with the multiple groups of preset relative-position information;
selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
Preset feature images of the gestures related to the enlarging or reducing operation may be set and stored in the controller 30; that is, multiple groups of preset relative-position information for hand key points are set, each group corresponding to one gesture. The image collector 20 collects a feature image of the user's hand in front of the screen; the controller 30 first determines from the hand contour information whether the image shows a hand, and then determines from the hand position information where the hand corresponds to on the smart glasses screen. The key points of the hand are then determined in the hand feature image; as shown in FIG. 2, the circles inside the hand contour in FIG. 2 represent key points at different positions of the hand. The relative-position information of the key points in the user's hand feature image is thus obtained and matched against the multiple groups of preset relative-position information; the hand motion corresponding to the preset relative-position information with the smallest error is taken as the user's hand motion represented by the current hand feature image.
In this embodiment, enlarging or reducing the screen display content includes:
according to a recognized first user hand motion, enlarging the screen display content or expanding the information of the screen area corresponding to the hand position information;
according to a recognized second user hand motion, or when the user's hand motion disappears, restoring the screen display content to the state before it was enlarged or expanded.
Since a hand motion is a dynamic process, the image collector 20 collects several hand feature images of the user and compares them in order to determine the user's hand motion. After the controller 30 recognizes the user's hand motion, the screen display content is enlarged or expanded according to the first user hand motion, which may be, for example, opening the palm or spreading the index finger and thumb. When the user's hand motion disappears from in front of the screen, or when a second user hand motion is collected and recognized, the screen display content is restored to the state before it was enlarged or expanded; the second user hand motion may be, for example, making a fist, upon which the screen display content returns to its original size or the expanded screen display content is retracted.
In this embodiment, expanding the information of the screen area corresponding to the hand position information includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
After the position of the user's hand relative to the screen is determined from the hand position information, the screen display content is enlarged or expanded according to the user's hand motion; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
With the smart glasses in the embodiments of the present invention, the screen display content is enlarged or reduced after the user's hand motion is sensed, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
FIG. 6 is a schematic structural diagram of other smart glasses according to an embodiment of the present invention. The smart glasses 10 shown in the figure include a controller 30 and a depth detection module 40,
the depth detection module 40 using SLAM (Simultaneous Localization and Mapping) technology and being configured to sense a virtual distance of the user's head relative to a reference position in space and send the virtual distance to the controller 30;
the controller 30 being configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance. In this embodiment, the smart glasses 10 use the depth detection module 40 to select a target plane in space as the reference position, and present the display content to the user by forming a virtual screen between the smart glasses 10 and the target plane; the distance of the virtual screen relative to the target plane is set to be fixed. When the user wants to enlarge or reduce the screen display content, the smart glasses 10 determine the virtual distance between the user's head and the target plane in space through the depth detection module 40; since the distance of the virtual screen relative to the target plane is fixed, the controller 30 can determine, from changes in the virtual distance, how the distance between the user's head and the virtual screen changes, and thus zoom the screen display content.
As an embodiment of the present invention, the depth detection module 40 may use a combination of a single camera and a gyroscope, or any camera supporting depth detection, such as a dual camera, a time-of-flight camera, a structured-light camera or an all-pixel dual-core (dual-pixel) autofocus camera, to detect and update the virtual distance between the depth detection module 40 and the target plane in space in real time, thereby obtaining the real-time relative position of the user's head in space.
The smart glasses 10 sense the virtual distance between the user's head and the target plane in space through the depth detection module 40 and send the virtual distance to the controller 30; the controller 30 determines, from changes in the virtual distance, how the distance between the user's head and the virtual screen changes, and enlarges or reduces the screen display content.
In this embodiment, enlarging or reducing the screen display content includes: presetting an initial value of the virtual distance;
when the virtual distance of the user's head relative to the screen display content becomes smaller, enlarging the screen display content or expanding the information of the screen display content;
when the virtual distance of the user's head relative to the screen display content returns to the initial value, restoring the screen display content to the state before it was enlarged or expanded.
The initial value of the virtual distance is set through the controller 30 according to the user's personal preference or a default value of the smart glasses. When the depth detection module 40 senses that the user's head moves toward the screen display content, that is, when the virtual distance of the user's head relative to the screen display content becomes smaller, the controller 30 enlarges or expands the screen display content; when the depth detection module 40 senses that the user's head moves away from the screen display content, that is, when the virtual distance returns to the initial value, the controller 30 reduces the screen display content to its size before enlargement, or retracts the expanded screen display content.
To prevent the smart glasses 10 from misjudging the user's head movement (for example, when the user is walking the head also moves toward the screen, but in that case the whole body, not just the head, is moving), it is necessary to determine whether the head movement occurs only along the direction of the user's line of sight, and how long the movement in that direction lasts; a deliberate movement of the head toward the screen usually lasts no more than one second. When walking, by contrast, the head necessarily also moves perpendicular to the line of sight, and that movement lasts longer than one second. In such cases the display content of the smart glasses screen does not change when the user's head moves, and misjudgment by the smart glasses is thereby avoided. The one second mentioned here is only an example.
In this embodiment, expanding the information of the screen display content includes expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
When it is determined that the virtual distance between the user's head and the screen display content has become smaller, the screen display content is enlarged or expanded; the expanded screen content may be detailed information of the content in the corresponding area or its secondary menu.
With the smart glasses in the embodiments of the present invention, the screen display content is enlarged or reduced after a change in the virtual distance between the user's head and the screen display content is sensed, which avoids misoperation by the user and achieves the purpose of conveniently zooming the display content of the smart glasses screen.
A person of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be implemented by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc.
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (11)

  1. A method for zooming a smart glasses screen, wherein the method comprises:
    the screen of the smart glasses acquiring display content;
    sensing a user's limb motion, and enlarging or reducing the screen display content.
  2. The method according to claim 1, wherein sensing the user's limb motion comprises collecting and recognizing the user's hand motion in front of the screen.
  3. The method according to claim 2, wherein recognizing the user's hand motion in front of the screen comprises:
    setting multiple groups of preset relative-position information for hand key points, and setting the hand motion corresponding to each group of preset relative-position information;
    collecting a feature image of the user's hand, and determining hand position information and hand contour information from the hand feature image;
    determining key points of the fingers, palm and wrist from the hand feature image;
    generating relative-position information of the key points and comparing it with the multiple groups of preset relative-position information;
    selecting the hand motion corresponding to the preset relative-position information with the smallest error after comparison as the user's hand motion.
  4. The method according to claim 3, wherein enlarging or reducing the screen display content comprises:
    according to a recognized first user hand motion, enlarging the screen display content or expanding the information of the screen area corresponding to the hand position information;
    according to a recognized second user hand motion, or when the user's hand motion disappears, restoring the screen display content to the state before it was enlarged or expanded.
  5. The method according to claim 4, wherein expanding the information of the screen area corresponding to the hand position information comprises expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  6. The method according to claim 1, wherein sensing the user's limb motion further comprises sensing a virtual distance of the user's head relative to the screen display content.
  7. The method according to claim 6, wherein enlarging or reducing the screen display content comprises: presetting an initial value of the virtual distance;
    when the virtual distance of the user's head relative to the screen display content becomes smaller, enlarging the screen display content or expanding the information of the screen display content;
    when the virtual distance of the user's head relative to the screen display content returns to the initial value, restoring the screen display content to the state before it was enlarged or expanded.
  8. The method according to claim 7, wherein expanding the information of the screen display content comprises expanding detailed information of the screen display content or expanding a secondary menu of the screen display content.
  9. Smart glasses, wherein the smart glasses comprise a controller and an image collector,
    the image collector being configured to collect an image of the user's hand in front of the screen of the smart glasses and send the hand image to the controller;
    the controller being configured to recognize the user's hand motion from the hand image and to enlarge or reduce the screen display content according to the hand motion.
  10. Smart glasses, wherein the smart glasses comprise a controller and a depth detection module,
    the depth detection module using SLAM technology and being configured to sense a virtual distance of the user's head relative to a reference position in space and send the virtual distance to the controller;
    the controller being configured to enlarge or reduce the screen display content of the smart glasses according to changes in the virtual distance.
  11. The smart glasses according to claim 10, wherein the depth detection module comprises, but is not limited to:
    a single camera combined with a gyroscope, a dual camera, a time-of-flight camera, a structured-light camera, or an all-pixel dual-core (dual-pixel) autofocus camera.
PCT/CN2018/111803 2018-01-30 2018-10-25 Method for zooming a smart glasses screen, and smart glasses WO2019148904A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810089021.3A CN110096926A (zh) 2018-01-30 2018-01-30 Method for zooming a smart glasses screen, and smart glasses
CN201810089021.3 2018-01-30

Publications (1)

Publication Number Publication Date
WO2019148904A1 (zh)

Family

ID=67442720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111803 WO2019148904A1 (zh) 2018-01-30 2018-10-25 Method for zooming a smart glasses screen, and smart glasses

Country Status (2)

Country Link
CN (1) CN110096926A (zh)
WO (1) WO2019148904A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033341B (zh) * 2021-03-09 2024-04-19 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060001647A1 (en) * 2004-04-21 2006-01-05 David Carroll Hand-held display device and method of controlling displayed content
CN101788876A (zh) * 2009-01-23 2010-07-28 英华达(上海)电子有限公司 Method and system for automatic zoom adjustment
CN107168637A (zh) * 2017-07-23 2017-09-15 刘慧� Intelligent terminal that zooms its display by means of zoom gestures

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150346813A1 (en) * 2014-06-03 2015-12-03 Aaron Michael Vargas Hands free image viewing on head mounted display
CN104793749B (zh) * 2015-04-30 2018-11-30 小米科技有限责任公司 Smart glasses and control method and apparatus therefor
US20170061696A1 (en) * 2015-08-31 2017-03-02 Samsung Electronics Co., Ltd. Virtual reality display apparatus and display method thereof
CN105242776A (zh) * 2015-09-07 2016-01-13 北京君正集成电路股份有限公司 Control method for smart glasses, and smart glasses
CN105867608A (zh) * 2015-12-25 2016-08-17 乐视致新电子科技(天津)有限公司 Method and apparatus for paging through the function menu of a virtual reality headset, and headset
CN106204431B (zh) * 2016-08-24 2019-08-16 中国科学院深圳先进技术研究院 Display method and apparatus for smart glasses
CN107544673A (zh) * 2017-08-25 2018-01-05 上海视智电子科技有限公司 Somatosensory interaction method and somatosensory interaction system based on depth map information


Also Published As

Publication number Publication date
CN110096926A (zh) 2019-08-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903155

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/11/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18903155

Country of ref document: EP

Kind code of ref document: A1