WO2017219643A1 - 3D effect generation for input text, and 3D display method and system for input text - Google Patents

3D effect generation for input text, and 3D display method and system for input text

Info

Publication number
WO2017219643A1
WO2017219643A1 (PCT/CN2016/113227)
Authority
WO
WIPO (PCT)
Prior art keywords
contour feature
dimensional
input text
rendering
feature information
Prior art date
Application number
PCT/CN2016/113227
Other languages
English (en)
French (fr)
Inventor
张强
Original Assignee
广州视睿电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视睿电子科技有限公司
Publication of WO2017219643A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Definitions

  • The present invention relates to the field of multimedia technologies, and in particular to 3D effect generation for input text and a 3D display method and system for input text.
  • When text is entered at the input position of a piece of software or a client, the text is usually displayed in the two-dimensional plane that contains the input position. To preview the 3D (three-dimensional) effect of the text, the user has to finish entering the text, select it, and then process it with dedicated software to obtain the 3D effect, so the 3D effect is generated inefficiently and is difficult to generate while the text is being entered, which degrades the display of the input text.
  • A method for generating a 3D effect of input text includes the following steps:
  • A system for generating a 3D effect of input text includes:
  • a reading module, configured to read contour feature information of the input text;
  • a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
  • an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
  • a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
  • The above method and system for generating a 3D effect of input text map the contour feature information of the input text onto a two-dimensional plane, sample the points corresponding to the contour feature information to obtain a contour feature point set, and set the third-dimensional coordinate of each point in the set to obtain the spatial figure corresponding to the input text; the spatial figure is then rendered according to rendering parameters to form a 3D rendering of the input text. This improves the efficiency of generating the 3D effect, allows the corresponding 3D rendering to be generated while the text is being entered, achieves display of the input text in 3D form, and further improves the display effect of the input text.
  • A 3D display method for input text includes the following steps:
  • displaying the 3D rendering on a display interface.
  • A 3D display system for input text includes:
  • a reading module, configured to read contour feature information of the input text;
  • a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
  • an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
  • a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text;
  • a display module, configured to display the 3D rendering on the display interface.
  • The above 3D display method and system for input text can generate the corresponding 3D rendering while text is being entered and display the 3D rendering of the input text on a display interface, thereby achieving 3D display of the text being entered and improving the efficiency with which the input text is displayed in 3D.
  • FIG. 1 is a flowchart of a method for generating a 3D effect of input text according to an embodiment;
  • FIG. 2 is a schematic diagram of the mapping of contour feature information of input text onto a two-dimensional plane according to an embodiment;
  • FIG. 3 is a 3D rendering of input text according to an embodiment;
  • FIG. 4 is a schematic structural diagram of a system for generating a 3D effect of input text according to an embodiment;
  • FIG. 5 is a flowchart of a 3D display method for input text according to an embodiment;
  • FIG. 6 is a schematic structural diagram of a 3D display system for input text according to an embodiment.
  • FIG. 1 is a flowchart of a method for generating a 3D effect of input text according to an embodiment, which includes the following steps.
  • The input text is text that is being entered. It may be text entered in real time while office software (such as Word or Excel) is in use, or text entered into an edit box of a related application.
  • The text is an image or symbol carrying language, and may include Chinese characters, letters, numbers and the like.
  • The contour feature information of the input text may include the figure corresponding to the input text.
  • A schematic diagram of mapping the contour feature information onto the two-dimensional plane may be as shown in FIG. 2; the figure of the input text can then be read from that two-dimensional plane.
  • The contour feature point set can represent the outline or figure of the input text; that is, tracing the points in the contour feature point set yields the outline or figure of the input text.
  • In this step, the two coordinate axes of the coordinate system of the two-dimensional plane may be taken as the first-dimensional and second-dimensional coordinate axes, and the third-dimensional coordinate axis may be set along the straight line that passes through the origin of that coordinate system and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes, giving the corresponding three-dimensional coordinate system.
  • After the third-dimensional coordinate of each point in the contour feature point set is set, the spatial figure corresponding to the input text is obtained; the third-dimensional coordinates of the points in the contour feature point set may all be the same.
  • Typically, the third-dimensional coordinate of each point in the contour feature point set may be set to a value between 0 and 50 pixels.
  • S40: render the spatial figure according to the rendering parameters to obtain a 3D rendering of the input text.
  • The spatial figure may be decomposed into several planar geometric shapes (such as triangles or quadrilaterals), and each planar shape is then rendered accordingly.
  • The rendering parameters may include information such as a color parameter or a lighting parameter.
  • The above method for generating a 3D effect of input text maps the contour feature information of the input text onto a two-dimensional plane, samples the points corresponding to the contour feature information to obtain a contour feature point set, and sets the third-dimensional coordinate of each point in the set to obtain the spatial figure corresponding to the input text; the spatial figure is then rendered according to the rendering parameters to form a 3D rendering of the input text. This improves the efficiency of generating the 3D effect, allows the corresponding 3D rendering to be generated for the text being entered, achieves display of the input text in 3D form, and further improves the display effect of the input text.
  • After the step of rendering the spatial figure according to the rendering parameters to obtain the 3D rendering of the input text, the method may further include:
  • The 3D rendering is the spatial figure corresponding to the input text. To improve the consistency of the 3D rendering and make its outline more vivid, the contour feature information may be pasted onto the 3D rendering, so that the contour feature information of the input text completely covers each point of the contour feature point set in the 3D rendering (the ends of the spatial line segments corresponding to those points), fitting the contour feature information to the contour feature point set and further improving the 3D display effect of the input text.
  • The character figure in the contour feature information may be pasted onto the front and back faces of the 3D rendering, so that the contour feature information in FIG. 3 covers the line ends on the front and back of the corresponding 3D rendering; the 3D rendering can then express the corresponding input text more clearly from both the front and the back.
  • The process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text may include:
  • selecting points on the boundary of the contour feature information, where the polyline formed by connecting the selected points in sequence coincides with the boundary of the contour feature information;
  • Points on the boundary of the contour feature information are selected first, which ensures that the outline corresponding to the resulting contour feature point set matches the outline of the input text; points are then taken at random from the non-boundary portion of the contour feature information, which further ensures the completeness of the generated contour feature point set.
  • Alternatively, the process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text may include:
  • The distance between two adjacent points may be set according to the font size of the input text, for example to 1% of the width or height of the input text. Starting from some position on the character outline in the contour feature information, points are taken in every direction at the set interval to generate the contour feature point set corresponding to the contour feature information, so that the contour feature point set can represent the outline or figure of the input text.
  • The process of establishing the three-dimensional coordinate system on the coordinate system of the two-dimensional plane and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system to obtain the spatial figure of the input text may include:
  • setting, as the third-dimensional coordinate axis, the straight line that passes through the origin of the coordinate system of the two-dimensional plane and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes;
  • setting the third-dimensional coordinate of each point in the contour feature point set to a preset pixel value, to obtain the spatial figure of the input text.
  • The preset pixel value may be set according to the font size of the input text, for example to a value between 0 and 50 pixels.
  • After the third-dimensional coordinate of each point is set, a spatial line segment corresponding to each point on the two-dimensional plane is obtained; one end of each spatial line segment lies on the two-dimensional plane and the other end lies on the plane whose third-dimensional coordinate equals the preset pixel value.
  • The set of these spatial line segments forms the spatial figure of the input text.
  • The process of rendering the spatial figure according to the rendering parameters may include:
  • reading multiple points on each of the two faces of the spatial figure, where one face of the spatial figure lies on the two-dimensional plane and the other face lies on the plane corresponding to the preset pixel value;
  • connecting any two adjacent points on one face with the corresponding point on the other face to obtain multiple triangles, and rendering the triangles according to the rendering parameters.
  • Rendering is performed on the triangles formed from the points on the two faces of the spatial figure, which ensures the accuracy of the rendering of the spatial figure.
  • The rendering parameters include a color parameter and a lighting parameter, and the process of rendering the triangles according to the rendering parameters may include:
  • brushing each triangle with color according to the color parameter, and adjusting the brightness of each color-brushed triangle according to the lighting parameter.
  • The color parameter and the lighting parameter may respectively include color information and lighting information (brightness information) preset according to the display requirements of the input text.
  • FIG. 4 is a schematic structural diagram of a system for generating a 3D effect of input text according to an embodiment, which includes:
  • a reading module 10, configured to read contour feature information of the input text;
  • a mapping module 20, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
  • an establishing module 30, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
  • a rendering module 40, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
  • The system for generating a 3D effect of input text provided by the present invention corresponds one-to-one with the method for generating a 3D effect of input text provided by the present invention; the technical features set forth in the embodiments of the method, and their beneficial effects, all apply to the embodiments of the system, and this is hereby stated.
  • FIG. 5 is a flowchart of a 3D display method for inputting text according to an embodiment, which includes the following steps:
  • The input text is text that is being entered. It may be text entered in real time while office software (such as Word or Excel) is in use, or text entered into an edit box of a related application.
  • The text is an image or symbol carrying language, and may include Chinese characters, letters, numbers and the like.
  • The contour feature information of the input text may include the figure corresponding to the input text.
  • A schematic diagram of mapping the contour feature information onto the two-dimensional plane may be as shown in FIG. 2; the figure of the input text can then be read from that two-dimensional plane.
  • The contour feature point set can represent the outline or figure of the input text; that is, tracing the points in the contour feature point set yields the outline or figure of the input text.
  • Step S20 may include:
  • selecting points on the boundary of the contour feature information, where the polyline formed by connecting the selected points in sequence coincides with the boundary of the contour feature information;
  • Points on the boundary of the contour feature information are selected first, which ensures that the outline corresponding to the resulting contour feature point set matches the outline of the input text; points are then taken at random from the non-boundary portion of the contour feature information, which further ensures the completeness of the generated contour feature point set.
  • Alternatively, step S20 may include:
  • The distance between two adjacent points may be set according to the font size of the input text, for example to 1% of the width or height of the input text. Starting from some position on the character outline in the contour feature information, points are taken in every direction at the set interval to generate the contour feature point set corresponding to the contour feature information, so that the contour feature point set can represent the outline or figure of the input text.
  • In this step, the two coordinate axes of the coordinate system of the two-dimensional plane may be taken as the first-dimensional and second-dimensional coordinate axes, and the third-dimensional coordinate axis may be set along the straight line that passes through the origin of that coordinate system and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes, giving the corresponding three-dimensional coordinate system.
  • After the third-dimensional coordinate of each point in the contour feature point set is set, the spatial figure corresponding to the input text is obtained; the third-dimensional coordinates of the points may all be the same, and may typically be set to a value between 0 and 50 pixels.
  • Step S30 may include:
  • setting, as the third-dimensional coordinate axis, the straight line that passes through the origin of the coordinate system of the two-dimensional plane and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes;
  • setting the third-dimensional coordinate of each point in the contour feature point set to a preset pixel value, to obtain the spatial figure of the input text.
  • The spatial figure may be decomposed into several planar geometric shapes (such as triangles or quadrilaterals), and each planar shape is then rendered accordingly.
  • The rendering parameters may include information such as a color parameter, a lighting parameter or a brightness parameter.
  • The process of rendering the spatial figure according to the rendering parameters may include:
  • reading multiple points on each of the two faces of the spatial figure, where one face of the spatial figure lies on the two-dimensional plane and the other face lies on the plane corresponding to the preset pixel value;
  • connecting any two adjacent points on one face with the corresponding point on the other face to obtain multiple triangles, and rendering the triangles according to the rendering parameters.
  • The 3D rendering may be displayed in the display area corresponding to the input text, or at the input position, so that the user entering the text can obtain the 3D effect of the input text in time and adjust the rendering parameters (such as the color parameter or the lighting parameter) according to that 3D effect.
  • After the step of reading the contour feature information of the input text, the method may further include obtaining the input position of the input text on the display interface.
  • The process of displaying the 3D rendering on the display interface may include:
  • sending the 3D rendering to the input position for display.
  • The above 3D display method for input text can generate the corresponding 3D rendering while text is being entered and display the 3D rendering of the input text on a display interface, thereby achieving 3D display of the text being entered and improving the efficiency with which the input text is displayed in 3D.
  • FIG. 6 is a schematic structural diagram of a 3D display system for inputting text according to an embodiment, including:
  • a reading module 10, configured to read contour feature information of the input text;
  • a mapping module 20, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
  • an establishing module 30, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
  • a rendering module 40, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text;
  • a display module 50, configured to display the 3D rendering on the display interface.
  • The above 3D display system for input text corresponds one-to-one with the corresponding 3D display method for input text; the technical features set forth in the embodiments of the 3D display method, and their beneficial effects, all apply to the embodiments of the 3D display system, and this is hereby stated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

3D effect generation for input text, and a 3D display method and system for input text. The method for generating a 3D effect of input text includes: reading contour feature information of the input text (S10); mapping the contour feature information onto a two-dimensional plane, and sampling points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text (S20); establishing a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text (S30), where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis; and rendering the spatial figure according to rendering parameters to obtain a 3D rendering of the input text (S40). The method can generate the corresponding 3D rendering for text that is being entered, achieves display of the input text in 3D form, and further improves the display effect of the input text.

Description

3D effect generation for input text, and 3D display method and system for input text
TECHNICAL FIELD
The present invention relates to the field of multimedia technologies, and in particular to 3D effect generation for input text and a 3D display method and system for input text.
BACKGROUND
With the popularity of smart terminal tools such as computers, tablet computers and smartphones, people rely more and more, in both work and daily life, on applications such as office software and related clients running on these smart terminals. Entering and displaying text are basic functions of such office software and clients.
When text is entered at the input position of a piece of software or a client, the text is usually displayed in the two-dimensional plane that contains the input position. To preview the 3D (three-dimensional) effect of the text, the user has to wait until the text has been entered, select it, and then process it with dedicated software to obtain the 3D effect of the text. As a result, the 3D effect is generated inefficiently and is difficult to generate while the text is being entered, which degrades the display of the input text.
SUMMARY
On this basis, in view of the technical problem that conventional solutions generate the 3D effect of input text inefficiently, it is necessary to provide 3D effect generation for input text and a 3D display method and system for input text.
A method for generating a 3D effect of input text includes the following steps:
reading contour feature information of the input text;
mapping the contour feature information onto a two-dimensional plane, and sampling points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text;
establishing a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
rendering the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
A system for generating a 3D effect of input text includes:
a reading module, configured to read contour feature information of the input text;
a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
The above method and system for generating a 3D effect of input text map the contour feature information of the input text onto a two-dimensional plane, sample the points corresponding to the contour feature information to obtain a contour feature point set, and set the third-dimensional coordinate of each point in the set to obtain the spatial figure corresponding to the input text; the spatial figure is then rendered according to the rendering parameters to form a 3D rendering of the input text. This improves the efficiency of generating the 3D effect, allows the corresponding 3D rendering to be generated while the text is being entered, achieves display of the input text in 3D form, and further improves the display effect of the input text.
A 3D display method for input text includes the following steps:
reading contour feature information of the input text;
mapping the contour feature information onto a two-dimensional plane, and sampling points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text;
establishing a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
rendering the spatial figure according to rendering parameters to obtain a 3D rendering of the input text;
displaying the 3D rendering on a display interface.
A 3D display system for input text includes:
a reading module, configured to read contour feature information of the input text;
a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text;
a display module, configured to display the 3D rendering on the display interface.
The above 3D display method and system for input text can generate the corresponding 3D rendering while text is being entered and display the 3D rendering of the input text on a display interface, thereby achieving 3D display of the text being entered and improving the efficiency with which the input text is displayed in 3D.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a method for generating a 3D effect of input text according to an embodiment;
FIG. 2 is a schematic diagram of the mapping of contour feature information of input text onto a two-dimensional plane according to an embodiment;
FIG. 3 is a 3D rendering of input text according to an embodiment;
FIG. 4 is a schematic structural diagram of a system for generating a 3D effect of input text according to an embodiment;
FIG. 5 is a flowchart of a 3D display method for input text according to an embodiment;
FIG. 6 is a schematic structural diagram of a 3D display system for input text according to an embodiment.
DETAILED DESCRIPTION
Specific embodiments of the 3D effect generation for input text and of the 3D display method and system for input text of the present invention are described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of a method for generating a 3D effect of input text according to an embodiment, which includes the following steps:
S10: read contour feature information of the input text.
The input text is text that is being entered. It may be text entered in real time while office software (such as Word or Excel) is in use, or text entered into an edit box of a related application. The text is an image or symbol that carries language, and may include Chinese characters, letters, numbers and the like. The contour feature information of the input text may include the figure corresponding to the input text.
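The patent does not prescribe how the contour feature information is obtained. As a purely illustrative sketch, one way to read a character's outline as 2-D contour points in Python is to extract the glyph path with matplotlib's TextPath; the library choice and the font size are assumptions, not part of the disclosure.

```python
# Illustrative only: reading the outline of an input character as 2-D contour points.
# TextPath and the 100 px size used here are assumptions, not the patent's API.
from matplotlib.textpath import TextPath

def read_contour_feature_info(char: str, font_size: int = 100):
    path = TextPath((0, 0), char, size=font_size)   # glyph outline as a vector path
    return path.to_polygons()                       # list of (N, 2) arrays, one closed contour each
```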
S20: map the contour feature information onto a two-dimensional plane, and sample points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text.
A schematic diagram of mapping the contour feature information onto the two-dimensional plane may be as shown in FIG. 2; the figure of the input text can then be read from that two-dimensional plane. The contour feature point set can represent the outline or figure of the input text; that is, tracing the points in the contour feature point set yields the outline or figure of the input text.
S30: establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis.
In this step, the two coordinate axes of the coordinate system of the two-dimensional plane may be taken as the first-dimensional and second-dimensional coordinate axes, and the third-dimensional coordinate axis may be set along the straight line that passes through the origin of that coordinate system and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes, giving the corresponding three-dimensional coordinate system. After the third-dimensional coordinate of each point in the contour feature point set is set, the spatial figure corresponding to the input text is obtained. The third-dimensional coordinates of the points in the contour feature point set may all be the same; typically they may be set to a value between 0 and 50 pixels.
S40: render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
In step S40, the spatial figure may be decomposed into several planar geometric shapes (such as triangles or quadrilaterals), and each planar shape is then rendered accordingly. The rendering parameters may include information such as a color parameter or a lighting parameter; after the spatial figure is rendered with these parameters, the 3D rendering corresponding to the input text is obtained.
The method for generating a 3D effect of input text provided by this embodiment maps the contour feature information of the input text onto a two-dimensional plane, samples the points corresponding to the contour feature information to obtain a contour feature point set, and sets the third-dimensional coordinate of each point in the set to obtain the spatial figure corresponding to the input text; the spatial figure is then rendered according to the rendering parameters to form a 3D rendering of the input text. This improves the efficiency of generating the 3D effect, allows the corresponding 3D rendering to be generated for the text being entered, achieves display of the input text in 3D form, and further improves the display effect of the input text.
In an embodiment, after the step of rendering the spatial figure according to the rendering parameters to obtain the 3D rendering of the input text, the method may further include:
pasting the contour feature information onto the 3D rendering so that the contour feature information fits the contour feature point set.
The 3D rendering is the spatial figure corresponding to the input text. To improve the consistency of the 3D rendering and make its outline more vivid, the contour feature information may be pasted onto the 3D rendering, so that the contour feature information of the input text completely covers each point of the contour feature point set in the 3D rendering (the ends of the spatial line segments corresponding to those points), fitting the contour feature information to the contour feature point set and further improving the 3D display effect of the input text. Preferably, as shown in FIG. 3, the character figure in the contour feature information may be pasted onto the front and back faces of the 3D rendering, so that the contour feature information in FIG. 3 covers the line ends on the front and back of the corresponding 3D rendering; the 3D rendering can then express the corresponding input text more clearly from both the front and the back.
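A minimal sketch of one way this pasting step could be realised, assuming the glyph image is applied as a texture whose UV coordinates are the normalised 2-D positions of the contour points; the normalisation scheme is an assumption, since the patent does not specify the mapping.

```python
# Hedged sketch: UV coordinates that let the 2-D glyph image cover the front and
# back faces of the extruded text.
import numpy as np

def face_uvs(points_2d: np.ndarray) -> np.ndarray:
    mins = points_2d.min(axis=0)
    size = points_2d.max(axis=0) - mins
    return (points_2d - mins) / size   # the same UVs serve both the front and the back face
```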
In an embodiment, the process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text may include:
selecting, in the two-dimensional plane, points on the boundary of the contour feature information, where the polyline formed by connecting the selected points in sequence coincides with the boundary of the contour feature information;
randomly selecting a number of points in the non-boundary portion of the contour feature information;
generating, from the selected points, the contour feature point set corresponding to the contour feature information.
In this embodiment, points on the boundary of the contour feature information are selected first, which ensures that the outline corresponding to the resulting contour feature point set matches the outline of the input text; points are then taken at random from the non-boundary portion of the contour feature information, which further ensures the completeness of the generated contour feature point set.
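As an illustration of this first sampling variant, the sketch below keeps every boundary point and then adds random interior points; the interior test via matplotlib's Path.contains_points and the point counts are assumptions made here for the example.

```python
# Illustrative sketch: keep every boundary point so the connected polyline still traces
# the glyph outline, then add randomly chosen interior points for completeness.
import numpy as np
from matplotlib.path import Path

def sample_boundary_and_interior(contour: np.ndarray, n_interior: int = 50, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    outline = Path(contour)                      # closed glyph boundary
    lo, hi = contour.min(axis=0), contour.max(axis=0)
    interior = []
    while len(interior) < n_interior:
        p = rng.uniform(lo, hi, size=(1, 2))     # candidate inside the bounding box
        if outline.contains_points(p)[0]:        # keep it only if it lies inside the glyph
            interior.append(p[0])
    return np.vstack([contour, np.array(interior)])  # boundary points first, then interior points
```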
In an embodiment, the process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text may include:
selecting multiple points evenly on the contour feature information, where the distance between every two adjacent points is equal;
generating, from the selected points, the contour feature point set corresponding to the contour feature information.
The distance between two adjacent points may be set according to the font size of the input text, for example to 1% of the width or height of the input text. Starting from some position on the character outline in the contour feature information, points are taken in every direction at the set interval to generate the contour feature point set corresponding to the contour feature information, so that the contour feature point set can represent the outline or figure of the input text.
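A minimal sketch of this equal-spacing variant, assuming the outline is available as a closed polyline and that the spacing has already been chosen (the 1 px spacing in the example stands for 1% of a 100 px glyph width, as suggested above).

```python
# Illustrative sketch: resample a closed outline so neighbouring sample points are a
# fixed arc-length apart.
import numpy as np

def resample_contour(points: np.ndarray, spacing: float) -> np.ndarray:
    closed = np.vstack([points, points[:1]])           # close the outline
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length at each vertex
    targets = np.arange(0.0, dist[-1], spacing)        # equally spaced arc lengths
    x = np.interp(targets, dist, closed[:, 0])
    y = np.interp(targets, dist, closed[:, 1])
    return np.column_stack([x, y])

square = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
samples = resample_contour(square, spacing=1.0)        # spacing = 1% of the 100 px width
```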
In an embodiment, the process of establishing the three-dimensional coordinate system on the coordinate system of the two-dimensional plane and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system to obtain the spatial figure of the input text may include:
taking the two coordinate axes of the two-dimensional plane as the first-dimensional and second-dimensional coordinate axes of the three-dimensional space;
setting, as the third-dimensional coordinate axis, the straight line that passes through the origin of the coordinate system of the two-dimensional plane and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes;
setting the third-dimensional coordinate of each point in the contour feature point set to a preset pixel value, to obtain the spatial figure of the input text.
The preset pixel value may be set according to the font size of the input text, for example to a value between 0 and 50 pixels. After the third-dimensional coordinate of each point is set to the preset pixel value, a spatial line segment corresponding to each point on the two-dimensional plane is obtained; one end of each spatial line segment lies on the two-dimensional plane and the other end lies on the plane whose third-dimensional coordinate equals the preset pixel value. The set of all these spatial line segments forms the spatial figure of the input text.
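A sketch of the extrusion described here, assuming the sampled contour points are held in an (N, 2) array and taking 30 px as the preset pixel value (any value in the suggested 0 to 50 px range would do).

```python
# Illustrative sketch: give every sampled 2-D point a third coordinate equal to the
# preset pixel value, producing one spatial line segment per contour point.
import numpy as np

def extrude(points_2d: np.ndarray, depth_px: float = 30.0):
    n = len(points_2d)
    front = np.hstack([points_2d, np.zeros((n, 1))])           # face on the two-dimensional plane (z = 0)
    back = np.hstack([points_2d, np.full((n, 1), depth_px)])   # face on the plane z = preset pixel value
    segments = np.stack([front, back], axis=1)                 # (N, 2, 3): one segment per point
    return front, back, segments
```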
As an embodiment, the process of rendering the spatial figure according to the rendering parameters may include:
reading multiple points on each of the two faces of the spatial figure, where one face of the spatial figure lies on the two-dimensional plane and the other face lies on the plane whose third-dimensional coordinate equals the preset pixel value;
connecting any two adjacent points on one face with the point on the other face corresponding to either of the two points, to obtain multiple triangles;
rendering the triangles according to the rendering parameters.
In this embodiment, rendering is performed on the triangles formed from the points on the two faces of the spatial figure, which ensures the accuracy of the rendering of the spatial figure.
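The triangulation can be sketched as follows; the winding order and the closed-contour assumption are choices made for the example, not requirements stated in the patent.

```python
# Illustrative sketch: for every pair of neighbouring points on one face, connect them
# with the corresponding points on the other face, giving two triangles per side-wall quad.
import numpy as np

def side_wall_triangles(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    tris = []
    n = len(front)
    for i in range(n):
        j = (i + 1) % n                                  # adjacent point on the same (closed) contour
        tris.append([front[i], front[j], back[i]])       # two adjacent points plus the corresponding point
        tris.append([back[i], back[j], front[j]])        # the complementary triangle of the quad
    return np.array(tris)                                # shape (2n, 3, 3)
```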
As an embodiment, the rendering parameters include a color parameter and a lighting parameter, and the process of rendering the triangles according to the rendering parameters may include:
brushing each triangle with color according to the color parameter;
adjusting the brightness of each color-brushed triangle according to the lighting parameter.
The color parameter and the lighting parameter may respectively include color information and lighting information (brightness information) preset according to the display requirements of the input text. Brushing each triangle with the color parameter and then adjusting the lighting parameter according to the display requirements can further improve the 3D effect of the input text.
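As a sketch of this per-triangle colouring and brightness adjustment: the Lambert-style shading below is an assumption, since the patent only states that brightness is adjusted according to the lighting parameter.

```python
# Illustrative sketch: brush every triangle with the colour parameter, then scale its
# brightness by how squarely the triangle faces the light direction.
import numpy as np

def shade_triangles(triangles: np.ndarray, base_color, light_dir) -> np.ndarray:
    base_color = np.asarray(base_color, dtype=float)         # e.g. RGB values in [0, 1]
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    shaded = []
    for tri in triangles:
        normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])  # triangle face normal
        normal = normal / (np.linalg.norm(normal) + 1e-9)
        brightness = np.clip(abs(normal @ light), 0.2, 1.0)  # keep a little ambient light
        shaded.append(base_color * brightness)
    return np.array(shaded)                                  # one adjusted RGB per triangle
```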
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a system for generating a 3D effect of input text according to an embodiment, which includes:
a reading module 10, configured to read contour feature information of the input text;
a mapping module 20, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
an establishing module 30, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
a rendering module 40, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
The system for generating a 3D effect of input text provided by the present invention corresponds one-to-one with the method for generating a 3D effect of input text provided by the present invention; the technical features set forth in the embodiments of the method, and their beneficial effects, all apply to the embodiments of the system, and this is hereby stated.
Referring to FIG. 5, FIG. 5 is a flowchart of a 3D display method for input text according to an embodiment, which includes the following steps:
S10: read contour feature information of the input text.
The input text is text that is being entered. It may be text entered in real time while office software (such as Word or Excel) is in use, or text entered into an edit box of a related application. The text is an image or symbol that carries language, and may include Chinese characters, letters, numbers and the like. The contour feature information of the input text may include the figure corresponding to the input text.
S20: map the contour feature information onto a two-dimensional plane, and sample points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text.
A schematic diagram of mapping the contour feature information onto the two-dimensional plane may be as shown in FIG. 2; the figure of the input text can then be read from that two-dimensional plane. The contour feature point set can represent the outline or figure of the input text; that is, tracing the points in the contour feature point set yields the outline or figure of the input text.
In an embodiment, step S20 may include:
selecting, in the two-dimensional plane, points on the boundary of the contour feature information, where the polyline formed by connecting the selected points in sequence coincides with the boundary of the contour feature information;
randomly selecting a number of points in the non-boundary portion of the contour feature information;
generating, from the selected points, the contour feature point set corresponding to the contour feature information.
In this embodiment, points on the boundary of the contour feature information are selected first, which ensures that the outline corresponding to the resulting contour feature point set matches the outline of the input text; points are then taken at random from the non-boundary portion of the contour feature information, which further ensures the completeness of the generated contour feature point set.
In another embodiment, step S20 may include:
selecting multiple points evenly on the contour feature information, where the distance between every two adjacent points is equal;
generating, from the selected points, the contour feature point set corresponding to the contour feature information.
The distance between two adjacent points may be set according to the font size of the input text, for example to 1% of the width or height of the input text. Starting from some position on the character outline in the contour feature information, points are taken in every direction at the set interval to generate the contour feature point set corresponding to the contour feature information, so that the contour feature point set can represent the outline or figure of the input text.
S30: establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis.
In this step, the two coordinate axes of the coordinate system of the two-dimensional plane may be taken as the first-dimensional and second-dimensional coordinate axes, and the third-dimensional coordinate axis may be set along the straight line that passes through the origin of that coordinate system and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes, giving the corresponding three-dimensional coordinate system. After the third-dimensional coordinate of each point in the contour feature point set is set, the spatial figure corresponding to the input text is obtained. The third-dimensional coordinates of the points in the contour feature point set may all be the same; typically they may be set to a value between 0 and 50 pixels.
In an embodiment, step S30 may include:
taking the two coordinate axes of the two-dimensional plane as the first-dimensional and second-dimensional coordinate axes of the three-dimensional space;
setting, as the third-dimensional coordinate axis, the straight line that passes through the origin of the coordinate system of the two-dimensional plane and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes;
setting the third-dimensional coordinate of each point in the contour feature point set to a preset pixel value, to obtain the spatial figure of the input text.
S40: render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
In step S40, the spatial figure may be decomposed into several planar geometric shapes (such as triangles or quadrilaterals), and each planar shape is then rendered accordingly. The rendering parameters may include information such as a color parameter, a lighting parameter or a brightness parameter; after the spatial figure is rendered with these parameters, the 3D rendering corresponding to the input text is obtained.
In an embodiment, the process of rendering the spatial figure according to the rendering parameters may include:
reading multiple points on each of the two faces of the spatial figure, where one face of the spatial figure lies on the two-dimensional plane and the other face lies on the plane whose third-dimensional coordinate equals the preset pixel value;
connecting any two adjacent points on one face with the point on the other face corresponding to either of the two points, to obtain multiple triangles;
rendering the triangles according to the rendering parameters.
S50: display the 3D rendering on a display interface.
The 3D rendering may be displayed in the display area corresponding to the input text, or at the input position, so that the user entering the text can obtain the 3D effect of the input text in time and adjust the rendering parameters (such as the color parameter or the lighting parameter) according to that 3D effect.
In an embodiment, after the step of reading the contour feature information of the input text, the method may further include:
obtaining the input position of the input text on the display interface;
and the process of displaying the 3D rendering on the display interface may include:
sending the 3D rendering to the input position for display.
The 3D display method for input text provided by the present invention can generate the corresponding 3D rendering while text is being entered and display the 3D rendering of the input text on a display interface, thereby achieving 3D display of the text being entered and improving the efficiency with which the input text is displayed in 3D.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a 3D display system for input text according to an embodiment, which includes:
a reading module 10, configured to read contour feature information of the input text;
a mapping module 20, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
an establishing module 30, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, where the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
a rendering module 40, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text;
a display module 50, configured to display the 3D rendering on the display interface.
The above 3D display system for input text corresponds one-to-one with the corresponding 3D display method for input text; the technical features set forth in the embodiments of the 3D display method, and their beneficial effects, all apply to the embodiments of the 3D display system, and this is hereby stated.
The technical features of the above embodiments may be combined in any way. For brevity, not every possible combination of these technical features has been described; however, as long as a combination of the technical features involves no contradiction, it shall be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

  1. A method for generating a 3D effect of input text, comprising the following steps:
    reading contour feature information of the input text;
    mapping the contour feature information onto a two-dimensional plane, and sampling points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text;
    establishing a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, wherein the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis; and
    rendering the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
  2. The method for generating a 3D effect of input text according to claim 1, wherein after the step of rendering the spatial figure according to the rendering parameters to obtain the 3D rendering of the input text, the method further comprises:
    pasting the contour feature information onto the 3D rendering so that the contour feature information fits the contour feature point set.
  3. The method for generating a 3D effect of input text according to claim 1, wherein the process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text comprises:
    selecting, in the two-dimensional plane, points on the boundary of the contour feature information, wherein the polyline formed by connecting the selected points in sequence coincides with the boundary of the contour feature information;
    randomly selecting a number of points in the non-boundary portion of the contour feature information; and
    generating, from the selected points, the contour feature point set corresponding to the contour feature information.
  4. The method for generating a 3D effect of input text according to claim 1, wherein the process of sampling the points corresponding to the contour feature information to obtain the contour feature point set corresponding to the input text comprises:
    selecting multiple points evenly on the contour feature information, wherein the distance between every two adjacent points is equal; and
    generating, from the selected points, the contour feature point set corresponding to the contour feature information.
  5. The method for generating a 3D effect of input text according to claim 1, wherein the process of establishing the three-dimensional coordinate system on the coordinate system of the two-dimensional plane and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system to obtain the spatial figure of the input text comprises:
    taking the two coordinate axes of the two-dimensional plane as the first-dimensional and second-dimensional coordinate axes of the three-dimensional space;
    setting, as the third-dimensional coordinate axis, the straight line that passes through the origin of the coordinate system of the two-dimensional plane and is perpendicular to both the first-dimensional and the second-dimensional coordinate axes; and
    setting the third-dimensional coordinate of each point in the contour feature point set to a preset pixel value, to obtain the spatial figure of the input text.
  6. The method for generating a 3D effect of input text according to claim 5, wherein the process of rendering the spatial figure according to the rendering parameters comprises:
    reading multiple points on each of the two faces of the spatial figure, wherein one face of the spatial figure lies on the two-dimensional plane and the other face lies on the plane whose third-dimensional coordinate equals the preset pixel value;
    connecting any two adjacent points on one face with the point on the other face corresponding to either of the two points, to obtain multiple triangles; and
    rendering the triangles according to the rendering parameters.
  7. The method for generating a 3D effect of input text according to claim 6, wherein the rendering parameters comprise a color parameter and a lighting parameter, and the process of rendering the triangles according to the rendering parameters comprises:
    brushing each triangle with color according to the color parameter; and
    adjusting the brightness of each color-brushed triangle according to the lighting parameter.
  8. A system for generating a 3D effect of input text, comprising:
    a reading module, configured to read contour feature information of the input text;
    a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
    an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, wherein the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis; and
    a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text.
  9. A 3D display method for input text, comprising the following steps:
    reading contour feature information of the input text;
    mapping the contour feature information onto a two-dimensional plane, and sampling points corresponding to the contour feature information to obtain a contour feature point set corresponding to the input text;
    establishing a three-dimensional coordinate system on the coordinate system of the two-dimensional plane, and setting the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, wherein the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
    rendering the spatial figure according to rendering parameters to obtain a 3D rendering of the input text; and
    displaying the 3D rendering on a display interface.
  10. A 3D display system for input text, comprising:
    a reading module, configured to read contour feature information of the input text;
    a mapping module, configured to map the contour feature information onto a two-dimensional plane and sample points corresponding to the contour feature information, to obtain a contour feature point set corresponding to the input text;
    an establishing module, configured to establish a three-dimensional coordinate system on the coordinate system of the two-dimensional plane and set the third-dimensional coordinate of each point in the contour feature point set in the three-dimensional coordinate system, to obtain a spatial figure of the input text, wherein the coordinate axis perpendicular to the two-dimensional plane is the third-dimensional coordinate axis;
    a rendering module, configured to render the spatial figure according to rendering parameters to obtain a 3D rendering of the input text; and
    a display module, configured to display the 3D rendering on the display interface.
PCT/CN2016/113227 2016-06-23 2016-12-29 3D effect generation for input text, and 3D display method and system for input text WO2017219643A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610478856.9A CN106204702A (zh) 2016-06-23 2016-06-23 3D effect generation for input text, and 3D display method and system for input text
CN201610478856.9 2016-06-23

Publications (1)

Publication Number Publication Date
WO2017219643A1 true WO2017219643A1 (zh) 2017-12-28

Family

ID=57462107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113227 WO2017219643A1 (zh) 2016-06-23 2016-12-29 3D effect generation for input text, and 3D display method and system for input text

Country Status (2)

Country Link
CN (1) CN106204702A (zh)
WO (1) WO2017219643A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782517A (zh) * 2019-10-10 2020-02-11 北京地平线机器人技术研发有限公司 Point cloud labeling method and apparatus, storage medium, and electronic device
CN111651959A (zh) * 2020-04-17 2020-09-11 福建天泉教育科技有限公司 Method and terminal for implementing a 3D font

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204702A (zh) * 2016-06-23 2016-12-07 广州视睿电子科技有限公司 3D effect generation for input text, and 3D display method and system for input text
CN112528596A (zh) * 2020-12-01 2021-03-19 北京达佳互联信息技术有限公司 Method and apparatus for rendering text special effects, electronic device, and storage medium
CN113409429A (zh) * 2021-06-24 2021-09-17 广州光锥元信息科技有限公司 Method and apparatus for generating 3D text

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102122502B (zh) * 2011-03-15 2013-04-10 深圳芯邦科技股份有限公司 Three-dimensional font display method and related apparatus
CN104778741A (zh) * 2014-01-14 2015-07-15 北大方正集团有限公司 Method and apparatus for converting two-dimensional graphics into three-dimensional graphics
CN104267880A (zh) * 2014-10-24 2015-01-07 福建星网视易信息系统有限公司 Method and device for displaying a handwriting trace in a 3D interface

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140085292A1 (en) * 2012-09-21 2014-03-27 Intel Corporation Techniques to provide depth-based typeface in digital documents
CN104809940A (zh) * 2015-05-14 2015-07-29 广东小天才科技有限公司 Geometric solid figure projection apparatus and projection method
CN105096387A (zh) * 2015-07-16 2015-11-25 青岛科技大学 Intelligent three-dimensional processing method for two-dimensional sketches
CN105513054A (zh) * 2015-11-26 2016-04-20 北京市计算中心 Rubbing method based on three-dimensional scanning
CN106204702A (zh) * 2016-06-23 2016-12-07 广州视睿电子科技有限公司 3D effect generation for input text, and 3D display method and system for input text

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782517A (zh) * 2019-10-10 2020-02-11 北京地平线机器人技术研发有限公司 Point cloud labeling method and apparatus, storage medium, and electronic device
CN110782517B (zh) * 2019-10-10 2023-05-05 北京地平线机器人技术研发有限公司 Point cloud labeling method and apparatus, storage medium, and electronic device
CN111651959A (zh) * 2020-04-17 2020-09-11 福建天泉教育科技有限公司 Method and terminal for implementing a 3D font
CN111651959B (zh) * 2020-04-17 2023-02-28 福建天泉教育科技有限公司 Method and terminal for implementing a 3D font

Also Published As

Publication number Publication date
CN106204702A (zh) 2016-12-07

Similar Documents

Publication Publication Date Title
US10861232B2 (en) Generating a customized three-dimensional mesh from a scanned object
JP7386153B2 (ja) Rendering method and terminal for simulating lighting
WO2017219643A1 (zh) 3D effect generation for input text, and 3D display method and system for input text
CN108230435B (zh) Graphics processing using cube map textures
US10540789B2 Line stylization through graphics processor unit (GPU) textures
CN106530388A (zh) 3D printing apparatus based on two-dimensional images and three-dimensional modeling method thereof
CN110555903B (zh) Image processing method and apparatus
JP2019114176A (ja) Information processing apparatus, information processing program, and information processing method
CN109448088B (zh) Method and apparatus for rendering stereoscopic graphic wireframes, computer device, and storage medium
CN112242004A (zh) Virtual carving method and system for AR scenes based on lighting rendering
CN103729190A (zh) Method for unified parsing and display of multiple media on a mobile terminal
CN111107264A (zh) Image processing method and apparatus, storage medium, and terminal
CN111651033B (zh) Face-driven display method and apparatus, electronic device, and storage medium
CN103839217A (zh) Method for implementing a watermark picture
TWI536317B (zh) Method for generating stereoscopic graphics and text
Xu et al. Using isophotes and shadows to interactively model normal and height fields
US20180275852A1 (en) 3d printing application
TWI476618B (zh) Generating landmark buildings or sculptures from graphic characters (Chinese characters)
CN109003225A (zh) Multi-grid picture processing method and apparatus, and electronic device
CN107862740B (zh) Method for raising the edges of a three-dimensional text model
US20100128032A1 (en) Rendering apparatus for cylindrical object and rendering method therefor
CN113409429A (zh) Method and apparatus for generating 3D text
CN110992249A (zh) Method for converting vector planar graphics into a 3D engineering model
JP2006113800A (ja) Image processing method, image processing apparatus, and image processing program
CN116912439A (zh) Method and system for accurately annotating multivariate information on a three-dimensional topographic geological map

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906180

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.05.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16906180

Country of ref document: EP

Kind code of ref document: A1