WO2021077982A1 - Marker point identification method, device, equipment and storage medium - Google Patents

Marker point identification method, device, equipment and storage medium

Info

Publication number
WO2021077982A1
WO2021077982A1 PCT/CN2020/117606 CN2020117606W
Authority
WO
WIPO (PCT)
Prior art keywords
point
pixel
pixel point
moving image
thread group
Prior art date
Application number
PCT/CN2020/117606
Other languages
English (en)
French (fr)
Inventor
吴昆临
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司
Publication of WO2021077982A1 publication Critical patent/WO2021077982A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the present invention relates to the field of motion capture technology, and in particular to a marker point identification method, device, equipment and storage medium.
  • at present, motion capture technology is widely used in virtual reality (VR) games and some somatosensory games.
  • so-called motion capture refers to placing trackers (such as reflective marker points) on the key parts of a moving object, capturing the positions of the trackers, and processing the captured data on a computer to obtain three-dimensional space coordinates.
  • once the data has been recognized by the computer, it can be used in fields such as animation production, gait analysis, biomechanics and ergonomics.
  • in existing optical motion capture systems, the motion trajectories of reflective marker points are generally captured to realize motion posture recognition and trajectory tracking of a target object.
  • in this process, the optical motion capture system relies on the two-dimensional coordinates of the pixel points onto which a marker point is projected in two or more surrounding cameras, and calculates the three-dimensional position coordinates of the marker point from these two-dimensional coordinates.
  • specifically, a projection trajectory is computed from each camera's pose and the two-dimensional coordinates of the pixel point; if two trajectories cross, the crossing point is the position of the marker point.
  • every combination of two pixel points must be tested to determine whether their projection trajectories cross, so the more pixel points there are, the larger the amount of computation required.
  • the problem with this is that when the number of marker points or the number of cameras is large, the number of pixel points increases and computing the three-dimensional position coordinates of each marker point becomes very time-consuming.
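  • as a rough illustration of this scaling (the figures below are illustrative only and are not taken from the patent), the number of pixel-point pairs whose projection trajectories have to be tested grows quadratically with the number of pixel points n:

$$\binom{n}{2} = \frac{n(n-1)}{2},$$

so 200 projected pixel points already imply 19,900 pair tests per frame, and 2,000 pixel points imply 1,999,000.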
  • the main purpose of the present invention is to provide a marker point identification method, device, equipment and storage medium, aiming to improve the recognition efficiency of motion capture marker points and reduce computation time.
  • to achieve the above purpose, the present invention provides a marker point identification method, which includes the following steps: acquiring a moving image of an object collected by cameras, and numbering the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image; calling a graphics processor, and creating n-1 thread groups in the graphics processor; in the created i-th thread group, determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i; and if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquiring the position of the crossing point and taking the position of the crossing point as the position of a motion capture marker point.
  • optionally, the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image includes: creating i parallel threads in the i-th thread group, and determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.
  • optionally, the step of determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image includes: in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory; and determining whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.
  • optionally, before the step of determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further includes: reading the coordinate data of pixel point i and the corresponding camera pose from the global memory, and writing the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.
  • optionally, before the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further includes: defining, in the shared memory of each thread group, an atomic variable with an initial value of 0; and after the step of acquiring the position of the crossing point and taking the position of the crossing point as the position of the motion capture marker point, the method further includes: incrementing the value of the atomic variable defined in the shared memory of the i-th thread group by one.
  • in addition, the present invention also provides a marker point identification device, which includes:
  • an acquisition module, used to acquire a moving image of an object collected by cameras, and number the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;
  • a creation module, used to call a graphics processor and create n-1 thread groups in the graphics processor;
  • a judgment module, used to determine, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;
  • an identification module, used to, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquire the position of the crossing point and take the position of the crossing point as the position of a motion capture marker point.
  • in addition, the present invention also provides a marker point identification equipment, which includes a memory, a processor, and a marker point identification program stored on the memory and executable on the processor; when the marker point identification program is executed by the processor, the steps of the marker point identification method described above are implemented.
  • in addition, the present invention also provides a storage medium storing a marker point identification program; when the marker point identification program is executed by a processor, the steps of the marker point identification method described above are implemented.
  • the present invention acquires a moving image of an object collected by cameras and numbers the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image; calls a graphics processor and creates n-1 thread groups in the graphics processor; in the created i-th thread group, determines, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i; and, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquires the position of the crossing point and takes it as the position of a motion capture marker point.
  • by dividing the graphics processor into multiple thread groups, each of which computes whether there is a trajectory crossing between one pixel point in the moving image and all pixel points with smaller numbers, and because the threads in each thread group can perform their computations in parallel, the present invention can compute the positions of the motion capture marker points quickly.
  • when the number of pixel points is large, compared with computing only on a CPU, the present invention improves the recognition efficiency of motion capture marker points and reduces computation time.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in a solution of an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for identifying a mark point according to the present invention
  • FIG. 3 is a schematic diagram of modules of an embodiment of the marking point identification device of the present invention.
  • Fig. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present invention.
  • the marking point identification device may be a computer or a server.
  • the device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • those skilled in the art can understand that the device structure shown in FIG. 1 does not constitute a limitation on the device, which may include more or fewer components than shown, or combine certain components, or have a different component arrangement.
  • the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a marker point identification program.
  • the network interface 1004 is mainly used to connect to a back-end server and communicate with the back-end server;
  • the user interface 1003 is mainly used to connect to a client (user side) to communicate with the client;
  • the processor 1001 can be used to call a marking point recognition program stored in the memory 1005, and perform operations in each embodiment of the marking point recognition method described below.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for identifying a mark point according to the present invention, and the method includes:
  • Step S10: obtain the moving image of the object collected by the camera, and number the pixel points in the moving image of the object from 0 to n-1, where n is the number of pixel points in the moving image of the object;
  • in existing optical motion capture systems, the motion trajectories of reflective marker points (markers whose surface is covered with a special reflective material, commonly spherical or hemispherical, often used for capturing moving objects) are generally captured to realize motion posture recognition and trajectory tracking of a target object; to quickly identify the positions of the marker points, this embodiment proposes a marker point identification method based on a graphics processing unit (GPU).
  • in this embodiment, the description takes a server as an example of the device that executes the marker point identification method.
  • the camera deployed in the motion capture space can collect the moving image of the object and send it to the server.
  • the server receives the moving image of the object collected by the camera, and numbers the pixels in the moving image of the object from 0 to n-1, where n is the number of pixels in the moving image of the object, that is, each pixel is numbered as 0, 1, ..., n-1.
  • Step S20: call the graphics processor, and create n-1 thread groups in the graphics processor;
  • after the pixel points are numbered, the server calls the graphics processor (GPU) and creates n-1 thread groups (a thread-management class composed of threads) in the graphics processor, where each thread group is used to compute whether there is a trajectory crossing between one pixel point in the moving image (except the pixel point numbered 0) and every pixel point whose number is smaller than that of the pixel point.
  • Step S30: in the created i-th thread group, determine, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;
  • specifically, in the first thread group, the server can compute, through threads executed in parallel, whether there is a trajectory crossing between the pixel point numbered 1 and the pixel point numbered 0 in the moving image; in the second thread group, it can compute, through threads executed in parallel, whether there is a trajectory crossing between the pixel point numbered 2 and the pixel points numbered 0 and 1; and so on, so that the pairwise crossings of all pixel points in the moving image can be computed.
  • it should be noted that, compared with a central processing unit (CPU), the advantage of the GPU is that it can spawn a large number of threads for parallel computation, thereby reducing the total computation time.
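  • the patent publishes no source code; the CUDA sketch below is one possible realization of the thread-group layout of step S30, written for this rewrite under stated assumptions: thread block b plays the role of thread group i = b + 1 and its threads jointly test pixel point i against every pixel point j < i, the coordinate data and camera pose of each pixel point are assumed to be pre-converted into a back-projected ray, and the crossing threshold eps and the helper closestApproach are illustrative choices, not details disclosed in the patent.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Illustrative layout (assumption): each pixel point's 2D coordinates and camera
// pose have been pre-converted into a world-space ray (camera centre + direction).
struct Ray { float3 o; float3 d; };

__device__ float3 f3sub(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float  f3dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance of closest approach between two rays; *mid receives the midpoint of
// the common perpendicular segment, used here as the crossing position.
__device__ float closestApproach(Ray r1, Ray r2, float3* mid)
{
    float3 w = f3sub(r1.o, r2.o);
    float a = f3dot(r1.d, r1.d), b = f3dot(r1.d, r2.d), c = f3dot(r2.d, r2.d);
    float d = f3dot(r1.d, w),    e = f3dot(r2.d, w);
    float denom = a * c - b * b;
    if (fabsf(denom) < 1e-12f) return 1e30f;      // near-parallel rays: treat as no crossing
    float s = (b * e - c * d) / denom;
    float t = (a * e - b * d) / denom;
    float3 p = make_float3(r1.o.x + s * r1.d.x, r1.o.y + s * r1.d.y, r1.o.z + s * r1.d.z);
    float3 q = make_float3(r2.o.x + t * r2.d.x, r2.o.y + t * r2.d.y, r2.o.z + t * r2.d.z);
    *mid = make_float3(0.5f * (p.x + q.x), 0.5f * (p.y + q.y), 0.5f * (p.z + q.z));
    float3 pq = f3sub(p, q);
    return sqrtf(f3dot(pq, pq));
}

// Block b acts as "thread group i" with i = b + 1; its threads cooperatively test
// pixel point i against every pixel point j < i (strided loop, so the scheme also
// works when i exceeds the block size). outPoints must hold up to n(n-1)/2 entries.
__global__ void findCrossings(const Ray* rays, int n, float eps,
                              float3* outPoints, int* outCount)
{
    int i = blockIdx.x + 1;                        // thread group i handles pixel point i
    if (i >= n) return;
    for (int j = threadIdx.x; j < i; j += blockDim.x) {
        float3 mid;
        if (closestApproach(rays[i], rays[j], &mid) < eps) {
            int slot = atomicAdd(outCount, 1);     // simple global counter; refined below
            outPoints[slot] = mid;
        }
    }
}

// Launch with n-1 blocks, one per thread group, e.g.:
//   findCrossings<<<n - 1, 128>>>(d_rays, n, 1e-3f, d_points, d_count);
```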
  • if there is a trajectory crossing between pixel point i and pixel point j in the moving image, step S40 is executed: obtain the position of the crossing point, and take the position of the crossing point as the position of the motion capture marker point.
  • when the server determines that there is a trajectory crossing between pixel point i and pixel point j in the moving image, it obtains the position of the crossing point and takes that position as the position of a motion capture marker point, thereby identifying the marker points in the moving image of the object.
  • the position of the crossing point can be expressed as three-dimensional coordinates; for the specific manner of obtaining the three-dimensional position coordinates of the crossing point, reference may be made to the related prior art, which will not be repeated here.
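  • the patent defers the computation of the crossing position to the prior art; one standard formulation (an assumption added for illustration here, not a detail disclosed in the patent) back-projects each pixel point through its camera to obtain a ray and, when two rays pass sufficiently close, takes the midpoint of their common perpendicular as the three-dimensional crossing position. For a pixel point with image coordinates $(u, v)$ observed by a camera with intrinsic matrix $K$, rotation $R$ and centre $C$, the projection trajectory is the ray

$$X(t) = C + t\,d, \qquad d = \frac{R^{\top} K^{-1} (u, v, 1)^{\top}}{\left\lVert R^{\top} K^{-1} (u, v, 1)^{\top} \right\rVert}, \qquad t > 0,$$

and for two rays $X_1(s) = C_1 + s\,d_1$ and $X_2(t) = C_2 + t\,d_2$, with $a = d_1\!\cdot\!d_1$, $b = d_1\!\cdot\!d_2$, $c = d_2\!\cdot\!d_2$, $d = d_1\!\cdot\!(C_1 - C_2)$, $e = d_2\!\cdot\!(C_1 - C_2)$, the closest-approach parameters are

$$s^{*} = \frac{b e - c d}{a c - b^{2}}, \qquad t^{*} = \frac{a e - b d}{a c - b^{2}};$$

the trajectories are treated as crossing when $\lVert X_1(s^{*}) - X_2(t^{*}) \rVert$ falls below a small threshold $\varepsilon$, and the marker position is then estimated as $\tfrac{1}{2}\big(X_1(s^{*}) + X_2(t^{*})\big)$.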
  • in this embodiment, the graphics processor is divided into multiple thread groups, and each thread group computes whether there is a trajectory crossing between one pixel point in the moving image and all pixel points with smaller numbers.
  • because the threads in each thread group can perform their computations in parallel, the positions of the motion capture marker points can be computed quickly.
  • when the number of pixel points is large, compared with computing only on a CPU, this embodiment improves the recognition efficiency of motion capture marker points and reduces computation time.
  • further, based on the first embodiment of the marker point identification method, a second embodiment is proposed; in this embodiment, the above step S30 may include: creating i parallel threads in the i-th thread group, and determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.
  • specifically, the server can create 1 thread in the first thread group and determine, by executing that thread, whether there is a trajectory crossing between pixel point 1 and pixel point 0 in the moving image; it can create 2 threads in the second thread group and determine, through these 2 threads respectively, whether there is a trajectory crossing between pixel point 2 and pixel point 0 and between pixel point 2 and pixel point 1; and so on, so that the pairwise crossings of all pixel points in the moving image can be computed.
  • further, the step of determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image may further include: in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory; and determining whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.
  • a thread group in the GPU has a small amount of shared memory, which can only be accessed by the threads in that thread group, while the global memory can be accessed by the threads in all thread groups.
  • the difference between shared memory and global memory is that shared memory is faster to read and write and allows the threads in the same thread group to exchange data, but its capacity is smaller; all data can only be stored in the global memory, and the shared memory can only be used temporarily during computation.
  • based on these characteristics, because every thread in a thread group needs the coordinate data of pixel point i and the corresponding camera pose when determining whether there is a trajectory crossing between pixel point i and pixel point j, this data can first be written into the shared memory of that thread group so that every thread can read it directly from the shared memory, thereby improving data-read efficiency.
  • in a specific implementation, in the i-th thread group, the server reads, in each parallel thread, the coordinate data of pixel point i and the corresponding camera pose from the shared memory of thread group i and reads the coordinate data of pixel point j and the corresponding camera pose from the preset global memory, and then determines whether there is a trajectory crossing between pixel point i and pixel point j based on this data; because the coordinate data of pixel point i and the corresponding camera pose are read directly from the shared memory, computation efficiency is further improved and computation time is reduced.
  • further, before the step of determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method may also include: reading the coordinate data of pixel point i and the corresponding camera pose from the global memory, and writing the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.
  • in this embodiment, before computing whether there is a trajectory crossing between pixel point i and pixel point j, the server may first read the coordinate data of pixel point i and the corresponding camera pose from the global memory, and then write the read data into the shared memory of the i-th thread group; this provides the precondition for subsequently reading the coordinate data of pixel point i and the corresponding camera pose directly from the shared memory.
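  • in CUDA terms (continuing the earlier sketch and its assumptions), this arrangement corresponds to staging pixel point i's data once per block in shared memory and letting every thread of the block reuse it, while pixel point j is still fetched from global memory; Ray and closestApproach are the illustrative definitions given above.

```cuda
// Ray and closestApproach are defined in the earlier sketch.
__device__ float closestApproach(Ray r1, Ray r2, float3* mid);

__global__ void findCrossingsShared(const Ray* rays, int n, float eps,
                                    float3* outPoints, int* outCount)
{
    __shared__ Ray ri;                      // per-block (per thread group) copy of pixel point i's data
    int i = blockIdx.x + 1;
    if (i >= n) return;                     // uniform per block, so no divergence around the barrier

    if (threadIdx.x == 0) ri = rays[i];     // one thread copies pixel point i from global to shared memory
    __syncthreads();                        // all threads of the group wait until the copy is visible

    for (int j = threadIdx.x; j < i; j += blockDim.x) {
        float3 mid;
        if (closestApproach(ri, rays[j], &mid) < eps) {   // pixel point j still comes from global memory
            int slot = atomicAdd(outCount, 1);
            outPoints[slot] = mid;
        }
    }
}
```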
  • further, based on the first and second embodiments, a third embodiment of the marker point identification method is proposed; in this embodiment, before step S30, the method may further include: defining, in the shared memory of each thread group, an atomic variable with an initial value of 0; correspondingly, after the step of acquiring the position of the crossing point and taking it as the position of the motion capture marker point, the method further includes: incrementing the value of the atomic variable defined in the shared memory of the i-th thread group by one.
  • an atomic variable is a variable that only one thread is allowed to operate on at a time; atomic variables guarantee data correctness when multiple threads write simultaneously.
  • in this embodiment, the atomic variable records the number of motion capture marker points computed in the i-th thread group; because reading and writing an atomic variable is time-consuming, the atomic variable can be defined in shared memory to improve read/write efficiency.
  • specifically, an atomic variable with an initial value of 0 can be defined in the shared memory of each thread group; each time the server determines that there is a trajectory crossing between pixel point i and pixel point j in the moving image, it obtains the position of the crossing point, takes it as the position of a motion capture marker point, and increments the value of the atomic variable defined in the shared memory of the i-th thread group by one, so that the positions and the number of the motion capture marker points are recorded.
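  • a per-group counter of this kind maps naturally onto a block-local variable in CUDA shared memory that is incremented with an atomic addition; in the sketch below (same assumptions as above) the count is additionally written back to global memory once per block at the end, which is an assumption of this rewrite rather than a step recited by the patent.

```cuda
// Ray and closestApproach are defined in the earlier sketch.
__device__ float closestApproach(Ray r1, Ray r2, float3* mid);

__global__ void findCrossingsCounted(const Ray* rays, int n, float eps,
                                     float3* outPoints, int* outCount,
                                     int* groupCounts)
{
    __shared__ Ray ri;
    __shared__ int localCount;              // the "atomic variable with an initial value of 0"
    int i = blockIdx.x + 1;
    if (i >= n) return;

    if (threadIdx.x == 0) { ri = rays[i]; localCount = 0; }
    __syncthreads();

    for (int j = threadIdx.x; j < i; j += blockDim.x) {
        float3 mid;
        if (closestApproach(ri, rays[j], &mid) < eps) {
            atomicAdd(&localCount, 1);      // fast shared-memory atomic: markers found by thread group i
            int slot = atomicAdd(outCount, 1);
            outPoints[slot] = mid;          // crossing position, i.e. the marker point position
        }
    }

    __syncthreads();
    if (threadIdx.x == 0) groupCounts[blockIdx.x] = localCount;
}
```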
  • FIG. 3 is a schematic diagram of a module of an embodiment of a marking point identification device of the present invention.
  • the marking point identification device includes:
  • the acquisition module is used to acquire a moving image of an object collected by a camera, and number the pixels in the moving image of the object from 0 to n-1, where n is the number of pixels in the moving image of the object;
  • the creation module is used to call the graphics processor and create n-1 thread groups in the graphics processor;
  • the judging module is used for determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image of the object, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;
  • the recognition module is configured to, if there is a track intersection between the pixel point i and the pixel point j in the moving image of the object, obtain the position of the intersection point, and use the position of the intersection point as the position of the motion capture mark point.
  • further, the judgment module is also used for: creating i parallel threads in the i-th thread group, and determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.
  • further, the judgment module is also used for: in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory; and determining whether there is a trajectory crossing between pixel point i and pixel point j based on the data read from the shared memory and the global memory.
  • the device further includes:
  • a reading module, used to read the coordinate data of pixel point i and the corresponding camera pose from the global memory;
  • a writing module, used to write the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.
  • the device further includes:
  • the definition module is used to define an atomic variable with an initial value of 0 in the shared memory of each thread group;
  • the recording module is used for adding one to the value of the atomic variable defined in the shared memory of the i-th thread group if there is a trajectory intersection between the pixel point i and the pixel point j in the moving image of the object.
  • the invention also provides a storage medium.
  • the storage medium of the present invention stores a marking point recognition program, and when the marking point recognition program is executed by a processor, the steps of the marking point recognition method as described above are realized.
  • the method implemented when the marking point recognition program running on the processor is executed can refer to the various embodiments of the marking point recognition method of the present invention, which will not be repeated here.
  • based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A marker point identification method, identification device, equipment and a storage medium. The method includes: acquiring a moving image of an object collected by cameras, and numbering the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image (S10); calling a graphics processor, and creating n-1 thread groups in the graphics processor (S20); in the created i-th thread group, determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i (S30); and if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquiring the position of the crossing point and taking the position of the crossing point as the position of a motion capture marker point (S40). The method improves the recognition efficiency of motion capture marker points and reduces computation time.

Description

Marker point identification method, device, equipment and storage medium

Technical Field

The present invention relates to the field of motion capture technology, and in particular to a marker point identification method, device, equipment and storage medium.

Background Art

At present, motion capture technology is widely used in virtual reality (VR) games and some somatosensory games. So-called motion capture refers to placing trackers (such as reflective marker points) on the key parts of a moving object, capturing the positions of the trackers, and processing the captured data on a computer to obtain three-dimensional space coordinates. Once the data has been recognized by the computer, it can be used in fields such as animation production, gait analysis, biomechanics and ergonomics.

In existing optical motion capture systems, the motion trajectories of reflective marker points are generally captured to realize motion posture recognition and trajectory tracking of a target object. In this process, the optical motion capture system relies on the two-dimensional coordinates of the pixel points onto which a marker point is projected in two or more surrounding cameras, and calculates the three-dimensional position coordinates of the marker point from these two-dimensional coordinates; specifically, a projection trajectory is computed from each camera's pose and the two-dimensional coordinates of the pixel point, and if two trajectories cross, the crossing point is the position of the marker point. Every combination of two pixel points must be tested to determine whether their projection trajectories cross, so the more pixel points there are, the larger the amount of computation required. The resulting problem is that when the number of marker points or the number of cameras is large, the number of pixel points increases and computing the three-dimensional position coordinates of each marker point becomes very time-consuming.
Summary of the Invention

The main purpose of the present invention is to provide a marker point identification method, device, equipment and storage medium, aiming to improve the recognition efficiency of motion capture marker points and reduce computation time.

To achieve the above purpose, the present invention provides a marker point identification method, which includes the following steps:

acquiring a moving image of an object collected by cameras, and numbering the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;

calling a graphics processor, and creating n-1 thread groups in the graphics processor;

in the created i-th thread group, determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;

if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquiring the position of the crossing point, and taking the position of the crossing point as the position of a motion capture marker point.
Optionally, the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image includes:

creating i parallel threads in the i-th thread group;

determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.

Optionally, the step of determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image includes:

in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory;

determining whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory, and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.

Optionally, before the step of determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further includes:

reading the coordinate data of pixel point i and the corresponding camera pose from the global memory;

writing the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.

Optionally, before the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further includes:

defining, in the shared memory of each thread group, an atomic variable with an initial value of 0;

after the step of acquiring the position of the crossing point and taking the position of the crossing point as the position of the motion capture marker point, the method further includes:

incrementing the value of the atomic variable defined in the shared memory of the i-th thread group by one.
In addition, to achieve the above purpose, the present invention further provides a marker point identification device, which includes:

an acquisition module, configured to acquire a moving image of an object collected by cameras, and number the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;

a creation module, configured to call a graphics processor and create n-1 thread groups in the graphics processor;

a judgment module, configured to determine, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;

an identification module, configured to, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquire the position of the crossing point and take the position of the crossing point as the position of a motion capture marker point.

In addition, to achieve the above purpose, the present invention further provides a marker point identification equipment, which includes: a memory, a processor, and a marker point identification program stored on the memory and executable on the processor, wherein the marker point identification program, when executed by the processor, implements the steps of the marker point identification method described above.

In addition, to achieve the above purpose, the present invention further provides a storage medium storing a marker point identification program, wherein the marker point identification program, when executed by a processor, implements the steps of the marker point identification method described above.

The present invention acquires a moving image of an object collected by cameras and numbers the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image; calls a graphics processor and creates n-1 thread groups in the graphics processor; in the created i-th thread group, determines, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i; and, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquires the position of the crossing point and takes it as the position of a motion capture marker point. By dividing the graphics processor into multiple thread groups, each of which computes whether there is a trajectory crossing between one pixel point in the moving image and all pixel points with smaller numbers, and because the threads in each thread group can perform their computations in parallel, the present invention can compute the positions of the motion capture marker points quickly; when the number of pixel points is large, compared with computing only on a CPU, the present invention improves the recognition efficiency of motion capture marker points and reduces computation time.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of the present invention;

FIG. 2 is a schematic flowchart of a first embodiment of the marker point identification method of the present invention;

FIG. 3 is a schematic module diagram of an embodiment of the marker point identification device of the present invention.

The realization of the objectives, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Detailed Description of the Embodiments

It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit the present invention.
As shown in FIG. 1, FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of the present invention.

The marker point identification equipment in the embodiments of the present invention may be a computer or a server.

As shown in FIG. 1, the equipment may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.

Those skilled in the art can understand that the device structure shown in FIG. 1 does not constitute a limitation on the equipment, which may include more or fewer components than shown, or combine certain components, or have a different component arrangement.

As shown in FIG. 1, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a marker point identification program.

In the terminal shown in FIG. 1, the network interface 1004 is mainly used to connect to a back-end server and perform data communication with the back-end server; the user interface 1003 is mainly used to connect to a client (user side) and perform data communication with the client; and the processor 1001 can be used to call the marker point identification program stored in the memory 1005 and perform the operations in the embodiments of the marker point identification method described below.

Based on the above hardware structure, the embodiments of the marker point identification method of the present invention are proposed.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the marker point identification method of the present invention. The method includes:

Step S10: obtain a moving image of an object collected by cameras, and number the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;

In existing optical motion capture systems, the motion trajectories of reflective marker points (markers whose surface is covered with a special reflective material, commonly spherical or hemispherical, often used for capturing moving objects) are generally captured to realize motion posture recognition and trajectory tracking of a target object. To quickly identify the positions of the marker points, this embodiment proposes a marker point identification method based on a graphics processing unit (GPU).

In this embodiment, the description takes a server as an example of the device that executes the marker point identification method. First, the cameras deployed in the motion capture space collect a moving image of an object and send it to the server; the server receives the moving image collected by the cameras and numbers the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image, that is, the pixel points are numbered 0, 1, ..., n-1 respectively.

Step S20: call a graphics processor, and create n-1 thread groups in the graphics processor;

After the pixel points are numbered, the server calls the graphics processor (GPU) and creates n-1 thread groups (a thread-management class composed of threads) in the graphics processor, where each thread group is used to compute whether there is a trajectory crossing between one pixel point in the moving image (except the pixel point numbered 0) and every pixel point whose number is smaller than that of the pixel point.

Step S30: in the created i-th thread group, determine, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;

Specifically, in the first thread group, the server can compute, through threads executed in parallel, whether there is a trajectory crossing between the pixel point numbered 1 and the pixel point numbered 0 in the moving image; in the second thread group, it can compute, through threads executed in parallel, whether there is a trajectory crossing between the pixel point numbered 2 and the pixel points numbered 0 and 1; and so on, so that the pairwise crossings of all pixel points in the moving image can be computed.

It should be noted that, compared with a central processing unit (CPU), the advantage of the GPU is that it can spawn a large number of threads for parallel computation, thereby reducing the total computation time.

If there is a trajectory crossing between pixel point i and pixel point j in the moving image, step S40 is executed: obtain the position of the crossing point, and take the position of the crossing point as the position of a motion capture marker point.

When the server determines that there is a trajectory crossing between pixel point i and pixel point j in the moving image, it obtains the position of the crossing point and takes that position as the position of a motion capture marker point, thereby identifying the marker points in the moving image of the object. The position of the crossing point can be expressed as three-dimensional coordinates; for the specific manner of obtaining the three-dimensional position coordinates of the crossing point, reference may be made to the related prior art, which will not be repeated here.

In this embodiment, the graphics processor is divided into multiple thread groups, and each thread group computes whether there is a trajectory crossing between one pixel point in the moving image and all pixel points with smaller numbers. Because the threads in each thread group can perform their computations in parallel, the positions of the motion capture marker points can be computed quickly. When the number of pixel points is large, compared with computing only on a CPU, this embodiment improves the recognition efficiency of motion capture marker points and reduces computation time.
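The patent does not specify the host-side control flow; the sketch below shows, under the assumptions of this rewrite, how a server-side program might stage the per-pixel-point data in GPU global memory, launch one thread block per thread group, and copy the resulting marker positions back. Here n is assumed to be the number of candidate marker projections prepared from the moving image (each already converted into a back-projected ray), and the kernel findCrossings, the Ray type, the block size of 128 and the threshold eps are the illustrative choices introduced in the earlier sketches, not values taken from the patent.

```cuda
#include <cuda_runtime.h>
#include <vector>

// Illustrative types/kernel from the earlier sketches (assumptions, not from the patent).
struct Ray { float3 o; float3 d; };
__global__ void findCrossings(const Ray* rays, int n, float eps,
                              float3* outPoints, int* outCount);

// Host-side driver: one thread block per thread group (block b == thread group b + 1).
std::vector<float3> identifyMarkers(const std::vector<Ray>& rays, float eps = 1e-3f)
{
    int n = static_cast<int>(rays.size());
    std::vector<float3> markers;
    if (n < 2) return markers;

    size_t maxPairs = static_cast<size_t>(n) * (n - 1) / 2;   // worst-case number of crossings
    Ray* dRays = nullptr; float3* dPoints = nullptr; int* dCount = nullptr;
    cudaMalloc(&dRays, n * sizeof(Ray));
    cudaMalloc(&dPoints, maxPairs * sizeof(float3));
    cudaMalloc(&dCount, sizeof(int));

    cudaMemcpy(dRays, rays.data(), n * sizeof(Ray), cudaMemcpyHostToDevice);
    cudaMemset(dCount, 0, sizeof(int));

    findCrossings<<<n - 1, 128>>>(dRays, n, eps, dPoints, dCount);   // n-1 thread groups
    cudaDeviceSynchronize();

    int count = 0;
    cudaMemcpy(&count, dCount, sizeof(int), cudaMemcpyDeviceToHost);
    markers.resize(count);
    cudaMemcpy(markers.data(), dPoints, count * sizeof(float3), cudaMemcpyDeviceToHost);

    cudaFree(dRays); cudaFree(dPoints); cudaFree(dCount);
    return markers;                          // positions of the motion capture marker points
}
```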
Further, based on the first embodiment of the marker point identification method of the present invention, a second embodiment of the marker point identification method of the present invention is proposed.

In this embodiment, the above step S30 may include: creating i parallel threads in the i-th thread group; and determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.

Specifically, the server can create 1 thread in the first thread group and determine, by executing that thread, whether there is a trajectory crossing between pixel point 1 and pixel point 0 in the moving image; it can create 2 threads in the second thread group and determine, through these 2 threads respectively, whether there is a trajectory crossing between pixel point 2 and pixel point 0 and between pixel point 2 and pixel point 1; and so on, so that the pairwise crossings of all pixel points in the moving image can be computed.

Further, the step of determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image may further include: in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory; and determining whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.

A thread group in the GPU has a small amount of shared memory, which can only be accessed by the threads in that thread group, while the global memory can be accessed by the threads in all thread groups. The difference between shared memory and global memory is that shared memory is faster to read and write and allows the threads in the same thread group to exchange data, but its capacity is smaller; all data can only be stored in the global memory, and the shared memory can only be used temporarily during computation.

Based on these characteristics, for a thread group, because every thread in it needs the coordinate data of pixel point i and the corresponding camera pose when determining whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the coordinate data of pixel point i and the corresponding camera pose can first be written into the shared memory of that thread group so that every thread can read them directly from the shared memory, thereby improving data-read efficiency.

In a specific implementation, in the i-th thread group, the server reads, in each parallel thread, the coordinate data of pixel point i and the corresponding camera pose from the shared memory of thread group i and reads the coordinate data of pixel point j and the corresponding camera pose from the preset global memory, and then determines whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory and the coordinate data of pixel point j and the corresponding camera pose read from the global memory. Because the coordinate data of pixel point i and the corresponding camera pose are read directly from the shared memory, computation efficiency is further improved and computation time is reduced.

Further, before the step of determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method may also include: reading the coordinate data of pixel point i and the corresponding camera pose from the global memory; and writing the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.

In this embodiment, before computing whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the server may first read the coordinate data of pixel point i and the corresponding camera pose from the global memory, and then write the read coordinate data of pixel point i and the corresponding camera pose into the shared memory of the i-th thread group. This provides the precondition for subsequently reading the coordinate data of pixel point i and the corresponding camera pose directly from the shared memory.
Further, based on the first and second embodiments of the marker point identification method of the present invention, a third embodiment of the marker point identification method of the present invention is proposed.

In this embodiment, before the above step S30, the method may further include: defining, in the shared memory of each thread group, an atomic variable with an initial value of 0; correspondingly, after the step of acquiring the position of the crossing point and taking the position of the crossing point as the position of the motion capture marker point, the method further includes: incrementing the value of the atomic variable defined in the shared memory of the i-th thread group by one.

An atomic variable is a variable that only one thread is allowed to operate on at a time; atomic variables can guarantee data correctness when multiple threads write simultaneously. In this embodiment, the atomic variable is used to record the number of motion capture marker points computed in the i-th thread group; because reading and writing an atomic variable is time-consuming, the atomic variable can be defined in the shared memory to improve read/write efficiency.

Specifically, an atomic variable with an initial value of 0 can be defined in the shared memory of each thread group. Each time the server determines that there is a trajectory crossing between pixel point i and pixel point j in the moving image, it obtains the position of the crossing point, takes the position of the crossing point as the position of a motion capture marker point, and increments the value of the atomic variable defined in the shared memory of the i-th thread group by one.

In the above manner, the positions and the number of the motion capture marker points are recorded.
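The patent records the positions and the per-group number of marker points but does not state how the results are laid out in memory; one contention-free layout, purely an assumption of this sketch, gives thread group i a private output slice of capacity i (group i tests at most i pairs) starting at offset i(i-1)/2 and uses the shared-memory atomic variable as the slot index inside that slice, so the host can gather the results from the per-group counts without any global atomics.

```cuda
// Ray and closestApproach are the illustrative definitions from the earlier sketches.
__device__ float closestApproach(Ray r1, Ray r2, float3* mid);

// Group i writes into its private slice outPoints[i*(i-1)/2 .. i*(i-1)/2 + i);
// groupCounts[blockIdx.x] tells the host how many slots of the slice are used.
__global__ void findCrossingsSliced(const Ray* rays, int n, float eps,
                                    float3* outPoints, int* groupCounts)
{
    __shared__ Ray ri;
    __shared__ int localCount;                            // per-group atomic counter
    int i = blockIdx.x + 1;
    if (i >= n) return;

    if (threadIdx.x == 0) { ri = rays[i]; localCount = 0; }
    __syncthreads();

    size_t base = static_cast<size_t>(i) * (i - 1) / 2;   // start of group i's slice
    for (int j = threadIdx.x; j < i; j += blockDim.x) {
        float3 mid;
        if (closestApproach(ri, rays[j], &mid) < eps) {
            int slot = atomicAdd(&localCount, 1);         // slot inside this group's slice
            outPoints[base + slot] = mid;
        }
    }

    __syncthreads();
    if (threadIdx.x == 0) groupCounts[blockIdx.x] = localCount;
}
```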
The present invention further provides a marker point identification device. Referring to FIG. 3, FIG. 3 is a schematic module diagram of an embodiment of the marker point identification device of the present invention. In this embodiment, the marker point identification device includes:

an acquisition module, configured to acquire a moving image of an object collected by cameras, and number the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;

a creation module, configured to call a graphics processor and create n-1 thread groups in the graphics processor;

a judgment module, configured to determine, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;

an identification module, configured to, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquire the position of the crossing point and take the position of the crossing point as the position of a motion capture marker point.

Further, the judgment module is also configured to:

create i parallel threads in the i-th thread group;

determine, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.

Further, the judgment module is also configured to:

in each parallel thread, read the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and read the coordinate data of pixel point j and the corresponding camera pose from a preset global memory;

determine whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory, and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.

Further, the device further includes:

a reading module, configured to read the coordinate data of pixel point i and the corresponding camera pose from the global memory;

a writing module, configured to write the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.

Further, the device further includes:

a definition module, configured to define, in the shared memory of each thread group, an atomic variable with an initial value of 0;

a recording module, configured to, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, increment the value of the atomic variable defined in the shared memory of the i-th thread group by one.

For the methods implemented by the above program modules and their beneficial effects, reference may be made to the embodiments of the marker point identification method of the present invention, which will not be repeated here.
The present invention further provides a storage medium.

The storage medium of the present invention stores a marker point identification program, and when the marker point identification program is executed by a processor, the steps of the marker point identification method described above are implemented.

For the methods implemented when the marker point identification program running on the processor is executed, reference may be made to the embodiments of the marker point identification method of the present invention, which will not be repeated here.

It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes that element.

The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.

Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of the present invention.

The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

  1. A marker point identification method, characterized in that the method comprises the following steps:
    acquiring a moving image of an object collected by cameras, and numbering the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;
    calling a graphics processor, and creating n-1 thread groups in the graphics processor;
    in the created i-th thread group, determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;
    if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquiring the position of the crossing point, and taking the position of the crossing point as the position of a motion capture marker point.
  2. The marker point identification method according to claim 1, characterized in that the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image comprises:
    creating i parallel threads in the i-th thread group;
    determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.
  3. The marker point identification method according to claim 2, characterized in that the step of determining, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image comprises:
    in each parallel thread, reading the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and reading the coordinate data of pixel point j and the corresponding camera pose from a preset global memory;
    determining whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory, and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.
  4. The marker point identification method according to claim 3, characterized in that before the step of determining, through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further comprises:
    reading the coordinate data of pixel point i and the corresponding camera pose from the global memory;
    writing the coordinate data of pixel point i and the corresponding camera pose read from the global memory into the shared memory of the i-th thread group.
  5. The marker point identification method according to any one of claims 1 to 4, characterized in that before the step of determining, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, the method further comprises:
    defining, in the shared memory of each thread group, an atomic variable with an initial value of 0;
    and after the step of acquiring the position of the crossing point and taking the position of the crossing point as the position of the motion capture marker point, the method further comprises:
    incrementing the value of the atomic variable defined in the shared memory of the i-th thread group by one.
  6. A marker point identification device, characterized in that the device comprises:
    an acquisition module, configured to acquire a moving image of an object collected by cameras, and number the pixel points in the moving image from 0 to n-1, where n is the number of pixel points in the moving image;
    a creation module, configured to call a graphics processor and create n-1 thread groups in the graphics processor;
    a judgment module, configured to determine, in the created i-th thread group and through threads executed in parallel, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image, where i and j are pixel point numbers, 1≤i≤n-1, 0≤j<i;
    an identification module, configured to, if there is a trajectory crossing between pixel point i and pixel point j in the moving image, acquire the position of the crossing point and take the position of the crossing point as the position of a motion capture marker point.
  7. The marker point identification device according to claim 6, characterized in that the judgment module is further configured to:
    create i parallel threads in the i-th thread group;
    determine, through the i parallel threads, whether there is a trajectory crossing between pixel point i and pixel point j in the moving image.
  8. The marker point identification device according to claim 7, characterized in that the judgment module is further configured to:
    in each parallel thread, read the coordinate data of pixel point i and the corresponding camera pose from the shared memory of the i-th thread group, and read the coordinate data of pixel point j and the corresponding camera pose from a preset global memory;
    determine whether there is a trajectory crossing between pixel point i and pixel point j based on the coordinate data of pixel point i and the corresponding camera pose read from the shared memory, and the coordinate data of pixel point j and the corresponding camera pose read from the global memory.
  9. A marker point identification equipment, characterized in that the equipment comprises: a memory, a processor, and a marker point identification program stored on the memory and executable on the processor, wherein the marker point identification program, when executed by the processor, implements the steps of the marker point identification method according to any one of claims 1 to 5.
  10. A storage medium, characterized in that a marker point identification program is stored on the storage medium, and when the marker point identification program is executed by a processor, the steps of the marker point identification method according to any one of claims 1 to 5 are implemented.
PCT/CN2020/117606 2019-10-21 2020-09-25 Marker point identification method, device, equipment and storage medium WO2021077982A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911000266.5 2019-10-21
CN201911000266.5A CN110796701B (zh) 2019-10-21 2019-10-21 标记点的识别方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021077982A1 true WO2021077982A1 (zh) 2021-04-29

Family

ID=69439458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117606 WO2021077982A1 (zh) 2019-10-21 2020-09-25 标记点的识别方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (2) CN110796701B (zh)
WO (1) WO2021077982A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610959A (zh) * 2021-07-12 2021-11-05 深圳市瑞立视多媒体科技有限公司 一种基于图形处理器的三维重建方法及装置

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796701B (zh) * 2019-10-21 2022-06-07 深圳市瑞立视多媒体科技有限公司 标记点的识别方法、装置、设备及存储介质
CN113679402B (zh) * 2020-05-18 2024-05-24 西门子(深圳)磁共振有限公司 介入治疗中的图像呈现方法及系统、成像系统和存储介质
CN111767912B (zh) * 2020-07-02 2023-09-05 深圳市瑞立视多媒体科技有限公司 标记点识别方法、装置、设备和存储介质
CN112525109A (zh) * 2020-12-18 2021-03-19 成都立鑫新技术科技有限公司 一种基于gpu测量物体运动姿态角的方法
CN116350352B (zh) * 2023-02-23 2023-10-20 北京纳通医用机器人科技有限公司 手术机器人标志位识别定位方法、装置及设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1876412A1 (en) * 2005-04-15 2008-01-09 The University of Tokyo Motion capture system and method for three-dimensional reconfiguring of characteristic point in motion capture system
CN102201122A (zh) * 2011-05-16 2011-09-28 大连大学 一种运动捕捉的数据降噪方法、系统及运动捕捉系统
CN104732560A (zh) * 2015-02-03 2015-06-24 长春理工大学 基于动作捕捉系统的虚拟摄像机拍摄方法
CN106600627A (zh) * 2016-12-07 2017-04-26 成都通甲优博科技有限责任公司 一种基于标志点的刚体运动捕捉方法及系统
CN110134532A (zh) * 2019-05-13 2019-08-16 浙江商汤科技开发有限公司 一种信息交互方法及装置、电子设备和存储介质
CN110796701A (zh) * 2019-10-21 2020-02-14 深圳市瑞立视多媒体科技有限公司 标记点的识别方法、装置、设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751485B (zh) * 2015-03-20 2017-05-24 安徽大学 一种基于gpu自适应的前景提取方法
CN104881848A (zh) * 2015-05-14 2015-09-02 西安电子科技大学 一种基于cuda的低照度图像增强并行优化方法
CN108830132B (zh) * 2018-04-11 2022-01-11 深圳市瑞立视多媒体科技有限公司 一种用于光学运动捕捉的球体布点方法及捕捉球、系统
CN108830861A (zh) * 2018-05-28 2018-11-16 上海大学 一种混合光学运动捕捉方法及系统
CN109857543A (zh) * 2018-12-21 2019-06-07 中国地质大学(北京) 一种基于多节点多gpu计算的流线模拟加速方法


Also Published As

Publication number Publication date
CN114820689A (zh) 2022-07-29
CN110796701B (zh) 2022-06-07
CN110796701A (zh) 2020-02-14

Similar Documents

Publication Publication Date Title
WO2021077982A1 (zh) 标记点的识别方法、装置、设备及存储介质
US10540772B2 (en) Feature trackability ranking, systems and methods
CN107633526B (zh) 一种图像跟踪点获取方法及设备、存储介质
US20190206078A1 (en) Method and device for determining pose of camera
US9245193B2 (en) Dynamic selection of surfaces in real world for projection of information thereon
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
JP6372149B2 (ja) 表示制御装置、表示制御方法および表示制御プログラム
KR102512828B1 (ko) 이벤트 신호 처리 방법 및 장치
CN112506340B (zh) 设备控制方法、装置、电子设备及存储介质
JP6425847B1 (ja) 画像処理装置、画像処理方法およびプログラム
CN102884492A (zh) 增强现实的指向装置
CN108717709A (zh) 图像处理系统及图像处理方法
CN115661371B (zh) 三维对象建模方法、装置、计算机设备及存储介质
KR20210046217A (ko) 복수 영역 검출을 이용한 객체 탐지 방법 및 그 장치
CN111523387B (zh) 手部关键点检测的方法、设备和计算机设备
CN112882576A (zh) Ar交互方法、装置、电子设备及存储介质
US11222125B1 (en) Biometric recognition attack test methods, apparatuses, and devices
CN110910478B (zh) Gif图生成方法、装置、电子设备及存储介质
CN112016609B (zh) 一种图像聚类方法、装置、设备及计算机存储介质
CN110097061A (zh) 一种图像显示方法及装置
CN114510142B (zh) 基于二维图像的手势识别方法及其系统和电子设备
JP2005228150A (ja) 画像照合装置
CN115830196B (zh) 虚拟形象处理方法及装置
Brunetto et al. Interactive RGB-D SLAM on mobile devices
RU2775162C1 (ru) Система и способ отслеживания движущихся объектов по видеоданным

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20880030

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20880030

Country of ref document: EP

Kind code of ref document: A1