WO2024027634A1 - Running distance estimation method and apparatus, electronic device, and storage medium - Google Patents

Running distance estimation method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024027634A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection object
target detection
target
coordinate system
target site
Prior art date
Application number
PCT/CN2023/110189
Other languages
English (en)
French (fr)
Inventor
王杰
兰荣华
孔繁昊
Original Assignee
京东方科技集团股份有限公司
成都京东方智慧科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 and 成都京东方智慧科技有限公司
Publication of WO2024027634A1

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06V 40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Definitions

  • Embodiments of the present invention relate to the field of display technology, and in particular to a running distance estimation method, device, electronic equipment and storage medium.
  • Running distance is an important indicator in sports.
  • At present, running distance estimation in sports scenes mainly uses the following methods: 1) tracking with competition-grade thermal cameras, which is very costly; 2) estimating running distance with wearable devices, which requires wearing the corresponding equipment during exercise and degrades the exercise experience.
  • Embodiments of the present invention provide a running distance estimation method and apparatus, an electronic device, and a storage medium, to solve the problem that existing running distance estimation methods are costly and degrade the sports experience.
  • to solve the above technical problem, the present invention is implemented as follows:
  • an embodiment of the present invention provides a running distance estimation method, including: acquiring video images of a target site collected by a camera device; performing target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and performing target tracking on the target detection object across multiple frames of the video images; determining first position coordinates of the target detection object in an image coordinate system according to the detection frame; determining second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system; determining the running distance of the target detection object between two adjacent frames according to the second position coordinates in those frames; and obtaining the total running distance of the target detection object from the running distances corresponding to multiple consecutive frames.
  • determining the first position coordinate of the target detection object in the image coordinate system according to the detection frame of the target detection object includes:
  • the average of the two horizontal-axis coordinate values corresponding to the bottom edge of the detection frame of the target detection object is used as the horizontal-axis coordinate of the first position coordinates, and the vertical-axis coordinate value corresponding to the bottom edge of the detection frame is used as the vertical-axis coordinate of the first position coordinates, where the vertical axis extends along the height direction of the target detection object.
  • determining the second position coordinates of the target detection object in the target site world coordinate system based on the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system includes: converting the first position coordinates into first three-dimensional homogeneous coordinates; determining second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system from the first three-dimensional homogeneous coordinates and the perspective transformation matrix; and converting the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
  • the first position coordinates are (x, y), the first three-dimensional homogeneous coordinates are (x, y, w), and w is 1.
  • before determining the second position coordinates of the target detection object in the target site world coordinate system based on the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system, the method also includes: performing target site detection on a first video image among the video images collected by the camera device to obtain a detection frame of the target site; and determining the perspective transformation matrix from the image coordinate system to the target site world coordinate system from the position coordinates of the four corner points of that detection frame and the actual size of the target site.
  • the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1; alternatively, the first video image is a video image extracted from the video images collected by the camera device at preset intervals.
  • the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
  • embodiments of the present invention provide a running distance estimating device, including:
  • the first acquisition module is used to acquire the video image of the target site collected by the camera device;
  • a detection object detection and tracking module is used to perform target detection object detection on each frame of the video image, obtain the detection frame of the target detection object, and perform target tracking on the target detection object in multiple frames of the video image.
  • a first determination module configured to determine the first position coordinate of the target detection object in the image coordinate system according to the detection frame of the target detection object
  • a second determination module configured to determine the second position coordinates of the target detection object in the target site world coordinate system based on the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
  • a third determination module configured to determine the running distance of the target detection object between the two adjacent frames of the video image based on the second position coordinates of the target detection object in the two adjacent frames;
  • the fourth determination module is configured to obtain the total running distance of the target detection object based on the running distance of the target detection object corresponding to multiple consecutive frames of the video image.
  • embodiments of the present invention provide an electronic device, including: a processor, a memory, and a program stored on the memory and executable on the processor; when the program is executed by the processor, the steps of the running distance estimation method described in the first aspect are implemented.
  • embodiments of the present invention provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the running distance estimation method described in the first aspect are implemented.
  • by performing target detection object detection, target tracking, and conversion between image-coordinate-system positions and target-site-world-coordinate-system positions on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently; the target detection object does not need to wear a wearable device, the operation is simple, the sports experience is not affected, and no high-cost competition-grade thermal camera is needed, since video images collected by an ordinary camera device suffice, keeping the cost low.
  • Figure 1 is a schematic flow chart of a running distance estimation method according to an embodiment of the present invention
  • Figure 2 is a schematic diagram of the detection results of target detection objects in video images in an embodiment of the present invention
  • Figure 3 is a schematic coordinate diagram of a detection frame of a target detection object in a video image in an embodiment of the present invention
  • Figure 4 is a schematic structural diagram of a running distance estimating device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a running distance estimation method, which includes:
  • Step 11: Obtain the video image of the target site collected by the camera device;
  • the camera device may be an ordinary camera device installed at a fixed position.
  • the target site is within the field of view coverage of the camera device.
  • the target venue may be a football field, basketball court, tennis court, badminton court, etc. If the target venue is too large (for example, a football field) and the field of view of a single camera device cannot cover it, multiple camera devices can be used, each covering a part of the target site.
  • Step 12: Perform target detection object detection on each frame of the video image, obtain the detection frame of the target detection object, and perform target tracking on the target detection object in the multiple frames of the video image;
  • the target detection object is, for example, an athlete. Please refer to FIG. 2.
  • FIG. 2 is a schematic diagram of the detection results of the target detection object in the video image in an embodiment of the present invention. It should be noted that the number of target detection objects detected in one frame of the video image may be one or more. In the embodiment of the present invention, the running distance is calculated separately for each target detection object.
  • At least one of the following target detection object detection models can be used to perform target detection object detection on video images: YOLOv5, YOLOX, RetinaNet, etc.
  • the input of the target detection object detection model is a video image,
  • and the output is an N×4 tensor, where N represents the number of target detection objects in the video image and 4 represents the four coordinate values (x_min, y_min, x_max, y_max) of each object's detection frame.
  • the SORT or DeepSORT algorithm can be used for target detection object tracking.
  • Step 13: Determine the first position coordinates of the target detection object in the image coordinate system according to the detection frame of the target detection object;
  • Step 14: Determine the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
  • Step 15: Determine the running distance of the target detection object between two adjacent frames of the video images based on the second position coordinates of the target detection object in the two adjacent frames;
  • the following formula can be used to calculate the running distance D of the target detection object between the two adjacent frames: $D = \| P_W - P_W' \|_2$, where $P_W$ is the second position coordinate of the target detection object in the current frame in the target site world coordinate system, and $P_W'$ is the second position coordinate of the target detection object in the previous frame in the target site world coordinate system. If the target detection object is tracked for the first time, D is 0.
  • Step 16: Obtain the total running distance of the target detection object based on the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
  • the running distances of the target detection object calculated between each pair of adjacent frames are accumulated to obtain the total running distance of the target detection object.
  • by performing target detection object detection, target tracking, and conversion between image-coordinate-system positions and target-site-world-coordinate-system positions on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently; the target detection object does not need to wear a wearable device, the operation is simple, the sports experience is not affected, and no high-cost competition-grade thermal camera is needed, since video images collected by an ordinary camera device suffice, keeping the cost low.
  • determining the first position coordinate of the target detection object in the image coordinate system according to the detection frame of the target detection object includes:
  • Step 131: Obtain the coordinates of the four corner points of the detection frame of the target detection object;
  • Step 132: Use the average of the two horizontal-axis (x-axis) coordinate values corresponding to the bottom edge of the detection frame of the target detection object as the horizontal-axis (x-axis) coordinate of the first position coordinates, and use the vertical-axis (y-axis) coordinate value corresponding to the bottom edge of the detection frame as the vertical-axis (y-axis) coordinate of the first position coordinates, where the vertical axis (y-axis) extends along the height direction of the target detection object.
  • Figure 3 is a schematic diagram of the detection frame of the target detection object in an embodiment of the present invention.
  • Figure 3 shows the position coordinates of the detection frame of the target detection object in the image coordinate system.
  • the coordinates of the four corner points are (x_min, y_min), (x_max, y_min), (x_min, y_max), and (x_max, y_max), respectively.
  • the first position coordinates (x, y) of the target detection object in the image coordinate system can be calculated as x = (x_min + x_max)/2 and y = y_max.
  • this calculation determines the coordinates of the target detection object from the approximate position of its feet, because the feet lie on the plane of the sports field; determining the first position coordinates in the image coordinate system this way makes the converted world-coordinate position more accurate.
  • determining the second position coordinates of the target detection object in the target site world coordinate system based on the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system includes:
  • Step 141: Convert the first position coordinates into first three-dimensional homogeneous coordinates;
  • Step 142: Determine the second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
  • the following formula can be used to determine the second three-dimensional homogeneous coordinates $\tilde{P}_W$ of the target detection object in the world coordinate system of the target site: $\tilde{P}_W = M \cdot \tilde{p}^{\mathrm{T}}$, where M is the perspective transformation matrix, $\tilde{p}$ is the first three-dimensional homogeneous coordinate, and T denotes the transpose.
  • Step 143: Convert the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
  • first, the first position coordinates of the target detection object in the image coordinate system are converted into three-dimensional homogeneous coordinates: if the first position coordinates are (x, y), the first three-dimensional homogeneous coordinates obtained after the conversion are (x, y, w).
  • then the perspective transformation matrix between the image coordinate system and the target site world coordinate system transforms the first three-dimensional homogeneous coordinates into the second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system; this approach reduces computational complexity.
  • finally, for the user's convenience, the second three-dimensional homogeneous coordinates are converted into non-homogeneous coordinates.
  • for example, the first position coordinates are (x, y) and the first three-dimensional homogeneous coordinates are (x, y, w), where w is 1.
  • w is a scaling factor; w = 1 means no scaling is performed and the original image scale is maintained, so the conversion result is not distorted.
  • other values of w are not ruled out and can be set according to specific needs.
  • before determining the second position coordinates of the target detection object in the target site world coordinate system based on the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system, the method also includes:
  • Step 01: Perform target site detection on the first video image among the video images collected by the camera device to obtain the detection frame of the target site in the first video image;
  • a target site detection model may be used to perform target site detection on the first video image.
  • the input of the target site detection model is a video image,
  • and the output is a 4×2 tensor, where 4 represents the four corner points of the target site and 2 represents the x and y coordinates of each corner point.
  • Step 02: Determine the perspective transformation matrix from the image coordinate system to the world coordinate system of the target site based on the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
  • the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1. That is, since the camera device is installed at a fixed position, the field of view it covers does not change, so the perspective transformation matrix only needs to be determined in advance from the first N video images, which saves computing resources.
  • alternatively, the first video image is a video image extracted from the video images collected by the camera device at preset intervals; that is, the perspective transformation matrix is recalculated periodically (for example, every 4 seconds) to avoid the influence of vibration and other factors on the camera device, making the calculation results more accurate.
  • the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
  • the matrix size of the perspective transformation matrix is 3×3.
  • An embodiment of the present invention also provides a running distance estimating device 40, which includes:
  • the first acquisition module 41 is used to acquire the video image of the target site collected by the camera device;
  • the detection object detection and tracking module 42 is used to perform target detection object detection on each frame of the video image, obtain the detection frame of the target detection object, and perform target tracking on the target detection object in multiple frames of the video image;
  • the first determination module 43 is configured to determine the first position coordinate of the target detection object in the image coordinate system according to the detection frame of the target detection object;
  • the second determination module 44 is used to determine the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
  • the third determination module 45 is configured to determine the running distance of the target detection object between the two adjacent frames of the video image based on the second position coordinates of the target detection object in the two adjacent frames;
  • the fourth determination module 46 is configured to obtain the total running distance of the target detection object based on the running distance of the target detection object corresponding to multiple consecutive frames of the video image.
  • by performing target detection object detection, target tracking, and conversion between image-coordinate-system positions and target-site-world-coordinate-system positions on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently; the target detection object does not need to wear a wearable device, the operation is simple, the sports experience is not affected, and no high-cost competition-grade thermal camera is needed, since video images collected by an ordinary camera device suffice, keeping the cost low.
  • the first determination module 43 is used to obtain the coordinates of the four corner points of the detection frame of the target detection object, use the average of the two horizontal-axis coordinate values corresponding to the bottom edge of the detection frame as the horizontal-axis coordinate of the first position coordinates, and use the vertical-axis coordinate value corresponding to the bottom edge of the detection frame as the vertical-axis coordinate of the first position coordinates, where the vertical axis extends along the height direction of the target detection object.
  • the second determination module 44 is used to convert the first position coordinates into first three-dimensional homogeneous coordinates; determine the second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system; and convert the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
  • the first position coordinates are (x, y), the first three-dimensional homogeneous coordinates are (x, y, w), and w is 1.
  • the running distance estimating device 40 also includes:
  • a site detection module configured to perform target site detection on the first video image in the video images collected by the camera device, and obtain a detection frame of the target site in the first video image
  • a perspective transformation matrix determination module configured to determine the perspective transformation matrix from the image coordinate system to the target site world coordinate system based on the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
  • the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1;
  • alternatively, the first video image is a video image extracted from the video images collected by the camera device at preset intervals.
  • the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
  • the embodiment of the present application also provides an electronic device 50, including a processor 51 and a memory 52;
  • the memory 52 stores programs or instructions that can run on the processor 51;
  • when the programs or instructions are executed by the processor 51, each step of the above running distance estimation method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, the details are not repeated here.
  • An embodiment of the present invention also provides a readable storage medium storing programs or instructions; when executed by a processor, the programs or instructions implement each process of the running distance estimation method embodiment and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes computer-readable storage media such as read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, computer, server, air conditioner, or network device, etc.) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

A running distance estimation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring video images of a target site collected by a camera device (11); performing target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and performing target tracking on the target detection object across multiple frames of the video images (12); determining first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object (13); determining second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system (14); determining the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames (15); and obtaining the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images (16).

Description

Running distance estimation method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202210914510.4, filed in China on August 1, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to the field of display technology, and in particular to a running distance estimation method and apparatus, an electronic device, and a storage medium.
Background
Running distance is an important indicator in sports. At present, running distance estimation in sports scenes is mainly performed in the following ways: 1) tracking with competition-grade thermal cameras, which is very costly; 2) estimating running distance with wearable devices, which requires wearing the corresponding equipment during exercise and degrades the exercise experience.
Summary
Embodiments of the present invention provide a running distance estimation method and apparatus, an electronic device, and a storage medium, to solve the problem that existing running distance estimation methods are costly and degrade the exercise experience.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a running distance estimation method, including:
acquiring video images of a target site collected by a camera device;
performing target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and performing target tracking on the target detection object across multiple frames of the video images;
determining first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object;
determining second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system;
determining the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames;
obtaining the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
Optionally, determining the first position coordinates of the target detection object in the image coordinate system according to the detection frame of the target detection object includes:
acquiring the coordinates of the four corner points of the detection frame of the target detection object;
taking the average of the two horizontal-axis coordinate values corresponding to the bottom edge of the detection frame of the target detection object as the horizontal-axis coordinate of the first position coordinates, and taking the vertical-axis coordinate value corresponding to the bottom edge of the detection frame of the target detection object as the vertical-axis coordinate of the first position coordinates, where the vertical axis extends along the height direction of the target detection object.
Optionally, determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system includes:
converting the first position coordinates into first three-dimensional homogeneous coordinates;
determining second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
converting the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
Optionally, the first position coordinates are (x, y), the first three-dimensional homogeneous coordinates are (x, y, w), and w is 1.
Optionally, before determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system, the method further includes:
performing target site detection on a first video image among the video images collected by the camera device to obtain a detection frame of the target site in the first video image;
determining the perspective transformation matrix from the image coordinate system to the target site world coordinate system according to the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
Optionally, the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1;
alternatively,
the first video image is a video image extracted from the video images collected by the camera device at preset intervals.
Optionally, the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
In a second aspect, an embodiment of the present invention provides a running distance estimation apparatus, including:
a first acquisition module configured to acquire video images of a target site collected by a camera device;
a detection object detection and tracking module configured to perform target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and to perform target tracking on the target detection object across multiple frames of the video images;
a first determination module configured to determine first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object;
a second determination module configured to determine second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system;
a third determination module configured to determine the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames;
a fourth determination module configured to obtain the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the running distance estimation method described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the running distance estimation method described in the first aspect.
In embodiments of the present invention, by performing target detection object detection, target tracking, and conversion between image-coordinate-system position coordinates and target-site-world-coordinate-system position coordinates on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently. The target detection object does not need to wear a wearable device, the operation is simple, the exercise experience is not affected, and no costly competition-grade thermal camera is required: video images collected by an ordinary camera device suffice, so the cost is low.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are provided only for the purpose of illustrating the preferred embodiments and are not to be construed as limiting the present invention. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
Figure 1 is a schematic flowchart of a running distance estimation method according to an embodiment of the present invention;
Figure 2 is a schematic diagram of detection results of target detection objects in a video image in an embodiment of the present invention;
Figure 3 is a schematic diagram of the coordinates of the detection frame of a target detection object in a video image in an embodiment of the present invention;
Figure 4 is a schematic structural diagram of a running distance estimation apparatus according to an embodiment of the present invention;
Figure 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Figure 1, an embodiment of the present invention provides a running distance estimation method, including:
Step 11: Acquire video images of a target site collected by a camera device.
In the embodiment of the present invention, the camera device may be an ordinary camera installed at a fixed position. The target site is within the field-of-view coverage of the camera device. The target site may be a football field, a basketball court, a tennis court, a badminton court, etc. If the target site is too large (for example, a football field) and the field of view of a single camera device cannot cover it, multiple camera devices may be used, each covering a part of the target site.
Step 12: Perform target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and perform target tracking on the target detection object across multiple frames of the video images.
The target detection object is, for example, an athlete. Referring to Figure 2, Figure 2 is a schematic diagram of the detection results of target detection objects in a video image in an embodiment of the present invention. It should be noted that the number of target detection objects detected in one frame of the video images may be one or more. In the embodiment of the present invention, the running distance is calculated separately for each target detection object.
In the embodiment of the present invention, at least one of the following detection models may be used to perform target detection object detection on the video images: YOLOv5, YOLOX, RetinaNet, etc.
In the embodiment of the present invention, optionally, the input of the detection model is a video image, and the output is an N×4 tensor, where N represents the number of target detection objects in the video image and 4 represents the four coordinate values (x_min, y_min, x_max, y_max) of the detection frame of each target detection object; see Figure 3.
In the embodiment of the present invention, optionally, the SORT or DeepSORT algorithm may be used for target detection object tracking.
Step 13: Determine first position coordinates of the target detection object in the image coordinate system according to the detection frame of the target detection object.
Step 14: Determine second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system.
Step 15: Determine the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames.
In the embodiment of the present invention, optionally, the following formula may be used to calculate the running distance D of the target detection object between the two adjacent frames of the video images: $D = \| P_W - P_W' \|_2$,
where $P_W$ is the second position coordinate of the target detection object in the current frame in the target site world coordinate system, and $P_W'$ is the second position coordinate of the target detection object in the previous frame in the target site world coordinate system.
If the target detection object is tracked for the first time, D is 0.
Step 16: Obtain the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
Optionally, the running distances of the target detection object calculated between each pair of adjacent frames of the video images are accumulated to obtain the total running distance of the target detection object.
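For illustration only (not part of the original disclosure): a minimal sketch of the Step 15 distance, assuming the Euclidean-norm formula above; the function name step_distance is an assumption.

```python
import math

def step_distance(p_curr, p_prev):
    """Running distance D between two adjacent frames.

    p_curr, p_prev: second position coordinates (x, y) in the target site
    world coordinate system (e.g. in meters). If the object has just been
    tracked for the first time, there is no previous position and D is 0.
    """
    if p_prev is None:
        return 0.0
    return math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
```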
In embodiments of the present invention, by performing target detection object detection, target tracking, and conversion between image-coordinate-system position coordinates and target-site-world-coordinate-system position coordinates on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently. The target detection object does not need to wear a wearable device, the operation is simple, the exercise experience is not affected, and no costly competition-grade thermal camera is required: video images collected by an ordinary camera device suffice, so the cost is low.
In the embodiment of the present invention, optionally, determining the first position coordinates of the target detection object in the image coordinate system according to the detection frame of the target detection object includes:
Step 131: Acquire the coordinates of the four corner points of the detection frame of the target detection object;
Step 132: Take the average of the two horizontal-axis (x-axis) coordinate values corresponding to the bottom edge of the detection frame of the target detection object as the horizontal-axis (x-axis) coordinate of the first position coordinates, and take the vertical-axis (y-axis) coordinate value corresponding to the bottom edge of the detection frame of the target detection object as the vertical-axis (y-axis) coordinate of the first position coordinates, where the vertical axis (y-axis) extends along the height direction of the target detection object.
Referring to Figure 3, Figure 3 is a schematic diagram of the detection frame of a target detection object in an embodiment of the present invention and shows the position coordinates of the detection frame in the image coordinate system. The four corner points have coordinates (x_min, y_min), (x_max, y_min), (x_min, y_max), and (x_max, y_max), respectively, and the first position coordinates (x, y) of the target detection object in the image coordinate system can be calculated as follows:
x = (x_min + x_max) / 2;
y = y_max.
This calculation determines the coordinates of the target detection object from the approximate position of its feet, because the feet lie on the plane of the sports field; determining the first position coordinates in the image coordinate system in this way makes the converted world-coordinate position more accurate.
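For illustration only (not part of the original disclosure): the Step 132 foot-point calculation as a one-line helper; the function name is an assumption.

```python
def foot_point(x_min, y_min, x_max, y_max):
    """First position coordinate: midpoint of the detection frame's bottom edge.

    The y axis runs along the player's height, so y_max is the bottom edge,
    which approximates the feet standing on the field plane.
    """
    return (x_min + x_max) / 2.0, y_max
```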
In the embodiment of the present invention, optionally, determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system includes:
Step 141: Convert the first position coordinates into first three-dimensional homogeneous coordinates;
Step 142: Determine second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system.
In the embodiment of the present invention, optionally, the following formula may be used to determine the second three-dimensional homogeneous coordinates $\tilde{P}_W$ of the target detection object in the target site world coordinate system:
$\tilde{P}_W = M \cdot \tilde{p}^{\mathrm{T}}$, where M is the perspective transformation matrix, $\tilde{p}$ is the first three-dimensional homogeneous coordinate, and T denotes the transpose.
Step 143: Convert the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
In the embodiment of the present invention, the first position coordinates of the target detection object in the image coordinate system are first converted into three-dimensional homogeneous coordinates; for example, the first position coordinates (x, y) are converted into first three-dimensional homogeneous coordinates (x, y, w). The perspective transformation matrix between the image coordinate system and the target site world coordinate system is then used to transform the first three-dimensional homogeneous coordinates into the second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system; this approach reduces computational complexity. In addition, for the user's convenience, after the transformation into the target site world coordinate system, the second three-dimensional homogeneous coordinates are converted into non-homogeneous coordinates.
In the embodiment of the present invention, assuming the first position coordinates are (x, y) and the first three-dimensional homogeneous coordinates are (x, y, w), optionally, w is 1. w is a scaling factor; w = 1 means no scaling is performed and the original image scale is maintained, so the conversion result is not distorted. Of course, other values of w are not ruled out and may be set according to specific needs.
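For illustration only (not part of the original disclosure): a minimal sketch of Steps 141 to 143, assuming the formula above; the function name image_to_world is an assumption.

```python
import numpy as np

def image_to_world(p_img, M, w=1.0):
    """Map a first position coordinate (image) to a second position coordinate (world).

    p_img: (x, y) in the image coordinate system; M: 3x3 perspective
    transformation matrix; w: homogeneous scaling factor (1 keeps the
    original image scale, so the result is not distorted).
    """
    p_h = np.array([p_img[0], p_img[1], w], dtype=np.float64)  # Step 141: (x, y, w)
    q = M @ p_h                        # Step 142: second 3D homogeneous coordinate
    return q[0] / q[2], q[1] / q[2]    # Step 143: dehomogenize
```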
In the embodiment of the present invention, optionally, before determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system, the method further includes:
Step 01: Perform target site detection on a first video image among the video images collected by the camera device to obtain a detection frame of the target site in the first video image.
In the embodiment of the present invention, a target site detection model may be used to perform target site detection on the first video image.
Optionally, the input of the target site detection model is a video image, and the output is a 4×2 tensor, where 4 represents the four corner points of the target site and 2 represents the x and y coordinates of each corner point.
Step 02: Determine the perspective transformation matrix from the image coordinate system to the target site world coordinate system according to the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
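For illustration only (not part of the original disclosure): Step 02 can be realized with OpenCV's cv2.getPerspectiveTransform, which maps exactly four point pairs; the pixel corner values and the 28 m by 15 m basketball-court size below are illustrative assumptions.

```python
import cv2
import numpy as np

# Four corner points of the detected site in the image (pixels), ordered to
# match the world-coordinate corners.
src = np.float32([[120, 80], [1180, 95], [1260, 700], [40, 690]])
# Actual site size in meters, same corner order (here a basketball court).
dst = np.float32([[0, 0], [28, 0], [28, 15], [0, 15]])

M = cv2.getPerspectiveTransform(src, dst)  # 3x3 image-to-world matrix
```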
In some embodiments of the present invention, optionally, the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1. That is, since the camera device is installed at a fixed position, the field of view it covers does not change, so the perspective transformation matrix only needs to be determined in advance from the first N video images, which saves computing resources.
In some embodiments of the present invention, optionally, the first video image is a video image extracted from the video images collected by the camera device at preset intervals; that is, the perspective transformation matrix is recalculated periodically (for example, every 4 seconds) to avoid the influence of vibration and other factors on the camera device, making the calculation results more accurate.
In the embodiment of the present invention, optionally, the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation. The matrix size of the perspective transformation matrix is 3×3.
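For illustration only (not part of the original disclosure): a hypothetical end-to-end loop tying Steps 12 to 16 together, reusing the sketches above; update_tracks stands in for a SORT/DeepSORT tracker, and its interface is an assumption.

```python
import math

def total_distances(frames, M, update_tracks):
    """Accumulate per-player running distance over consecutive frames."""
    totals, last_pos = {}, {}
    for frame in frames:
        boxes = detect_players(frame)                 # Step 12: detection
        for track_id, box in update_tracks(boxes):    # Step 12: tracking
            p_img = foot_point(*box)                  # Step 13
            p_world = image_to_world(p_img, M)        # Step 14
            prev = last_pos.get(track_id)
            d = 0.0 if prev is None else math.hypot(  # Step 15 (first track: D = 0)
                p_world[0] - prev[0], p_world[1] - prev[1])
            totals[track_id] = totals.get(track_id, 0.0) + d
            last_pos[track_id] = p_world
    return totals                                     # Step 16: per-player totals
```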
Referring to Figure 4, an embodiment of the present invention also provides a running distance estimation apparatus 40, including:
a first acquisition module 41 configured to acquire video images of a target site collected by a camera device;
a detection object detection and tracking module 42 configured to perform target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and to perform target tracking on the target detection object across multiple frames of the video images;
a first determination module 43 configured to determine first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object;
a second determination module 44 configured to determine second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system;
a third determination module 45 configured to determine the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames;
a fourth determination module 46 configured to obtain the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
In embodiments of the present invention, by performing target detection object detection, target tracking, and conversion between image-coordinate-system position coordinates and target-site-world-coordinate-system position coordinates on the video images collected by the camera device, the running distance of the target detection object can be estimated conveniently. The target detection object does not need to wear a wearable device, the operation is simple, the exercise experience is not affected, and no costly competition-grade thermal camera is required: video images collected by an ordinary camera device suffice, so the cost is low.
In the embodiment of the present invention, optionally, the first determination module 43 is configured to acquire the coordinates of the four corner points of the detection frame of the target detection object; and to take the average of the two horizontal-axis coordinate values corresponding to the bottom edge of the detection frame of the target detection object as the horizontal-axis coordinate of the first position coordinates, and the vertical-axis coordinate value corresponding to the bottom edge of the detection frame of the target detection object as the vertical-axis coordinate of the first position coordinates, where the vertical axis extends along the height direction of the target detection object.
In the embodiment of the present invention, optionally, the second determination module 44 is configured to convert the first position coordinates into first three-dimensional homogeneous coordinates; determine second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system; and convert the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
Optionally, the first position coordinates are (x, y), the first three-dimensional homogeneous coordinates are (x, y, w), and w is 1.
Optionally, the running distance estimation apparatus 40 further includes:
a site detection module configured to perform target site detection on a first video image among the video images collected by the camera device to obtain a detection frame of the target site in the first video image;
a perspective transformation matrix determination module configured to determine the perspective transformation matrix from the image coordinate system to the target site world coordinate system according to the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
Optionally, the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1;
alternatively,
the first video image is a video image extracted from the video images collected by the camera device at preset intervals.
Optionally, the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, where $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
As shown in Figure 5, an embodiment of the present application also provides an electronic device 50, including a processor 51 and a memory 52; the memory 52 stores programs or instructions that can run on the processor 51, and when the programs or instructions are executed by the processor 51, each step of the above running distance estimation method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, the details are not repeated here.
An embodiment of the present invention also provides a readable storage medium on which programs or instructions are stored; when executed by a processor, the programs or instructions implement each process of the above running distance estimation method embodiment and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
The processor is the processor in the terminal described in the above embodiment. The readable storage medium includes computer-readable storage media such as read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in the reverse order according to the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present application.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described above; the specific embodiments described above are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

  1. A running distance estimation method, comprising:
    acquiring video images of a target site collected by a camera device;
    performing target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and performing target tracking on the target detection object across multiple frames of the video images;
    determining first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object;
    determining second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system;
    determining the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames;
    obtaining the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
  2. The method according to claim 1, wherein determining the first position coordinates of the target detection object in the image coordinate system according to the detection frame of the target detection object comprises:
    acquiring the coordinates of the four corner points of the detection frame of the target detection object;
    taking the average of the two horizontal-axis coordinate values corresponding to the bottom edge of the detection frame of the target detection object as the horizontal-axis coordinate of the first position coordinates, and taking the vertical-axis coordinate value corresponding to the bottom edge of the detection frame of the target detection object as the vertical-axis coordinate of the first position coordinates, wherein the vertical axis extends along the height direction of the target detection object.
  3. The method according to claim 1 or 2, wherein determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system comprises:
    converting the first position coordinates into first three-dimensional homogeneous coordinates;
    determining second three-dimensional homogeneous coordinates of the target detection object in the target site world coordinate system according to the first three-dimensional homogeneous coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system;
    converting the second three-dimensional homogeneous coordinates into non-homogeneous coordinates as the second position coordinates.
  4. The method according to claim 3, wherein the first position coordinates are (x, y) and the first three-dimensional homogeneous coordinates are (x, y, w), wherein w is 1.
  5. The method according to claim 1, wherein before determining the second position coordinates of the target detection object in the target site world coordinate system according to the first position coordinates and the perspective transformation matrix between the image coordinate system and the target site world coordinate system, the method further comprises:
    performing target site detection on a first video image among the video images collected by the camera device to obtain a detection frame of the target site in the first video image;
    determining the perspective transformation matrix from the image coordinate system to the target site world coordinate system according to the position coordinates of the four corner points of the detection frame of the target site and the actual size of the target site.
  6. The method according to claim 5, wherein
    the first video image is one of the first N video images collected by the camera device, N being an integer greater than or equal to 1;
    alternatively,
    the first video image is a video image extracted from the video images collected by the camera device at preset intervals.
  7. The method according to claim 1, wherein the perspective transformation matrix is $M = \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ c_1 & c_2 & 1 \end{bmatrix}$, wherein $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ represents rotation and scaling, $[b_1\ b_2]^{\mathrm{T}}$ represents translation, and $[c_1\ c_2]$ represents the projection transformation.
  8. A running distance estimation apparatus, comprising:
    a first acquisition module configured to acquire video images of a target site collected by a camera device;
    a detection object detection and tracking module configured to perform target detection object detection on each frame of the video images to obtain a detection frame of the target detection object, and to perform target tracking on the target detection object across multiple frames of the video images;
    a first determination module configured to determine first position coordinates of the target detection object in an image coordinate system according to the detection frame of the target detection object;
    a second determination module configured to determine second position coordinates of the target detection object in a target site world coordinate system according to the first position coordinates and a perspective transformation matrix between the image coordinate system and the target site world coordinate system;
    a third determination module configured to determine the running distance of the target detection object between two adjacent frames of the video images according to the second position coordinates of the target detection object in the two adjacent frames;
    a fourth determination module configured to obtain the total running distance of the target detection object according to the running distances of the target detection object corresponding to multiple consecutive frames of the video images.
  9. An electronic device, comprising: a processor, a memory, and a program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the running distance estimation method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the running distance estimation method according to any one of claims 1 to 7.
PCT/CN2023/110189 2022-08-01 2023-07-31 Running distance estimation method and apparatus, electronic device, and storage medium WO2024027634A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210914510.4 2022-08-01
CN202210914510.4A CN115272934A (zh) 2022-08-01 2022-08-01 Running distance estimation method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2024027634A1 true WO2024027634A1 (zh) 2024-02-08

Family

ID=83746510

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110189 WO2024027634A1 (zh) 2022-08-01 2023-07-31 Running distance estimation method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN115272934A (zh)
WO (1) WO2024027634A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272934A (zh) * 2022-08-01 2022-11-01 京东方科技集团股份有限公司 跑动距离估算方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903312A (zh) * 2019-01-25 2019-06-18 北京工业大学 一种基于视频多目标跟踪的足球球员跑动距离统计方法
KR101991094B1 (ko) * 2018-04-23 2019-06-19 주식회사 울프슨랩 거리 측정 방법, 거리 측정 장치, 컴퓨터 프로그램 및 기록매체
WO2020031950A1 (ja) * 2018-08-07 2020-02-13 日本電信電話株式会社 計測校正装置、計測校正方法、及びプログラム
JP2020125960A (ja) * 2019-02-04 2020-08-20 株式会社豊田中央研究所 移動体位置推定装置、及び移動体位置推定プログラム
CN114078247A (zh) * 2020-08-12 2022-02-22 华为技术有限公司 目标检测方法及装置
CN114120168A (zh) * 2021-10-15 2022-03-01 上海洛塔信息技术有限公司 一种目标跑动距离测算方法、系统、设备及存储介质
CN115272934A (zh) * 2022-08-01 2022-11-01 京东方科技集团股份有限公司 跑动距离估算方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN115272934A (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
US9426449B2 (en) Depth map generation from a monoscopic image based on combined depth cues
WO2018137623A1 (zh) 图像处理方法、装置以及电子设备
WO2018103244A1 (zh) 直播视频处理方法、装置及电子设备
US7599568B2 (en) Image processing method, apparatus, and program
US7356254B2 (en) Image processing method, apparatus, and program
US11748894B2 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN108446694B (zh) 一种目标检测方法及装置
WO2018112788A1 (zh) 图像处理方法及设备
JP2019075156A (ja) 多因子画像特徴登録及び追尾のための方法、回路、装置、システム、及び、関連するコンピュータで実行可能なコード
US8861892B2 (en) Method and apparatus for determining projection area of image
WO2024027634A1 (zh) Running distance estimation method and apparatus, electronic device, and storage medium
CN106412441B (zh) 一种视频防抖控制方法以及终端
CN107862713A (zh) 针对轮询会场的摄像机偏转实时检测预警方法及模块
CN114037923A (zh) 一种目标活动热点图绘制方法、系统、设备及存储介质
Lee et al. A vision-based mobile augmented reality system for baseball games
US20060279800A1 (en) Image processing apparatus, image processing method, and image processing program
JP6583923B2 (ja) カメラのキャリブレーション装置、方法及びプログラム
WO2020196520A1 (en) Method, system and computer readable media for object detection coverage estimation
CN116958795A (zh) 翻拍图像的识别方法、装置、电子设备及存储介质
CN115589532A (zh) 防抖处理方法、装置、电子设备和可读存储介质
JP6516646B2 (ja) 複数のカメラで撮影した画像から個々の被写体を識別する識別装置、識別方法及びプログラム
Nakabayashi et al. Event-based High-speed Ball Detection in Sports Video
CN114463663A (zh) 一种人员身高的计算方法、装置、电子设备及存储介质
JP6632134B2 (ja) 画像処理装置、画像処理方法およびコンピュータプログラム
Guthier et al. Histogram-based image registration for real-time high dynamic range videos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23849341

Country of ref document: EP

Kind code of ref document: A1