WO2023165452A1 - Motion information acquisition method, calibration method, and apparatus

Motion information acquisition method, calibration method, and apparatus

Info

Publication number
WO2023165452A1
Authority
WO
WIPO (PCT)
Prior art keywords
collection device
target
sports field
homography matrix
acquisition
Prior art date
Application number
PCT/CN2023/078599
Other languages
English (en)
French (fr)
Inventor
李明
曹世明
张利平
王波
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023165452A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Definitions

  • the present application relates to the technical field of image processing, and in particular to a motion information acquisition method, calibration method and device.
  • Auxiliary sports training is a technique in which a multi-camera video acquisition system generates statistical sports data for a sports scene through synchronized shooting, storage, raw data processing, and sports data analysis.
  • In existing approaches, athletes are required to wear wearable sensors so that their motion information can be collected.
  • Wearing wearable sensors to obtain athletes' motion information is inconvenient, and such sensors cannot be used to collect motion information in scenarios such as competitions.
  • Embodiments of the present application provide a motion information acquisition method, a calibration method, and a device, which do not require the athlete to wear a wearable sensor, thereby improving the convenience of obtaining motion information.
  • The embodiment of the present application provides a method for acquiring motion information, including: acquiring single-frame images collected at the same moment by multiple acquisition devices deployed in a set space including a sports field; and, according to the first homography matrix corresponding to two adjacent acquisition devices, performing tracking processing on a target athlete included in the single-frame images collected by the two adjacent acquisition devices at the same moment, so as to obtain the motion information of the target athlete.
  • The two adjacent acquisition devices are any two of the multiple acquisition devices that are adjacent in the set motion direction of the sports field. The first homography matrix is used to represent the mapping relationship of the position coordinates of the same target object in the single-frame images collected by the two adjacent acquisition devices at the same moment.
  • The first homography matrix is obtained through calibration based on multiple frames of images captured synchronously by the two adjacent acquisition devices at different moments, where the position of the first target calibration object in the sports field differs between images captured at different moments; the first target calibration object includes a plurality of calibration points.
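The relay tracking that the first homography matrix enables between adjacent cameras can be sketched as follows; the function name, detection format, and distance threshold are illustrative assumptions, not taken from the application:

```python
import numpy as np

def relay_match(H1, pt_cam_a, detections_cam_b, max_dist=50.0):
    """Hand a tracked athlete over between adjacent cameras: project the
    athlete's image position from camera A into camera B through the
    pairwise homography H1, then keep the identity of the nearest
    detection in camera B (or give up beyond max_dist pixels)."""
    x, y, w = H1 @ np.array([pt_cam_a[0], pt_cam_a[1], 1.0])
    proj = np.array([x / w, y / w])  # perspective division
    dists = [np.linalg.norm(proj - np.asarray(d, dtype=float))
             for d in detections_cam_b]
    i = int(np.argmin(dists))
    return i if dists[i] <= max_dist else None
```

Tracking then continues under camera B with the matched detection keeping the same athlete identity, which is the kind of relay shown in FIG. 19.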
  • the way of obtaining sports information of athletes by wearing wearable sensors is not very convenient.
  • In this application, multiple acquisition devices are deployed around the sports field, and athletes are then tracked based on pre-calibrated homography matrices between the cameras; since the athlete does not need to wear any equipment, convenience is improved.
  • A method that calibrates the cameras using the field's inherent visual features/marking points depends on the quantity and distribution of qualifying calibration points across the entire sports field; few or unevenly distributed feature points reduce camera calibration accuracy.
  • This application uses a mobile calibration object for calibration, which removes the dependence on the field's inherent visual features/marking points; it is applicable to a wider range of scenarios, including larger fields, and, because the visual features/calibration points are numerous and evenly distributed, the calibration accuracy is higher.
  • The existing technology often needs to expand the field of view to cover multiple marking points, which makes the tracked target smaller in the image and reduces detection accuracy.
  • The embodiment of the present application can solve the calibration problem of a large field without using a camera with a large field of view, which is beneficial to improving the accuracy of target detection and tracking in a large scene.
  • the multiple calibration points included in the first target calibration object are at the same distance from the plane where the ground of the sports field is located.
  • The method further includes: obtaining multiple frames of images collected by the first acquisition device and the second acquisition device at different moments during the movement of the first target calibration object in a set area of the sports field, where the first acquisition device and the second acquisition device are any two acquisition devices adjacent in the set motion direction of the sports field, and the set area includes at least the same area of the sports field within the viewing ranges of the two adjacent acquisition devices; acquiring first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, and acquiring second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device; and determining the first homography matrix according to the first position coordinate information and the second position coordinate information.
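Given the matched first and second position coordinates, the first homography matrix can be estimated with the standard direct linear transform (DLT). The sketch below is illustrative and assumes exact, noise-free matches; a production system would more likely use a robust estimator such as OpenCV's cv2.findHomography with RANSAC:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    via the direct linear transform; needs >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest
    # singular value of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity
```

Because the calibration object moves through the common-view area, many well-spread point pairs accumulate over the multi-frame sequence, which is what gives the higher calibration accuracy claimed above.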
  • The method further includes: determining, according to the second homography matrix corresponding to the third acquisition device and the position coordinates of the target athlete in the single-frame image captured by the third acquisition device, the third position coordinates of the target athlete in the coordinate system corresponding to the sports field. The second homography matrix corresponding to the third acquisition device is used to describe the mapping relationship between the position coordinates of a target object in the image captured by the third acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field; the third acquisition device is any one of the multiple acquisition devices.
  • The third acquisition device is a reference acquisition device, and the second homography matrix of the reference acquisition device is determined according to the position coordinates, in an image captured by the reference acquisition device, of the plurality of calibration points included in a second target calibration object, and the position coordinates of those calibration points in the coordinate system corresponding to the sports field; the second target calibration object is located in the sports field within the viewing range of the reference acquisition device; or,
  • the third collection device is not a reference collection device and is adjacent to the fourth collection device in the set movement direction of the sports field, and the second homography matrix corresponding to the third collection device is determined according to the second homography matrix corresponding to the fourth collection device and the first homography matrix between the fourth collection device and the third collection device, where the fourth collection device is a reference collection device; or,
  • the third collection device is not a reference collection device and there is at least one collection device between the third collection device and the fourth collection device in the set movement direction of the sports field, and the second homography matrix corresponding to the third collection device is determined according to the second homography matrix corresponding to the fourth collection device and the first homography matrices corresponding to every two adjacent collection devices between the third collection device and the fourth collection device.
  • In this design, the first homography matrices of adjacent cameras are cascaded to determine the second homography matrix of each camera, so the second homography matrix does not need to be determined through calibration for every camera, which reduces both the complexity and the duration of calibration.
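The cascading of homographies described here amounts to matrix composition. In the sketch below, the direction convention (H_adj[i] maps camera i's image coordinates to camera i+1's) and the function name are assumptions for illustration:

```python
import numpy as np

def cascade_field_homographies(H_field_ref, H_adj):
    """Given the reference camera's image-to-field homography and the
    pairwise image-to-image homographies of adjacent cameras, derive an
    image-to-field homography for every camera in the chain."""
    H_field = [H_field_ref]
    for H in H_adj:
        # Camera k+1 -> camera k (inverse of the pairwise map) -> field.
        H_next = H_field[-1] @ np.linalg.inv(H)
        H_field.append(H_next / H_next[2, 2])  # normalize the scale
    return H_field
```

Only the reference camera needs a field calibration; every other camera's field mapping falls out of the products above.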
  • the viewing range of the reference acquisition device includes a set landmark reference point located on the ground of the sports field;
  • The position coordinates of the plurality of calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters characterize the relative positional relationship and relative posture between the components included in the second target calibration object.
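Concretely, composing the landmark position, the object's offset and posture relative to it, and the topology parameters is a few lines of arithmetic on the ground plane; every number below is hypothetical:

```python
import numpy as np

# Field coordinates of the calibration points are composed from: the
# surveyed landmark position, the object's offset relative to the
# landmark, the object's posture (here a single yaw angle), and the
# per-point offsets given by the object's topology parameters.
landmark_field = np.array([50.0, 10.0])   # landmark in field coords (m)
object_offset = np.array([2.0, 0.0])      # object origin relative to landmark
heading = np.deg2rad(90.0)                # object's yaw in the field frame
R = np.array([[np.cos(heading), -np.sin(heading)],
              [np.sin(heading),  np.cos(heading)]])
point_offsets = np.array([[0.0, 0.0],     # calibration points in the
                          [0.5, 0.0],     # object's own frame
                          [0.0, 0.5]])    # (topology parameters)
calib_points_field = landmark_field + object_offset + point_offsets @ R.T
```

A full 3D treatment would use a rotation matrix for the complete relative posture; the 2D version above is enough when all calibration points sit at the same height, as in the design described earlier.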
  • The embodiment of the present application provides a calibration method, including: obtaining multiple frames of images collected at different moments by a first acquisition device and a second acquisition device, among multiple acquisition devices deployed in a set space including a sports field, during the movement of a first target calibration object in a set area of the sports field, where the first acquisition device and the second acquisition device are any two acquisition devices adjacent in the set movement direction of the sports field, and the set area includes at least the same area of the sports field within the viewing ranges of the two adjacent acquisition devices; acquiring first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, and acquiring second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device; and determining the first homography matrix according to the first position coordinate information and the second position coordinate information.
  • A method that calibrates the cameras using the field's inherent visual features/marking points depends on the quantity and distribution of qualifying calibration points across the entire sports field; few or unevenly distributed feature points reduce camera calibration accuracy.
  • This application uses a mobile calibration object for calibration, which removes the dependence on the field's inherent visual features/marking points; it is applicable to a wider range of scenarios, including larger fields, and, because the visual features/calibration points are numerous and evenly distributed, the calibration accuracy is higher.
  • The existing technology often needs to expand the field of view to cover multiple marking points, which makes the tracked target smaller in the image and reduces detection accuracy.
  • The embodiment of the present application can solve the calibration problem of a large field without using a camera with a large field of view, which is beneficial to improving the accuracy of target detection and tracking in a large scene.
  • The method further includes: acquiring a first image, captured by a third acquisition device, that includes a second target calibration object within its viewing range, where the second target calibration object includes a plurality of calibration points and the third acquisition device is any one of the multiple acquisition devices; identifying the position coordinates of the plurality of calibration points included in the second target calibration object in the first image; and determining the second homography matrix corresponding to the third acquisition device according to the position coordinates of those calibration points in the coordinate system corresponding to the sports field and their position coordinates in the first image.
  • The third collection device is adjacent to the fourth collection device in the set movement direction of the sports field, and the method further includes: determining a second homography matrix corresponding to the fourth collection device according to the second homography matrix corresponding to the third collection device and the first homography matrix between the third collection device and the fourth collection device.
  • The third collection device is separated from the fifth collection device by at least one collection device in the set movement direction of the sports field, and the method further includes: determining the second homography matrix of the fifth collection device according to the second homography matrix corresponding to the third collection device and the first homography matrices corresponding to every two adjacent collection devices between the third collection device and the fifth collection device.
  • The third collection device is a reference collection device, and the viewing range of the reference collection device includes a set landmark reference point on the ground of the sports field; the position coordinates of the plurality of calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters characterize the relative positional relationship and relative posture between the components included in the second target calibration object.
  • An embodiment of the present application provides a sports information acquisition apparatus, including: an acquisition unit, configured to acquire single-frame images collected at the same moment by multiple acquisition devices deployed in a set space including a sports field; and a processing unit, configured to perform, according to the first homography matrix corresponding to two adjacent acquisition devices, tracking processing on a target athlete included in the single-frame images collected by the two adjacent acquisition devices at the same moment, so as to obtain the motion information of the target athlete. The two adjacent acquisition devices are any two of the multiple acquisition devices that are adjacent in the set motion direction of the sports field. The first homography matrix is used to represent the mapping relationship of the position coordinates of the same target in the single-frame images collected by the two adjacent acquisition devices at the same moment; it is obtained through calibration based on multiple frames of images captured synchronously by the two adjacent acquisition devices at different moments, where the position of the first target calibration object in the sports field differs between images captured at different moments; the first target calibration object includes a plurality of calibration points.
  • the multiple calibration points included in the first target calibration object are at the same distance from the plane where the ground of the sports field is located.
  • The acquiring unit is further configured to acquire multiple frames of images collected by the first acquisition device and the second acquisition device at different moments during the movement of the first target calibration object in a set area of the sports field; the first acquisition device and the second acquisition device are any two acquisition devices adjacent in the set motion direction of the sports field, and the set area includes at least the same area of the sports field within the viewing ranges of the two adjacent acquisition devices.
  • The processing unit is further configured to acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
  • The processing unit is further configured to determine, according to the second homography matrix corresponding to the third acquisition device and the position coordinates of the target athlete in the single-frame image captured by the third acquisition device, the third position coordinates of the target athlete in the coordinate system corresponding to the sports field.
  • The second homography matrix corresponding to the third acquisition device is used to describe the mapping relationship between the position coordinates of a target object in the image captured by the third acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field; the third acquisition device is any one of the multiple acquisition devices.
  • The third acquisition device is a reference acquisition device, and the second homography matrix of the reference acquisition device is determined according to the position coordinates, in an image captured by the reference acquisition device, of the plurality of calibration points included in a second target calibration object, and the position coordinates of those calibration points in the coordinate system corresponding to the sports field; the second target calibration object is located in the sports field within the viewing range of the reference acquisition device; or,
  • the third collection device is not a reference collection device and is adjacent to the fourth collection device in the set movement direction of the sports field, and the second homography matrix corresponding to the third collection device is determined according to the second homography matrix corresponding to the fourth collection device and the first homography matrix between the fourth collection device and the third collection device, where the fourth collection device is a reference collection device; or,
  • the third collection device is not a reference collection device and there is at least one collection device between the third collection device and the fourth collection device in the set movement direction of the sports field, and the second homography matrix corresponding to the third collection device is determined according to the second homography matrix corresponding to the fourth collection device and the first homography matrices corresponding to every two adjacent collection devices between the third collection device and the fourth collection device.
  • The viewing range of the reference acquisition device includes a set landmark reference point located on the ground of the sports field; the position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters represent the relative positional relationship and relative posture between the components included in the second target calibration object.
  • An embodiment of the present application provides a calibration device, including: an acquisition unit, configured to acquire multiple frames of images collected at different moments by a first acquisition device and a second acquisition device, among multiple acquisition devices deployed in a set space including a sports field, during the movement of the first target calibration object in a set area of the sports field; the first collection device and the second collection device are any two collection devices adjacent in the set movement direction of the sports field, and the set area includes at least the same area of the sports field within the viewing ranges of the two adjacent collection devices;
  • a processing unit, configured to acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
  • The acquisition unit is further configured to acquire a first image, captured by the third acquisition device, that includes a second target calibration object within its viewing range; the second target calibration object includes a plurality of calibration points, and the third collection device is any one of the multiple collection devices.
  • The processing unit is further configured to identify the position coordinates of the multiple calibration points included in the second target calibration object in the first image, and to determine the second homography matrix corresponding to the third acquisition device according to the position coordinates of those calibration points in the coordinate system corresponding to the sports field and their position coordinates in the first image.
  • The third collection device is adjacent to the fourth collection device in the set movement direction of the sports field, and the processing unit is further configured to determine a second homography matrix corresponding to the fourth collection device according to the second homography matrix corresponding to the third collection device and the first homography matrix between the third collection device and the fourth collection device.
  • The third collection device is separated from the fifth collection device by at least one collection device in the set movement direction of the sports field, and the processing unit is further configured to determine the second homography matrix of the fifth collection device according to the second homography matrix corresponding to the third collection device and the first homography matrices corresponding to every two adjacent collection devices between the third collection device and the fifth collection device.
  • The third collection device is a reference collection device, and the viewing range of the reference collection device includes a set landmark reference point on the ground of the sports field; the position coordinates of the plurality of calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters represent the relative positional relationship and relative posture between the components included in the second target calibration object.
  • the embodiment of the present application provides an apparatus for acquiring exercise information, including a memory and a processor.
  • The memory is used to store programs or instructions; the processor is used to call the programs or instructions to execute the method described in the first aspect or in any design of the first aspect.
  • the embodiment of the present application provides a calibration device, including a memory and a processor.
  • The memory is used to store programs or instructions; the processor is used to call the programs or instructions to execute the method described in the second aspect or in any design of the second aspect.
  • The present application provides a computer-readable storage medium in which a computer program or instruction is stored; when the computer program or instruction is executed by a terminal device, a processor is caused to execute the method in the first aspect or any possible design of the first aspect, or to execute the method in the second aspect or any possible design of the second aspect.
  • The present application provides a computer program product including a computer program or instruction; when the computer program or instruction is executed by a processor, the method in the first aspect or any possible implementation of the first aspect, or the method in the second aspect or any possible implementation of the second aspect, is realized.
  • FIG. 1 is a schematic diagram of an information system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another information system architecture provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a camera deployment method for a circular speed skating track provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another camera deployment method for a circular speed skating track provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another camera deployment method for a circular speed skating track provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a camera deployment method for a track and field track provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a possible first target calibration object provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a calibration method for the first homography matrix provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of moving a calibration object in the common-view area of adjacent cameras provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of feature point detection of the mobile calibration object provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of matching points of adjacent cameras provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of determining a homography matrix through matching points of adjacent cameras provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a calibration method for the second homography matrix provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the relationship between the field coordinate system and the calibration object provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of calibration of the mapping relationship between the field coordinate system and the image coordinate system provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the motion information acquisition process provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of ROI and mask provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of human body tracking at two moments at a single camera position provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of ID relay tracking between adjacent cameras provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of athlete human body detection and trajectory projection to the field provided by an embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of a device for obtaining motion information provided by an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of the calibration device provided by an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of the device provided by an embodiment of the present application.
  • This application provides a sports information acquisition method, calibration method, and device, which are used to calibrate the coordinate-system mapping relationships between collection devices, and between the collection devices and the sports field, in a set space including the sports field, and, according to the calibration results, to obtain athletes' sports information during training or competition on the sports field, such as motion trajectory, speed, number of steps, or distance.
  • Athletic information can be used to assist athletes in training.
  • the sports field can be a circular field, such as a circular track or a circular speed skating track.
  • the sports scene can also be a rectilinear field.
  • the sports field may also be in other forms, such as a football field, which is not specifically limited in this embodiment of the present application.
  • the information system includes a plurality of collection devices and data processing servers.
  • N collection devices are taken as an example, and N is a positive integer.
  • the number of cameras included in the information system can be configured according to the size of the sports field.
  • The collection device may be a camera, a webcam, a video camera, or the like.
  • the multiple collection devices may be deployed in a set space where the sports field is located.
  • the sports venue is a circular speed skating track
  • the circular speed skating track is located in a speed skating hall
  • multiple collection devices are deployed in the speed skating hall.
  • The viewing range of each of the multiple collection devices includes a part of the sports field; different collection devices have different viewing ranges, and in the motion direction there is a common-view area between the viewing ranges of two spatially adjacent collection devices.
  • the common view area is the area captured by spatially adjacent acquisition devices.
  • the data processing server may include one or more servers. If the data processing server includes multiple servers, it can be understood that the data processing server is a server cluster composed of multiple servers.
  • the data processing server can receive the video streams collected by the multiple acquisition devices, extract synchronized frames of the multiple acquisition devices, and then process the synchronized frames frame by frame to obtain motion information.
  • the data processing server may also perform calibration processing.
  • the calibration process may include calibration to obtain the first homography matrix and/or the second homography matrix.
  • the first homography matrix is used to represent the mapping relationship of the position coordinates of the same target object in the single-frame images collected at the same time by two adjacent collection devices.
  • the second homography matrix is used to describe the mapping relationship between the position coordinates of the target object in the image captured by a collection device and the position coordinates of the target object in the coordinate system corresponding to the sports field.
  • the coordinate system corresponding to the sports field may be a spatial coordinate system created based on the sports field, and the origin may be a certain point in the sports field.
  • the coordinate system corresponding to the sports field may also be established with another location as the origin, which is not specifically limited in this application.
  • the information system may further include one or more routing devices, and the routing devices may be used to transmit the images collected by the collection device to the data processing server.
  • Routing devices can be routers, switches, and so on. Taking a switch as an example, as shown in Figure 2, multiple layers of switches can be deployed in the information system. Taking two layers as an example, the switches deployed at the first layer can be used to connect one or more collection devices, and the switch deployed at the second layer can serve as the master switch: one end of the master switch is connected to the first-layer switches, and the other end is connected to the data processing server.
  • the information system also supports the acquisition of motion analysis data through mobile devices.
  • the information system further includes a mobile front end.
  • a mobile front end includes a web page server.
  • the web page server is connected to the data processing server.
  • the mobile front end may further include a wireless router (or a wired router) and one or more terminal devices.
  • the terminal device may be an electronic device that supports access to web pages, such as a desktop computer, a portable computer, and a mobile phone.
  • One or more terminal devices can operate the data processing server by accessing the web page server, for example, sending synchronous acquisition signals or stop recording signals to the multiple acquisition devices.
  • the synchronous acquisition signal is used to instruct the acquisition device to start video recording synchronously.
  • the stop recording signal is used to instruct the capture device to stop video recording.
  • Another example is video playback of historical records, or sports information and display, and so on.
  • the calibration method provided in the embodiment of the present application will be described in detail below with reference to the embodiments.
  • the acquisition device is taken as an example.
  • camera deployment depends on the allowed installation points in the set space to which the sports field belongs, such as whether there are columns, trusses, or ceilings available.
  • each camera can cover a part of the entire track, such as a length of track.
  • the spatially adjacent cameras have a common view area, such as a common view area of 1/2 or 1/3 of the image.
  • a truss refers to a planar or spatial structure composed of straight rods, generally with triangular units, which can serve as a customized mounting structure for camera brackets.
  • Figures 3-5 show schematic diagrams of three possible camera deployment methods. See (a) in Figure 3, which takes the deployment of 20 camera positions along the track as an example. Camera positions refer to cameras distributed at different positions. Each camera is located above the outside of the track, looking down at the track from a height. In (a) of Figure 3, the cameras are deployed on columns as an example. (b) in Figure 3 is a top view of the camera deployment. (c) in Figure 3 is a side view of a camera deployed on a column.
  • Straight-track cameras are deployed on the extension of the straight, and curve cameras on the side of the curve, so as to shoot athletes from the front. Each camera captures a 40-meter area, and two spatially adjacent cameras share a 20-meter common viewing range, so a total of 20 cameras cover the 400-meter track (5 cameras on each of the two straights and 5 on each of the two curves). In some scenarios, after installing a camera at its set position, the focus, orientation, or field of view of the camera can be adjusted so that each camera focuses on a part of the track and there is a common view area between adjacent camera positions. The cameras are connected to two switches: cameras 1-10 are connected to one switch, cameras 11-20 are connected to the other switch, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • (a) in FIG. 4 is a top view of the camera deployment.
  • (b) in FIG. 4 is a side view of the cameras deployed on the ceiling truss.
  • Camera groups are connected to two switches, cameras 1-10 are connected to one switch, cameras 11-20 are connected to another switch, and video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • FIG. 5 is a side view of cameras 1-5 deployed on a column. Camera groups are connected to two switches, cameras 1-10 are connected to one switch, cameras 11-20 are connected to another switch, and video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • FIG. 6 shows a schematic diagram of the structure of the track and the camera deployment. Each camera position is located above the stands outside the track, looking down at the track from a height.
  • Figure 6(b) is a top view of a camera deployment.
  • (c) in Figure 6 is a side view of the camera deployed on the stands. Straight cameras are deployed on the extension of the straight and on the side of the curve to shoot athletes from the front.
  • Camera groups are connected to two switches, cameras 1-10 are connected to one switch, cameras 11-20 are connected to another switch, and video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
  • the calibration scheme of the first homography matrix is described as follows.
  • the calibration object used to calibrate the first homography matrix is referred to as the first target calibration object.
  • the first target calibrator includes a plurality of calibration points.
  • the first target calibrator may include one or a set of calibrator.
  • Each calibrator in a set of calibrator includes at least one calibration point.
  • Calibration points have stable visual characteristics that do not change from moment to moment.
  • the first target marking object has a specific pattern, and the intersection points of the lines in the pattern can be used as marking points.
  • other methods may also be used to set the calibration points on the calibration object, which is not specifically limited in this embodiment of the present application.
  • Fig. 7 is a schematic diagram showing a possible structure of the first target calibrator.
  • the first target calibration object includes a group of calibration objects.
  • Each calibrator in a set of calibrator can be a cabinet shelf.
  • One side of the box shelf has a specific pattern, and the specific patterns on different calibration objects are different.
  • the two-dimensional code is taken as an example.
  • the points at each corner of the two-dimensional code can be selected as the calibration points, or the two corner points on the lower side of the box can be used as the calibration points, or the two points at the lower corners of the rectangle including the two-dimensional code can be used as the calibration points.
  • here, the two points at the lower corners of the rectangle containing the two-dimensional code are taken as the calibration points as an example.
  • FIG. 8 it is a schematic flowchart of a calibration method for the first homography matrix.
  • the first homography matrices of the first camera and the second camera are calibrated as an example.
  • the first camera and the second camera are any two spatially adjacent cameras in the moving direction of the sports field.
  • for example, camera 1 and camera 2, camera 2 and camera 3, and so on.
  • the method provided in FIG. 8 may be executed by the data processing server, or may be executed by a processor or a processor system in the data processing server.
  • the set area at least includes the common-view area, that is, the area of the sports field that falls within the viewing ranges of both adjacent cameras.
  • the data processing server may send synchronous shooting signals to multiple cameras in the information system. In this way, multiple cameras take pictures synchronously during the moving process of the first target calibration object to obtain video streams respectively, and send them to the data processing server.
  • the first target marker moves from a starting position on the playing field to a finishing position.
  • Synchronous shooting signal refers to the signal used to trigger multi-camera shooting.
  • for wired synchronous triggering, the signal is generally a periodic pulse signal (the period being related to the shooting frame rate).
  • for network-based synchronous triggering, a communication synchronization protocol is generally defined to transmit specific instructions that trigger periodic shooting.
  • the sports field can be segmented in the direction of motion.
  • a 400-meter-long circular speed skating track is divided into two sections, each section is 200 meters, respectively 1-200 meters and 200-400 meters.
  • Two first target calibration objects can be used to move on two sections of the ice track respectively, and each first target calibration object only needs to move 200 meters. In this way, the operation time of the target calibration object can be reduced, and the calibration time can also be reduced.
  • the data processing server may send synchronous shooting signals to the two cameras currently being calibrated. The two cameras then shoot synchronously during the moving process of the first target calibration object to obtain respective video streams, and send them to the data processing server.
  • the first target calibration object moves within a set area including the common viewing area of the two cameras.
  • the first camera and the second camera synchronously capture M frames of images containing the first target calibration object. It can be understood that the first camera captures one image at each of M moments to obtain M images. It should be understood that the first camera and the second camera acquire each frame synchronously, that is, the first camera and the second camera acquire a single frame of image at the same moment. It should be noted that a certain error is allowed in "the same moment", such as an error on the order of milliseconds, which is within the error range allowed by the calibration.
  • the positions of the first target calibration object in the sports field in the M images collected by the first camera are all different. It can be understood that the M images captured by the first camera all include the first calibration object, and the positions of the first calibration object in the images captured by the first camera at different times are different.
  • the M images captured by the second camera all include the first calibration object, and the positions of the first calibration object in the images captured by the second camera at different times are different.
  • the multiple calibration points included in the first target calibration object are at the same distance from the plane where the ground of the sports field is located. It can be understood that when selecting a calibration point, multiple calibration points parallel to the ground can be selected.
  • the first target calibration object has a calibration surface of a specific image and can be placed parallel to the ground.
  • for example, if the first target calibration object moves in the common view area of the first camera and the second camera for 10 seconds and recording is performed at 25 fps, the first camera and the second camera each collect 250 frames of images in the common view area.
  • the first camera and the second camera send the collected images to the data processing server.
  • FIG. 9 is a schematic diagram of a set pattern of the first target calibration object in images captured by the first camera and the second camera at three different times.
  • the rectangular area of view 1 represents the image captured by the first camera
  • the rectangular area of view 2 represents the image captured by the second camera.
  • Lines represent track dividing lines.
  • After receiving the video streams sent by the first camera and the second camera respectively, the data processing server detects the feature points of the first target calibration object in each frame of the video streams (that is, detects the positions of the calibration points), for example, detects the two bottom corner points of the two-dimensional code.
  • FIG. 10 (a) shows the feature points of the moving first target calibration object. Taking the case where the distance between the calibration points and the ground of the sports field remains constant during the movement of the first target calibration object as an example, the multiple feature points accumulated over continuous frames lie in a plane parallel to the ground of the sports field, as shown in (b) of Figure 10.
  • the data processing server determines the position coordinates of the corresponding feature points of the same calibration point in the image collected at the same time by the adjacent camera position, forming a pair of matching points Pi and Pi'.
  • Pi is a feature point in the image collected by the first camera
  • Pi' is a feature point in the image collected by the second camera, as shown in FIG. 11 .
  • the position coordinates of all the feature points in the image captured by the first camera in the image coordinate system of the first camera constitute the first position coordinate information
  • all the feature points in the image captured by the second camera are in the image coordinate system of the second camera.
  • the position coordinates of the image coordinate system constitute the second position coordinate information.
  • the data processing server may calculate the first homography matrix H1 between two adjacent camera positions according to the position coordinates of the matching feature points in the first position coordinate information and the second position coordinate information.
  • the calculated first homography matrix H1 minimizes the average projection error Error_proj obtained when all feature points Pi' in the image captured by the second camera are projected to the positions of the corresponding feature points Pi in the image captured by the first camera.
  • the determined first homography matrix H1 is saved as the calibration results of the first camera and the second camera.
  • Pi" in Fig. 12 represents the 2D point projected from Pi' in the second camera to the image coordinate system of the first camera through the first homography matrix H1.
  • Ideally, the reprojections (Pi" and Pi) of the same feature point observed from the perspectives of the two cameras should coincide exactly; however, because of imaging quality, the reprojection of the same feature point under different camera perspectives will inevitably have a pixel-level deviation, that is, Error_proj is not 0.
  • the calibration result can be compared with a tolerable deviation value to determine whether the calibration result meets the requirements.
  • the tolerable deviation value can be determined according to user experience, or determined through experiments, etc.
  • the calibration accuracy and the calibration success rate can be improved by increasing the number of calibration points or improving the distribution of the calibration points.
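The calibration of the first homography matrix described above can be sketched in code. The following is an illustrative Python sketch (not the patent's implementation): it estimates a homography from four matched point pairs by direct linear transformation, computes the average projection error Error_proj, and compares it with a tolerable deviation value. All coordinate values and the tolerance are invented for illustration.

```python
import math

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for an n x n system A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_homography(src, dst):
    # Direct linear transformation from four matched point pairs,
    # with h33 fixed to 1 (8 unknowns, 8 equations).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def apply_homography(H, pt):
    # Homogeneous multiplication followed by perspective division.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def avg_projection_error(H, pts_cam2, pts_cam1):
    # Error_proj: mean distance between each Pi and the projection Pi" of Pi'.
    total = 0.0
    for p2, p1 in zip(pts_cam2, pts_cam1):
        u, v = apply_homography(H, p2)
        total += math.hypot(u - p1[0], v - p1[1])
    return total / len(pts_cam2)

# Matched feature points Pi (camera 1) and Pi' (camera 2); values invented.
pts1 = [(100.0, 200.0), (300.0, 210.0), (120.0, 400.0), (320.0, 390.0)]
pts2 = [(90.0, 195.0), (310.0, 205.0), (115.0, 395.0), (330.0, 385.0)]
H1 = estimate_homography(pts2, pts1)  # maps camera-2 pixels to camera-1 pixels
error_proj = avg_projection_error(H1, pts2, pts1)
TOLERANCE_PX = 2.0                    # tolerable deviation value (invented)
calibration_ok = error_proj <= TOLERANCE_PX
```

In practice many more matched points would be accumulated across frames and the homography solved by least squares (e.g. with RANSAC to reject outliers); the four-point exact solve above only illustrates the mapping and the error check.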
  • the calibration scheme of the second homography matrix provided by the embodiment of the present application is described.
  • a calibration object is set in a set area, so that the second homography matrix corresponding to the camera is calibrated through the image captured by the camera including the calibration object.
  • the calibration object used to calibrate the second homography matrix is referred to as the second target calibration object.
  • a plurality of calibration points are included in the second target calibrator.
  • the second target calibrator may include one or a set of calibrator. Each calibrator in a set of calibrator includes at least one calibration point. Calibration points have stable visual characteristics that do not change from moment to moment.
  • the second target calibration object has a specific pattern, and the intersection points of the lines in the pattern can be used as calibration points.
  • other methods may also be used to set the calibration points on the calibration object, which is not specifically limited in this embodiment of the present application.
  • the second homography matrix may be calibrated for all cameras.
  • alternatively, the second homography matrix can be calibrated for only some cameras, and the second homography matrices of the remaining uncalibrated cameras can be determined from the first homography matrices between adjacent cameras and the second homography matrices of the calibrated cameras.
  • the calibration of the second homography matrix of the third camera is taken as an example. If the second homography matrix is calibrated for all cameras, the third camera may be any one of all the cameras. If not all cameras undergo calibration of the second homography matrix, the third camera may be understood as any one of the cameras for which the second homography matrix is calibrated.
  • FIG. 13 it is a schematic flowchart of a calibration method for the second homography matrix provided in the embodiment of the present application.
  • the method may be performed by the data processing server, or by a processor or processor system in the data processing server.
  • the multiple calibration points included in the second target calibration object are respectively identified in the first image.
  • first target calibrator and the second target calibrator used in the embodiment of the present application may be the same or different, which is not specifically limited in the embodiment of the present application.
  • the viewing range of the third camera includes a landmark reference point set on the ground of the sports field.
  • Landmark reference points refer to points that can be repeatedly and accurately measured on the sports field, such as the intersection point between the finish line and a track line, or the intersection point between the starting line and a certain track line; artificially marked points can also be used as landmark reference points.
  • the position coordinates, in the coordinate system corresponding to the sports field, of the plurality of calibration points included in the second target calibration object are determined based on the set landmark reference points and parameters.
  • the parameters characterize the relative positional relationships and relative postures among the components included in the second target calibration object.
  • each group of stereoscopic calibration objects may include multiple stereoscopic calibration objects.
  • Each stereoscopic calibration object includes at least one calibration point.
  • In FIG. 14, the three-dimensional calibration object shown in FIG. 8 is taken as the calibration object as an example. The physical distances between the calibration objects can be measured to obtain the position coordinates of each calibration point in the coordinate system of the sports field.
  • FIG. 14 shows an example of placement of stereoscopic calibration objects.
  • FIG. 14 shows the plane coordinate system of the sports field and the position coordinates of the calibration object.
  • the data processing server identifies the position coordinates of the multiple calibration points included in the second target calibration object in the first image, denoted by P_i.
  • the position coordinates of the multiple calibration points on the sports field are denoted by P_iw.
  • the distances between the multiple calibration points and the ground are the same, and the ground is flat. Therefore, the height of each calibration point from the ground is c.
  • the second homography matrix is denoted by Hw .
  • P_iw = H_w * P_i.
  • As shown in FIG. 15, the two points at the lower corners of the pattern on each calibration object are detected, and the position coordinates of each point in the image are obtained. The second homography matrix H_w is then calculated according to the feature points P_i and the known calibration points P_iw in the coordinate system of the sports field.
  • (a) in FIG. 15 represents the image captured by the third camera, and the black dots represent the detected feature points P_i.
  • (b) in Fig. 15 shows the calibration points P_iw corresponding to P_i in the coordinate system of the sports field. It should be understood that the relative physical distances between the calibration objects and the sizes of the calibration objects remain consistent in (b) of FIG. 15.
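The relationship P_iw = H_w * P_i is evaluated in homogeneous coordinates with a perspective division. The sketch below illustrates this mapping; the matrix H_w and the detected pixel coordinates are invented values for illustration, not a real calibration result.

```python
# Hypothetical second homography matrix H_w (values invented): it maps pixel
# coordinates in the third camera's image to metres in the coordinate system
# of the sports field, i.e. P_iw = H_w * P_i in homogeneous coordinates.
H_w = [[0.01, 0.0, -5.0],
       [0.0, -0.01, 12.0],
       [0.0,  0.0,   1.0]]

def pixel_to_field(H, pt):
    # Homogeneous multiplication followed by perspective division.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Detected feature points P_i of the calibration objects (pixels, invented).
detected = [(620.0, 340.0), (980.0, 350.0)]
field_coords = [pixel_to_field(H_w, p) for p in detected]
```

In a real calibration, H_w itself would be estimated from the detected P_i and the measured P_iw of at least four calibration points, in the same way the first homography matrix is estimated from matched point pairs.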
  • For a camera whose second homography matrix is not calibrated, the second homography matrix can be determined by using the first homography matrices between adjacent cameras and the second homography matrix of a calibrated camera.
  • the camera whose second homography matrix is calibrated in the above manner may be called a reference camera.
  • a specific landmark point can be the intersection point of the finish line and the track line, or a specific landmark point can also be the intersection point of the starting line and the track line.
  • camera 1 is the reference camera.
  • the second homography matrix of camera 1 is H 1w .
  • Camera 1 is adjacent to camera 2, and the second homography matrix of camera 2 may be cascaded and determined according to the second homography matrix of camera 1 and the first homography matrix between camera 1 and camera 2.
  • H 2,1 represents the first homography of the camera coordinate system of camera 2 mapped to the camera coordinate system of camera 1.
  • that is, the second homography matrix of camera 2 satisfies H 2w = H 1w * H 2,1.
  • the second homography matrix of camera 3 may be determined according to the second homography matrix of camera 1 , the first homography matrix between camera 1 and camera 2 , and the first homography matrix between camera 2 and camera 3 .
  • H 2,1 represents the first homography between camera 2 and camera 1
  • H 3,2 represents the first homography between camera 3 and camera 2.
  • H i,i-1 represents the first homography matrix mapping the camera coordinate system of camera i to the camera coordinate system of camera i-1, and H iw represents the second homography matrix of camera i. It should be understood that the first homography matrix mapping the camera coordinate system of camera i to that of camera i-1 and the first homography matrix mapping the camera coordinate system of camera i-1 to that of camera i are inverses of each other.
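The cascading above can be sketched as a 3x3 matrix product: H_2w = H_1w * H_2,1, so a pixel in camera 2 maps to the field either directly through H_2w or via camera 1. The sketch below verifies that both routes agree; all matrix values are invented for illustration.

```python
def matmul3(A, B):
    # Product of two 3x3 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(H, pt):
    # Homogeneous multiplication followed by perspective division.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Invented matrices: H_1w (reference camera 1 to field, pixels -> metres)
# and H_21 (camera-2 pixels -> camera-1 pixels).
H_1w = [[0.02, 0.0, -8.0], [0.0, -0.02, 20.0], [0.0, 0.0, 1.0]]
H_21 = [[1.0, 0.0, 150.0], [0.0, 1.0, -4.0], [0.0, 0.0, 1.0]]

# Cascade: the second homography matrix of camera 2.
H_2w = matmul3(H_1w, H_21)

p2 = (400.0, 300.0)                          # a pixel in camera 2's image
direct = apply_h(H_2w, p2)                   # camera 2 -> field directly
via_cam1 = apply_h(H_1w, apply_h(H_21, p2))  # camera 2 -> camera 1 -> field
```

Longer chains work the same way, e.g. H_3w = H_1w * H_2,1 * H_3,2; each additional factor accumulates calibration error, which is why cascading from the nearest reference camera is preferred.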
  • the second homography matrix may be determined in a nearby cascade manner.
  • for example, if both camera 19 and camera 3 are reference cameras, then for the cameras between camera 3 and camera 19, the second homography matrix can be determined by cascading from camera 19 or by cascading from camera 3.
  • the second homography matrix may be determined in cascade from the camera 19 or the camera 3 according to the distance.
  • the cascade error can also be determined according to the cascade relationship and the errors from calibrating the first homography matrices of adjacent cameras, and then the camera with the smaller cascade error can be selected for cascading to determine the second homography matrix.
  • the error when the first homography matrix of camera 19 and camera 20 is calibrated is A
  • the error when the first homography matrix of camera 20 and camera 1 is calibrated is B
  • the error when the first homography matrix of camera 1 and camera 2 is calibrated The error is C
  • the error when the first homography matrix of camera 2 and camera 3 is calibrated is D.
  • for example, the second homography matrix of camera 1 can be determined in cascade based on the first homography matrix of camera 1 and camera 2, the first homography matrix of camera 2 and camera 3, and the second homography matrix of camera 3.
  • After the data processing server determines the first homography matrix and the second homography matrix for each camera, it saves the first homography matrix of every two adjacent cameras and the second homography matrix corresponding to each camera for use in subsequent acquisition of motion information.
  • FIG. 16 it is a schematic flowchart of a method for acquiring motion information provided by the embodiment of the present application.
  • the method can be executed by the data processing server, or realized by a processor or processor system in the data processing server.
  • According to the calibrated first homography matrix corresponding to two adjacent cameras, tracking processing is performed on the target athlete included in the single-frame images captured by the two adjacent cameras at the same time, so as to obtain the target athlete's motion information.
  • the two adjacent cameras are any two of the plurality of cameras that are adjacent in the set motion direction of the sports field. The first homography matrix is used to represent the mapping relationship of the position coordinates of the same target object in the single-frame images collected by the two adjacent cameras at the same time; it is calibrated based on multiple frames of images synchronously captured by the two adjacent cameras at different times, where the positions of the first target calibration object in the sports field differ between images taken at different times. The first target calibration object includes a plurality of calibration points.
  • the data processing server may send a synchronous shooting signal to each camera, so that each camera starts shooting synchronously to obtain a synchronous video stream.
  • Each camera sends the captured video stream to the data processing server.
  • the data processing server obtains the synchronous frames of multiple cameras for processing, such as using vision algorithms to perform human body detection and target athlete tracking processing in the moving areas of each image, such as ID tracking.
  • low-speed moving objects may also be detected, so as to exclude them from further processing.
  • it may be determined whether the object is a low-speed moving object according to whether the position of the object changes in consecutive video frames of the same camera and the distance of the change.
  • Low-speed moving objects include stationary objects.
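The low-speed check described above can be sketched as follows: if an object's position changes little across consecutive video frames of the same camera, it is treated as low-speed (including stationary) and excluded. The displacement threshold and all coordinate values below are invented for illustration.

```python
import math

def is_low_speed(track, max_disp_px=5.0):
    # track: centroid positions (pixels) of one object in consecutive frames
    # of the same camera. If the displacement over the window stays below the
    # threshold, treat the object as low-speed and exclude it from tracking.
    x0, y0 = track[0]
    xn, yn = track[-1]
    return math.hypot(xn - x0, yn - y0) <= max_disp_px

# Invented tracks: a skater moving across the frame vs. a near-static bystander.
skater = [(100.0, 50.0), (140.0, 52.0), (180.0, 54.0)]
bystander = [(400.0, 300.0), (401.0, 300.5), (400.5, 300.2)]
```

A real implementation would also account for the frame rate and apparent pixel scale, since the same physical speed yields different pixel displacements at different distances from the camera.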
  • before performing synchronized-frame processing, interference areas can be removed by means of a frame mask.
  • for example, a mask can be used for each camera to divide the image into a region of interest (ROI) and a non-interest (non-ROI) region.
  • the black area represents the non-ROI area of the non-track area.
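A minimal sketch of the masking step, assuming the mask is a per-pixel 0/1 grid (1 = ROI / track area, 0 = non-ROI): detections whose centroid falls in the non-ROI area are discarded before further processing. The tiny mask and the detection coordinates are invented stand-ins for full-resolution data.

```python
# A per-camera mask marks each pixel as ROI (1) or non-ROI (0).
# A tiny 4x6 mask stands in for a full-resolution one (values invented).
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]

def in_roi(mask, pt):
    # pt: (x, y) pixel coordinate of a detection's centroid.
    x, y = pt
    return 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x] == 1

# Invented detections; those outside the ROI are removed.
detections = [(2, 1), (5, 0), (3, 2)]
kept = [p for p in detections if in_roi(mask, p)]
```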
  • the data processing server performs human body detection on the video streams of the first camera and the second camera respectively.
  • human body detection may include human body contour detection, and may also include human body bone detection, such as hip joints.
  • human body detection is performed to determine the position of the human body detection frame and extract image features.
  • the target athlete in the multiple frames of images can be determined according to the extracted image features, such as the human body outline and human body posture, and the target athlete can also be marked with an ID.
  • the embodiment of the present application does not specifically limit the specific detection manners of the human body outline, the human body posture, and the like.
  • for example, consider the athlete tracking results in two frames of images captured by a camera at time t and time t+1.
  • the target athlete can be detected in both the image collected by the first camera at a certain moment and the image collected by the second camera at this moment.
  • In order to ensure the consistency of athlete ID tracking during the athlete's whole movement, ID tracking can be performed based on the matching relationship between the athlete detected in the image collected by the first camera at a certain moment and the athlete detected in the image collected by the second camera at that moment; this can also be understood as ID inheritance.
  • view 1 represents an image collected by the first camera at time t
  • view 2 represents an image collected by the second camera at time t.
  • the data processing server may perform human body detection on View 1 to obtain the human body frame and the centroid of the human body.
  • the target athlete first passes through the field of view of the first camera in the direction of movement of the sports field and then enters the field of view of the second camera. After the ID of the target athlete is determined according to the image collected by the first camera, for example, as shown in (b) in FIG. 19 , the ID of the target athlete in view 1 is 1, and the centroid node is A1.
  • The data processing server detects from view 2 that the body frame of the target athlete is BB2. According to the first homography matrix H12, which maps the camera coordinate system of the first camera to that of the second camera, the centroid node of the athlete detected in view 1 is mapped to view 2 at position coordinate A1'. If A1' is located inside BB2, it can be determined that the athlete detected in view 2 is the same athlete as the one detected in view 1, and the ID of the athlete detected in view 2 is assigned the value 1. Conversely, if A1' is located outside BB2, the athlete detected in view 2 is not the same athlete as the one detected in view 1; that is, the athlete in body frame BB2 is another athlete.
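The ID-inheritance check can be sketched as follows: map the centroid A1 from view 1 into view 2 through H12, then test whether the mapped point A1' lies inside the body frame BB2. The homography H12, the centroid, and the body frame below are invented values for illustration.

```python
def apply_h(H, pt):
    # Homogeneous multiplication followed by perspective division.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def inside(bbox, pt):
    # bbox: (x_min, y_min, x_max, y_max) in view-2 pixel coordinates.
    x, y = pt
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

# Invented H12 (camera-1 pixels -> camera-2 pixels; a pure shift for the sketch).
H12 = [[1.0, 0.0, -600.0], [0.0, 1.0, 10.0], [0.0, 0.0, 1.0]]

centroid_A1 = (900.0, 420.0)        # centroid of athlete ID 1 in view 1
BB2 = (250.0, 380.0, 380.0, 520.0)  # body frame detected in view 2

A1_prime = apply_h(H12, centroid_A1)             # A1 mapped into view 2
view2_id = 1 if inside(BB2, A1_prime) else None  # inherit ID 1 on a hit
```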
  • the second homography matrix determined for each camera and the position coordinates of the target athlete in the images captured by each camera can be used to determine the position coordinates of the target athlete in the coordinate system corresponding to the sports field, and connecting the position coordinates of the target athlete captured by the multiple cameras forms the movement track of the target athlete.
  • The center of mass of the target athlete can be selected as the athlete's position coordinate; the center of mass in the images collected by each camera is mapped to position coordinates in the coordinate system corresponding to the sports field to obtain the movement trajectory of the target athlete, as shown in FIG. 20.
  • the number of steps of the athlete can be further calculated through the solution provided in the embodiment of the present application.
  • the number of steps is calculated based on the movement and angle changes of the joint points of the athlete's legs.
  • steps are counted based on changes in the distance between two knee joints in the captured images.
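One simple way to realize the knee-distance step counting described above is to count local maxima of the inter-knee distance series: each time the legs pass maximum separation, one step is counted. This is a hedged sketch of the idea, not the patented algorithm; the `min_peak` threshold and the synthetic distance series are made up for illustration.

```python
def count_steps(knee_distances, min_peak=0.0):
    """Count steps as local maxima of the inter-knee distance series
    (distance in pixels between the two detected knee joints per frame)."""
    steps = 0
    for i in range(1, len(knee_distances) - 1):
        d_prev, d, d_next = knee_distances[i - 1], knee_distances[i], knee_distances[i + 1]
        if d > d_prev and d >= d_next and d > min_peak:
            steps += 1
    return steps

# Synthetic knee-distance sequence with three separation peaks.
series = [10, 30, 55, 30, 12, 35, 60, 33, 14, 32, 58, 30, 11]
print(count_steps(series))  # 3
```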
  • The speed of the athlete can be further calculated through the solution provided by the embodiments of the present application. For example, after the center of mass of the target athlete in the images captured by each camera is mapped to position coordinates in the coordinate system corresponding to the sports field, the athlete's movement distance can be calculated from those position coordinates and the movement duration from the acquisition times of the images, so that the athlete's movement speed can be calculated.
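The speed computation described above (path length over field-coordinate positions divided by the elapsed acquisition time) can be sketched as follows; the `(t_seconds, x_m, y_m)` sample format is an assumption for illustration, not the patent's data layout.

```python
import math

def movement_speed(track):
    """Average speed from (t_seconds, x_m, y_m) samples in the sports-field
    coordinate system: total path length divided by elapsed time."""
    dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    duration = track[-1][0] - track[0][0]
    return dist / duration

track = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0), (2.0, 6.0, 8.0)]
print(movement_speed(track))  # 5.0
```

Each 3-4-5 segment contributes 5 m per second, so the average speed over the two seconds is 5.0 m/s.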
  • The ID tracking results can also be used to generate a motion video of the target athlete.
  • After the data processing server calculates the athlete's motion information or generates a motion video, the result can be sent to the user's terminal device for display.
  • the data processing server can also analyze the motion data, generate an analysis result, and send the analysis result to the terminal device for display.
  • the display form of the analysis result is not specifically limited in this embodiment of the present application.
  • In order to implement the above functions, the data processing server includes hardware structures and/or software modules corresponding to each function.
  • The present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the modules and method steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application scenario and the design constraints of the technical solution.
  • FIG. 21 is a schematic structural diagram of an apparatus for acquiring motion information provided by an embodiment of the present application.
  • the device can be applied to a data processing server.
  • the device includes an acquisition unit 2101 and a processing unit 2102 .
  • The acquiring unit 2101 is configured to acquire single-frame images collected at the same moment by multiple acquisition devices deployed in a set space that includes the sports field.
  • The processing unit 2102 is configured to perform tracking processing, according to the calibrated first homography matrix corresponding to two adjacent acquisition devices, on the target athlete included in the single-frame images collected by the two adjacent acquisition devices at the same moment, to obtain the motion information of the target athlete. The two adjacent acquisition devices are any two of the multiple acquisition devices that are adjacent in the set movement direction of the sports field.
  • The first homography matrix is used to represent the mapping relationship between the position coordinates of the same target object in the single-frame images collected by the two adjacent acquisition devices at the same moment. The first homography matrix is obtained by calibration based on multiple frames of images captured synchronously at different moments by the two adjacent acquisition devices, the positions of the first target calibration object in the sports field differing among the images captured at different moments; the first target calibration object includes multiple calibration points.
  • In different images, the multiple calibration points included in the first target calibration object are at the same distance from the plane in which the ground of the sports field lies.
  • The acquiring unit 2101 is further configured to acquire multiple frames of images collected at different moments by the first acquisition device and the second acquisition device during the movement of the first target calibration object in a set area of the sports field. The first acquisition device and the second acquisition device are any two acquisition devices adjacent in the set movement direction of the sports field, and the set area includes at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices.
  • The processing unit 2102 is further configured to acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
  • The processing unit 2102 is further configured to determine third position coordinates of the target athlete in the coordinate system corresponding to the sports field according to the second homography matrix corresponding to a third acquisition device and the position coordinates of the target athlete in the single-frame image captured by the third acquisition device.
  • The second homography matrix corresponding to the third acquisition device is used to describe the mapping relationship between the position coordinates of a target object in an image captured by the third acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field; the third acquisition device is any one of the multiple acquisition devices.
  • The third acquisition device is a reference acquisition device, and the second homography matrix of the reference acquisition device is determined according to the position coordinates of the multiple calibration points included in a second target calibration object in an image captured by the reference acquisition device, and the position coordinates of those calibration points in the coordinate system corresponding to the sports field; the second target calibration object is located within the viewing range of the reference acquisition device in the sports field; or,
  • The third acquisition device is not a reference acquisition device and is adjacent to a fourth acquisition device in the set movement direction of the sports field; the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix between the fourth acquisition device and the third acquisition device, the fourth acquisition device being a reference acquisition device; or,
  • The third acquisition device is not a reference acquisition device and is separated from the fourth acquisition device by at least one acquisition device in the set movement direction of the sports field; the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fourth acquisition device.
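The cascading described above can be sketched as matrix composition: a non-reference camera's image-to-field homography is the reference camera's second homography matrix composed with the chain of first homography matrices of every adjacent camera pair between the two cameras. The toy matrices below are illustrative assumptions, not calibrated values.

```python
import numpy as np

def cascade_second_homography(H_ref_w, chain):
    """Compose the reference camera's image-to-field matrix H_ref_w with the
    first homography of each adjacent camera pair along the chain back to
    the target camera, yielding the target camera's second homography."""
    H = H_ref_w
    for H_adj in chain:
        H = H @ H_adj
    return H

# Toy values: the reference camera maps image to field by a 2x scale,
# and the adjacent-pair homography is a 10-unit horizontal translation.
H1w = np.diag([2.0, 2.0, 1.0])
H21 = np.array([[1.0, 0.0, 10.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
H2w = cascade_second_homography(H1w, [H21])  # H2w = H1w @ H21
p = H2w @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])  # homography-mapped field position of image point (1, 1)
```

With one intermediate pair the result reduces to H2w = H1w * H2,1; longer chains simply multiply in more adjacent-pair matrices.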
  • The viewing range of the reference acquisition device includes a set landmark reference point located on the ground of the sports field.
  • The position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters characterize the relative positional relationships and relative postures between the components included in the second target calibration object.
  • FIG. 22 is a schematic structural diagram of a calibration device provided by an embodiment of the present application.
  • the device can be applied to a data processing server.
  • the device includes an acquisition unit 2201 and a processing unit 2202 .
  • The acquisition unit 2201 is configured to acquire multiple frames of images collected at different moments by a first acquisition device and a second acquisition device, among multiple acquisition devices deployed in a set space that includes the sports field, during the movement of the first target calibration object in a set area of the sports field. The first acquisition device and the second acquisition device are any two acquisition devices adjacent in the set movement direction of the sports field, and the set area includes at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices.
  • The processing unit 2202 is configured to acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
  • The acquisition unit 2201 is further configured to acquire a first image, captured by a third acquisition device, of a second target calibration object included in its viewing range; the second target calibration object includes multiple calibration points, and the third acquisition device is any one of the multiple acquisition devices.
  • The processing unit 2202 is further configured to identify the position coordinates of the multiple calibration points included in the second target calibration object in the first image, and to determine the second homography matrix corresponding to the third acquisition device according to the position coordinates of those calibration points in the coordinate system corresponding to the sports field and their position coordinates in the first image.
  • The third acquisition device is adjacent to a fourth acquisition device in the set movement direction of the sports field, and the processing unit 2202 is further configured to determine the second homography matrix corresponding to the fourth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix between the third acquisition device and the fourth acquisition device.
  • The third acquisition device is separated from a fifth acquisition device by at least one acquisition device in the set movement direction of the sports field, and the processing unit 2202 is further configured to determine the second homography matrix of the fifth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fifth acquisition device.
  • the third collection device is a reference collection device, and the viewing range of the reference collection device includes set landmark reference points located on the ground of the sports field;
  • The position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object; the topological parameters characterize the relative positional relationships and relative postures between the components included in the second target calibration object.
  • Each functional unit in the embodiments of the present application may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit.
  • The above integrated units can be implemented in the form of hardware or in the form of software functional units. One or more of the units in FIG. 21 and FIG. 22 can be realized by software, hardware, firmware, or a combination thereof.
  • the software or firmware includes but is not limited to computer program instructions or codes, and can be executed by a hardware processor.
  • the hardware includes but not limited to various integrated circuits, such as central processing unit (CPU), digital signal processor (DSP), field programmable gate array (FPGA) or application specific integrated circuit (ASIC).
  • the embodiment of the present application further provides a device for implementing the method for acquiring motion information or the calibration method provided in the embodiment of the present application.
  • the apparatus may include: one or more processors 2301, a memory 2302, and one or more computer programs (not shown in the figure).
  • the above components may be coupled through one or more communication lines 2303 .
  • One or more computer programs are stored in the memory 2302, and the one or more computer programs include instructions; the processor 2301 invokes the instructions stored in the memory 2302 so that the apparatus executes the motion information acquisition method or the calibration method provided in the embodiments of the present application.
  • The processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the memory may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • The non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory.
  • By way of example and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the device may also include a communication interface 2304 for communicating with other devices through a transmission medium, for example, communicating with a collection device through the communication interface 2304, so as to receive images collected by the collection device frame.
  • the communication interface 2304 may be a transceiver, a circuit, a bus, a module, or other types of communication interfaces.
  • When the communication interface 2304 is a transceiver, the transceiver may include an independent receiver and an independent transmitter; it may also be a transceiver with integrated transmitting and receiving functions, or an interface circuit.
  • The processor 2301, the memory 2302, and the communication interface 2304 may be connected to each other through a communication line 2303; the communication line 2303 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the communication line 2303 can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is used in FIG. 23 , but it does not mean that there is only one bus or one type of bus.
  • An embodiment of the present application provides a computer-readable medium for storing a computer program, where the computer program includes instructions for executing the method steps in the method embodiment corresponding to FIG. 4 .
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) having computer-usable program code embodied therein.


Abstract

A motion information acquisition method, a calibration method, and an apparatus, which do not require athletes to wear wearable sensors, improving the convenience of acquiring motion information. Multiple acquisition devices are deployed around a sports field, and athletes are tracked based on pre-calibrated homography matrices between the cameras, so athletes need not wear any device, which provides convenience. The present application uses a moving calibration object for calibration, which removes the dependence on the field's inherent visual features/calibration points, applies to a wider range of scenarios, and is equally applicable to large fields. The present application can solve the large-field calibration problem without using wide-field-of-view cameras, which helps improve the accuracy of target detection and tracking in large scenes.

Description

A motion information acquisition method, calibration method, and apparatus
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 202210209801.3, entitled "A motion information acquisition method, calibration method, and apparatus", filed with the Chinese Patent Office on March 4, 2022, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of image processing, and in particular to a motion information acquisition method, a calibration method, and an apparatus.
Background
Auxiliary sports training is a technique that generates sports statistics for sports scenes through synchronized shooting, storage, raw data processing, and motion data analysis by a multi-camera video acquisition system. In current solutions, athletes need to wear wearable sensors that collect athlete information in order to obtain their motion information. Obtaining motion information through wearable sensors is inconvenient, and in scenarios such as competitions the athletes' motion information cannot be collected at all.
Summary of the invention
Embodiments of the present application provide a motion information acquisition method, a calibration method, and an apparatus, which do not require athletes to wear wearable sensors and improve the convenience of acquiring motion information.
In a first aspect, an embodiment of the present application provides a motion information acquisition method, including: acquiring single-frame images collected at the same moment by multiple acquisition devices deployed in a set space that includes a sports field; and performing tracking processing, according to a calibrated first homography matrix corresponding to two adjacent acquisition devices, on a target athlete included in the single-frame images collected by the two adjacent acquisition devices at the same moment, to obtain motion information of the target athlete. The two adjacent acquisition devices are any two of the multiple acquisition devices that are adjacent in a set movement direction of the sports field. The first homography matrix is used to represent the mapping relationship between the position coordinates of the same target object in the single-frame images collected by the two adjacent acquisition devices at the same moment, and is obtained by calibration based on multiple frames of images captured synchronously by the two adjacent acquisition devices at different moments, the positions of a first target calibration object in the sports field being different in the images captured at different moments; the first target calibration object includes multiple calibration points.
Currently, obtaining athletes' motion information by having them wear wearable sensors is inconvenient. With the solution provided by the embodiments of the present application, multiple acquisition devices are deployed around the sports field, and athletes are tracked based on pre-calibrated homography matrices between the cameras; athletes need not wear any device, which provides convenience. Existing methods that calibrate cameras using the field's inherent visual features/calibration points (hereinafter, the prior art) depend on the number and distribution of qualified calibration points across the entire field; few or unevenly distributed feature points degrade calibration accuracy. By contrast, the present application calibrates with a moving calibration object, removing the dependence on the field's inherent visual features/calibration points, so it applies to a wider range of scenarios, including large fields; and because the visual features/calibration points are numerous and evenly distributed, calibration accuracy is higher. In addition, to exploit the field's inherent visual features/calibration points, the prior art often has to enlarge the field of view to cover multiple calibration points, which makes the tracked target smaller in the image and lowers detection accuracy. The embodiments of the present application can solve the large-field calibration problem without using wide-field-of-view cameras, which helps improve target detection and tracking accuracy in large scenes.
In a possible design, in different images the multiple calibration points included in the first target calibration object are at the same distance from the plane in which the ground of the sports field lies.
In a possible design, the method further includes: acquiring multiple frames of images collected at different moments by a first acquisition device and a second acquisition device during the movement of the first target calibration object in a set area of the sports field, the first acquisition device and the second acquisition device being any two acquisition devices adjacent in the set movement direction of the sports field, and the set area including at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices; acquiring first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, and acquiring second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device; and determining the first homography matrix according to the first position coordinate information and the second position coordinate information.
In a possible design, the method further includes: determining third position coordinates of the target athlete in the coordinate system corresponding to the sports field according to a second homography matrix corresponding to a third acquisition device and the position coordinates of the target athlete in a single-frame image captured by the third acquisition device. The second homography matrix corresponding to the third acquisition device is used to describe the mapping relationship between the position coordinates of a target object in an image captured by the third acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field; the third acquisition device is any one of the multiple acquisition devices. This solution provides an effective and simple way to determine an athlete's position.
In a possible design, the third acquisition device is a reference acquisition device, and the second homography matrix of the reference acquisition device is determined according to the position coordinates of multiple calibration points included in a second target calibration object in an image captured by the reference acquisition device and the position coordinates of those calibration points in the coordinate system corresponding to the sports field, the second target calibration object being located within the viewing range of the reference acquisition device in the sports field; or,
the third acquisition device is not a reference acquisition device and is adjacent to a fourth acquisition device in the set movement direction of the sports field, and the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix between the fourth acquisition device and the third acquisition device, the fourth acquisition device being a reference acquisition device; or,
the third acquisition device is not a reference acquisition device and is separated from the fourth acquisition device by at least one acquisition device in the set movement direction of the sports field, and the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fourth acquisition device.
In the above solution, the second homography matrix of each camera is determined by cascading the first homography matrices of adjacent cameras, so there is no need to determine the second homography matrix by calibration for every camera, which reduces both the calibration complexity and the calibration time.
In a possible design, the viewing range of the reference acquisition device includes a set landmark reference point located on the ground of the sports field; the position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object, the topological parameters characterizing the relative positional relationships and relative postures between the components included in the second target calibration object.
In a second aspect, an embodiment of the present application provides a calibration method, including: acquiring multiple frames of images collected at different moments by a first acquisition device and a second acquisition device, among multiple acquisition devices deployed in a set space that includes a sports field, during the movement of a first target calibration object in a set area of the sports field, the first acquisition device and the second acquisition device being any two acquisition devices adjacent in a set movement direction of the sports field, and the set area including at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices; acquiring first position coordinate information of multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, and acquiring second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device; and determining the first homography matrix according to the first position coordinate information and the second position coordinate information.
Methods that calibrate cameras using the field's inherent visual features/calibration points (hereinafter, the prior art) depend on the number and distribution of qualified calibration points across the entire field; few or unevenly distributed feature points degrade calibration accuracy. By contrast, the present application calibrates with a moving calibration object, removing the dependence on the field's inherent visual features/calibration points, so it applies to a wider range of scenarios, including large fields; and because the visual features/calibration points are numerous and evenly distributed, calibration accuracy is higher. In addition, to exploit the field's inherent visual features/calibration points, the prior art often has to enlarge the field of view to cover multiple calibration points, which makes the tracked target smaller in the image and lowers detection accuracy. The embodiments of the present application can solve the large-field calibration problem without using wide-field-of-view cameras, which helps improve target detection and tracking accuracy in large scenes.
In a possible design, the method further includes: acquiring a first image, captured by a third acquisition device, of a second target calibration object included in its viewing range, the second target calibration object including multiple calibration points, and the third acquisition device being any one of the multiple acquisition devices; identifying the position coordinates of the multiple calibration points included in the second target calibration object in the first image; and determining the second homography matrix corresponding to the third acquisition device according to the position coordinates of those calibration points in the coordinate system corresponding to the sports field and their position coordinates in the first image.
In a possible design, the third acquisition device is adjacent to a fourth acquisition device in the set movement direction of the sports field, and the method further includes: determining the second homography matrix corresponding to the fourth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix between the third acquisition device and the fourth acquisition device.
In a possible design, the third acquisition device is separated from a fifth acquisition device by at least one acquisition device in the set movement direction of the sports field, and the method further includes: determining the second homography matrix of the fifth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fifth acquisition device.
In a possible design, the third acquisition device is a reference acquisition device whose viewing range includes a set landmark reference point located on the ground of the sports field; the position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object, the topological parameters characterizing the relative positional relationships and relative postures between the components included in the second target calibration object.
In a third aspect, an embodiment of the present application provides a motion information acquisition apparatus, including: an acquiring unit, configured to acquire single-frame images collected at the same moment by multiple acquisition devices deployed in a set space that includes a sports field; and a processing unit, configured to perform tracking processing, according to a calibrated first homography matrix corresponding to two adjacent acquisition devices, on a target athlete included in the single-frame images collected by the two adjacent acquisition devices at the same moment, to obtain motion information of the target athlete. The two adjacent acquisition devices are any two of the multiple acquisition devices that are adjacent in a set movement direction of the sports field. The first homography matrix is used to represent the mapping relationship between the position coordinates of the same target object in the single-frame images collected by the two adjacent acquisition devices at the same moment, and is obtained by calibration based on multiple frames of images captured synchronously by the two adjacent acquisition devices at different moments, the positions of a first target calibration object in the sports field being different in the images captured at different moments; the first target calibration object includes multiple calibration points.
In a possible design, in different images the multiple calibration points included in the first target calibration object are at the same distance from the plane in which the ground of the sports field lies.
In a possible design, the acquiring unit is further configured to acquire multiple frames of images collected at different moments by a first acquisition device and a second acquisition device during the movement of the first target calibration object in a set area of the sports field, the first acquisition device and the second acquisition device being any two acquisition devices adjacent in the set movement direction of the sports field, and the set area including at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices; the processing unit is further configured to acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
In a possible design, the processing unit is further configured to:
determine third position coordinates of the target athlete in the coordinate system corresponding to the sports field according to a second homography matrix corresponding to a third acquisition device and the position coordinates of the target athlete in a single-frame image captured by the third acquisition device;
where the second homography matrix corresponding to the third acquisition device is used to describe the mapping relationship between the position coordinates of a target object in an image captured by the third acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field, and the third acquisition device is any one of the multiple acquisition devices.
In a possible design, the third acquisition device is a reference acquisition device, and the second homography matrix of the reference acquisition device is determined according to the position coordinates of multiple calibration points included in a second target calibration object in an image captured by the reference acquisition device and the position coordinates of those calibration points in the coordinate system corresponding to the sports field, the second target calibration object being located within the viewing range of the reference acquisition device in the sports field; or,
the third acquisition device is not a reference acquisition device and is adjacent to a fourth acquisition device in the set movement direction of the sports field, and the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix between the fourth acquisition device and the third acquisition device, the fourth acquisition device being a reference acquisition device; or,
the third acquisition device is not a reference acquisition device and is separated from the fourth acquisition device by at least one acquisition device in the set movement direction of the sports field, and the second homography matrix corresponding to the third acquisition device is determined according to the second homography matrix corresponding to the fourth acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fourth acquisition device.
In a possible design, the viewing range of the reference acquisition device includes a set landmark reference point located on the ground of the sports field; the position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object, the topological parameters characterizing the relative positional relationships and relative postures between the components included in the second target calibration object.
In a fourth aspect, an embodiment of the present application provides a calibration apparatus, including: an acquisition unit, configured to acquire multiple frames of images collected at different moments by a first acquisition device and a second acquisition device, among multiple acquisition devices deployed in a set space that includes a sports field, during the movement of a first target calibration object in a set area of the sports field, the first acquisition device and the second acquisition device being any two acquisition devices adjacent in a set movement direction of the sports field, and the set area including at least the same region of the sports field that falls within the viewing ranges of both adjacent acquisition devices;
and a processing unit, configured to acquire first position coordinate information of multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first acquisition device, acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second acquisition device, and determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
In a possible design, the acquisition unit is further configured to acquire a first image, captured by a third acquisition device, of a second target calibration object included in its viewing range, the second target calibration object including multiple calibration points, and the third acquisition device being any one of the multiple acquisition devices;
the processing unit is further configured to identify the position coordinates of the multiple calibration points included in the second target calibration object in the first image, and to determine the second homography matrix corresponding to the third acquisition device according to the position coordinates of those calibration points in the coordinate system corresponding to the sports field and their position coordinates in the first image.
In a possible design, the third acquisition device is adjacent to a fourth acquisition device in the set movement direction of the sports field, and the processing unit is further configured to determine the second homography matrix corresponding to the fourth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix between the third acquisition device and the fourth acquisition device.
In a possible design, the third acquisition device is separated from a fifth acquisition device by at least one acquisition device in the set movement direction of the sports field, and the processing unit is further configured to:
determine the second homography matrix of the fifth acquisition device according to the second homography matrix corresponding to the third acquisition device and the first homography matrix corresponding to every two adjacent acquisition devices between the third acquisition device and the fifth acquisition device.
In a possible design, the third acquisition device is a reference acquisition device whose viewing range includes a set landmark reference point located on the ground of the sports field;
the position coordinates of the multiple calibration points included in the second target calibration object in the coordinate system corresponding to the sports field are determined according to the position coordinates of the set landmark reference point in that coordinate system, the relative positional relationship between the second target calibration object and the set landmark reference point, and the topological parameters of the second target calibration object, the topological parameters characterizing the relative positional relationships and relative postures between the components included in the second target calibration object.
In a fifth aspect, an embodiment of the present application provides a motion information acquisition apparatus, including a memory and a processor. The memory is configured to store programs or instructions; the processor is configured to invoke the programs or instructions to execute the method of the first aspect or any design of the first aspect.
In a sixth aspect, an embodiment of the present application provides a calibration apparatus, including a memory and a processor. The memory is configured to store programs or instructions; the processor is configured to invoke the programs or instructions to execute the method of the second aspect or any design of the second aspect.
In a seventh aspect, the present application provides a computer-readable storage medium storing a computer program or instructions which, when executed by a terminal device, cause the processor to execute the method of the first aspect or any possible design of the first aspect, or the method of the second aspect or any possible design of the second aspect.
In an eighth aspect, the present application provides a computer program product including a computer program or instructions which, when executed by a processor, implement the method of the first aspect or any possible implementation of the first aspect, or the method of the second aspect or any possible implementation of the second aspect.
For the technical effects achievable by any of the third to eighth aspects, refer to the description of the beneficial effects of the first or second aspect, which is not repeated here.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below.
FIG. 1 is a schematic diagram of an information system architecture provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of another information system architecture provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a camera deployment for a circular speed-skating track provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of another camera deployment for a circular speed-skating track provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of yet another camera deployment for a circular speed-skating track provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a camera deployment for an athletics track provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a possible first target calibration object provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of a calibration method for the first homography matrix provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a moving calibration object in the common viewing area of adjacent cameras provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of feature-point detection on a moving calibration object provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of matched points of adjacent cameras provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of determining a homography matrix from matched points of adjacent cameras provided by an embodiment of the present application;
FIG. 13 is a schematic flowchart of a calibration method for the second homography matrix provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of the relationship between the field coordinate system and calibration objects provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of calibrating the mapping between the field coordinate system and the image coordinate system provided by an embodiment of the present application;
FIG. 16 is a schematic flowchart of motion information acquisition provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an ROI and a mask provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of single-camera human tracking at two moments provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of ID relay tracking between adjacent cameras provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of athlete human-body detection and trajectory projection onto the field provided by an embodiment of the present application;
FIG. 21 is a schematic structural diagram of a motion information acquisition apparatus provided by an embodiment of the present application;
FIG. 22 is a schematic structural diagram of a calibration apparatus provided by an embodiment of the present application;
FIG. 23 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
Detailed description
The present application provides a motion information acquisition method, a calibration method, and an apparatus for calibrating the mapping relationships between the coordinate systems of acquisition devices, and between acquisition devices and the sports field, within a set space containing the sports field, and for obtaining, based on the calibration results, motion information of athletes during training or competition on the sports field, such as the movement trajectory, movement speed, number of steps, or movement distance. The motion information can be used to assist athletes' training. The sports field may be a circular field, such as a circular running track or a circular speed-skating rink; it may also be a straight field, or another form of field such as a football pitch, which is not specifically limited in the embodiments of the present application.
Refer to FIG. 1, a schematic diagram of an information system architecture provided by an embodiment of the present application. The information system includes multiple acquisition devices and a data processing server; FIG. 1 takes N acquisition devices as an example, N being a positive integer. The number of cameras included in the information system can be configured according to the size of the sports field. An acquisition device may be a camera, a still camera, a video camera, and so on. The multiple acquisition devices can be deployed in the set space in which the sports field is located. For example, if the sports field is a circular speed-skating rink located in a speed-skating hall, multiple acquisition devices are deployed inside the hall. The viewing range of each of the multiple acquisition devices includes part of the sports field. Different acquisition devices have different viewing ranges, and in the movement direction the viewing ranges of two spatially adjacent acquisition devices share a common viewing area, that is, the area captured jointly by the spatially adjacent acquisition devices.
The data processing server may include one or more servers; if it includes multiple servers, it can be understood as a server cluster composed of those servers. The data processing server can extract synchronized frames from the video streams collected by the multiple acquisition devices and then process the synchronized frames frame by frame to obtain motion information. In some scenarios, the data processing server can also perform calibration processing, which may include calibrating the first homography matrix and/or the second homography matrix. The first homography matrix is used to represent the mapping relationship between the position coordinates of the same target object in single-frame images collected by two adjacent acquisition devices at the same moment. The second homography matrix is used to describe the mapping relationship between the position coordinates of a target object in an image captured by one acquisition device and the position coordinates of that target object in the coordinate system corresponding to the sports field. In some embodiments, the coordinate system corresponding to the sports field may be a spatial coordinate system created based on the sports field, with the origin at some position point in the field; in other embodiments, the origin may be established elsewhere, for example at the center of a nation's capital, which is not specifically limited in the present application.
In some possible scenarios, the information system may further include one or more routing devices used to transmit the images collected by the acquisition devices to the data processing server. A routing device may be a router, a switch, and so on. Taking switches as an example, referring to FIG. 2, the information system may deploy multiple layers of switches. With two layers as an example, the first-layer switches each connect one or more acquisition devices, and the second-layer switch serves as the main switch, with one end connected to the first-layer switches and the other end connected to the data processing server, for example as shown in FIG. 2.
In other possible scenarios, the information system also supports obtaining motion analysis data through a mobile device. Exemplarily, the information system further includes a mobile front end, which for example includes a web page server. Referring to FIG. 2, the web page server is connected to the data processing server. The mobile front end may further include a wireless router (or a wired router) and one or more terminal devices. A terminal device may be a desktop computer, a portable computer, a mobile phone, or another electronic device that supports accessing web pages. One or more terminal devices can operate the data server by accessing the web page server, for example sending a synchronous acquisition signal or a stop-recording signal to the multiple acquisition devices. The synchronous acquisition signal instructs the acquisition devices to start video recording synchronously; the stop-recording signal instructs the acquisition devices to stop video recording. Other examples include playback of recorded videos, or the display of motion information, and so on.
The calibration method provided by the embodiments of the present application is described in detail below with reference to the embodiments. Acquisition devices are deployed in the set space to which the sports field belongs; in the following description, a camera is taken as an example of an acquisition device. When deploying cameras in the set space to which the sports field belongs, the deployment can be determined according to the mounting points allowed in that space, for example whether there are columns, trusses, or ceilings. Each camera can cover part of the whole track, for example a section of a certain length, and spatially adjacent cameras have a common viewing area, for example 1/2 or 1/3 of the image. A truss is a planar or spatial structure composed of straight bars, generally with triangular units, used here as a custom part for camera mounts.
As an example, consider deploying cameras for the circular speed-skating track of a speed-skating hall. One lap of the track is 400 meters long, and the athletes move counterclockwise on the track. FIGS. 3-5 show three possible camera deployments. Referring to (a) of FIG. 3, take 20 camera positions deployed along the track as an example; a camera position refers to a camera placed at a particular location. Each camera position is above the outside of the track, shooting the track obliquely downward from a height. In (a) of FIG. 3, the cameras are deployed on columns; (b) of FIG. 3 is a top view of the deployment, and (c) of FIG. 3 is a side view of cameras deployed on columns. Straight-track cameras are deployed on the extension line of the straights and at the sides of the curves, shooting the athletes from the front. Each camera captures a 40-meter section, two spatially adjacent cameras share a 20-meter common viewing range, and a total of 20 cameras cover the 400-meter track (5 x 2 on the straights, 5 x 2 on the curves). In some scenarios, after a camera is installed at the set position, its focus, orientation, or field of view can be adjusted so that each camera focuses on part of the track and adjacent camera positions have a common viewing area. The cameras are connected to two switches in groups: cameras 1-10 connect to one switch, cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
Referring to (a) of FIG. 4, again take 20 camera positions deployed along the track as an example, with the cameras deployed on ceiling trusses. Each camera position is above the track, shooting downward from a height; the lens axis forms an acute angle with the ground rather than being perpendicular to it, so as to cover a larger shooting range. (b) of FIG. 4 is a top view of the deployment, and (c) of FIG. 4 is a side view of cameras deployed on ceiling trusses. In some scenarios, after a camera is installed at the set position, its focus, orientation, or field of view can be adjusted so that each camera focuses on part of the track and adjacent camera positions have a common viewing area. The cameras are connected to two switches in groups: cameras 1-10 connect to one switch, cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
Referring to (a) of FIG. 5, take 20 camera positions deployed along the track on columns as an example. The 20 camera positions are deployed along the track: cameras 1-5 shoot the straight outside the lane-change zone, cameras 6-10 and 16-20 shoot the two curves, and cameras 11-15 shoot the other straight outside the lane-change zone. (b) of FIG. 5 is a side view of cameras 1-5 deployed on columns. The cameras are connected to two switches in groups: cameras 1-10 connect to one switch, cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches.
As another example, consider deploying cameras for an athletics track. Cameras can be deployed on columns, trusses, ceilings, or at set positions in the stands. For example, referring to FIG. 6, take 20 camera positions deployed along the track at set positions in the stands. (a) of FIG. 6 is a schematic diagram of the track and the camera deployment: each camera position is above the stands outside the track, shooting the track obliquely downward from a height. (b) of FIG. 6 is a top view of one camera's deployment, and (c) of FIG. 6 is a side view of cameras deployed in the stands. Straight-track cameras are deployed on the extension line of the straights and at the sides of the curves, shooting the athletes from the front. In some scenarios, after a camera is installed at the set position, its focus, orientation, or field of view can be adjusted so that each camera focuses on part of the track and adjacent camera positions have a common viewing area. The cameras are connected to two switches in groups: cameras 1-10 connect to one switch, cameras 11-20 to the other, and the video frames collected by cameras 1-20 are sent to the data processing server through the two switches. By deploying cameras on an athletics track, athletes participating in events such as sprints, middle- and long-distance running, and hurdles can be analyzed to obtain their motion information, such as movement trajectory, posture, or speed.
It should be noted that the above camera deployments are only examples; the specific deployment can be determined according to the actual scene, and the embodiments of the present application do not specifically limit the number of deployed cameras, the grouping of the cameras, or the number of deployed switches.
After the cameras are deployed, the transformation relationship between the image coordinate systems of two adjacent cameras needs to be calibrated, that is, the first homography matrix. The transformation relationship between a camera's image coordinate system and the sports-field coordinate system can further be calibrated, that is, the second homography matrix.
The calibration scheme for the first homography matrix provided by the embodiments of the present application is described as follows. When calibrating the first homography matrix, the embodiments of the present application capture multiple frames of images while a calibration object is being moved. For ease of description, the calibration object used to calibrate the first homography matrix is called the first target calibration object. The first target calibration object includes multiple calibration points and may comprise one calibration object or a group of calibration objects, each of which includes at least one calibration point. A calibration point has stable visual features that do not change over time. In some possible examples, the first target calibration object bears a specific pattern, and the intersections of lines in the pattern can serve as calibration points. In other possible examples, the first target calibration object may carry a light-emitting screen, with the displayed light points serving as calibration points. Of course, the calibration points on the calibration object can also be set in other ways, which is not specifically limited in the embodiments of the present application.
As an example, take a first target calibration object bearing a specific pattern. FIG. 7 is a schematic structural diagram of a possible first target calibration object, shown as a group of calibration objects. Each calibration object in the group can be a box-shaped frame, one face of which bears a specific pattern; different calibration objects bear different patterns, with a two-dimensional (QR) code taken as the example in FIG. 7. The corner points of the QR code can be chosen as calibration points, or the two lower corner points of the box, or the two lower corner points of the rectangle containing the QR code. In the embodiments of the present application, the two lower corner points of the rectangle containing the QR code are taken as the calibration points.
Refer to FIG. 8, a schematic flowchart of the calibration method for the first homography matrix, taking the calibration of the first homography matrix between a first camera and a second camera as an example. The first camera and the second camera are any two spatially adjacent cameras in the movement direction of the sports field, for example camera 1 and camera 2, or camera 2 and camera 3, in FIG. 3. The method provided in FIG. 8 can be executed by the data processing server, or by a processor or processor system in the data processing server.
801: Acquire multiple frames of images collected by the first camera and the second camera at different moments during the movement of the first target calibration object in a set area of the sports field. The set area includes at least the common viewing area, that is, the same region of the sports field within the viewing ranges of both adjacent cameras.
In some embodiments, the data processing server can send a synchronous shooting signal to the multiple cameras in the information system, so that the multiple cameras shoot synchronously during the movement of the first target calibration object, each obtaining a video stream, and send the streams to the data processing server. In this embodiment, the first target calibration object moves from a start position to an end position on the sports field.
For example, on a circular field, the first target calibration object can be moved around the loop once, so that each of the multiple cameras can capture it during some period of its movement. The synchronous shooting signal is the signal used to trigger multi-camera shooting: in wired synchronous triggering it is generally a periodic pulse signal (related to the shooting frame rate), while a wireless system generally defines a communication synchronization protocol and transmits specific instructions to trigger periodic shooting.
As another example, when there are many cameras or the sports field is long, the field can be divided into segments along the movement direction. For example, a 400-meter circular speed-skating rink can be divided into two 200-meter segments, 0-200 meters and 200-400 meters, and two first target calibration objects can be moved over the two segments respectively, each needing to move only 200 meters. In this way, the handling time of the calibration objects and the calibration time can both be reduced.
In other embodiments, the data processing server can send the synchronous shooting signal to the two cameras currently being calibrated, so that the two cameras shoot synchronously during the movement of the first target calibration object, each obtaining a video stream, and send the streams to the data processing server. In this embodiment, during the synchronized shooting period of the two cameras, the first target calibration object moves within the set area containing the common viewing area of the two cameras.
For example, the first camera and the second camera synchronously collect M frames of images containing the first target calibration object; it can be understood that the first camera collects one image at each of M moments, obtaining M images. It should be understood that the first camera and the second camera synchronously collecting one frame means that they collect single-frame images at the same moment. Note that "the same moment" allows a certain error, for example at the millisecond level, within the error range permitted by calibration.
The positions of the first target calibration object in the sports field differ among the M images collected by the first camera: each of the M images includes the first target calibration object, and its position differs between images collected at different moments. Likewise, each of the M images collected by the second camera includes the first target calibration object, and its position differs between images collected at different moments. In a possible implementation, in the different images collected by the first camera or the second camera, the multiple calibration points included in the first target calibration object are at the same distance from the plane in which the ground of the sports field lies. This can be understood as choosing multiple calibration points parallel to the ground; in some embodiments, the first target calibration object has a patterned calibration face that can be placed parallel to the ground. For example, suppose the first calibration object takes 10 seconds to move through the common viewing area of the first and second cameras; if recording uses 25 fps, the two cameras collect 250 frames in the common viewing area. The first camera and the second camera send the collected images to the data processing server. For example, FIG. 9 is a schematic diagram of the set patterns of the first target calibration object in images captured by the first and second cameras at three different moments; in FIG. 9, the rectangular area of view 1 represents the image captured by the first camera, the rectangular area of view 2 represents the image captured by the second camera, and the lines represent track dividing lines.
802: Acquire first position coordinate information of the multiple calibration points included in the first target calibration object in the multiple frames of images collected by the first camera, and acquire second position coordinate information of those calibration points in the multiple frames of images collected by the second camera.
After receiving the video streams sent by the first camera and the second camera, the data processing server detects the feature points of the first target calibration object (that is, the positions of the calibration points) in every frame of the streams, for example detecting the lower corner points of the QR code. (a) of FIG. 10 shows detection of the feature points of the moving first target calibration object. Taking the case where the distance between the calibration points and the ground of the sports field remains constant during the movement, the feature points accumulated over consecutive frames lie in a plane parallel to the ground of the sports field, as shown in (b) of FIG. 10. "Remains constant" can be understood as follows: if small variations in the distance between the calibration points and the ground during movement hardly affect the calibration result, they can be ignored, and in that case the distance can still be regarded as constant. The data processing server determines the position coordinates of the feature points corresponding to the same calibration point in the images collected by adjacent camera positions at the same moment, forming a pair of matched points Pi and Pi', where Pi is a feature point in the image collected by the first camera and Pi' is the corresponding feature point in the image collected by the second camera, as shown in FIG. 11. It can be understood that the position coordinates, in the first camera's image coordinate system, of all feature points in the images collected by the first camera constitute the first position coordinate information, and the position coordinates, in the second camera's image coordinate system, of all feature points in the images collected by the second camera constitute the second position coordinate information.
803: Determine the first homography matrix according to the first position coordinate information and the second position coordinate information.
The data processing server can calculate the first homography matrix H1 between the two adjacent camera positions from the position coordinates of the matched feature points in the first position coordinate information and the second position coordinate information. As shown in FIG. 12, the computed first homography matrix H1 minimizes the average projection error Error_proj of projecting all feature points Pi' in the images collected by the second camera onto the corresponding calibration-point positions Pi in the images collected by the first camera; the determined first homography matrix H1 is then saved as the calibration result for the first and second cameras. In FIG. 12, Pi'' denotes the 2D point obtained by projecting Pi' from the second camera into the first camera's image coordinate system through H1: Error_proj = Pi - Pi'' = Pi - H1 * Pi'.
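The fitting step described above, finding the H1 that best maps the matched points Pi' onto Pi, is commonly solved with the direct linear transform (DLT) over four or more point pairs. The sketch below is a minimal, unnormalized DLT with a reprojection-error check; it illustrates the principle and is not the patent's implementation (production code would typically use a normalized DLT or a robust estimator such as RANSAC).

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: least-squares 3x3 H with dst ~ H @ src
    for four or more matched 2D point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Mean Euclidean distance between H-projected src points and dst."""
    err = 0.0
    for (x, y), (u, v) in zip(src, dst):
        w = H @ np.array([x, y, 1.0])
        err += float(np.hypot(w[0] / w[2] - u, w[1] / w[2] - v))
    return err / len(src)

# Synthetic matched pairs generated from a known homography.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (2.0, 3.0)]
H_true = np.array([[1.0, 0.2, 5.0], [0.1, 1.0, -2.0], [0.0, 0.0, 1.0]])
dst = [tuple((H_true @ np.array([x, y, 1.0]))[:2]) for x, y in src]
H = estimate_homography(src, dst)
print(reprojection_error(H, src, dst) < 1e-6)  # True
```

With noise-free pairs the recovered H matches H_true up to scale and the reprojection error is numerically zero; with real detections, Error_proj stays non-zero and is compared against a tolerable deviation, as described below.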
It should be noted that, ideally, the reprojections (Pi'' and Pi) of the same feature point observed from the two camera views should coincide exactly. However, because of imaging quality, pixel-level deviations between the reprojections of the same feature point from different camera views are unavoidable, that is, Error_proj is not 0; for example, noise, illumination intensity, and lens distortion affect imaging quality. During calibration, the calibration result can be compared with a tolerable deviation value to determine whether it meets requirements; the tolerable deviation value can be determined from user experience, through experiments, and so on. In some embodiments, the accuracy and success rate of calibration can be improved by increasing the number of calibration points or improving their distribution.
Next, the calibration scheme for the second homography matrix provided by the embodiments of this application is described. When calibrating the second homography matrix, a calibration object is placed in the set region, and a camera's second homography matrix is calibrated from images, shot by that camera, that contain the object. For ease of description, the calibration object used for calibrating the second homography matrix is called the second target calibration object. The second target calibration object includes multiple calibration points and may consist of one calibration object or a group of calibration objects, each object in a group containing at least one calibration point. A calibration point has a stable visual feature that does not change over time. In some possible examples, the second target calibration object bears a specific pattern whose line intersections can serve as calibration points; in other possible examples it may carry a light-emitting screen whose displayed light points serve as calibration points. Other ways of providing the calibration points are of course possible, and the embodiments of this application do not specifically limit this.
In some embodiments, the second homography matrix may be calibrated for all of the cameras.
In other embodiments, the second homography matrix may be calibrated for only some cameras; for the cameras not calibrated, it can be determined from the first homography matrices between adjacent cameras and the second homography matrices of the calibrated cameras.
The following description takes calibrating the third camera's second homography matrix as an example. If the second homography matrix is calibrated for all cameras, the third camera may be any one of them; if not all cameras are calibrated, the third camera is any camera for which the second homography matrix is actually calibrated.
Figure 13 is a schematic flowchart of the method for calibrating the second homography matrix provided by an embodiment of this application. The method may be executed by the data-processing server, or by a processor or processor system within the data-processing server.
1301: obtain a first image, captured by the third camera, that contains the second target calibration object within the camera's viewing range, the second target calibration object including multiple calibration points. The third camera is any one of the multiple cameras.
1302: identify the position coordinates of the multiple calibration points of the second target calibration object in the first image.
1303: determine the second homography matrix corresponding to the third camera from the position coordinates of those calibration points in the coordinate system of the sports field and their position coordinates in the first image.
Note that the first target calibration object and the second target calibration object used in the embodiments of this application may be the same or different; the embodiments do not specifically limit this.
The viewing range of the third camera contains a set landmark reference point on the sports-field ground. A landmark reference point is a point on the field whose coordinates can be measured accurately and repeatably, for example the intersection of the finish line and a lane line, or the intersection of the start line and a lane line; a manually marked point may also serve as the landmark reference point. The position coordinates of the multiple calibration points of the second target calibration object in the sports-field coordinate system are determined from the coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and the topology parameters of the second target calibration object, where the topology parameters characterize the relative positions and relative attitudes of the components of the second target calibration object.
Illustratively, several groups of three-dimensional calibration objects are placed within the third camera's viewing range; each group may include multiple objects, and each object includes at least one calibration point. Referring to Figure 14, take the calibration object of Figure 8 as the three-dimensional calibration object. The physical distances between the objects can be measured to obtain the coordinates of each calibration point in the sports-field coordinate system. Figure 14(a) shows a placement example of the three-dimensional calibration objects; Figure 14(b) shows the planar coordinate system of the sports field and the position coordinates of the objects.
The position coordinates of the calibration points of the second target calibration object identified by the data-processing server in the first image are denoted Pi, and the position coordinates of those calibration points on the sports field are denoted Piw. The embodiments of this application take the case where the multiple calibration points are at the same distance from the ground and the ground is flat; each calibration point is therefore at height c above the ground.
Hence, from the detected feature points Pi and the known calibration points Piw, i = 1…n, the second homography matrix can be determined. Denoting it Hw, we have Piw = Hw·Pi. For example, referring to Figure 15, the two lower-corner points of the pattern on each calibration object are detected, and the coordinates of each point in the image are obtained; the second homography matrix Hw is then computed from the feature points Pi and the known calibration points Piw in the sports-field coordinate system. Figure 15(a) shows the image shot by the third camera, with black dots marking the detected feature points Pi; Figure 15(b) shows the calibration points Piw in the sports-field coordinate system corresponding to Pi. It should be understood that the relative physical distances between the calibration objects and the objects' sizes are kept consistent in Figure 15(b).
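A minimal sketch of applying a second homography matrix: once Hw is known, an image point Pi maps to field coordinates Piw = Hw·Pi after the homogeneous division a homography requires. The numeric Hw below is a hypothetical example (100 pixels per metre, field origin at pixel (320, 240)), not a calibrated value.

```python
import numpy as np

def image_to_field(Hw, pixel):
    # Piw = Hw * Pi with the homogeneous (perspective) division.
    x, y, w = Hw @ np.array([pixel[0], pixel[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical second homography matrix: 100 px per metre,
# field origin at pixel (320, 240). Illustrative only.
Hw_example = np.array([[0.01, 0.00, -3.20],
                       [0.00, 0.01, -2.40],
                       [0.00, 0.00, 1.00]])
```

With this example matrix, pixel (320, 240) maps to the field origin and a 100-pixel horizontal offset corresponds to one metre on the field plane.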
In some possible implementations, when only some cameras have their second homography matrix calibrated in the above way, the second homography matrices of the other cameras can be determined through the first homography matrices between adjacent cameras and the second homography matrices of the already-calibrated cameras. In that case, a camera calibrated in the above way may be called a reference camera. When choosing a reference camera, a camera that can see a particular landmark point may be selected — for example the intersection of the finish line and a lane line, or the intersection of the start line and a lane line.
For example, let camera 1 be a reference camera with second homography matrix H1w. If camera 2 is adjacent to camera 1, camera 2's second homography matrix can be determined by cascading camera 1's second homography matrix with the first homography matrix between cameras 1 and 2. Denoting camera 2's second homography matrix H2w, we have H2w = H1w·H2,1, where H2,1 is the first homography matrix mapping camera 2's camera coordinate system to camera 1's camera coordinate system.
As another example, if camera 3 is separated from camera 1 by camera 2, camera 3's second homography matrix can be determined from camera 1's second homography matrix together with the first homography matrix between cameras 1 and 2 and the first homography matrix between cameras 2 and 3. Denoting camera 3's second homography matrix H3w, we have H3w = H1w·H2,1·H3,2, where H2,1 is the first homography matrix between cameras 2 and 1, and H3,2 that between cameras 3 and 2.
More generally, for any camera i in the chain from camera 1, Hiw = H1w·H2,1·H3,2·…·Hi,i−1, where Hi,i−1 is the first homography matrix mapping camera i's camera coordinate system to camera i−1's, and Hiw is camera i's second homography matrix. It should be understood that the first homography matrix mapping camera i's coordinate system to camera i−1's and the one mapping camera i−1's coordinate system to camera i's are inverse matrices of each other.
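The cascade Hiw = H1w·H2,1·…·Hi,i−1 is a plain left-to-right chain of matrix products; a sketch follows, in which the matrices are illustrative values rather than calibrated ones.

```python
import numpy as np

def cascade_second_homography(H1w, link_homographies):
    # Hiw = H1w * H2,1 * H3,2 * ... * Hi,i-1 : chain of first homography
    # matrices from the reference camera out to camera i.
    Hiw = np.array(H1w, dtype=float)
    for H_link in link_homographies:
        Hiw = Hiw @ H_link
    return Hiw

# The link in the opposite direction is simply the matrix inverse:
# H(i -> i-1) and H(i-1 -> i) are inverses of each other.
```

Calling `cascade_second_homography(H1w, [H21, H32])` returns H1w·H2,1·H3,2, i.e. camera 3's second homography matrix in the example above.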
In some possible scenarios there may be multiple reference cameras, i.e. the second homography matrix is calibrated in the above way for each of them. Cameras whose second homography matrix is not calibrated directly can determine it by cascading from the nearest reference. In one example, referring to Figure 3, cameras 19 and 3 are both reference cameras; a camera between them can cascade its second homography matrix from either camera 19 or camera 3. In another example, the choice between cascading from camera 19 or camera 3 can be made by distance. In yet another example, the cascade error can be derived from the cascade relationship and the errors incurred when calibrating the first homography matrices of adjacent cameras, and the reference with the smaller cascade error is chosen. For example, let the calibration error of the first homography matrix be A between cameras 19 and 20, B between cameras 20 and 1, C between cameras 1 and 2, and D between cameras 2 and 3. For camera 1, if A·B is greater than C·D, camera 1's second homography matrix is determined by cascading the first homography matrix between cameras 1 and 2, the first homography matrix between cameras 2 and 3, and camera 3's second homography matrix.
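Choosing the reference with the smaller cascade error can be sketched as follows. Per the A·B versus C·D comparison above, the product of per-link calibration errors is taken as the accumulated error of a path; the dictionary keys and error values are illustrative assumptions.

```python
def accumulated_error(link_errors):
    # Product of the per-link calibration errors along a cascade path,
    # mirroring the A*B versus C*D comparison in the text.
    prod = 1.0
    for e in link_errors:
        prod *= e
    return prod

def pick_cascade_path(candidates):
    # candidates: {reference name: [per-link errors on the path to it]}.
    # Returns the reference whose path accumulates the smaller error.
    return min(candidates, key=lambda ref: accumulated_error(candidates[ref]))
```

For camera 1 in the example above, `pick_cascade_path({"camera 19": [A, B], "camera 3": [C, D]})` selects camera 3 whenever A·B exceeds C·D.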
After determining the first and second homography matrices for each camera, the data-processing server stores the first homography matrix of every pair of adjacent cameras and the second homography matrix corresponding to each camera, for subsequent acquisition of motion information.
The flow of the motion-information acquisition method is described in detail below with specific embodiments. Figure 16 is a schematic flowchart of the motion-information acquisition method provided by an embodiment of this application. The method may be executed by the data-processing server, or implemented by a processor or processor system within the data-processing server.
1601: obtain single-frame images captured at the same moment by each of multiple cameras deployed in a set space that includes the sports field.
1602: track, according to the calibrated first homography matrix corresponding to two adjacent cameras, the target athlete contained in the single-frame images captured at the same moment by the two adjacent cameras, to obtain the target athlete's motion information.
The two adjacent cameras are any two of the multiple cameras that are adjacent along the set movement direction of the sports field. The first homography matrix characterizes the mapping between the position coordinates of the same target in the single-frame images captured at the same moment by the two adjacent cameras; it is calibrated from multiple frames shot synchronously by the two adjacent cameras at different moments, the position of the first target calibration object in the sports field differing between the images shot at different moments; the first target calibration object includes multiple calibration points.
Illustratively, the data-processing server may send a synchronized-shooting signal to each camera, so that all cameras start shooting synchronously and obtain synchronized video streams, which each camera sends to the data-processing server. The server obtains and processes the synchronized frames of the multiple cameras, for example using vision algorithms to perform human detection in the moving regions of each image and to track the target athlete, such as ID tracking.
In some embodiments, slow-moving objects may also be detected so that they are excluded from subsequent processing. Whether an object is slow-moving can be decided from whether, and by how much, its position changes across consecutive video frames of the same camera; slow-moving objects include stationary ones. Excluding slow-moving objects avoids the algorithm misdetecting them as athletes, and prevents interference from athletes in the warm-up lane, coaches and the like.
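A possible sketch of this low-speed filter: an object is flagged as slow (including stationary) when the mean displacement of its centroid across consecutive frames of one camera falls below a speed threshold. The frame rate and the pixel threshold below are illustrative assumptions.

```python
import math

def is_low_speed(track, fps, min_speed_px):
    # track: centroid (x, y) of one object in consecutive frames of the
    # same camera. Flags slow-moving objects, stationary ones included,
    # so referees, coaches and warm-up-lane skaters can be excluded.
    if len(track) < 2:
        return True
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(track, track[1:]))
    duration = (len(track) - 1) / fps        # seconds covered by the track
    return dist / duration < min_speed_px    # mean speed in pixels/second
```

In a deployment, the threshold would be tuned so that athletes skating at race pace are never flagged while near-stationary people are.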
In some embodiments, before synchronized-frame processing, interfering regions can be removed with a camera-image mask. To keep people outside the track — such as referees, coaches and warm-up athletes — from interfering with detection, a mask can be applied per camera, marking region-of-interest (ROI) and non-ROI areas in the camera image. For example, in Figure 17 the black areas represent the non-ROI regions outside the track.
Take ID tracking with the first camera and the second camera, adjacent along the set direction of the sports field, as an example. After receiving the data-processing server's synchronized-shooting signal, each camera sends its synchronously captured video stream to the server, which performs human detection on both streams. In some embodiments, human detection may include body-contour detection and body-skeleton detection, for example of the hip joint; the hip joint may be used to represent the body's centroid. As shown in Figure 17, human detection determines the position of the human bounding box and extracts image features. For multiple frames shot consecutively by the same camera, the target athlete can be identified across frames from the extracted features, such as body contour and body pose, and the target athlete can be given an ID label; the specific detection methods for contour, pose and so on are not specifically limited by the embodiments of this application. For example, Figure 18 shows the athlete-tracking result in two frames shot by one camera at moments t and t+1. When the target athlete is in the common view region of the two cameras, they can be detected both in the image captured by the first camera at a given moment and in the image captured by the second camera at that moment. To keep athlete IDs consistent throughout the athlete's whole run, ID tracking — which may be understood as ID inheritance — can be performed from the match between the athlete detected in the first camera's image at a given moment and the athlete detected in the second camera's image at that moment.
As an example, in Figure 19(a), view 1 represents the image captured by the first camera at moment t and view 2 the image captured by the second camera at moment t. For ID tracking between adjacent cameras, the data-processing server can perform human detection on view 1 to obtain the bounding box and the body's centroid point. Suppose the target athlete, moving along the field's movement direction, passes through the first camera's field of view before entering the second camera's. After the athlete's ID is determined from the first camera's image — for example, in Figure 19(b), the target athlete in view 1 has ID = 1 and centroid node A1 — the server detects the athlete's bounding box BB2 in view 2 and, using the first homography matrix H12 mapping the first camera's coordinate system to the second camera's, projects the centroid node detected in view 1 to position coordinate A1′ in view 2. If A1′ lies inside BB2, the athlete detected in view 2 and the athlete detected in view 1 can be determined to be the same athlete, and the view-2 athlete's ID is assigned the value 1. If A1′ lies outside BB2, the two detections are not the same athlete, i.e. the athlete in bounding box BB2 is another athlete.
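The ID-inheritance test above reduces to projecting the camera-1 centroid through H12 and testing membership in the camera-2 bounding box. A sketch with hypothetical function and parameter names:

```python
import numpy as np

def project_point(H, pt):
    # Map a point through a homography, with the homogeneous division.
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

def inherit_id(centroid_cam1, athlete_id, bbox_cam2, H12):
    # Project centroid A1 from camera 1 into camera 2 through H12; if
    # the projected point A1' lies inside box BB2 (x0, y0, x1, y1), the
    # camera-2 detection inherits the camera-1 ID; otherwise it is a
    # different athlete and None is returned.
    x, y = project_point(H12, centroid_cam1)
    x0, y0, x1, y1 = bbox_cam2
    return athlete_id if (x0 <= x <= x1 and y0 <= y <= y1) else None
```

A real system would run this for every detection pair in the common view region and resolve conflicts (e.g. two centroids landing in one box) before committing the inherited IDs.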
In a possible implementation, after ID tracking across the images captured by the multiple cameras, the target athlete's position coordinates in the sports-field coordinate system can be determined from each camera's already-determined second homography matrix and the athlete's position coordinates in that camera's images; connecting the athlete's positions obtained from the multiple cameras forms the athlete's trajectory. For example, the athlete's centroid can be chosen as the position: mapping the centroid in each camera's images to the field coordinate system yields the athlete's trajectory, as shown in Figure 20.
In a possible implementation, the scheme provided by the embodiments of this application can further count the athlete's steps, for example from the motion of the leg joints and the change of the angle between them, or from the change of the distance between the two knee joints in the captured images. The scheme can also compute the athlete's speed: after mapping the athlete's centroid in each camera's images to position coordinates in the field coordinate system, the distance travelled can be computed from the coordinates and the duration determined from the capture moments of the images, giving the athlete's speed. The embodiments can further generate a video of the target athlete from the ID-tracking result. After the data-processing server computes the athlete's motion information or generates the motion video, it can send them to the user's terminal device for display. In some embodiments, the server can also analyse the motion data, generate an analysis result, and send the result to the terminal device for display; the display form of the analysis result is not specifically limited by the embodiments of this application.
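The speed computation just described can be sketched as the distance between successive mapped positions divided by the difference of their capture moments. The positions are assumed already mapped to the field coordinate system (metres) via the second homography matrices:

```python
import math

def segment_speeds(positions, timestamps):
    # positions: athlete centroids in field coordinates (metres);
    # timestamps: capture moments of the corresponding frames (seconds).
    # Returns the speed (m/s) of each segment between consecutive frames.
    speeds = []
    for i in range(1, len(positions)):
        (x0, y0), (x1, y1) = positions[i - 1], positions[i]
        dt = timestamps[i] - timestamps[i - 1]
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds
```

Averaging these segment speeds, or dividing the total path length by the total duration, gives the athlete's mean speed over the run.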
It will be appreciated that, to realize the functions of the above method embodiments, the data-processing server includes hardware structures and/or software modules corresponding to each function. Those skilled in the art will readily realize that the modules and method steps of the examples described in connection with the embodiments disclosed in this application can be implemented in hardware, or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
As an example, Figure 21 is a schematic structural diagram of a motion-information acquisition apparatus provided by an embodiment of this application. The apparatus may be applied to a data-processing server and includes an acquisition unit 2101 and a processing unit 2102. The acquisition unit 2101 is configured to obtain single-frame images captured at the same moment by each of multiple collection devices deployed in a set space that includes the sports field. The processing unit 2102 is configured to track, according to the calibrated first homography matrix corresponding to two adjacent collection devices, the target athlete contained in the single-frame images the two adjacent devices capture at the same moment, to obtain the target athlete's motion information; the two adjacent collection devices are any two of the multiple collection devices that are adjacent along the set movement direction of the sports field. The first homography matrix characterizes the mapping between the position coordinates of the same target in the single-frame images captured at the same moment by the two adjacent collection devices; it is calibrated from multiple frames shot synchronously by the two adjacent devices at different moments, the position of the first target calibration object in the sports field differing between the images shot at different moments; the first target calibration object includes multiple calibration points.
In a possible implementation, in the different images, the multiple calibration points of the first target calibration object are at the same distance from the plane in which the sports-field ground lies.
In a possible implementation, the acquisition unit 2101 is further configured to obtain multiple frames captured by a first collection device and a second collection device at different moments while the first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices. The processing unit 2102 is further configured to obtain first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device; and to determine the first homography matrix from the first position coordinate information and the second position coordinate information.
In a possible implementation, the processing unit 2102 is further configured to:
determine, from the second homography matrix corresponding to a third collection device and the target athlete's position coordinates in the single-frame image shot by the third collection device, the target athlete's third position coordinates in the coordinate system of the sports field;
where the second homography matrix corresponding to the third collection device describes the mapping between the position coordinates of a target in an image shot by the third collection device and that target's position coordinates in the coordinate system of the sports field; the third collection device is any one of the multiple collection devices.
In a possible implementation, the third collection device is a reference collection device, and the reference collection device's second homography matrix is determined from the position coordinates of the multiple calibration points of a second target calibration object in the image shot by the reference collection device and from the position coordinates of those calibration points in the coordinate system of the sports field, the second target calibration object lying within the reference collection device's viewing range on the sports field; or,
the third collection device is not a reference collection device and is adjacent, along the set movement direction of the sports field, to a fourth collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrix between the fourth and third collection devices, the fourth collection device being a reference collection device; or,
the third collection device is not a reference collection device and is separated, along the set movement direction of the sports field, from the fourth collection device by at least one collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fourth collection devices.
In a possible implementation, the viewing range of the reference collection device contains a set landmark reference point on the ground of the sports field; the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and the topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
As another example, Figure 22 is a schematic structural diagram of a calibration apparatus provided by an embodiment of this application. The apparatus may be applied to a data-processing server and includes an acquisition unit 2201 and a processing unit 2202.
The acquisition unit 2201 is configured to obtain multiple frames captured, by a first collection device and a second collection device among the multiple collection devices deployed in a set space including the sports field, at different moments while a first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices;
the processing unit 2202 is configured to obtain first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device; and to determine the first homography matrix from the first position coordinate information and the second position coordinate information.
In a possible implementation, the acquisition unit 2201 is further configured to obtain a first image, captured by a third collection device, that contains a second target calibration object within the device's viewing range, the second target calibration object including multiple calibration points, the third collection device being any one of the multiple collection devices;
the processing unit 2202 is further configured to identify the position coordinates of the multiple calibration points of the second target calibration object in the first image, and to determine the second homography matrix corresponding to the third collection device from those calibration points' position coordinates in the coordinate system of the sports field and their position coordinates in the first image.
In a possible implementation, the third collection device is adjacent to a fourth collection device along the set movement direction of the sports field, and the processing unit 2202 is further configured to determine the second homography matrix corresponding to the fourth collection device from the second homography matrix corresponding to the third collection device and the first homography matrix between the third and fourth collection devices.
In a possible implementation, the third collection device is separated from a fifth collection device by at least one collection device along the set movement direction of the sports field, and the processing unit 2202 is further configured to:
determine the second homography matrix of the fifth collection device from the second homography matrix corresponding to the third collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fifth collection devices.
In a possible implementation, the third collection device is a reference collection device, whose viewing range contains a set landmark reference point on the ground of the sports field;
the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and the topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
The division into units in the embodiments of this application is schematic and is merely a division by logical function; other divisions are possible in actual implementation. In addition, the functional units in the embodiments of this application may be integrated in one processor, may exist physically separately, or two or more units may be integrated in one unit. An integrated unit may be implemented in the form of hardware or as a software functional unit. Any one or more of the units in Figures 21 and 22 may be implemented in software, hardware, firmware, or a combination thereof. The software or firmware includes, but is not limited to, computer program instructions or code, and can be executed by a hardware processor. The hardware includes, but is not limited to, various integrated circuits, such as a central processing unit (CPU), digital signal processor (DSP), field-programmable gate array (FPGA), or application-specific integrated circuit (ASIC).
Based on the above embodiments and the same concept, an embodiment of this application further provides an apparatus for implementing the motion-information acquisition method or the calibration method provided by the embodiments of this application. As shown in Figure 23, the apparatus may include one or more processors 2301, a memory 2302, and one or more computer programs (not shown). As one implementation, these components may be coupled through one or more communication lines 2303. The memory 2302 stores one or more computer programs comprising instructions; the processor 2301 invokes the instructions stored in the memory 2302, so that the apparatus executes the motion-information acquisition method or the calibration method provided by the embodiments of this application.
In the embodiments of this application, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments may be embodied directly as executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules within the processor.
In the embodiments of this application, the memory may be a volatile memory or a non-volatile memory, or include both. The non-volatile memory may be a read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). The memories of the systems and methods described herein are intended to include, without limitation, these and any other suitable types of memory. The memory in the embodiments of this application may also be a circuit or any other device capable of a storage function.
As one implementation, the apparatus may further include a communication interface 2304 for communicating with other apparatuses through a transmission medium; for example, the apparatus may communicate with the collection devices through the communication interface 2304 to receive the image frames they capture. In the embodiments of this application, the communication interface 2304 may be a transceiver, a circuit, a bus, a module, or another type of communication interface. When the communication interface 2304 is a transceiver, it may comprise an independent receiver and an independent transmitter, or a transceiver with integrated transmit/receive functions, or an interface circuit.
In some embodiments of this application, the processor 2301, the memory 2302 and the communication interface 2304 may be interconnected through the communication line 2303; the communication line 2303 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one thick line is used in Figure 23, which does not mean there is only one bus or one type of bus.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the communication system described above may refer to the corresponding process in the foregoing method embodiments, and is not repeated here.
An embodiment of this application provides a computer-readable medium for storing a computer program that includes instructions for executing the method steps of the method embodiment corresponding to Figure 4.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to this application without departing from its scope. Thus, if such modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is intended to encompass these changes and variations as well.

Claims (25)

  1. A motion-information acquisition method, characterized by comprising:
    obtaining single-frame images captured at the same moment by each of multiple collection devices deployed in a set space that includes a sports field;
    tracking, according to the calibrated first homography matrix corresponding to two adjacent collection devices, a target athlete contained in the single-frame images captured by the two adjacent collection devices at the same moment, to obtain the target athlete's motion information; the two adjacent collection devices being any two of the multiple collection devices that are adjacent along a set movement direction of the sports field;
    wherein the first homography matrix characterizes a mapping between position coordinates of the same target in the single-frame images captured at the same moment by the two adjacent collection devices, and is calibrated from multiple frames shot synchronously by the two adjacent collection devices at different moments, a first target calibration object occupying different positions on the sports field in the frames shot at different moments; the first target calibration object comprises multiple calibration points.
  2. The method of claim 1, characterized in that, in the different images, the multiple calibration points of the first target calibration object are at the same distance from the plane in which the ground of the sports field lies.
  3. The method of claim 1 or 2, characterized in that the method further comprises:
    obtaining multiple frames captured by a first collection device and a second collection device at different moments while the first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices;
    obtaining first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device;
    determining the first homography matrix from the first position coordinate information and the second position coordinate information.
  4. The method of any one of claims 1-3, characterized in that the method further comprises:
    determining, from a second homography matrix corresponding to a third collection device and position coordinates of the target athlete in a single-frame image shot by the third collection device, third position coordinates of the target athlete in the coordinate system of the sports field;
    wherein the second homography matrix corresponding to the third collection device describes a mapping between position coordinates of a target in an image shot by the third collection device and position coordinates of that target in the coordinate system of the sports field; the third collection device is any one of the multiple collection devices.
  5. The method of claim 4, characterized in that the third collection device is a reference collection device, and the second homography matrix of the reference collection device is determined from position coordinates of multiple calibration points of a second target calibration object in an image shot by the reference collection device and from position coordinates of those calibration points in the coordinate system of the sports field, the second target calibration object lying within the viewing range of the reference collection device on the sports field; or,
    the third collection device is not a reference collection device and is adjacent, along the set movement direction of the sports field, to a fourth collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrix between the fourth and third collection devices, the fourth collection device being a reference collection device; or,
    the third collection device is not a reference collection device and is separated, along the set movement direction of the sports field, from the fourth collection device by at least one collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fourth collection devices.
  6. The method of claim 5, characterized in that the viewing range of the reference collection device contains a set landmark reference point on the ground of the sports field;
    the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
  7. A calibration method, characterized by comprising:
    obtaining multiple frames captured, by a first collection device and a second collection device among multiple collection devices deployed in a set space including a sports field, at different moments while a first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices;
    obtaining first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device;
    determining the first homography matrix from the first position coordinate information and the second position coordinate information.
  8. The method of claim 7, characterized in that the method further comprises:
    obtaining a first image, captured by a third collection device, that contains a second target calibration object within the device's viewing range, the second target calibration object including multiple calibration points, the third collection device being any one of the multiple collection devices;
    identifying position coordinates of the multiple calibration points of the second target calibration object in the first image;
    determining the second homography matrix corresponding to the third collection device from the position coordinates of those calibration points in the coordinate system of the sports field and their position coordinates in the first image.
  9. The method of claim 8, characterized in that the third collection device is adjacent to a fourth collection device along the set movement direction of the sports field, and the method further comprises:
    determining the second homography matrix corresponding to the fourth collection device from the second homography matrix corresponding to the third collection device and the first homography matrix between the third and fourth collection devices.
  10. The method of claim 8, characterized in that the third collection device is separated from a fifth collection device by at least one collection device along the set movement direction of the sports field, and the method further comprises:
    determining the second homography matrix of the fifth collection device from the second homography matrix corresponding to the third collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fifth collection devices.
  11. The method of any one of claims 8-10, characterized in that the third collection device is a reference collection device, whose viewing range contains a set landmark reference point on the ground of the sports field;
    the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
  12. A motion-information acquisition apparatus, characterized by comprising:
    an acquisition unit, configured to obtain single-frame images captured at the same moment by each of multiple collection devices deployed in a set space that includes a sports field;
    a processing unit, configured to track, according to the calibrated first homography matrix corresponding to two adjacent collection devices, a target athlete contained in the single-frame images captured by the two adjacent collection devices at the same moment, to obtain the target athlete's motion information; the two adjacent collection devices being any two of the multiple collection devices that are adjacent along a set movement direction of the sports field;
    wherein the first homography matrix characterizes a mapping between position coordinates of the same target in the single-frame images captured at the same moment by the two adjacent collection devices, and is calibrated from multiple frames shot synchronously by the two adjacent collection devices at different moments, a first target calibration object occupying different positions on the sports field in the frames shot at different moments; the first target calibration object comprises multiple calibration points.
  13. The apparatus of claim 12, characterized in that, in the different images, the multiple calibration points of the first target calibration object are at the same distance from the plane in which the ground of the sports field lies.
  14. The apparatus of claim 12 or 13, characterized in that the acquisition unit is further configured to obtain multiple frames captured by a first collection device and a second collection device at different moments while the first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices;
    the processing unit is further configured to obtain first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device; and to determine the first homography matrix from the first position coordinate information and the second position coordinate information.
  15. The apparatus of any one of claims 12-14, characterized in that the processing unit is further configured to:
    determine, from a second homography matrix corresponding to a third collection device and position coordinates of the target athlete in a single-frame image shot by the third collection device, third position coordinates of the target athlete in the coordinate system of the sports field;
    wherein the second homography matrix corresponding to the third collection device describes a mapping between position coordinates of a target in an image shot by the third collection device and position coordinates of that target in the coordinate system of the sports field; the third collection device is any one of the multiple collection devices.
  16. The apparatus of claim 15, characterized in that the third collection device is a reference collection device, and the second homography matrix of the reference collection device is determined from position coordinates of multiple calibration points of a second target calibration object in an image shot by the reference collection device and from position coordinates of those calibration points in the coordinate system of the sports field, the second target calibration object lying within the viewing range of the reference collection device on the sports field; or,
    the third collection device is not a reference collection device and is adjacent, along the set movement direction of the sports field, to a fourth collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrix between the fourth and third collection devices, the fourth collection device being a reference collection device; or,
    the third collection device is not a reference collection device and is separated, along the set movement direction of the sports field, from the fourth collection device by at least one collection device, and the second homography matrix corresponding to the third collection device is determined from the second homography matrix corresponding to the fourth collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fourth collection devices.
  17. The apparatus of claim 16, characterized in that the viewing range of the reference collection device contains a set landmark reference point on the ground of the sports field;
    the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
  18. A calibration apparatus, characterized by comprising:
    an acquisition unit, configured to obtain multiple frames captured, by a first collection device and a second collection device among multiple collection devices deployed in a set space including a sports field, at different moments while a first target calibration object moves through a set region of the sports field, the first and second collection devices being any two collection devices adjacent along the set movement direction of the sports field, the set region including at least the same region of the sports field within the respective views of the two adjacent collection devices;
    a processing unit, configured to obtain first position coordinate information of the multiple calibration points of the first target calibration object in the frames captured by the first collection device, and second position coordinate information of those calibration points in the frames captured by the second collection device; and to determine the first homography matrix from the first position coordinate information and the second position coordinate information.
  19. The apparatus of claim 18, characterized in that the acquisition unit is further configured to obtain a first image, captured by a third collection device, that contains a second target calibration object within the device's viewing range, the second target calibration object including multiple calibration points, the third collection device being any one of the multiple collection devices;
    the processing unit is further configured to identify position coordinates of the multiple calibration points of the second target calibration object in the first image, and to determine the second homography matrix corresponding to the third collection device from those calibration points' position coordinates in the coordinate system of the sports field and their position coordinates in the first image.
  20. The apparatus of claim 19, characterized in that the third collection device is adjacent to a fourth collection device along the set movement direction of the sports field, and the processing unit is further configured to determine the second homography matrix corresponding to the fourth collection device from the second homography matrix corresponding to the third collection device and the first homography matrix between the third and fourth collection devices.
  21. The apparatus of claim 19, characterized in that the third collection device is separated from a fifth collection device by at least one collection device along the set movement direction of the sports field, and the processing unit is further configured to:
    determine the second homography matrix of the fifth collection device from the second homography matrix corresponding to the third collection device and the first homography matrices corresponding to every two adjacent collection devices between the third and fifth collection devices.
  22. The apparatus of any one of claims 19-21, characterized in that the third collection device is a reference collection device, whose viewing range contains a set landmark reference point on the ground of the sports field;
    the position coordinates of the multiple calibration points of the second target calibration object in the coordinate system of the sports field are determined from the position coordinates of the set landmark reference point in that coordinate system, the relative position of the second target calibration object to the set landmark reference point, and topology parameters of the second target calibration object, the topology parameters characterizing the relative positions and relative attitudes of the components of the second target calibration object.
  23. A motion-information acquisition apparatus, characterized by comprising a processor and a memory;
    the memory being configured to store a computer program;
    the processor being configured to execute the computer program stored in the memory, to implement the method of any one of claims 1-6.
  24. A calibration apparatus, characterized by comprising a processor and a memory;
    the memory being configured to store a computer program;
    the processor being configured to execute the computer program stored in the memory, to implement the method of any one of claims 7-11.
  25. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a processor, causes the processor to execute the method of any one of claims 1-11.
PCT/CN2023/078599 2022-03-04 2023-02-28 Motion information acquisition method, calibration method and apparatus WO2023165452A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210209801.3A CN116740130A (zh) 2022-03-04 2022-03-04 Motion information acquisition method, calibration method and apparatus
CN202210209801.3 2022-03-04

Publications (1)

Publication Number Publication Date
WO2023165452A1 true WO2023165452A1 (zh) 2023-09-07

Family

ID=87882988

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078599 WO2023165452A1 (zh) 2022-03-04 2023-02-28 一种运动信息的获取方法、标定方法及装置

Country Status (2)

Country Link
CN (1) CN116740130A (zh)
WO (1) WO2023165452A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146231A (zh) * 2007-07-03 2008-03-19 浙江大学 根据多视角视频流生成全景视频的方法
CN101853524A (zh) * 2010-05-13 2010-10-06 北京农业信息技术研究中心 使用图像序列生成玉米果穗全景图的方法
CN102164269A (zh) * 2011-01-21 2011-08-24 北京中星微电子有限公司 全景监控方法及装置
CN105869166A (zh) * 2016-03-29 2016-08-17 北方工业大学 一种基于双目视觉的人体动作识别方法及系统
CN106991690A (zh) * 2017-04-01 2017-07-28 电子科技大学 一种基于运动目标时序信息的视频序列同步方法
CN109240496A (zh) * 2018-08-24 2019-01-18 中国传媒大学 一种基于虚拟现实的声光交互系统
EP3493148A1 (en) * 2017-11-30 2019-06-05 Thomson Licensing View synthesis for unstabilized multi-view video
CN111091025A (zh) * 2018-10-23 2020-05-01 阿里巴巴集团控股有限公司 图像处理方法、装置和设备

Also Published As

Publication number Publication date
CN116740130A (zh) 2023-09-12

Similar Documents

Publication Publication Date Title
CN106251334B (zh) 一种摄像机参数调整方法、导播摄像机及系统
CN103905792B (zh) 一种基于ptz监控摄像机的3d定位方法及装置
CN109186584A (zh) 一种基于人脸识别的室内定位方法及定位系统
US20200258257A1 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
KR20150050172A (ko) 관심 객체 추적을 위한 다중 카메라 동적 선택 장치 및 방법
CN110969097A (zh) 监控目标联动跟踪控制方法、设备及存储装置
CN112311965A (zh) 虚拟拍摄方法、装置、系统及存储介质
CN107167077B (zh) 立体视觉测量系统和立体视觉测量方法
EP3901910A1 (en) Generation device, generation method and program for three-dimensional model
US10922871B2 (en) Casting a ray projection from a perspective view
US20220005276A1 (en) Generation device, generation method and storage medium for three-dimensional model
CN103500471A (zh) 实现高分辨率增强现实系统的方法
JP2003179800A (ja) 多視点画像生成装置、画像処理装置、および方法、並びにコンピュータ・プログラム
CN109448105A (zh) 基于多深度图像传感器的三维人体骨架生成方法及系统
WO2024012405A1 (zh) 一种标定方法及装置
WO2023165452A1 (zh) 一种运动信息的获取方法、标定方法及装置
CN114037923A (zh) 一种目标活动热点图绘制方法、系统、设备及存储介质
CN111279352B (zh) 通过投球练习的三维信息获取系统及摄像头参数算出方法
KR20190064540A (ko) 파노라마 영상 생성 장치 및 방법
KR102298047B1 (ko) 디지털 콘텐츠를 녹화하여 3d 영상을 생성하는 방법 및 장치
KR101845612B1 (ko) 투구 연습을 통한 3차원 정보 획득 시스템 및 카메라 파라미터 산출 방법
CN108307175A (zh) 基于柔性传感器的舞蹈动态影像捕捉和还原系统及控制方法
KR101456861B1 (ko) 다중 카메라 시스템에서의 오브젝트의 동적 정보를 이용한시공간 교정 추적 방법 및 그 장치
KR101375708B1 (ko) 복수 영상을 이용한 모션 캡처 시스템, 방법, 및 상기 방법을 실행시키기 위한 컴퓨터 판독 가능한 프로그램을 기록한 매체
WO2021056552A1 (zh) 视频的处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23762851

Country of ref document: EP

Kind code of ref document: A1