CN111784749A - Space positioning and motion analysis system based on binocular vision - Google Patents
- Publication number
- CN111784749A (application CN201911282790.6A)
- Authority
- CN
- China
- Prior art keywords
- binocular vision
- analysis system
- images
- motion analysis
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a space positioning and motion analysis system based on binocular vision. Two array camera systems replace binocular cameras and capture scene images from two specified directions, which significantly enlarges the field angle of the captured images. The system combines posture analysis with motion-trajectory analysis to record an athlete's movement, supporting frame-by-frame analysis of the athlete's actions and positions and observation of the athlete's motion state at different moments.
Description
Technical Field
The invention belongs to the fields of image processing and machine vision, and particularly relates to a space positioning and motion analysis system based on binocular vision.
Background
Common positioning methods include ultrasonic positioning, laser positioning, infrared positioning, and optical positioning, and are applied mainly in military, industrial-measurement, and construction fields. Current vision-measurement technology comprises three main approaches: binocular stereo vision, structured light, and geometric optics.
Binocular stereo vision positioning offers non-contact, automatic measurement that is harmless to the human eye. The most common configuration is the parallel-optical-axis model: two cameras are mounted horizontally one baseline apart, so that after distortion correction and epipolar rectification the same feature point exhibits only horizontal parallax between the two images. An image-registration method yields the disparity of corresponding points, and the disparity-depth relation then gives the depth of the object point in the scene. Many binocular stereo vision positioning methods have been proposed and applied across engineering fields. However, conventional binocular positioning often suffers from insufficient range: to extend the range while preserving accuracy, the baseline (the distance between the two cameras' optical centers) must be enlarged, which shrinks the binocular field angle and thus the positioning range, making it hard to cover large venues such as ice rinks. The small field angle likewise makes motion analysis of moving objects difficult in conventional binocular stereo vision. There is therefore an engineering need for a positioning system that satisfies both range and wide viewing angle.
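The disparity-depth relation mentioned above for a rectified parallel-optical-axis rig is Z = f·B/d. A minimal sketch, with focal length, baseline, and disparity values chosen purely for illustration (they are not from the patent):

```python
# Depth from horizontal disparity in a rectified parallel-axis stereo rig.
# All numeric values below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair (focal length in pixels,
    baseline in metres, disparity in pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Enlarging the baseline B improves depth resolution but shrinks the common
# field of view -- the trade-off the patent addresses with stitched array cameras.
z = depth_from_disparity(focal_px=1200.0, baseline_m=0.5, disparity_px=24.0)
print(z)  # 25.0 metres
```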
Disclosure of Invention
In view of the above, the invention uses image stitching to enlarge the field angle of the binocular camera, solving the problem of insufficient positioning range in binocular stereo vision systems at low cost, while combining posture and motion-trajectory analysis to assist athletic training.
The invention provides a binocular vision-based space positioning and motion analysis system. The technical scheme is as follows:
1) Two array camera systems replace binocular cameras and capture scene images from two designated orientations.
2) Features are extracted from the image acquired by each lens of the array cameras, and specified targets in the images are tracked and their postures recognized.
3) After correction, the images obtained by the two array camera systems are stitched into a large-field-of-view image.
4) The spatial position is computed from the two large-field-of-view images, and the motion trajectory is obtained by analysis.
In step 1), the array cameras are mounted on a high platform, and the two array cameras are aimed at the playing field in the placement of a binocular vision model with intersecting optical axes (the intersecting axes are the optical axes of the sub-lenses used as the stitching reference during panoramic stitching), ensuring that the whole field lies within the common view of the two cameras. The array cameras are calibrated with Zhang Zhengyou's calibration method to obtain the cameras' intrinsic and extrinsic parameters.
In step 2), the images are preprocessed to correct distortion, remove noise, and improve image quality.
In step 3), feature points on the ground are selected for stitching to ensure that the ground information remains accurate after stitching.
In step 4), several marker points are selected on the playing field, their coordinate positions are measured, and images of the marker points are cropped from the videos captured by the two array cameras. The above processing yields the target object's position coordinates in the two-dimensional images, and the target's spatial position is then computed from the cameras' intrinsic and extrinsic parameters together with the distortion coefficients.
Compared with the prior art, the invention has the following advantages:
1) Most sports-event analysis systems on the market require athletes to wear sensing equipment, whereas the invention collects athlete data purely through video image processing, so data collection neither affects nor burdens the athletes' actions, and the training environment stays closer to real competition conditions.
2) The invention combines posture analysis with motion-trajectory analysis for the first time, recording the athlete's complete motion process so that actions and positions can be analyzed frame by frame and virtual three-dimensional models of the athlete can be observed at different moments.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the network flow for automatically detecting athletes and performing posture recognition;
FIG. 3 is a binocular positioning vision model according to the present invention;
FIG. 4 shows the relationship of the four coordinate systems used in three-dimensional reconstruction: the pixel coordinate system, the image coordinate system, the camera coordinate system, and the world coordinate system.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It should be noted that the described examples are intended only to aid understanding of the invention and do not limit it in any way.
As shown in fig. 1, the embodiment provides a binocular vision-based spatial localization and motion analysis system, and the specific implementation process of the system includes the following steps:
1) Array camera calibration. Using Zhang Zhengyou's calibration method, ensure that each sub-lens of every array camera captures photographs of several calibration boards at different positions and angles, and compute the intrinsic and extrinsic parameters of each sub-lens.
2) Image acquisition and preprocessing. Because the system adopts a binocular vision model with intersecting optical axes (the intersecting axes are the optical axes of the sub-lenses used as the stitching reference during panoramic stitching), camera placement is not strictly constrained: only a sufficiently large baseline between the cameras is needed to guarantee accuracy, together with a large common field of view.
3) Feature extraction. As shown in fig. 2, features are extracted from the acquired pictures by image processing, facilitating the subsequent stitching and posture recognition.
4) Posture recognition. As shown in fig. 2, the extracted features are fed into a region proposal network to semantically segment the picture, and the detected human-body positions and body feature points are collected, yielding motion-description information for the moving persons in the image.
5) Target detection and tracking. As shown in fig. 2, players on the field are identified from the features, and designated players are tracked in the captured video with a tracking algorithm.
6) Panoramic stitching. The images collected by the array cameras are stitched into a panoramic image of the sports stadium. During stitching, feature points on the field's ground are used to compute the transformation matrix, ensuring that the athletes' position information is accurate.
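The patent does not specify how the transformation matrix is solved from the ground feature points; one common choice for stitching a planar ground surface is a projective homography estimated by the direct linear transform (DLT). A sketch under that assumption, with synthetic point correspondences standing in for matched ground features:

```python
# Hedged sketch: estimating a 3x3 stitching homography H from ground
# feature-point correspondences via the direct linear transform (DLT).
# H_true and the point set are synthetic illustrations, not patent data.
import numpy as np

def homography_dlt(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate H such that dst ~ H @ src (homogeneous), from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)          # null-space vector = solution
    return H / H[2, 2]                # fix the scale ambiguity

# Ground points (metres) and their images under a known homography.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 20.0], [0.0, 20.0], [5.0, 7.0]])
dst_h = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = dst_h[:, :2] / dst_h[:, 2:]

H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # True
```

In practice the correspondences would come from the extracted ground features of step 3), with an outlier-robust wrapper (e.g. RANSAC) around the DLT.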
7) Stereo matching. As shown in fig. 3, the binocular vision model with intersecting optical axes places no strict requirements on the positions of the two cameras: they can be placed relatively freely, and their tilt angles and mutual distance can be adjusted flexibly to the venue and the actual characteristics of the captured subject. A stereo matching algorithm computes the disparity of corresponding images, realizing the mathematical transformation from point coordinates in the two-dimensional images taken at the two angles to actual three-dimensional spatial coordinates.
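The 2-D-to-3-D transformation of this step can be sketched as linear triangulation from two calibrated views; the projection matrices and point below are illustrative assumptions, not the patent's calibration data:

```python
# Hedged sketch: linear (DLT) triangulation of a 3-D point from its
# pixel coordinates in two calibrated views. All numbers are made up.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    """Solve the homogeneous system A X = 0 built from both projections."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null-space vector, homogeneous point
    return X[:3] / X[3]

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]      # left camera at the origin
P2 = K @ np.c_[np.eye(3), np.array([-0.5, 0.0, 0.0])]  # 0.5 m baseline

X_true = np.array([0.2, -0.1, 4.0])
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1.0); uv2 = uv2[:2] / uv2[2]
print(np.allclose(triangulate(P1, P2, uv1, uv2), X_true, atol=1e-6))  # True
```

With the intersecting-axis model, P1 and P2 would instead carry the rotations obtained from calibration in step 1); the triangulation itself is unchanged.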
8) Trajectory reconstruction. The athlete's spatial trajectory on the field is reconstructed from the target tracking and stereo matching results.
9) Three-dimensional reconstruction. From the binocular vision measurement model, the position of any point P in three-dimensional space can be determined once it is observed by the binocular cameras. For the computation, four coordinate systems are conventionally defined: the pixel coordinate system, the image coordinate system, the camera coordinate system, and the world coordinate system. As shown in FIG. 4, the rectangular coordinate system uO0v defined on the image is the pixel coordinate system; the coordinate system xO1y is the image coordinate system; the coordinate system O-XcYcZc is the camera coordinate system; and the world coordinate system is formed jointly by the Xw, Yw, and Zw axes. Let the homogeneous world coordinate of point P be [X, Y, Z, 1]^T and the homogeneous pixel coordinate of its projection p be [u, v, 1]^T. The relationship between world coordinates and pixel coordinates is:
Zc · [u, v, 1]^T = M1 · M2 · [X, Y, Z, 1]^T = M · [X, Y, Z, 1]^T
where M1 is the camera's intrinsic matrix, M2 is its extrinsic matrix, and the 3×4 matrix M = M1 · M2 is called the projection matrix. The three-dimensional models of the athlete and the field are then reconstructed from the posture detection and stereo matching results.
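A small numeric illustration of the world-to-pixel projection through the intrinsic matrix M1 and extrinsic matrix M2; the parameter values are invented for the example, not taken from the patent:

```python
# Hedged illustration: Zc * [u, v, 1]^T = M1 @ M2 @ [X, Y, Z, 1]^T.
# All intrinsic/extrinsic values below are illustrative assumptions.
import numpy as np

M1 = np.array([[1000.0, 0.0, 640.0],        # fx, skew, cx
               [0.0, 1000.0, 360.0],        # fy, cy
               [0.0, 0.0, 1.0]])            # intrinsic matrix
R = np.eye(3)                               # world axes aligned with camera
tvec = np.array([[0.0], [0.0], [2.0]])      # world origin 2 m in front
M2 = np.hstack([R, tvec])                   # 3x4 extrinsic matrix [R | t]
M = M1 @ M2                                 # 3x4 projection matrix

P_world = np.array([0.5, -0.25, 3.0, 1.0])  # homogeneous world point
uvw = M @ P_world                           # = Zc * [u, v, 1]
u, v = uvw[:2] / uvw[2]                     # divide out the depth Zc = 5
print(u, v)  # 740.0 310.0
```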
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. A space positioning and motion analysis system based on binocular vision, characterized by comprising the following steps:
1) two array camera systems replace binocular cameras and capture scene images from two designated orientations;
2) features are extracted from the image acquired by each lens of the array cameras, and specified moving targets are tracked and their postures recognized;
3) after correction, the images obtained by the two array camera systems are stitched into a large-field-of-view image;
4) the target's position in three-dimensional space is computed from the two large-field-of-view images, and the target's motion trajectory is obtained by analyzing the continuous video images.
2. The binocular vision based space positioning and motion analysis system of claim 1, wherein a binocular vision model with intersecting optical axes is adopted (the intersecting axes are the optical axes of the sub-lenses used as the stitching reference during panoramic stitching), so that camera placement is only loosely constrained.
3. The binocular vision based space positioning and motion analysis system of claim 1, wherein the input images are preprocessed to correct distortion, remove noise, and improve image quality.
4. The binocular vision based space positioning and motion analysis system of claim 1, wherein feature points on the ground are selected for stitching to ensure that the ground information remains accurate after stitching.
5. The binocular vision based space positioning and motion analysis system of claim 1, wherein several marker points are selected on the playing field, their coordinate positions are measured, and images of the marker points are cropped from the videos captured by the two array cameras; after the above processing, the target object's position coordinates in the two-dimensional images are obtained, and the target's position in actual three-dimensional space is computed from these coordinates.
6. The binocular vision based space positioning and motion analysis system of claim 1, wherein the depth information of the target and the scene is obtained using the cameras' intrinsic and extrinsic parameters, the distortion coefficients, and the image feature-point correspondences.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911282790.6A CN111784749A (en) | 2019-12-13 | 2019-12-13 | Space positioning and motion analysis system based on binocular vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111784749A true CN111784749A (en) | 2020-10-16 |
Family
ID=72755481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911282790.6A Pending CN111784749A (en) | 2019-12-13 | 2019-12-13 | Space positioning and motion analysis system based on binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111784749A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114299120A (en) * | 2021-12-31 | 2022-04-08 | 北京银河方圆科技有限公司 | Compensation method, registration method and readable storage medium based on multiple camera modules |
CN114299120B (en) * | 2021-12-31 | 2023-08-04 | 北京银河方圆科技有限公司 | Compensation method, registration method, and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021004312A1 (en) | Intelligent vehicle trajectory measurement method based on binocular stereo vision system | |
CN106251399B (en) | A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam | |
CN107871120B (en) | Sports event understanding system and method based on machine learning | |
CN103716594B (en) | Panorama splicing linkage method and device based on moving target detecting | |
CN110142785A (en) | A kind of crusing robot visual servo method based on target detection | |
CN105898107B (en) | A kind of target object grasp shoot method and system | |
CN112509125A (en) | Three-dimensional reconstruction method based on artificial markers and stereoscopic vision | |
CN114119739A (en) | Binocular vision-based hand key point space coordinate acquisition method | |
CN110889829A (en) | Monocular distance measurement method based on fisheye lens | |
CN110648362B (en) | Binocular stereo vision badminton positioning identification and posture calculation method | |
CN109101935A (en) | Figure action based on thermal imaging camera captures system and method | |
CN107038714A (en) | Many types of visual sensing synergistic target tracking method | |
CN104700355A (en) | Generation method, device and system for indoor two-dimension plan | |
CN111784749A (en) | Space positioning and motion analysis system based on binocular vision | |
CN110910489B (en) | Monocular vision-based intelligent court sports information acquisition system and method | |
CN108090930A (en) | Barrier vision detection system and method based on binocular solid camera | |
CN106846284A (en) | Active-mode intelligent sensing device and method based on cell | |
CN112154484A (en) | Ortho image generation method, system and storage medium | |
JP4886661B2 (en) | Camera parameter estimation apparatus and camera parameter estimation program | |
CN115035546A (en) | Three-dimensional human body posture detection method and device and electronic equipment | |
CN111860275B (en) | Gesture recognition data acquisition system and method | |
CN113487726A (en) | Motion capture system and method | |
CN113421286A (en) | Motion capture system and method | |
Garau et al. | Unsupervised continuous camera network pose estimation through human mesh recovery | |
CN111399634A (en) | Gesture-guided object recognition method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
2020-10-16 | PB01 | Publication | Application publication date: 20201016
| WD01 | Invention patent application deemed withdrawn after publication |