CN117152272A - Viewing angle tracking method, device, equipment and storage medium based on holographic sand table - Google Patents

Viewing angle tracking method, device, equipment and storage medium based on holographic sand table

Info

Publication number
CN117152272A
CN117152272A (application CN202311415264.9A)
Authority
CN
China
Prior art keywords
target
data
tracking
image
camera model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311415264.9A
Other languages
Chinese (zh)
Other versions
CN117152272B (en)
Inventor
张雪兵 (Zhang Xuebing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Euclideon Technology Co ltd
Original Assignee
Shenzhen Euclideon Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Euclideon Technology Co ltd filed Critical Shenzhen Euclideon Technology Co ltd
Priority to CN202311415264.9A priority Critical patent/CN117152272B/en
Publication of CN117152272A publication Critical patent/CN117152272A/en
Application granted granted Critical
Publication of CN117152272B publication Critical patent/CN117152272B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 - Stereo camera calibration
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and discloses a viewing angle tracking method, device, equipment and storage medium based on a holographic sand table, which are used for improving the accuracy of viewing angle tracking based on a holographic sand table. The method comprises the following steps: performing coordinate extraction on a target calibration image to obtain a corner coordinate set, and constructing an initial camera model; calculating correction parameters for the initial camera model to obtain a correction parameter set, and correcting the initial camera model with the correction parameter set to obtain a target camera model; performing image acquisition on a tracking target in the holographic sand table to obtain an initial tracking image; extracting arc segment contours to obtain a plurality of elliptical arc segments, calculating distances between the elliptical arc segments to obtain a distance data set, and merging the elliptical arc segments to obtain target fitting ellipse data; and performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, and performing tracking viewing angle calculation on the target camera model to obtain tracking viewing angle data.

Description

Viewing angle tracking method, device, equipment and storage medium based on holographic sand table
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for tracking a viewing angle based on a holographic sand table.
Background
Holographic sand table technology is an advanced tool for simulating and visualizing three-dimensional geographic information systems in the fields of geography, urban planning, battlefield layout and the like. It presents the geographical information of the real world in a visual way by projecting a virtual three-dimensional scene on the surface of the sand table. In order to realize high precision and real-time interactivity of the holographic sand table, research and application of key technologies such as camera calibration, object tracking, visual angle control and the like are required.
However, noise, illumination changes, and errors in extracting the corner points of the calibration plate frequently occur and can make both calibration and tracking inaccurate. Illumination conditions and background interference in different environments can likewise affect the stability and accuracy of image acquisition and object tracking.
Disclosure of Invention
The invention provides a holographic sand table-based visual angle tracking method, a device, equipment and a storage medium, which are used for improving the accuracy of holographic sand table-based visual angle tracking.
The first aspect of the invention provides a holographic sand table-based visual angle tracking method, which comprises the following steps:
Image acquisition is carried out on a calibration plate in a preset holographic sand table through a preset image acquisition device to obtain an initial calibration image, and meanwhile, the initial calibration image is preprocessed to obtain a target calibration image;
extracting corner coordinates of a calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set;
carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model;
based on the target camera model, acquiring an image of a tracking target preset in the holographic sand table to obtain an initial tracking image;
extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, simultaneously, carrying out distance calculation on the elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data;
And carrying out motion vector analysis on the target fitting ellipse data to obtain a target motion vector, carrying out tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking through the tracking view angle data.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the image capturing, by using a preset image capturing device, of a calibration plate in a preset holographic sand table to obtain an initial calibration image, and meanwhile, preprocessing the initial calibration image to obtain a target calibration image, includes:
image acquisition is carried out on a calibration plate in a preset holographic sand table through a preset image acquisition device, so that an initial calibration image is obtained;
performing gray level conversion processing on the initial calibration image to obtain a corresponding gray level image;
denoising the gray level image through a preset Gaussian filtering algorithm to obtain a denoised image;
and carrying out binarization processing on the denoising image through a preset maximum inter-class variance method to obtain the target calibration image.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the extracting the corner coordinates of the calibration board for the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model by using the corner coordinate set, where the initial camera model includes a camera internal parameter and a camera external parameter, includes:
Detecting key feature points of the target calibration image through a preset corner detection algorithm to obtain a plurality of key feature points;
respectively calibrating a pixel extraction area of each key feature point to obtain a pixel extraction area corresponding to each key feature point;
performing region division on the pixel extraction region corresponding to each key feature point to obtain a plurality of division sub-regions corresponding to each pixel extraction region;
carrying out pixel distribution analysis on a plurality of divided subareas corresponding to each pixel extraction area to obtain pixel distribution data of each pixel extraction area;
screening a plurality of key feature points through pixel distribution data of each pixel extraction area to obtain a plurality of corner points, and respectively carrying out coordinate data calculation on each corner point to obtain a corner point coordinate set corresponding to the target calibration image;
and constructing an initial camera model through the angular point coordinate set.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect of the present invention, the constructing an initial camera model by using the set of corner coordinates includes:
performing size information analysis on the calibration plate to obtain size information data corresponding to the calibration plate;
Performing focal length calculation based on the size information data and the angular point coordinate set to obtain target focal length data;
performing principal point position estimation on an imaging plane of the holographic sand table based on the target focal length data to obtain a center pixel coordinate corresponding to the imaging plane;
calculating distortion parameters through the central pixel coordinates to obtain target distortion parameters, and combining the target focal length data, the central pixel coordinates and the target distortion parameters into camera internal parameters;
performing camera position data analysis through the angular point coordinate set to obtain camera position parameters, and performing camera direction analysis through the central pixel coordinate and the angular point coordinate set to obtain corresponding camera direction data;
and merging the camera position parameters and the camera direction data into camera external parameters, and constructing the initial camera model through the camera internal parameters and the camera external parameters.
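By way of illustration only, the camera internal parameters described above (focal length and principal point) are conventionally assembled into a 3x3 pinhole intrinsic matrix, while the external parameters (a rotation R and a translation t) map world points into the camera frame before projection. The following sketch is not part of the claimed method; all names and numeric values are hypothetical:

```python
# Illustrative only: assembling the internal parameters named above
# (focal length, principal point) into the conventional 3x3 pinhole
# intrinsic matrix K, and projecting a world point through the
# extrinsics (R, t). All values are hypothetical.

def intrinsic_matrix(fx, fy, cx, cy):
    """Pinhole intrinsic matrix from focal lengths and principal point."""
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

def project(K, R, t, pw):
    """Project world point pw: camera coords pc = R*pw + t, then K*pc."""
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    uvw = [sum(K[i][j] * pc[j] for j in range(3)) for i in range(3)]
    # Homogeneous division by depth yields pixel coordinates.
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])
```

With an identity rotation and zero translation, a point on the optical axis projects exactly to the principal point, which is a quick sanity check for such a model.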
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect of the present invention, the calculating correction parameters for the initial camera model according to the nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model by using the correction parameter set to obtain a target camera model includes:
Performing first fitting error calculation on the internal parameters of the camera through an objective function in the nonlinear optimization algorithm to obtain first fitting error data;
performing second fitting error calculation on the external parameters of the camera through the objective function to obtain second fitting error data;
carrying out gradient calculation on the objective function through the first fitting error data and the second fitting error data to obtain corresponding objective gradient data;
performing function curvature calculation on the objective function to obtain function curvature data corresponding to the objective function;
performing correction parameter calculation on the initial camera model based on the target gradient data and the function curvature data to obtain a corresponding correction parameter set;
and correcting the initial camera model through the correction parameter set to obtain a target camera model.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the extracting an arc segment contour from the initial tracking image to obtain a plurality of elliptical arc segments, and simultaneously, performing distance calculation on the plurality of elliptical arc segments by using a preset geometric calculation algorithm to obtain a distance data set, and merging the plurality of elliptical arc segments by using the distance data set to obtain target fitting elliptical data, where the method includes:
Extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections;
extracting space points from a plurality of elliptical arc segments to obtain a space point set;
performing secondary curve mapping on each elliptical arc section to obtain a plurality of secondary curves;
performing plane element conversion on each secondary curve to obtain a unit circle on a conversion plane;
performing projective geometry relation analysis on the unit circle and the space point set to obtain a target projective geometry relation;
and carrying out distance calculation on a plurality of elliptical arc segments through the target projective geometric relationship to obtain a distance data set, and merging the plurality of elliptical arc segments through the distance data set to obtain target fitting elliptical data.
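The projective distance measure recited above is specific to the claimed method and is not reproduced here. As an illustrative stand-in, the sketch below merges arc segments (each a list of points) whose endpoints lie within a plain Euclidean distance threshold, producing the pooled point sets to which an ellipse would then be fitted. The function names and the threshold are hypothetical:

```python
import math

# Hedged sketch of the arc-merging step. The patent computes distances
# through a projective mapping to a unit circle; this stand-in uses
# Euclidean distance between arc endpoints instead.

def endpoint_gap(arc_a, arc_b):
    """Smallest Euclidean distance between any endpoints of two arcs."""
    ends_a = [arc_a[0], arc_a[-1]]
    ends_b = [arc_b[0], arc_b[-1]]
    return min(math.dist(p, q) for p in ends_a for q in ends_b)

def merge_arcs(arcs, max_gap):
    """Greedily union arc segments whose endpoint gap is <= max_gap."""
    groups = [list(a) for a in arcs]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                if endpoint_gap(groups[i], groups[j]) <= max_gap:
                    groups[i] = groups[i] + groups[j]
                    del groups[j]
                    merged = True
                    break
            if merged:
                break
    return groups
```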
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the performing motion vector analysis on the target fitted ellipse data to obtain a target motion vector, performing tracking view calculation on the target camera model according to the target motion vector to obtain tracking view data, and controlling the target camera model to perform view tracking according to the tracking view data, including:
And carrying out ellipse parameter calculation on the target fitting ellipse data to obtain an ellipse parameter set, wherein the ellipse parameter set comprises: an ellipse major axis dataset, an ellipse minor axis dataset, and an ellipse center coordinate dataset;
performing motion vector analysis on the target fitting ellipse data through an ellipse major axis data set, an ellipse minor axis data set and an ellipse center coordinate data set to obtain a target motion vector;
calculating the target motion pose according to the target motion vector to obtain motion pose data;
and carrying out tracking view angle calculation on the target camera model according to the motion pose data to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking according to the tracking view angle data.
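As a simplified illustration of the motion vector and tracking view update described above, the sketch below takes the motion vector to be the frame-to-frame displacement of the fitted ellipse center and nudges the tracking view toward the target. The data structures and the smoothing factor are hypothetical, not taken from the disclosure:

```python
# Illustrative sketch: motion vector as the displacement of the fitted
# ellipse center between frames, and a smoothed tracking-view update.
# The smoothing factor is a hypothetical choice.

def motion_vector(center_prev, center_curr):
    """Frame-to-frame displacement of the ellipse center."""
    return (center_curr[0] - center_prev[0],
            center_curr[1] - center_prev[1])

def update_view(view_center, target_center, smoothing=0.25):
    """Move the tracking view a fraction of the way toward the target."""
    return (view_center[0] + smoothing * (target_center[0] - view_center[0]),
            view_center[1] + smoothing * (target_center[1] - view_center[1]))
```

A fractional update like this is one common way to keep view tracking smooth when the per-frame center estimate is noisy.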
The second aspect of the present invention provides a holographic sand table-based viewing angle tracking device, comprising:
the processing module is used for acquiring images of the calibration plates in the preset holographic sand table through the preset image acquisition device to obtain an initial calibration image, and preprocessing the initial calibration image to obtain a target calibration image;
The extraction module is used for extracting corner coordinates of the calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set;
the computing module is used for carrying out correction parameter computation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model;
the acquisition module is used for acquiring images of tracking targets preset in the holographic sand table based on the target camera model to obtain initial tracking images;
the merging module is used for extracting arc section contours of the initial tracking images to obtain a plurality of elliptical arc sections, calculating distances of the elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data;
the analysis module is used for carrying out motion vector analysis on the target fitting ellipse data to obtain a target motion vector, carrying out tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking through the tracking view angle data.
A third aspect of the present application provides a holographic sand table based viewing angle tracking device comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the holographic sand table based perspective tracking device to perform the holographic sand table based perspective tracking method described above.
A fourth aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the holographic sand table based viewing angle tracking method described above.
According to the technical scheme provided by the application, an image acquisition device is used for acquiring an image of a calibration plate in a holographic sand table to obtain an initial calibration image, and the initial calibration image is preprocessed to obtain a target calibration image; extracting corner coordinates of a calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set; carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model; based on a target camera model, acquiring an image of a tracking target preset in a holographic sand table to obtain an initial tracking image; extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, performing distance calculation on the elliptical arc sections through a geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data; and performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, performing tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to perform view angle tracking through the tracking view angle data. In the scheme of the application, the high-precision camera calibration can be realized by carrying out image acquisition and calibration on the calibration plate in the holographic sand table, and the accuracy of view angle tracking is ensured. 
And correcting the camera model by using a nonlinear optimization algorithm to realize real-time view tracking. This is critical for virtual reality and augmented reality applications because they require low latency and high frame rates to provide a smooth user experience. The view angle tracking in the scheme is self-adaptive, and the tracking view angle is adjusted in real time according to the target motion vector. Different target movement modes can be adapted, and the scheme realizes data fusion by extracting distance data from a plurality of elliptical arc sections and combining the distance data. This can increase the robustness of tracking, reduce the effects of noise and interference, and improve the accuracy of target detection.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a holographic sand table-based viewing angle tracking method according to an embodiment of the present invention;
FIG. 2 is a flowchart of extracting corner coordinates of a calibration plate for a target calibration image in an embodiment of the invention;
FIG. 3 is a flowchart of constructing an initial camera model from a set of corner coordinates in an embodiment of the present invention;
FIG. 4 is a flowchart of the correction parameter calculation of the initial camera model according to the nonlinear optimization algorithm in the embodiment of the invention;
FIG. 5 is a schematic diagram of an embodiment of a holographic sand table based viewing angle tracking device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of a viewing angle tracking device based on a holographic sand table according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a holographic sand table-based visual angle tracking method, a device, equipment and a storage medium, which are used for improving the accuracy of holographic sand table-based visual angle tracking.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For easy understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and an embodiment of a viewing angle tracking method based on a holographic sand table in an embodiment of the present invention includes:
s101, performing image acquisition on a calibration plate in a preset holographic sand table through a preset image acquisition device to obtain an initial calibration image, and preprocessing the initial calibration image to obtain a target calibration image;
It is to be understood that the execution subject of the present invention may be a viewing angle tracking device based on a holographic sand table, and may also be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as the execution subject as an example.
Specifically, the server first performs image acquisition on the calibration plate of the holographic sand table through a preset image acquisition device. This process typically involves a camera or sensor that captures an image of the calibration plate, which becomes the server's initial calibration image. The initial calibration image is a color image, but it is typically converted to a gray scale image in order to simplify processing and reduce computational complexity. This is achieved by taking a weighted average of the red, green and blue channel values of each pixel of the color image, resulting in a corresponding gray scale image. Because the gray scale image contains only brightness information, subsequent processing is simpler and faster. Noise is often introduced during image acquisition, and it affects the extraction accuracy of the corner points of the calibration plate. To remove this noise, the server uses a preset Gaussian filtering algorithm. Gaussian filtering is a technique for smoothing an image by taking a weighted average of the pixel values around each pixel, thereby reducing noise. The result is a smoother image in which the features of the calibration plate can be detected more accurately. Once the server has obtained the denoised gray scale image, it binarizes the image so that the corner points of the calibration plate are easier to detect. Here, the server uses a preset maximum inter-class variance method, also known as the Otsu method. This method converts the gray image into a black and white binary image by adaptively selecting a threshold, after which the features of the calibration plate appear black and the background appears white. The objective of the maximum inter-class variance method is to find the threshold that maximizes the variance between the two classes (foreground and background) of the image, thus achieving the best binarization effect.
For example, assume that the server is to be calibrated on a holographic sand table for visualization of city planning. The server firstly uses a preset video camera to collect images of the calibration plates on the holographic sand table. This initial image is a color photograph, capturing an image of the calibration plate. The server converts this color image into a gray scale image, containing only luminance information. This helps to simplify the subsequent processing. The server uses a gaussian filter algorithm to denoise the gray scale image to eliminate noise from the image acquisition process. The server applies a maximum inter-class variance method to carry out binarization processing on the denoising image so as to obtain a target calibration image. The target calibration image is a black and white binary image in which the features of the calibration plate are accurately captured and the background is eliminated.
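The preprocessing chain in this example (gray scale conversion by weighted channel average, then maximum inter-class variance binarization) can be sketched as follows. The Gaussian filtering step is omitted for brevity, the image is assumed to be nested lists of RGB tuples, and all function names are illustrative:

```python
# Illustrative sketch of the preprocessing described above: grayscale
# conversion by weighted channel average, then Otsu's method (the
# "maximum inter-class variance" threshold). Names are hypothetical.

def to_gray(rgb_image):
    """Weighted average of R, G, B channels (ITU-R BT.601 weights)."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def otsu_threshold(gray):
    """Pick the threshold that maximizes between-class variance."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    weight_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        # Between-class variance for this candidate threshold.
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, t):
    """Black/white image: pixels above the threshold become white."""
    return [[255 if v > t else 0 for v in row] for row in gray]
```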
S102, extracting corner coordinates of a calibration plate of a target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set;
it should be noted that, the server detects key feature points from the target calibration image, where the feature points generally correspond to corner points on the calibration plate. The server uses corner detection algorithms, such as Harris corner detection or Shi-Tomasi corner detection, to find these key feature points. Once the key feature points are detected, the server determines a pixel extraction area around each feature point. This region defines the set of pixels in which the server will perform subsequent analysis. Typically, the server will select a rectangular or circular area of fixed size around the key feature points as the pixel extraction area. The pixel extraction area will typically be divided into a plurality of sub-areas for better analysis and extraction of features. This may be accomplished by meshing or other segmentation methods. Each sub-region will contain a set of pixels. For each pixel extraction area, the server performs a pixel distribution analysis to determine which sub-areas contain pixels of the corner points. This can be done by analyzing the distribution of pixel intensities or gray values. Typically, the subregions of the corner points will exhibit a pixel distribution characteristic different from the background, manifested as a high variation of pixel values or a significant variation of gradients. Based on the result of the pixel distribution analysis, the server screens out corner points corresponding to each feature point. These corner points will have obvious characteristics and can be used for further analysis. Once the corner points are screened out, the server calculates the pixel coordinates of the corner points, and the coordinates form a corner point coordinate set corresponding to the target calibration image. Using the set of corner coordinates, the server builds an initial camera model. 
This includes both internal parameters of the camera (e.g., focal length, principal point coordinates) and external parameters (e.g., camera position and orientation). Typically, this requires camera calibration, using known calibration plate dimensions and world coordinates of corner points to estimate camera parameters.
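As an illustration of the corner detection step, the Harris response mentioned above scores each pixel from the local structure tensor of the image gradients. The minimal sketch below uses a flat window instead of the Gaussian weighting of the full detector and omits non-maximum suppression; the constant k = 0.04 is a conventional choice:

```python
# Minimal sketch of the Harris corner response. Real detectors add
# Gaussian window weighting and non-maximum suppression; this
# illustrative version uses a flat window.

def harris_response(gray, k=0.04, win=1):
    h, w = len(gray), len(gray[0])
    # Image gradients by central differences (clamped at the borders).
    ix = [[(gray[y][min(x + 1, w - 1)] - gray[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    iy = [[(gray[min(y + 1, h - 1)][x] - gray[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    resp = [[0.0] * w for _ in range(h)]
    for y in range(win, h - win):
        for x in range(win, w - win):
            # Structure tensor summed over the window around (x, y).
            sxx = syy = sxy = 0.0
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            resp[y][x] = det - k * trace * trace  # Harris corner score
    return resp
```

Corners give a strongly positive score (both eigenvalues of the tensor large), edges a negative one, and flat regions roughly zero, which is exactly the property the screening step exploits.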
Wherein the server analyzes the size information of the calibration plate. This is to know the actual physical dimensions of the calibration plate for subsequent camera calibration. The dimensional information is typically in meters or millimeters. Based on the size information of the calibration plate and the corner coordinate set, the server calculates the focal length of the camera. The focal length is one of the parameters inside the camera, which represents the distance of the focal point of the camera lens from the imaging plane. The calculation of the focal length generally relies on the principle of similar triangles, where the dimensions on the calibration plate and the distance between the camera and the calibration plate are known. The server estimates the principal point position of the imaging plane by calibrating the angular point coordinate set of the board and the known focal distance. The principal point is a point on the imaging plane that corresponds to the location where the light passes through the camera's optical center. It is typically represented as pixel coordinates, represented as center pixel coordinates on the imaging plane. Most cameras suffer from distortions, such as radial and tangential distortions, which affect the accuracy of the image. The server calculates these distortion parameters by means of a set of center pixel coordinates and corner coordinates on the calibration plate. These parameters are used to correct the image to reduce distortion effects. Through the corner coordinate set, the server analyzes the position parameters of the camera. This includes the position of the camera in the world coordinate system, typically expressed in three-dimensional coordinates. These parameters describe the position of the camera relative to the calibration plate. Also, through the set of corner coordinates, the server may also analyze the direction parameters of the camera, describing the direction or orientation of the camera. 
This is typically expressed as a rotation matrix or Euler angles.
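The similar-triangles relation described above can be illustrated as follows: a calibration feature of physical width W at distance Z that spans w pixels in the image implies a focal length of f = w * Z / W (in pixel units). The helper names and numbers below are hypothetical:

```python
# Sketch of the similar-triangles focal-length estimate described
# above, plus a crude principal-point estimate (the image center).
# All names and values are illustrative.

def focal_length_px(object_width_m, distance_m, image_width_px):
    """Focal length in pixels via the pinhole similar-triangles relation."""
    return image_width_px * distance_m / object_width_m

def principal_point(image_w_px, image_h_px):
    """Crude principal-point estimate: the center of the imaging plane."""
    return (image_w_px / 2.0, image_h_px / 2.0)
```

In practice the calibration refines both quantities jointly from many corner observations; the closed-form relation is only the starting estimate.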
S103, carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model;
specifically, using a nonlinear optimization algorithm, the server first defines an objective function used to calculate a first fitting error for the internal parameters of the camera. The first fitting error is the difference between the model-predicted internal parameters and the actual observations; this objective function measures how well the camera internal parameters fit. Similarly, the server defines another objective function for calculating a second fitting error of the camera external parameters. This objective function measures the differences between the camera's position and orientation and the actual situation, i.e. the differences between the model-predicted external parameters and the observations. For nonlinear optimization, the server calculates the gradient of the objective function. The gradient represents the direction of change of the objective function in the parameter space and guides the server to adjust the parameters in the direction that minimizes the error. Gradients are typically calculated by numerical or analytical methods. The function curvature represents the degree of bending of the objective function in the parameter space; it tells the server how the slope of the objective function changes in different directions, and is typically obtained by calculating the second derivative. Based on the gradient and the curvature of the objective function, the server uses a nonlinear optimization algorithm (e.g., the Levenberg-Marquardt algorithm or a quasi-Newton method) to calculate the set of correction parameters. These correction parameters are used to fine-tune the initial camera model to minimize the first and second fitting errors. By applying the set of correction parameters to the initial camera model, the server obtains the target camera model.
The model is better adapted to the actual situation, which improves the accuracy of view tracking. For example, assume that a server is conducting an urban planning project in which a holographic sand table is used to visualize the urban planning model, and real-time tracking and adjustment of the model is required. The server defines an objective function for calculating a first fitting error of the camera internal parameters. This may include the differences between the predicted focal length, principal point position, and distortion parameters and the actual observations. The objective function takes the form: objective function 1 = error 1 (predicted focal length - actual focal length) + error 2 (predicted principal point position - actual principal point position) + error 3 (predicted distortion parameters - actual distortion parameters). Likewise, the server defines another objective function for calculating a second fitting error of the camera external parameters, which may include the differences between the predicted camera position and orientation and the actual observations: objective function 2 = error 4 (predicted camera position - actual camera position) + error 5 (predicted camera direction - actual camera direction). The server calculates the gradient and function curvature of objective function 1 and objective function 2 using numerical or analytical methods. With the gradient and function curvature, the server uses a nonlinear optimization algorithm to calculate the set of correction parameters. These parameters fine-tune the camera internal and external parameters to reduce the fitting errors. Applying the correction parameter set to the initial camera model yields a more accurate target camera model, which can be used for real-time tracking and visualization of the urban planning model and ensures the accuracy and stability of view angle tracking.
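The correction-parameter calculation can be illustrated with a minimal damped Gauss-Newton (Levenberg-Marquardt style) update on a single parameter. The toy projection data, fixed damping factor, and function names below are assumptions for illustration, not the embodiment's implementation:

```python
# Minimal sketch of the correction-parameter idea: a damped (Levenberg-
# Marquardt style) iteration refining one camera parameter by minimizing the
# sum of squared residuals. The data and damping value are illustrative.

def lm_refine(param, residual_fn, iters=50, lam=1e-3, eps=1e-6):
    """Refine a scalar parameter by repeatedly applying a damped
    Gauss-Newton step computed from a numerical Jacobian."""
    for _ in range(iters):
        r = residual_fn(param)
        # numerical Jacobian of each residual with respect to the parameter
        j = [(a - b) / eps for a, b in zip(residual_fn(param + eps), r)]
        jtj = sum(x * x for x in j)
        jtr = sum(x * y for x, y in zip(j, r))
        step = -jtr / (jtj + lam)        # damping lam stabilizes the step
        param += step
        if abs(step) < 1e-12:
            break
    return param

# Toy problem: observations were generated with a "true" focal length of 800.
points = (0.1, 0.2, 0.3)
observed = [800.0 * x for x in points]
resid = lambda f: [f * x - o for x, o in zip(points, observed)]
f_hat = lm_refine(750.0, resid)          # converges toward 800
```

A production implementation would update the full intrinsic and extrinsic parameter vector jointly and adapt the damping factor per iteration.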
S104, based on a target camera model, acquiring an image of a tracking target preset in the holographic sand table to obtain an initial tracking image;
it should be noted that the target to be tracked on the holographic sand table needs to be clearly defined. This may be buildings in city planning, traffic flows, environmental changes, etc., depending on the requirements of the project. In the previous step, the server has built a target camera model, including internal parameters (focal length, principal point position, distortion parameters) and external parameters (position, orientation). This camera model is a tool used to simulate the position and orientation of an actual camera on a holographic sand table. In order to capture an image of a tracked target, an image acquisition path needs to be planned. This includes determining the shooting angle and position of the camera to ensure that the target can be adequately captured and that the image quality is good. Path planning may be performed by means of a Computer Aided Design (CAD) tool. With the target camera model, the internal and external parameters of the camera can be controlled to move and rotate the camera in accordance with the planned path. This ensures that the image captured by the camera matches the actual situation. And starting image acquisition according to the planned path and camera parameters. The camera will capture a plurality of images, each frame containing a tracked object on the holographic sand table. These images need to be stored in real time for subsequent processing and analysis. In the image acquisition process, real-time feedback and adjustment of camera parameters are required to ensure image quality and accuracy of target capture. This can be achieved by image analysis and comparison of the actual target locations. For example, suppose a server is conducting an urban planning project, using holographic sand tables to visualize urban traffic flow. The server wishes to track vehicles and pedestrians in the city in order to better understand traffic conditions. 
In this project, the server has well defined tracking targets for vehicles and pedestrians in the city. These targets will be visualized and tracked on a holographic sand table. The server has built a target camera model including focal length, principal point position, distortion parameters, position and orientation. This camera model will be used to simulate the position and orientation of the actual camera. Using CAD tools, the server plans the image acquisition path. This includes determining the height, angle and path of movement of the camera to ensure that the motion trajectories of vehicles and pedestrians can be captured. By the target camera model, the server controls internal parameters of the camera (such as focal length and principal point position) as well as external parameters (such as camera position and orientation). This ensures that the camera moves and rotates accurately along the planned path. The image acquisition process is started and the camera captures a plurality of images according to the planned path. Each frame of image contains vehicles and pedestrians in the city. These images are stored in real time for later analysis. During the image acquisition process, the server can analyze the image in real time and compare the actual target positions so as to ensure that all vehicles and pedestrians are captured and the image quality is good. The server adjusts the camera parameters, if necessary, to achieve better results.
S105, extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, simultaneously, carrying out distance calculation on the elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data;
in particular, the server extracts a plurality of elliptical arc segments from the initial tracking image; these arcs represent contour features of the tracked object. This process involves image processing and curve fitting techniques that extract contours by detecting the edges of objects. The server performs spatial point extraction on each elliptical arc segment; these points represent the positions of the arc segments' contours in three-dimensional space. The server then performs a conic mapping that maps the contours to quadratic curves on a plane, which helps parameterize the elliptical arc segments. The server performs plane element conversion, mapping each quadratic curve to a unit circle on the conversion plane, and calculates the relations between different elliptical arc segments, including their distances and relative positions, through projective geometry analysis. Based on the result of the projective geometry analysis, the server calculates distance data between elliptical arc segments and uses these data to merge multiple arc segments into the target fitted ellipse data. This merging process may be implemented by clustering algorithms or other merging techniques. For example, if the server is performing a city planning simulation and needs to track the outlines of buildings in a city, the server uses this method to extract building contour information from the holographic sand table image, map it to a unit circle on a plane, and calculate the distance and positional relationships between different buildings. These data can be used to better understand the city layout and the relative locations of buildings, providing important support for city planning decisions.
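The distance-based merging step can be sketched as a simple single-link grouping of arc fragments. The arc centroids and distance threshold below are illustrative assumptions standing in for the distance data set computed by the geometric algorithm:

```python
# Hedged sketch of the merge step: arc fragments whose pairwise distances fall
# below a threshold are grouped and treated as pieces of one fitted ellipse.
import math

def merge_arcs(centroids, threshold):
    """Greedy single-link grouping of arc centroids closer than `threshold`."""
    groups = []
    for c in centroids:
        placed = False
        for g in groups:
            if any(math.dist(c, m) < threshold for m in g):
                g.append(c)
                placed = True
                break
        if not placed:
            groups.append([c])
    return groups

# Three arc fragments near the origin and one near (10, 10): two ellipses.
arcs = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (10.0, 10.0)]
groups = merge_arcs(arcs, threshold=1.0)
```

Each resulting group would then be fitted with a single ellipse to produce the target fitting ellipse data.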
S106, performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, performing tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to perform view angle tracking through the tracking view angle data.
The parameter set of the ellipse is calculated from the target fitted ellipse data. This includes the major axis dataset, the minor axis dataset, and the center coordinate dataset of the ellipse. The purpose of the ellipse parameter calculation is to obtain an accurate description of the shape and location of the target contour. Motion vector analysis is then performed to determine the motion state of the target. The motion vector includes displacement and velocity information of the object in three-dimensional space. Based on successive elliptical parameter datasets, this analysis infers the motion of the object by comparing parameters at different points in time. The motion pose data of the target is calculated from the target motion vector. The motion pose includes the position and direction of the target in three-dimensional space; this information is important for tracking the target's viewing angle. Tracking view angle calculation is then carried out on the target camera model through the motion pose data. This step considers the position and orientation of the target to determine how to adjust the camera's view in order to continue tracking the target. The aim of view tracking is to keep the target visible in the image and as centered as possible. For example, assume the server is using a holographic sand table for city planning simulation and wishes to track moving vehicles in a city. The server first acquires an initial tracking image from the holographic sand table, extracts contour information of the vehicles from the image, and fits the contours into ellipses. By continuously collecting these ellipse data, the server calculates the motion vectors of the vehicles, learning their displacement and velocity. Based on the motion vectors, the server calculates the motion pose of each vehicle, including their position and orientation in the city model.
Using these pose data, the server adjusts the view angle of the camera to ensure that the vehicle is always in the center of the image and remains visible, thereby achieving view angle tracking of the vehicle.
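The motion-vector and tracking-angle calculation can be sketched as follows, treating the fitted-ellipse centre displacement between two frames as the motion vector and deriving the pan (yaw) angle that re-centres the target. The frame coordinates are illustrative assumptions:

```python
# Sketch of the motion-vector and tracking-angle step. The displacement of the
# fitted-ellipse centre between consecutive frames gives a velocity, and the
# pan angle needed to re-centre the target follows from atan2. All coordinates
# are illustrative assumptions, in planar sand-table units.
import math

def motion_vector(c_prev, c_curr, dt):
    """Displacement and speed of the target centre between two frames."""
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    speed = math.hypot(dx, dy) / dt
    return (dx, dy), speed

def pan_angle_deg(cam_pos, target_pos):
    """Yaw the camera must face so the target sits on the optical axis."""
    return math.degrees(math.atan2(target_pos[1] - cam_pos[1],
                                   target_pos[0] - cam_pos[0]))

(dx, dy), speed = motion_vector((0.0, 0.0), (3.0, 4.0), dt=1.0)
yaw = pan_angle_deg((0.0, 0.0), (3.0, 4.0))
```

A full implementation would also compute tilt from the height difference and feed both angles into the target camera model's external parameters.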
In the embodiment of the application, an image acquisition device is used for acquiring an image of a calibration plate in a holographic sand table to obtain an initial calibration image, and the initial calibration image is preprocessed to obtain a target calibration image; extracting corner coordinates of a calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set; carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model; based on a target camera model, acquiring an image of a tracking target preset in a holographic sand table to obtain an initial tracking image; extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, performing distance calculation on the elliptical arc sections through a geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data; and performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, performing tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to perform view angle tracking through the tracking view angle data. In the scheme of the application, the high-precision camera calibration can be realized by carrying out image acquisition and calibration on the calibration plate in the holographic sand table, and the accuracy of view angle tracking is ensured. 
The camera model is corrected using a nonlinear optimization algorithm to realize real-time view tracking. This is critical for virtual reality and augmented reality applications, because they require low latency and high frame rates to provide a smooth user experience. The view angle tracking in this scheme is adaptive: the tracking view angle is adjusted in real time according to the target motion vector, so different target movement patterns can be accommodated. The scheme also realizes data fusion by extracting distance data from multiple elliptical arc segments and combining them. This increases the robustness of tracking, reduces the effects of noise and interference, and improves the accuracy of target detection.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Image acquisition is carried out on a calibration plate in a preset holographic sand table through a preset image acquisition device, so that an initial calibration image is obtained;
(2) Performing gray level conversion treatment on the initial calibration image to obtain a corresponding gray level image;
(3) Denoising the gray level image through a preset Gaussian filtering algorithm to obtain a denoised image;
(4) And carrying out binarization processing on the denoising image by a preset maximum inter-class variance method to obtain a target calibration image.
Specifically, the server uses a preset image acquisition device to acquire images of the calibration plate in the holographic sand table. This calibration plate typically contains features of known shape and size for subsequent camera calibration. The acquired image is referred to as the initial calibration image. In city planning, the calibration plate is a planar plate with specific markers. The server performs grayscale conversion processing on the initial calibration image, converting the color image into a grayscale image, because in most cases grayscale information is easier to process and analyze during the calibration and tracking of the holographic sand table. A grayscale image contains only luminance information. Grayscale images typically contain some noise arising from disturbances in the image acquisition process or errors in the camera sensor. To remove this noise, the server denoises the grayscale image using a preset Gaussian filtering algorithm. Gaussian filtering is a commonly used image smoothing technique that can blur noise in an image while preserving the image's main features. The denoised image is still a grayscale image, which the server converts to a binary image for the subsequent calibration process. This is done by a preset maximum inter-class variance method (Otsu's method), an adaptive thresholding method that automatically determines a threshold based on the brightness distribution of the pixels in the image and classifies the pixels into two classes, foreground and background. The result of this step is a target calibration image that contains only the contour information of the target, with the background eliminated. For example, assume that the server is performing a holographic sand table simulation of city planning.
In this process, the server calibrates the simulated environment to accurately project elements such as city buildings and roads on the sand table. The server first places a specially designed calibration plate on the sand table with some unique markers on it. Images of the calibration plate are acquired by a camera, the images including position and shape information of the markers. The server converts these images to grayscale images and then removes the noise present using gaussian filtering. And the server binarizes the image through a maximum inter-class variance method to obtain a target calibration image, wherein the target calibration image only contains the outline information of the marker. This target calibration image will be used in the subsequent camera calibration process to ensure the accuracy and visualization of the simulated environment.
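The final binarization step, the maximum inter-class variance (Otsu) threshold, can be sketched in pure Python. The tiny eight-level histogram below is an illustrative assumption standing in for a real grayscale image's 256-bin histogram:

```python
# Pure-Python sketch of Otsu's maximum inter-class variance threshold, as used
# to binarize the denoised calibration image. Pixels with intensity <= t
# become background, the rest foreground.

def otsu_threshold(hist):
    """Return the threshold t maximizing the between-class variance
    w_b * w_f * (m_b - m_f)^2 over all candidate splits of the histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(len(hist) - 1):
        w_b += hist[t]                    # background pixel count
        sum_b += t * hist[t]              # background intensity sum
        w_f = total - w_b                 # foreground pixel count
        if w_b == 0 or w_f == 0:
            continue
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark background near level 1, bright marks near 6.
hist = [10, 40, 10, 0, 0, 5, 30, 5]
t = otsu_threshold(hist)                  # splits the two modes
```

With a real image one would use the 256-bin grayscale histogram and apply the returned threshold to every pixel.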
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
s201, detecting key feature points of a target calibration image through a preset corner detection algorithm to obtain a plurality of key feature points;
s202, calibrating pixel extraction areas of each key feature point to obtain pixel extraction areas corresponding to each key feature point;
s203, carrying out region division on the pixel extraction region corresponding to each key feature point to obtain a plurality of division sub-regions corresponding to each pixel extraction region;
s204, performing pixel distribution analysis on a plurality of divided subareas corresponding to each pixel extraction area to obtain pixel distribution data of each pixel extraction area;
s205, screening a plurality of key feature points through pixel distribution data of each pixel extraction area to obtain a plurality of corner points, and respectively carrying out coordinate data calculation on each corner point to obtain a corner point coordinate set corresponding to the target calibration image;
s206, constructing an initial camera model through the corner coordinate set.
It should be noted that a preset corner detection algorithm is used to process the target calibration image in order to detect key feature points. These key feature points are typically corner points or other salient feature points on the target calibration plate, and the corner detection algorithm identifies their locations. For each detected key feature point, the surrounding pixels are extracted as a region; this region is a small image block containing the pixels near the key feature point. Each pixel extraction area is further divided into a plurality of sub-areas. This division may be based on pixel density, color information, or other characteristics, and the divided sub-regions allow a more accurate analysis of the pixel distribution. For each sub-region of a pixel extraction area, a pixel distribution analysis is performed. This includes counting the brightness or color distribution of the pixels, as well as the spatial relationships between pixels. These analyses help determine which sub-regions contain corner information. Based on the result of the pixel distribution analysis, sub-areas containing corner information are screened out, thereby determining which key feature points are actual corner points. Coordinate data is calculated for the screened corner points to obtain the corner coordinate set corresponding to the target calibration image. An initial camera model is then constructed using the set of corner coordinates. This camera model includes camera internal parameters (e.g., focal length, principal point position, distortion parameters) and camera external parameters (e.g., camera position and orientation). These parameters describe the manner and location of the camera when capturing images. For example, assume that a server is performing a holographic sand table simulation of a city plan and a city map on the surface of the sand table needs to be calibrated.
The server uses a specially designed city map calibration plate with obvious corner features. The server collects the target calibration image from the camera into the computer. The server detects corner points in the target calibration image, which correspond to the corners of the urban map calibration plate, using a corner point detection algorithm. For each detected corner point, the server extracts surrounding pixels, and then performs sub-region division and pixel distribution analysis on the pixels. By analyzing the luminance and spatial relationship of the pixels, the server determines which sub-regions contain the actual corner information. Once the server has determined the corner points, the server calculates their coordinate data. These coordinate data represent the locations of the corner points in the image. The server uses the coordinate data of these corner points to construct an initial camera model in order to accurately project the city map on the holographic sand table.
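One plausible form of the quadrant-based pixel distribution screening is sketched below: around each candidate point the window is split into four quadrants, and a calibration-plate corner shows a diagonal light/dark pattern. The 4x4 windows and the threshold are illustrative assumptions, not the embodiment's exact criterion:

```python
# Hedged sketch of screening candidate points (steps S202-S205): split the
# pixel extraction window into four quadrants; at a checkerboard corner the
# diagonal quadrants agree in brightness while adjacent quadrants differ.

def is_checker_corner(win, thresh=128):
    """win: square 2D list of grey values. True if the four quadrant means
    alternate light/dark diagonally, as at a calibration-plate corner."""
    n = len(win)
    h = n // 2

    def mean(r0, r1, c0, c1):
        vals = [win[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        return sum(vals) / len(vals)

    q = [mean(0, h, 0, h), mean(0, h, h, n),
         mean(h, n, 0, h), mean(h, n, h, n)]
    dark = [v < thresh for v in q]
    # diagonal quadrants must agree, adjacent ones must differ
    return dark[0] == dark[3] and dark[1] == dark[2] and dark[0] != dark[1]

corner = [[0, 0, 255, 255]] * 2 + [[255, 255, 0, 0]] * 2   # checker pattern
edge = [[0, 0, 255, 255]] * 4                              # vertical edge only
```

A point passing this test would be accepted as a corner and its coordinates added to the corner coordinate set; an edge point is rejected.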
In a specific embodiment, as shown in fig. 3, the process of executing step S206 may specifically include the following steps:
s301, carrying out size information analysis on the calibration plate to obtain size information data corresponding to the calibration plate;
s302, carrying out focal length calculation based on size information data and a corner coordinate set to obtain target focal length data;
s303, performing principal point position estimation on an imaging plane of the holographic sand table based on target focal length data to obtain a center pixel coordinate corresponding to the imaging plane;
s304, calculating distortion parameters through the center pixel coordinates to obtain target distortion parameters, and combining the target focal length data, the center pixel coordinates and the target distortion parameters into camera internal parameters;
s305, analyzing camera position data through the corner coordinate set to obtain camera position parameters, and simultaneously analyzing camera directions through the center pixel coordinate and the corner coordinate set to obtain corresponding camera direction data;
s306, combining the camera position parameters and the camera direction data into camera external parameters, and constructing an initial camera model through the camera internal parameters and the camera external parameters.
The calibration plate is analyzed for size information. This includes measuring the actual dimensions of the calibration plate, such as length and width. This information is critical to camera calibration as it will help the server determine the actual size of objects in the scene. Based on the size information data and the corner coordinate set, the server calculates target focal length data. The focal length refers to the focal length of the camera lens, which is one of the parameters inside the camera. When calculating the target focal length, the server uses the pixel coordinates of the corner points and the known dimensions of the calibration plate to estimate the focal length using the principle of similar triangles. After obtaining the target focal length data, the server estimates a principal point position on the imaging plane of the holographic sand table. The principal point is the center point on the imaging plane, represented by the center pixel coordinates. The estimate of the principal point position is typically related to the focal length and the position of the imaging plane. Subsequently, the server calculates a target distortion parameter. The distortion parameters are used to correct image distortion due to lens distortion. These parameters include radial distortion and tangential distortion. The server estimates these parameters by means of a specific calibration object for correction in subsequent view tracking. And combining the target focal length data, the center pixel coordinates and the target distortion parameters to obtain internal parameters of the camera. These parameters describe the internal characteristics of the camera such as focal length, principal point position and distortion parameters. Using the set of corner coordinates, the server analyzes the position parameters and the orientation parameters of the camera. 
The camera position parameter describes the position of the camera in three-dimensional space, while the camera orientation parameter describes the direction in which the camera takes an image. These parameters are typically estimated by triangulation and geometric analysis. And combining the camera position parameter and the direction parameter to obtain the external parameter of the camera. These parameters describe the position and orientation of the camera relative to the world coordinate system. By combining the internal and external parameters of the camera together, the server builds an initial camera model. This model can be used to describe the imaging process of the camera in different positions and orientations, thereby enabling view tracking and visualization. For example, assume that the server is using a holographic sand table to simulate city planning. The server calibrates the cameras so that the city planning model is accurately displayed on the sand table. The server first measures the actual dimensions of the calibration plate and determines the corner positions on the calibration plate. The server calculates the focal length of the camera from the corner coordinates and the known calibration plate dimensions. The server estimates the principal point location on the imaging plane, as well as the distortion parameters, to correct for distortion in the image. At the same time, the server also analyzes the position and orientation of the camera to determine the position and shooting orientation of the camera in the simulation. These parameters may help the server project the city planning model more accurately. The server combines the internal and external parameters of the camera together to construct an initial camera model. This camera model allows the server to implement perspective tracking and city planning simulation on the holographic sand table for better understanding and analysis of the city planning scheme.
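Combining the intrinsic quantities (focal length, principal point) with a simple extrinsic translation yields a pinhole projection, which is the essence of the initial camera model. Distortion and rotation are omitted here for brevity, and all numeric values are illustrative assumptions:

```python
# Hedged sketch of the assembled camera model from S301-S306: pinhole
# projection u = f*X/Z + cx, v = f*Y/Z + cy, with identity rotation and a
# translation t standing in for the external parameters.

def project(point_w, f, cx, cy, t):
    """Project a world point into pixel coordinates using the pinhole model
    with intrinsics (f, cx, cy) and a translation-only extrinsic t."""
    x = point_w[0] + t[0]
    y = point_w[1] + t[1]
    z = point_w[2] + t[2]
    return (f * x / z + cx, f * y / z + cy)

# A point 2 m in front of the camera and 0.5 m to the right, with
# f = 1000 px and principal point (640, 360):
u, v = project((0.5, 0.0, 2.0), f=1000.0, cx=640.0, cy=360.0, t=(0.0, 0.0, 0.0))
```

A complete model would additionally apply the rotation matrix before translation and the radial/tangential distortion terms after projection.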
In a specific embodiment, as shown in fig. 4, the process of performing step S103 may specifically include the following steps:
s401, performing first fitting error calculation on internal parameters of a camera through an objective function in a nonlinear optimization algorithm to obtain first fitting error data;
s402, performing second fitting error calculation on external parameters of the camera through an objective function to obtain second fitting error data;
s403, performing gradient calculation on the objective function through the first fitting error data and the second fitting error data to obtain corresponding objective gradient data;
s404, performing function curvature calculation on the objective function to obtain function curvature data corresponding to the objective function;
s405, performing correction parameter calculation on the initial camera model based on the target gradient data and the function curvature data to obtain a corresponding correction parameter set;
s406, correcting the initial camera model through the correction parameter set to obtain a target camera model.
Specifically, an objective function is defined that measures the fitting error of the camera model. This objective function should include a two-part fitting error: the first part is the fitting error of the camera internal parameters and the second part is the fitting error of the camera external parameters. The objective function is typically of the form: objective function = first partial fitting error + second partial fitting error. The first partial fitting error is calculated using the current camera internal parameters and calibration data. This typically involves mapping the corner points of the calibration plate into the image and comparing their differences with the actually detected corner points. This process will produce a first portion of fitting error data. And calculating a second partial fitting error by using the current external parameters of the camera and calibration data. This usually involves mapping the corner points of the calibration plate from the world coordinate system to the image coordinate system and comparing their differences with the actually detected corner points. This process will produce a second portion of the fitting error data. For non-linear optimization, the gradient of the objective function needs to be calculated. The gradient represents the rate of change of the objective function in the parameter space. For the camera internal and external parameters, their gradients are calculated separately. These gradient data will be used in the subsequent optimization process. The function curvature of the objective function is calculated, which refers to the second derivative of the objective function. The curvature of the function tells the server the shape of the curve of the objective function, which is important for selecting the optimization algorithm and determining the step size. 
Using the objective function, gradient data and function curvature data, a nonlinear optimization algorithm is applied to calculate correction values for the parameters. This procedure aims at minimizing the objective function, minimizing the fitting error. The optimization algorithm may be a gradient descent method, a Levenberg-Marquardt algorithm, or the like. The calculated set of correction parameters is applied to the initial camera model to update the internal and external parameters of the camera. The above steps are repeated until the value of the objective function converges to a satisfactory minimum or the upper limit of the number of iterations is reached. For example, assume that the server has a holographic sand table for simulation of city planning. The server has calibrated the camera, but due to calibration plate corner extraction errors and illumination variations, the internal and external parameters of the camera model need to be further optimized. The server defines an objective function comprising a first partial fitting error and a second partial fitting error. The first partial fitting error measures the quality of the fit of the camera internal parameters and the second partial fitting error measures the quality of the fit of the camera external parameters. The server calculates the gradient of the objective function and the curvature of the function, and then calculates the correction value of the parameter using a nonlinear optimization algorithm (e.g., the Levenberg-Marquardt algorithm). These correction values are applied to the initial camera model to update the internal and external parameters of the camera. By iterating this process, the server eventually obtains a more accurate camera model, which can be used for more accurate city planning simulation and view tracking. This ensures high accuracy and stability of the holographic sand table in complex environments.
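Steps S403-S405 can be sketched with numerical gradient and curvature (second derivative) estimates combined into a Newton-style correction. The quadratic toy objective is an illustrative assumption, not the embodiment's actual fitting error:

```python
# Hedged sketch of S403-S405 on a scalar objective: central-difference
# gradient, second-difference curvature, and the Newton correction
# -gradient/curvature that moves the parameter toward the minimum.

def gradient(fn, p, eps=1e-3):
    """Central-difference estimate of the objective's first derivative."""
    return (fn(p + eps) - fn(p - eps)) / (2 * eps)

def curvature(fn, p, eps=1e-3):
    """Second-difference estimate of the objective's second derivative."""
    return (fn(p + eps) - 2 * fn(p) + fn(p - eps)) / (eps * eps)

def newton_correction(fn, p):
    """Correction parameter that moves p toward the objective's minimum."""
    return -gradient(fn, p) / curvature(fn, p)

# Toy objective with its minimum at 800 (e.g. a squared focal-length error):
obj = lambda f: (f - 800.0) ** 2
p = 750.0
p += newton_correction(obj, p)   # one step suffices for a quadratic
```

For the multi-parameter camera model the gradient becomes a vector and the curvature a Hessian (or its Gauss-Newton approximation), which is exactly what the Levenberg-Marquardt update dampens.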
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections;
(2) Extracting space points from a plurality of elliptical arc segments to obtain a space point set;
(3) Performing secondary curve mapping on each elliptical arc section to obtain a plurality of secondary curves;
(4) Respectively carrying out plane element conversion on each secondary curve to obtain a unit circle on a conversion plane;
(5) Performing projective geometry relation analysis on the unit circle and the space point set to obtain a target projective geometry relation;
(6) And carrying out distance calculation on the plurality of elliptical arc segments through the target projective geometric relationship to obtain a distance data set, and merging the plurality of elliptical arc segments through the distance data set to obtain target fitting elliptical data.
Specifically, the outlines of the elliptical arc segments are extracted from the initial tracking image. This may be achieved by image processing techniques such as edge detection, contour extraction or shape detection; these contours are typically expressed in pixel coordinates. For each elliptical arc segment, the points on its contour are mapped into three-dimensional space to obtain a set of spatial points. This mapping involves the camera model and can be computed using the already calibrated camera parameters. Through this step, the server obtains a set of spatial points in three-dimensional space. For the spatial points on each elliptical arc segment, a secondary (quadratic) curve mapping is performed to project them onto the transformation plane. This mapping generally involves projecting three-dimensional points onto a plane such that the transformed points form a quadratic curve on that plane. Each quadratic curve is typically represented as a standard unit circle on the transformation plane, since the mathematical representation of an ellipse can be normalized to a unit circle. This step ensures that the data processed by the server are on the same scale, facilitating subsequent analysis. Using the principles of projective geometry, the position and shape of the unit circles on the plane are analyzed to infer the position, orientation and shape of the elliptical arc segments in three-dimensional space. This analysis requires consideration of the internal and external parameters of the camera, as well as the geometry of the ellipse on the transformation plane. By means of the projective geometric relationship, the distances between the different elliptical arc segments in three-dimensional space can be calculated. By comparing these distances, elliptical arc segments that are close together can be merged into one piece of target ellipse data. This merging may be performed according to a specific algorithm and threshold. 
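The plane-element conversion step — normalizing a quadratic curve to a unit circle on the transformation plane — can be illustrated for a single ellipse. The parameterization by center, semi-axes (a, b) and rotation angle is an assumption made for this sketch:

```python
import numpy as np

def ellipse_conic(center, axes, angle):
    """Quadratic-form matrix M with (p - c)^T M (p - c) = 1 on the ellipse."""
    a, b = axes
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1.0 / a**2, 1.0 / b**2]) @ R.T

def to_unit_circle(points, center, axes, angle):
    """Affine map sending ellipse points to the unit circle (plane-element conversion)."""
    M = ellipse_conic(center, axes, angle)
    # Symmetric square root of M via eigendecomposition: |M^(1/2)(p - c)| = 1
    w, V = np.linalg.eigh(M)
    T = V @ np.diag(np.sqrt(w)) @ V.T
    return (np.asarray(points, float) - center) @ T.T
```

Because every ellipse maps to the same unit circle under this transform, the converted data all lie on the same scale, which is exactly the property the analysis step above relies on.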
For example, assume that the server uses a holographic sand table for city planning simulation. The server has an initial tracking image that contains a plurality of elliptical arcs representing different urban structures, and its goal is to extract the shape and location information of these buildings. The server extracts the outline of each elliptical arc from the image; the points on these contours are mapped into three-dimensional space according to the known camera parameters, yielding a set of spatial points. The server performs a quadratic curve mapping on the spatial points of each elliptical arc segment, projecting them to a unit circle on the transformation plane, which ensures that the server's data are all on the same scale. Through projective geometry analysis, the server deduces the position, orientation and shape in three-dimensional space of the elliptical arc segment represented by each unit circle. The server can also calculate the distances between different elliptical arc segments and, according to a distance threshold, merge nearby arc segments into one piece of target ellipse data, each representing a building.
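The distance-based merging can be sketched as a small union-find pass over the per-arc ellipse fits. The (cx, cy, a, b) tuple format, the use of center distance as the merge criterion, and the threshold value are illustrative assumptions:

```python
import numpy as np

def merge_arc_segments(ellipses, dist_thresh):
    """Merge arc segments whose fitted ellipse centers lie within dist_thresh.

    ellipses: list of (cx, cy, a, b) tuples, one per detected arc segment
    (hypothetical format). Returns one averaged ellipse per merged group."""
    n = len(ellipses)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    centers = np.array([e[:2] for e in ellipses], float)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < dist_thresh:
                parent[find(i)] = find(j)   # arcs belong to the same ellipse

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(ellipses[i])
    # Average the parameters of each group into one target fitted ellipse
    return [tuple(np.mean(g, axis=0)) for g in groups.values()]
```

For instance, three arc fits clustered near one building footprint and two near another would collapse to two target ellipses, matching the building-extraction example above.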
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Carrying out ellipse parameter calculation on the target fitting ellipse data to obtain an ellipse parameter set, wherein the ellipse parameter set comprises: an ellipse major axis dataset, an ellipse minor axis dataset, and an ellipse center coordinate dataset;
(2) Performing motion vector analysis on the target fitting ellipse data through the ellipse major axis data set, the ellipse minor axis data set and the ellipse center coordinate data set to obtain a target motion vector;
(3) Calculating the target motion pose through the target motion vector to obtain motion pose data;
(4) And carrying out tracking view angle calculation on the target camera model through the motion pose data to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking through the tracking view angle data.
Specifically, the parameters of the ellipse, including the major axis, minor axis and center coordinates, are extracted from the target fitted ellipse data; these parameters describe the shape and location of the ellipse. By comparing the sets of ellipse parameters at different points in time or in different frames, the motion vector of the target can be calculated. This vector includes the displacement and direction change of the target in three dimensions. Based on the motion vector of the target, the motion pose of the target, including position and orientation, can be calculated; this calculation generally involves principles of three-dimensional geometry and kinematics, taking into account the elliptical motion trajectory and direction changes. From the motion pose data of the target, the camera view angle for tracking the target can be calculated, which typically requires consideration of the internal and external parameters of the camera as well as the position and orientation of the target in three-dimensional space. Using the tracking view angle data, the camera model can be controlled to follow the movement of the target, keeping the target visible in the field of view; this is achieved by adjusting the position and orientation of the camera so that the target always stays in a proper position. For example, assume the server is using a holographic sand table for city planning simulation and needs to track moving vehicles in the city. The server obtains successive tracking images from the sand table, each frame containing the elliptical shape of a vehicle. The server extracts the ellipse parameters of the vehicle in each tracking image, including the major axis, minor axis and center coordinates. By comparing the parameters in different frames, the server calculates a motion vector for each vehicle, which tells the server the speed and direction of the vehicle. 
Based on the motion vectors, the server calculates the motion pose of each vehicle, including its position and orientation. This enables the server to know the position of each vehicle in three dimensions. By tracking the perspective calculation, the server adjusts the virtual camera on the holographic sand table to move with the movement of the vehicle and maintain the visibility of the vehicle in the field of view. This ensures that the server can track the position and movement of vehicles in the city in real time, facilitating decision making in the areas of city planning and traffic management, etc.
Through the above steps, the server realizes motion analysis and view angle tracking based on the target fitted ellipse data in the holographic sand table, providing important data and visual support for city planning and simulation.
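The motion-vector and tracking-view-angle computations above can be sketched as follows. The planar center-displacement model for the motion vector and the pan/tilt parameterization of the camera view angle are simplifying assumptions made for illustration:

```python
import numpy as np

def motion_vector(prev_center, curr_center, dt):
    """Motion vector of the tracked target from ellipse centers in two frames."""
    v = (np.asarray(curr_center, float) - np.asarray(prev_center, float)) / dt
    speed = np.linalg.norm(v)              # magnitude of displacement per unit time
    heading = np.arctan2(v[1], v[0])       # direction of travel, radians
    return v, speed, heading

def tracking_angles(cam_pos, target_pos):
    """Pan/tilt angles that keep the target centred in the camera's view."""
    d = np.asarray(target_pos, float) - np.asarray(cam_pos, float)
    pan = np.arctan2(d[1], d[0])                     # rotation about vertical axis
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))    # elevation above horizontal
    return pan, tilt
```

In the vehicle-tracking example, each new frame would update the target position from the fitted ellipse center, and the virtual camera's pan/tilt would be re-computed so the vehicle remains in view.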
The method for tracking the viewing angle based on the holographic sand table in the embodiment of the present invention is described above, and the following describes the viewing angle tracking device based on the holographic sand table in the embodiment of the present invention, referring to fig. 5, one embodiment of the viewing angle tracking device based on the holographic sand table in the embodiment of the present invention includes:
the processing module 501 is configured to acquire an image of a calibration plate in a preset holographic sand table through a preset image acquisition device, obtain an initial calibration image, and perform preprocessing on the initial calibration image to obtain a target calibration image;
the extracting module 502 is configured to extract corner coordinates of a calibration plate of the target calibration image, obtain a set of corner coordinates corresponding to the target calibration image, and construct an initial camera model according to the set of corner coordinates;
the calculating module 503 is configured to calculate correction parameters of the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correct the initial camera model through the correction parameter set to obtain a target camera model;
The acquisition module 504 is configured to acquire an image of a tracking target preset in the holographic sand table based on the target camera model, so as to obtain an initial tracking image;
the merging module 505 is configured to extract an arc segment contour of the initial tracking image to obtain a plurality of elliptical arc segments, perform distance calculation on the plurality of elliptical arc segments through a preset geometric calculation algorithm to obtain a distance data set, and merge the plurality of elliptical arc segments through the distance data set to obtain target fitting elliptical data;
the analysis module 506 is configured to perform motion vector analysis on the target fitted ellipse data to obtain a target motion vector, perform tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and control the target camera model to perform view angle tracking according to the tracking view angle data.
Through the cooperation of the components, an image acquisition device is used for acquiring an image of a calibration plate in the holographic sand table to obtain an initial calibration image, and the initial calibration image is preprocessed to obtain a target calibration image; extracting corner coordinates of a calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set; carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model; based on a target camera model, acquiring an image of a tracking target preset in a holographic sand table to obtain an initial tracking image; extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, performing distance calculation on the elliptical arc sections through a geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data; and performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, performing tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to perform view angle tracking through the tracking view angle data. In the scheme of the application, the high-precision camera calibration can be realized by carrying out image acquisition and calibration on the calibration plate in the holographic sand table, and the accuracy of view angle tracking is ensured. 
The camera model is corrected using a nonlinear optimization algorithm, enabling real-time view angle tracking. This is critical for virtual reality and augmented reality applications, which require low latency and high frame rates to provide a smooth user experience. The view angle tracking in this scheme is adaptive: the tracking view angle is adjusted in real time according to the target motion vector, so different target movement modes can be accommodated. The scheme also realizes data fusion by extracting distance data from a plurality of elliptical arc sections and merging them, which increases the robustness of tracking, reduces the effects of noise and interference, and improves the accuracy of target detection.
The embodiment of the holographic sand table-based viewing angle tracking device according to the present invention is described in detail above in terms of a modularized functional entity in fig. 5, and the embodiment of the holographic sand table-based viewing angle tracking device according to the present invention is described in detail below in terms of hardware processing.
Fig. 6 is a schematic structural diagram of a holographic sand table based viewing angle tracking device 600 according to an embodiment of the present invention. The holographic sand table based viewing angle tracking device 600 may vary considerably with configuration or performance, and may include one or more processors (CPUs) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored on the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the holographic sand table based viewing angle tracking device 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations from the storage medium 630 on the holographic sand table based viewing angle tracking device 600.
The holographic sand table based viewing angle tracking device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the device structure shown in fig. 6 does not limit the holographic sand table based viewing angle tracking device, which may include more or fewer components than shown, or combine certain components, or arrange the components differently.
The invention also provides a holographic sand table-based viewing angle tracking device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the holographic sand table-based viewing angle tracking method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium or a volatile computer readable storage medium. The computer readable storage medium stores instructions which, when run on a computer, cause the computer to perform the steps of the holographic sand table based viewing angle tracking method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The viewing angle tracking method based on the holographic sand table is characterized by comprising the following steps of:
image acquisition is carried out on a calibration plate in a preset holographic sand table through a preset image acquisition device to obtain an initial calibration image, and meanwhile, the initial calibration image is preprocessed to obtain a target calibration image;
extracting corner coordinates of a calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set;
carrying out correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model;
Based on the target camera model, acquiring an image of a tracking target preset in the holographic sand table to obtain an initial tracking image;
extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, simultaneously, carrying out distance calculation on the elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data;
and carrying out motion vector analysis on the target fitting ellipse data to obtain a target motion vector, carrying out tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking through the tracking view angle data.
2. The holographic sand table-based viewing angle tracking method of claim 1, wherein the image acquisition is performed on a calibration plate in a preset holographic sand table by a preset image acquisition device to obtain an initial calibration image, and the preprocessing is performed on the initial calibration image to obtain a target calibration image, and the method comprises the following steps:
Image acquisition is carried out on a calibration plate in a preset holographic sand table through a preset image acquisition device, so that an initial calibration image is obtained;
performing gray level conversion processing on the initial calibration image to obtain a corresponding gray level image;
denoising the gray level image through a preset Gaussian filtering algorithm to obtain a denoised image;
and carrying out binarization processing on the denoising image through a preset maximum inter-class variance method to obtain the target calibration image.
3. The holographic sand table-based viewing angle tracking method of claim 1, wherein the extracting the corner coordinates of the calibration plate for the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set, comprises:
detecting key feature points of the target calibration image through a preset corner detection algorithm to obtain a plurality of key feature points;
respectively calibrating a pixel extraction area of each key feature point to obtain a pixel extraction area corresponding to each key feature point;
performing region division on the pixel extraction region corresponding to each key feature point to obtain a plurality of division sub-regions corresponding to each pixel extraction region;
Carrying out pixel distribution analysis on a plurality of divided subareas corresponding to each pixel extraction area to obtain pixel distribution data of each pixel extraction area;
screening a plurality of key feature points through pixel distribution data of each pixel extraction area to obtain a plurality of corner points, and respectively carrying out coordinate data calculation on each corner point to obtain a corner point coordinate set corresponding to the target calibration image;
and constructing an initial camera model through the angular point coordinate set.
4. A holographic sand table based view angle tracking method as claimed in claim 3, wherein said constructing an initial camera model from said set of corner coordinates comprises:
performing size information analysis on the calibration plate to obtain size information data corresponding to the calibration plate;
performing focal length calculation based on the size information data and the angular point coordinate set to obtain target focal length data;
performing principal point position estimation on an imaging plane of the holographic sand table based on the target focal length data to obtain a center pixel coordinate corresponding to the imaging plane;
calculating distortion parameters through the central pixel coordinates to obtain target distortion parameters, and combining the target focal length data, the central pixel coordinates and the target distortion parameters into camera internal parameters;
Performing camera position data analysis through the angular point coordinate set to obtain camera position parameters, and performing camera direction analysis through the central pixel coordinate and the angular point coordinate set to obtain corresponding camera direction data;
and merging the camera position parameters and the camera direction data into camera external parameters, and constructing the initial camera model through the camera internal parameters and the camera external parameters.
5. The holographic sand table-based viewing angle tracking method of claim 4, wherein the performing correction parameter calculation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model by the correction parameter set to obtain a target camera model, comprises:
performing first fitting error calculation on the internal parameters of the camera through an objective function in the nonlinear optimization algorithm to obtain first fitting error data;
performing second fitting error calculation on the external parameters of the camera through the objective function to obtain second fitting error data;
carrying out gradient calculation on the objective function through the first fitting error data and the second fitting error data to obtain corresponding objective gradient data;
Performing function curvature calculation on the objective function to obtain function curvature data corresponding to the objective function;
performing correction parameter calculation on the initial camera model based on the target gradient data and the function curvature data to obtain a corresponding correction parameter set;
and correcting the initial camera model through the correction parameter set to obtain a target camera model.
6. The holographic sand table-based viewing angle tracking method of claim 1, wherein the extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections, simultaneously performing distance calculation on the plurality of elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and combining the plurality of elliptical arc sections through the distance data set to obtain target fitting elliptical data, comprises:
extracting the arc section outline of the initial tracking image to obtain a plurality of elliptical arc sections;
extracting space points from a plurality of elliptical arc segments to obtain a space point set;
performing secondary curve mapping on each elliptical arc section to obtain a plurality of secondary curves;
performing plane element conversion on each secondary curve to obtain a unit circle on a conversion plane;
Performing projective geometry relation analysis on the unit circle and the space point set to obtain a target projective geometry relation;
and carrying out distance calculation on a plurality of elliptical arc segments through the target projective geometric relationship to obtain a distance data set, and merging the plurality of elliptical arc segments through the distance data set to obtain target fitting elliptical data.
7. The holographic sand table-based view tracking method of claim 1, wherein the performing motion vector analysis on the target fitting ellipse data to obtain a target motion vector, performing tracking view calculation on the target camera model according to the target motion vector to obtain tracking view data, and controlling the target camera model to perform view tracking according to the tracking view data, comprises:
and carrying out ellipse parameter calculation on the target fitting ellipse data to obtain an ellipse parameter set, wherein the ellipse parameter set comprises: an ellipse major axis dataset, an ellipse minor axis dataset, and an ellipse center coordinate dataset;
performing motion vector analysis on the target fitting ellipse data through an ellipse major axis data set, an ellipse minor axis data set and an ellipse center coordinate data set to obtain a target motion vector;
Calculating the target motion pose according to the target motion vector to obtain motion pose data;
and carrying out tracking view angle calculation on the target camera model according to the motion pose data to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking according to the tracking view angle data.
8. A holographic sand table-based viewing angle tracking device, comprising:
the processing module is used for acquiring images of the calibration plates in the preset holographic sand table through the preset image acquisition device to obtain an initial calibration image, and preprocessing the initial calibration image to obtain a target calibration image;
the extraction module is used for extracting corner coordinates of the calibration plate of the target calibration image to obtain a corner coordinate set corresponding to the target calibration image, and constructing an initial camera model through the corner coordinate set;
the computing module is used for carrying out correction parameter computation on the initial camera model according to a nonlinear optimization algorithm to obtain a correction parameter set, and correcting the initial camera model through the correction parameter set to obtain a target camera model;
The acquisition module is used for acquiring images of tracking targets preset in the holographic sand table based on the target camera model to obtain initial tracking images;
the merging module is used for extracting arc section contours of the initial tracking images to obtain a plurality of elliptical arc sections, calculating distances of the elliptical arc sections through a preset geometric calculation algorithm to obtain a distance data set, and merging the elliptical arc sections through the distance data set to obtain target fitting elliptical data;
the analysis module is used for carrying out motion vector analysis on the target fitting ellipse data to obtain a target motion vector, carrying out tracking view angle calculation on the target camera model according to the target motion vector to obtain tracking view angle data, and controlling the target camera model to carry out view angle tracking through the tracking view angle data.
9. A holographic sand table based viewing angle tracking device, the holographic sand table based viewing angle tracking device comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invoking the instructions in the memory to cause the holographic sand table based perspective tracking device to perform the holographic sand table based perspective tracking method of any one of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the holographic sand table based perspective tracking method of any one of claims 1-7.
CN202311415264.9A 2023-10-30 2023-10-30 Viewing angle tracking method, device, equipment and storage medium based on holographic sand table Active CN117152272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311415264.9A CN117152272B (en) 2023-10-30 2023-10-30 Viewing angle tracking method, device, equipment and storage medium based on holographic sand table


Publications (2)

Publication Number Publication Date
CN117152272A true CN117152272A (en) 2023-12-01
CN117152272B CN117152272B (en) 2024-01-19

Family

ID=88884753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311415264.9A Active CN117152272B (en) 2023-10-30 2023-10-30 Viewing angle tracking method, device, equipment and storage medium based on holographic sand table

Country Status (1)

Country Link
CN (1) CN117152272B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037780A (en) * 2024-04-10 2024-05-14 武汉大水云科技有限公司 River surface flow measuring method and measuring device based on video scanning

Citations (2)

Publication number Priority date Publication date Assignee Title
US10504003B1 (en) * 2017-05-16 2019-12-10 State Farm Mutual Automobile Insurance Company Systems and methods for 3D image distification
CN116664620A (en) * 2023-07-12 2023-08-29 深圳优立全息科技有限公司 Picture dynamic capturing method and related device based on tracking system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AKSOY, M. et al.: "Hybrid prospective and retrospective head motion correction to mitigate cross-calibration errors", Magnetic Resonance in Medicine, pages 1237-1251 *
DANG, Yanhui et al.: "A Method for Tracking Eye Movement Based on Binocular Stereo Vision", Electronic World (《电子世界》), no. 11, pages 103-105 *


Also Published As

Publication number Publication date
CN117152272B (en) 2024-01-19

Similar Documents

Publication Publication Date Title
JP6095018B2 (en) Detection and tracking of moving objects
CN111899334B (en) Visual synchronous positioning and map building method and device based on point-line characteristics
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
EP2858008B1 (en) Target detecting method and system
KR101837407B1 (en) Apparatus and method for image-based target tracking
CN109961506A Local scene three-dimensional reconstruction method incorporating an improved Census transform
US10366501B2 (en) Method and apparatus for performing background image registration
CN117152272B (en) Viewing angle tracking method, device, equipment and storage medium based on holographic sand table
CN108198201A Multi-object tracking method, terminal device and storage medium
CN104704384A (en) Image processing method, particularly used in a vision-based localization of a device
KR20120138627A (en) A face tracking method and device
CN108362205B (en) Space distance measuring method based on fringe projection
CN113240656B (en) Visual positioning method and related device and equipment
CN116778094B (en) Building deformation monitoring method and device based on optimal viewing angle shooting
CN117870659A Visual-inertial integrated navigation algorithm based on point-line features
Shi et al. A method for detecting pedestrian height and distance based on monocular vision technology
Fangfang et al. Real-time lane detection for intelligent vehicles based on monocular vision
CN112529943B (en) Object detection method, object detection device and intelligent equipment
KR20220151572A (en) Method and System for change detection and automatic updating of road marking in HD map through IPM image and HD map fitting
CN117115434A (en) Data dividing apparatus and method
CN108596950B (en) Rigid body target tracking method based on active drift correction
Fuersattel et al. Geometric primitive refinement for structured light cameras
KR102049666B1 (en) Method for Estimating 6-DOF Relative Displacement Using Vision-based Localization and Apparatus Therefor
KR101907057B1 (en) Device and Method for Depth Information Compensation by Sphere Surface Modeling
CN113570535B (en) Visual positioning method, and related device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant