CN111540017B - Method, device, equipment and storage medium for optimizing camera position variable - Google Patents

Method, device, equipment and storage medium for optimizing camera position variable Download PDF

Info

Publication number
CN111540017B
CN111540017B CN202010345583.7A CN202010345583A CN111540017B CN 111540017 B CN111540017 B CN 111540017B CN 202010345583 A CN202010345583 A CN 202010345583A CN 111540017 B CN111540017 B CN 111540017B
Authority
CN
China
Prior art keywords
camera
path length
layer
truss
dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010345583.7A
Other languages
Chinese (zh)
Other versions
CN111540017A (en
Inventor
洪智慧
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202010345583.7A priority Critical patent/CN111540017B/en
Publication of CN111540017A publication Critical patent/CN111540017A/en
Application granted granted Critical
Publication of CN111540017B publication Critical patent/CN111540017B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the field of motion capture, and discloses a method, a device, equipment and a storage medium for optimizing camera position variables, which are used for ensuring the effectiveness and uniformity of random initialization population, reducing the complexity of camera position variables and variable calculation and improving the calculation efficiency of subsequent genetic algorithm cross variation. The method for optimizing the camera position variable comprises the following steps: acquiring a top view of an optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are centrosymmetric, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras; establishing an x-y plane coordinate system for the top view; setting a starting point on the outer square frame, and collecting the path length of each camera layer by layer based on the starting point; and acquiring perimeter information of the outer frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, and mapping and storing the path length and the three-dimensional space coordinates.

Description

Method, device, equipment and storage medium for optimizing camera position variable
Technical Field
The present invention relates to the field of motion capture, and in particular, to a method, apparatus, device, and storage medium for optimizing camera position variables.
Background
The optical motion capturing system is based on a series of optical cameras installed at different positions and angles, and tracks optical positioning mark points at high speed and precisely by means of computer vision principle technology, so that the capturing of the whole body motion of a human body is completed.
In the existing optical motion capture scene, assuming that a given target is to install n cameras in the optical motion capture scene, the installation of the cameras is to determine the pose of each camera, and each camera pose includes 5 parameters, namely, the X coordinate of the camera, the Y coordinate of the camera, the Z coordinate of the camera, the horizontal angle of the camera and the pitch angle of the camera, which are respectively denoted as CameraX, cameraY, cameraZ, cameraH, cameraV. CameraX, cameraY, cameraZ is a constraint because the cameras are fixed to the truss layers. In the prior art, it is generally adopted to randomly sample CameraX, cameraY, cameraZ in the whole space, then judge whether the sample data is on the truss or not, and if the sample data is not removed on the truss, a large amount of useless data is generated, so that the effectiveness and uniformity of the sample are poor, and the sample collection efficiency is low.
Disclosure of Invention
The invention mainly aims to solve the problem that the effectiveness and uniformity of a sample are poor due to the fact that the existing mode is used for collecting sample data.
The first aspect of the present invention provides a method of optimizing camera position variables, comprising: acquiring a top view of an optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are centrosymmetric, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for laying out a plurality of cameras; setting a coordinate origin for the top view, and establishing an x-y plane coordinate system based on the coordinate origin; setting a starting point on the outer box, and collecting the path length of each camera layer by layer based on the starting point; and acquiring perimeter information of an outer frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation.
Optionally, in a first implementation manner of the first aspect of the present invention, the setting an origin of coordinates for the top view, and establishing an x-y plane coordinate system based on the origin of coordinates, includes: acquiring a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame; setting the central symmetry point as a coordinate origin, and establishing an x-y plane coordinate system based on the coordinate origin.
Optionally, in a second implementation manner of the first aspect of the present invention, the setting a starting point on the outer box, and collecting, layer by layer, a path length of each camera based on the starting point includes: setting any vertex on the outer box as a starting point; traversing the outer square frame according to a preset direction and a preset interval, and acquiring the path length from the starting point to each camera layer by layer, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0.
Optionally, in a third implementation manner of the first aspect of the present invention, the obtaining perimeter information of the outer box, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, and mapping and storing the path length and the three-dimensional space coordinates, where the path length is used to replace the three-dimensional space coordinates to perform calculation of genetic algorithm cross variation, and includes: acquiring length information of an outer frame and width information of the outer frame, and calculating perimeter information of the outer frame according to the length information and the width information; and acquiring the truss layer number corresponding to each camera, calculating the three-dimensional space coordinate of each camera by adopting the x-y plane coordinate system, the perimeter information, the corresponding truss layer number and the path length, mapping and storing the path length and the three-dimensional space coordinate, wherein the path length is used for replacing the three-dimensional space coordinate to calculate the genetic algorithm cross variation.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the obtaining the truss layer number corresponding to each camera, calculating a three-dimensional space coordinate of each camera by using the x-y plane coordinate system, the perimeter information, the corresponding truss layer number, and the path length, and mapping and storing the path length and the three-dimensional space coordinate, where the path length is used to replace the three-dimensional space coordinate to perform calculation of a genetic algorithm cross variation, and includes: acquiring truss layer numbers corresponding to each camera, reading truss height information corresponding to each camera according to the corresponding truss layer numbers, and setting the truss height information corresponding to each camera as the z coordinate of each camera; acquiring the coordinate origin from the x-y plane coordinate system, and calculating an x coordinate and a y coordinate of each camera by adopting the coordinate origin, the perimeter information and the path length; combining the x coordinate, the y coordinate and the z coordinate into three-dimensional space coordinates of each camera, and mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate genetic algorithm cross variation.
A second aspect of the present invention provides an apparatus for optimizing camera position variables, comprising: the acquisition module is used for acquiring a top view of the optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are symmetrical in center, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras; the setting module is used for setting a coordinate origin for the top view and establishing an x-y plane coordinate system based on the coordinate origin; the acquisition module is used for setting a starting point on the outer square frame and acquiring the path length of each camera layer by layer based on the starting point; the calculation module is used for acquiring perimeter information of the outer square frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, and calculating the genetic algorithm cross variation by using the path length instead of the three-dimensional space coordinates.
Optionally, in a first implementation manner of the second aspect of the present invention, the setting module is specifically configured to: acquiring a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame; setting the central symmetry point as a coordinate origin, and establishing an x-y plane coordinate system based on the coordinate origin.
Optionally, in a second implementation manner of the second aspect of the present invention, the acquisition module is specifically configured to: setting any vertex on the outer box as a starting point; traversing the outer square frame according to a preset direction and a preset interval, and acquiring the path length from the starting point to each camera layer by layer, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0.
Optionally, in a third implementation manner of the second aspect of the present invention, the calculating module includes: the first calculating unit is used for acquiring the length information of the outer square frame and the width information of the outer square frame, and calculating the circumference information of the outer square frame according to the length information and the width information; the second calculation unit is used for obtaining the truss layer number corresponding to each camera, calculating the three-dimensional space coordinate of each camera by adopting the x-y plane coordinate system, the perimeter information, the corresponding truss layer number and the path length, mapping and storing the path length and the three-dimensional space coordinate, and calculating the genetic algorithm cross variation by using the path length instead of the three-dimensional space coordinate.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the second computing unit is specifically configured to: acquiring truss layer numbers corresponding to each camera, reading truss height information corresponding to each camera according to the corresponding truss layer numbers, and setting the truss height information corresponding to each camera as the z coordinate of each camera; acquiring the coordinate origin from the x-y plane coordinate system, and calculating an x coordinate and a y coordinate of each camera by adopting the coordinate origin, the perimeter information and the path length; combining the x coordinate, the y coordinate and the z coordinate into three-dimensional space coordinates of each camera, and mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate genetic algorithm cross variation.
A third aspect of the present invention provides an apparatus for optimizing camera position variables, comprising: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the device that optimizes the camera position variable to perform the method of optimizing the camera position variable described above.
A fourth aspect of the invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described method of optimizing camera position variables.
In the technical scheme provided by the invention, a top view of an optical motion capture scene is obtained, wherein the top view comprises an inner square frame and an outer square frame which are centrosymmetric, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras; setting a coordinate origin for the top view, and establishing an x-y plane coordinate system based on the coordinate origin; setting a starting point on the outer box, and collecting the path length of each camera layer by layer based on the starting point; and acquiring perimeter information of an outer frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation. In the embodiment of the invention, the path length of the collected samples (each camera) is obtained by sampling in each layer of truss according to the preset direction and the preset interval, so that the effectiveness and uniformity of the random initialization population are ensured, and the efficiency of collecting the samples is improved; meanwhile, three-dimensional space coordinates of the acquired samples are calculated, a mapping relation between the path length and the three-dimensional space coordinates is constructed, the path length is adopted to replace the three-dimensional space coordinates, the complexity of calculating camera position variables and variables is reduced, and the calculation efficiency of the subsequent genetic algorithm cross variation is improved.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a method of optimizing camera position variables in an embodiment of the present invention;
FIG. 2 is a schematic diagram of optimizing camera position variables in an optical motion capture scene in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of another embodiment of a method of optimizing camera position variables in an embodiment of the present invention;
FIG. 4 is a schematic diagram of one embodiment of an apparatus for optimizing camera position variables in an embodiment of the present invention;
FIG. 5 is a schematic diagram of another embodiment of an apparatus for optimizing camera position variables in an embodiment of the present invention;
FIG. 6 is a schematic diagram of one embodiment of an apparatus for optimizing camera position variables in an embodiment of the invention.
Detailed Description
The embodiment of the invention provides a method, a device, equipment and a storage medium for optimizing camera position variables, which are used for obtaining the path length of a collected sample (each camera) by sampling in each layer of truss according to a preset direction and a preset interval, ensuring the effectiveness and uniformity of random initialization population and improving the efficiency of collecting the samples.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and one embodiment of a method for optimizing camera position variables in an embodiment of the present invention includes:
101. and obtaining a top view of the optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are symmetrical in center, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras.
The optical motion capture scene monitors and tracks target objects in a motion area of a performer through cameras which are pre-deployed on at least one layer of truss, and then the task of motion capture is completed. The current common optical motion capture mostly resides in the computer vision principle. While the top view of the optical motion capture scene includes a projection of the truss and a projection of the actor's motion area, i.e., as shown in fig. 2, an outer box 201 and an inner box 202 are taken separately, while the outer box 201 includes the inner box 202 and the two boxes are center symmetrical. A plurality of cameras are distributed over at least one truss.
It should be noted that at least one truss layer may be a single truss layer, a two truss layers or a multi-truss layer, and each truss layer has a different height, for example, the two truss layers, in which the truss layers are arranged in the order of height from high to low, the truss layer is 8 meters, and the truss layer is 5 meters. But both project into the same outer square frame in top view of the optical motion capture scene, e.g., the length and width of the single-layer truss are 6 meters and 4 meters, respectively, and the corresponding outer square frame aspect ratio is also 6:4, in meters.
It is to be understood that the execution body of the present invention may be a device for optimizing a camera position variable, and may also be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as an execution main body as an example.
102. The origin of coordinates is set for the top view and an x-y plane coordinate system is established based on the origin of coordinates.
In an actual application scenario, the target camera is installed in a truss with a corresponding layer number, and the truss may be a single-layer truss, which is not limited herein. Further, the server generates an x-y plane coordinate system based on the top view, where the x-y plane coordinate system includes a coordinate origin, an x axis and a y axis, and the coordinate origin may be set at any vertex in the outer box, or may be set at a central symmetry point in the top view, and in particular, the method is not limited herein, for example, the server acquires the central symmetry point of the inner box; as shown in fig. 2, the server sets the central symmetry point of the inner box as the origin of coordinates O, and establishes an x-y plane coordinate system based on the origin of coordinates O; the server may also obtain any vertex of the outer box, set any vertex of the outer box as the origin of coordinates O, and establish an x-y plane coordinate system based on the origin of coordinates O, which is not limited herein.
103. A starting point is set on the outer box and the path length of each camera is acquired based on the starting point.
Further, as shown in fig. 2, the server sets a starting point for the left lower corner vertex a on the outer frame, traverses each camera along the outer frame according to a preset interval based on the starting point anticlockwise, and counts the current moving times when moving a preset interval from the starting point, wherein the initial value of the current moving times is 0; when the camera moves to the position point of the target camera, the current moving times are obtained, the current moving times are multiplied by the preset spacing to obtain the current moving step length, and the current moving step length is set to be the path length of each camera, therefore, taking a single-layer truss as an example, when the length and the width of an outer square frame are respectively 6 meters and 4 meters, and the preset spacing is 0.5, the server moves 2 preset spacing anticlockwise from the starting point to obtain the path length of the position point a of the camera to be 1 meter; the server moves 15 preset intervals anticlockwise from the starting point to obtain the path length of the camera position point b to be 7.5 meters; the server moves 28 preset intervals anticlockwise from the starting point to obtain the path length of the camera position point c as 14 meters; the server moves 15 preset intervals anticlockwise from the starting point to obtain the path length of the camera position point c as 14 meters; the server moves 37 preset intervals anticlockwise from the starting point to obtain the path length of the camera position point d as 18.5 meters, namely, data sampling is carried out according to the interval of 0.5, and the path length of each camera is obtained.
Further, for a two-layer truss or a multi-layer truss, when the server has not traversed to the corresponding camera position point after traversing one perimeter along the outer square from the starting point, the server will then traverse the truss of the adjacent layer and make a second traversal from the starting point position along the corresponding outer square, and so on, to collect the path length of each camera layer by layer in the order of the height corresponding to the truss layer number from top to bottom or from bottom to top. For example, the two-layer truss, the range of the path length of each camera in the first layer truss is between 0 and the perimeter information of one outer frame, and the range of the path length of each camera in the second layer truss is between the perimeter information of one outer frame and the perimeter information of two outer frames. For example, if the perimeter information of one outer frame is 20 meters, the range of values corresponding to the path length of each camera of the first truss layer is [0, 20 ], and the range of values corresponding to the path length of each camera of the second truss layer is [20, 40).
The camera position variables include camera X-coordinate, camera Y-coordinate, and camera Z-coordinate, which are denoted herein as CameraX, cameraY and CameraZ, respectively. Because the cameras are fixed on the trusses, the height of each truss is fixed, so the camera z is constant, while for multi-layered trusses CameraX, cameraY, cameraZ can be combined into one variable, i.e. the path length of each camera, due to the constraint relationship between camera x and camera y for the same camera.
104. And acquiring perimeter information of the outer square frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation.
The server calculates three-dimensional space coordinates of each camera according to an x-y plane coordinate system, perimeter information and path length, for example, the server adopts trus Perimeter to represent perimeter information of an outer square frame, for two-layer trusses, if the path length is between 0 and trus Perimeter, the server determines that the camera is positioned on a first-layer truss, and the server assigns z coordinate of each camera as height information of the first-layer truss; if the path length is between trus per and 2 x trus per, the server determines that the cameras are located on the second truss layer, and the server assigns the z coordinate of each camera to be the height information of the second truss layer. The server can calculate the x-coordinate and y-coordinate of each camera according to the origin of coordinates on different trusses. Taking a range of values between 0 and truss Perimer corresponding to the path length of each camera on the first truss layer as an example, in an x-y plane coordinate system, calculating the three-dimensional space coordinate of each camera according to the perimeter information and the path length. For example, as shown in fig. 2, taking a single-layer frame as an example, starting from a starting point a by adopting a counterclockwise mode, according to the ratio of the path length from a to the starting point to the perimeter information of the whole outer frame, the x coordinate and the y coordinate of a point a of the camera can be determined based on the origin of coordinates, specifically, the server sets the proportional area corresponding to each of four side lengths AB, BC, CD and DA along the counterclockwise direction according to the side length information (length information and width information) of the outer frame and the perimeter information of the outer frame, the perimeter information of the whole outer frame is 20 meters, the length information corresponding to each of the side lengths AB and CD is 6 meters, the width information corresponding to the side length BC and the side length DA is 4 meters, the proportional areas corresponding to the four side lengths AB, BC, CD and DA are respectively [0,0.3], [0.3,0.5], [0.5,0.8], [0.8,1], when the path length from a to the starting point is obtained to be 1, division operation is performed on 1 and 20 to obtain a proportional value of 0.05,0.05 which is between [0,0.3], a is determined to be located on the side length AB, when the central symmetry point O is the origin of coordinates (0, 0), coordinates of the four vertexes A, B, C and D are respectively (-3, -2), (3, 2) and (-3, 2), so that y coordinates corresponding to a are determined to be-2, and x coordinates of a are determined to be-2 according to 0.05, [0,0.3] and each side length information. Further, the server determines three-dimensional space coordinates (x, y, z) of each camera, and maps and stores path lengths and three-dimensional space coordinates, wherein the path lengths are used for replacing the three-dimensional space coordinates to calculate genetic algorithm cross variation.
Optionally, for a two-layer truss or a multi-layer truss, the server obtains the total perimeter information of all trusses; the service performs proportional operation on the path length of each camera and the total perimeter information of all trusses, judges the actual position point of each camera, and determines the corresponding x coordinate and y coordinate based on the actual position point.
It will be appreciated that the uniformity of the random initialisation population can be ensured by the above operations, by which path length Dist is obtained for the crossover variation of the subsequent genetic algorithm, which is a continuous value in the [0, trus level x trus period ], and therefore the uniformity of the crossover variation of the subsequent genetic algorithm (the latter crossover variation being performed on Dist), where trus period represents the perimeter information of the single-layer truss and trus level represents the total number of layers of the truss. Meanwhile, the server restores the actual CameraX, cameraY, cameraZ data to Dist for subsequent calculation. In addition, as the following genetic algorithm performs cross mutation, cameraX, cameraY, cameraZ variables of each camera are only combined into one Dist variable, the number of variables optimized by the genetic algorithm is greatly reduced, the optimization time is saved, and the optimization efficiency is improved.
In the embodiment of the invention, the path length of the collected samples (each camera) is obtained by sampling in each layer of truss according to the preset direction and the preset interval, so that the effectiveness and uniformity of the random initialization population are ensured, and the efficiency of collecting the samples is improved; meanwhile, three-dimensional space coordinates of the acquired samples are calculated, a mapping relation between the path length and the three-dimensional space coordinates is constructed, the path length is adopted to replace the three-dimensional space coordinates, the complexity of calculating camera position variables and variables is reduced, and the calculation efficiency of the subsequent genetic algorithm cross variation is improved.
Referring to fig. 3, another embodiment of a method for optimizing camera position variables in an embodiment of the present invention includes:
301. and obtaining a top view of the optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are symmetrical in center, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras.
The optical motion capture scene monitors and tracks target objects in a motion area of a performer through cameras which are pre-deployed on at least one layer of truss, and then the task of motion capture is completed. The current common optical motion capture mostly resides in the computer vision principle. While the top view of the optical motion capture scene includes a projection of the truss and a projection of the actor's motion area, i.e., an outer box and an inner box are taken separately, while the outer box includes an inner box and the two boxes are centrosymmetric. A plurality of cameras are distributed over at least one truss.
It should be noted that at least one truss layer may be a single truss layer, a two truss layers or a multi-truss layer, and each truss layer has a different height, for example, the two truss layers, in which the truss layers are arranged in the order of height from high to low, the truss layer is 8 meters, and the truss layer is 5 meters. But both project into the same outer square frame in top view of the optical motion capture scene, e.g., the length and width of the single-layer truss are 6 meters and 4 meters, respectively, and the corresponding outer square frame aspect ratio is also 6:4, in meters.
302. The origin of coordinates is set for the top view and an x-y plane coordinate system is established based on the origin of coordinates.
Specifically, the server acquires a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame; the server sets the central symmetry point as the origin of coordinates and establishes an x-y plane coordinate system based on the origin of coordinates, e.g., the origin of coordinates is O (0, 0). Further, the server uses the x-axis and the y-axis in the x-y plane coordinate system to represent the position coordinates of each camera.
303. A starting point is set on the outer box and the path length of each camera is acquired based on the starting point.
Specifically, the server sets any one vertex on the outer box as a starting point, for example, the server sets a left lower corner vertex a on the outer box as the starting point; and obtaining the path length from the starting point to each camera according to a preset direction and a preset interval, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0. Taking a single-layer truss as an example, starting from a starting point on an outer frame, acquiring the path length of each camera at a preset interval of 1 meter in a counterclockwise direction to obtain path lengths of 2 meters, 5 meters, 10 meters and 13 meters corresponding to the B camera, the C camera, the D camera and the E camera respectively.
It should be noted that, for the two-layer truss or the multi-layer truss, after the server collects the path lengths of the cameras on the first layer truss along the outer square frame, it is further required to traverse the second layer truss, the third layer truss, … …, and so on until the path lengths from the start point to each camera are obtained. Further, the server may record the number of truss layers corresponding to each camera, taking two-layer truss as an example, where the path length of each camera on the second-layer truss is greater than the perimeter information of an outer frame.
304. And acquiring the length information of the outer square frame and the width information of the outer square frame, and calculating the circumference information of the outer square frame according to the length information and the width information.
The length information of the outer square frame and the width information of the outer square frame can be equal or unequal, and when the length information of the outer square frame and the width information of the outer square frame are equal, the outer square frame is square; when the length information of the outer square frame is unequal to the width information of the outer square frame, the outer square frame is rectangular, and the length information of the outer square frame is larger than the width information of the outer square frame.
Further, the server calculates the length information and the width information according to a perimeter calculation formula to obtain perimeter information of the outer frame, wherein the perimeter calculation formula is a sum of the length information of 2 times and the width information of 2 times, for example, the length information of the outer frame is 10 meters, the width information of the outer frame is 8 meters, and then the perimeter information of the outer frame is 2×10+2×8, that is, 36 meters.
305. And acquiring the truss layer number corresponding to each camera, calculating the three-dimensional space coordinate of each camera by adopting an x-y plane coordinate system, perimeter information, the corresponding truss layer number and a path length, mapping and storing the path length and the three-dimensional space coordinate, wherein the path length is used for replacing the three-dimensional space coordinate to calculate the genetic algorithm cross variation.
Specifically, the server obtains the truss layer number corresponding to each camera, the truss layer number corresponding to each camera can be read in truss layer number information recorded in the data acquisition process, and can also be obtained by comparing the truss layer number information with perimeter information according to the path length, and the method is not limited in specific points, for example, the perimeter information of an outer box is 20, if the path length of a camera b is 12, the server determines that the camera b is positioned on a first truss layer, and if the path length of the camera b is 30, the server determines that the camera b is positioned on a second truss layer; the server reads truss height information corresponding to each camera according to the corresponding truss layer number, and sets the truss height information corresponding to each camera as the z coordinate of each camera, for example, two layers of trusses, the corresponding truss layer number is 2, the corresponding truss height information is 8 meters, and the z coordinate of each camera is 8; the server obtains a coordinate origin from an x-y plane coordinate system, calculates an x coordinate and a y coordinate of each camera by adopting the coordinate origin, perimeter information and path length, further obtains the total truss layer number, and multiplies the total truss layer number and the perimeter information to obtain total perimeter information; proportional operation is carried out on the path length and the total perimeter information, the edge where the camera position point is located is judged, a coordinate origin is obtained from an x-y plane coordinate system, and the x coordinate and the y coordinate of each camera on the outer square frame are determined based on the coordinate origin; the server combines the x coordinate, the y coordinate and the z coordinate into three-dimensional space coordinates of each camera, and maps and stores the path length and the three-dimensional space coordinates, that is, the path length and the three-dimensional space coordinates have a corresponding mapping relationship, for example, when a lower left corner vertex on the outer box is taken as a coordinate origin, the three-dimensional space coordinates are obtained as (4,0,8), and the corresponding path length is 4 meters.
Further, the server determines the search range of each camera according to the origin of coordinates, the x-coordinate and the y-coordinate of each camera, and stores the path length, the three-dimensional space coordinate and the search range of each camera in an associated manner, and establishes a mapping relation information table so as to facilitate the direct three-dimensional space coordinate and the search range of each camera according to the path length in the subsequent calculation process of the genetic algorithm cross variation. It should be noted that, the server determines a camera horizontal angle CameraH from the search range of each camera; the server determines a camera pitch angle CameraV from-90 degrees to 90 degrees; the server takes the path length, the camera horizontal angle and the camera pitch angle as input variables, and carries out corresponding calculation and processing through a genetic algorithm, so that the number of the input variables is reduced, and the calculation efficiency is improved.
Optionally, the server normalizes the path length according to the perimeter information and the truss layer number to obtain a number between 0 and 1, which is the same as the value range of the camera position variable initialized randomly. And mapping and storing the normalized value, the three-dimensional space coordinates of each camera and the path length. The method is convenient for direct mapping acquisition of each scene data in the subsequent genetic algorithm cross variation, reduces the complexity of calculation and improves the calculation efficiency.
In the embodiment of the invention, the path length of the collected samples (each camera) is obtained by sampling in each layer of truss according to the preset direction and the preset interval, so that the effectiveness and uniformity of the random initialization population are ensured, and the efficiency of collecting the samples is improved; meanwhile, three-dimensional space coordinates of the acquired samples are calculated, a mapping relation between the path length and the three-dimensional space coordinates is constructed, the path length is adopted to replace the three-dimensional space coordinates, the complexity of calculating camera position variables and variables is reduced, and the calculation efficiency of the subsequent genetic algorithm cross variation is improved.
The method for optimizing the camera position variable in the embodiment of the present invention is described above, and the device for optimizing the camera position variable in the embodiment of the present invention is described below, referring to fig. 4, and one embodiment of the device for optimizing the camera position variable in the embodiment of the present invention includes:
the acquisition module 401 is configured to acquire a top view of an optical motion capture scene, where the top view includes an inner square and an outer square that are centrosymmetric, the inner square and the outer square are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for laying out a plurality of cameras;
a setting module 402, configured to set an origin of coordinates for the top view, and establish an x-y plane coordinate system based on the origin of coordinates;
The acquisition module 403 is configured to set a starting point on the outer square frame, and acquire a path length of each camera layer by layer based on the starting point;
the calculation module 404 is configured to obtain perimeter information of the outer frame, calculate three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, map and store the path length and the three-dimensional space coordinates, and use the path length to replace the three-dimensional space coordinates to perform calculation of genetic algorithm cross variation.
In the embodiment of the invention, the path length of the collected samples (each camera) is obtained by sampling in each layer of truss according to the preset direction and the preset interval, so that the effectiveness and uniformity of the random initialization population are ensured, and the efficiency of collecting the samples is improved; meanwhile, three-dimensional space coordinates of the acquired samples are calculated, a mapping relation between the path length and the three-dimensional space coordinates is constructed, the path length is adopted to replace the three-dimensional space coordinates, the complexity of calculating camera position variables and variables is reduced, and the calculation efficiency of the subsequent genetic algorithm cross variation is improved.
Referring to fig. 5, another embodiment of an apparatus for optimizing camera position variables in an embodiment of the present invention includes:
the acquisition module 401 is configured to acquire a top view of an optical motion capture scene, where the top view includes an inner square and an outer square that are centrosymmetric, the inner square and the outer square are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for laying out a plurality of cameras;
A setting module 402, configured to set an origin of coordinates for the top view, and establish an x-y plane coordinate system based on the origin of coordinates;
the acquisition module 403 is configured to set a starting point on the outer square frame, and acquire a path length of each camera layer by layer based on the starting point;
the calculation module 404 is configured to obtain perimeter information of the outer frame, calculate three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, map and store the path length and the three-dimensional space coordinates, and use the path length to replace the three-dimensional space coordinates to perform calculation of genetic algorithm cross variation.
Optionally, the setting module 402 may be further specifically configured to:
acquiring a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame;
the central symmetry point is set as the origin of coordinates, and an x-y plane coordinate system is established based on the origin of coordinates.
Optionally, the acquisition module 403 may be further specifically configured to:
setting any vertex on the outer square frame as a starting point;
traversing the outer square frame according to a preset direction and a preset interval, and acquiring the path length from the starting point to each camera layer by layer, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0.
Optionally, the computing module 404 includes:
a first calculating unit 4041, configured to obtain length information of the outer frame and width information of the outer frame, and calculate perimeter information of the outer frame according to the length information and the width information;
the second calculating unit 4042 is configured to obtain the truss layer number corresponding to each camera, calculate the three-dimensional space coordinate of each camera by using the x-y plane coordinate system, the perimeter information, the corresponding truss layer number and the path length, and map and store the path length and the three-dimensional space coordinate.
Optionally, the second computing unit 4042 may be further specifically configured to:
acquiring truss layer numbers corresponding to each camera, reading truss height information corresponding to each camera according to the corresponding truss layer numbers, and setting the truss height information corresponding to each camera as the z coordinate of each camera;
acquiring a coordinate origin from an x-y plane coordinate system, and calculating an x coordinate and a y coordinate of each camera by adopting the coordinate origin, perimeter information and path length;
combining the x coordinate, the y coordinate and the z coordinate into three-dimensional space coordinates of each camera, mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation, and the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation.
In the embodiment of the invention, the path length of the collected samples (each camera) is obtained by sampling in each layer of truss according to the preset direction and the preset interval, so that the effectiveness and uniformity of the random initialization population are ensured, and the efficiency of collecting the samples is improved; meanwhile, three-dimensional space coordinates of the acquired samples are calculated, a mapping relation between the path length and the three-dimensional space coordinates is constructed, the path length is adopted to replace the three-dimensional space coordinates, the complexity of calculating camera position variables and variables is reduced, and the calculation efficiency of the subsequent genetic algorithm cross variation is improved.
The apparatus for optimizing camera position variables in the embodiment of the present invention is described in detail above with reference to fig. 4 and 5 from the point of view of the modularized functional entity, and the device for optimizing camera position variables in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 6 is a schematic structural diagram of an apparatus for optimizing camera position variables, where the apparatus 600 for optimizing camera position variables may vary greatly according to configuration or performance, and may include one or more processors (central processing units, CPU) 610 (e.g., one or more processors) and a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing applications 633 or data 632 according to an embodiment of the present invention. Wherein the memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored on the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations in the apparatus 600 for optimizing camera position variables. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the device 600 that optimizes camera position variables.
The apparatus 600 for optimizing camera position variables may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Serve, mac OS X, unix, linux, freeBSD, and the like. It will be appreciated by those skilled in the art that the configuration of the device shown in fig. 6 to optimize camera position variables does not constitute a limitation on the device to optimize camera position variables, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, having stored therein instructions which, when executed on a computer, cause the computer to perform the steps of the method of optimizing camera position variables.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of optimizing camera position variables, the method comprising:
acquiring a top view of an optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are centrosymmetric, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for laying out a plurality of cameras;
setting a coordinate origin for the top view, and establishing an x-y plane coordinate system based on the coordinate origin;
setting a starting point on the outer box, and collecting the path length of each camera layer by layer based on the starting point;
and acquiring perimeter information of an outer frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate the genetic algorithm cross variation.
2. The method of optimizing camera position variables according to claim 1, wherein the setting a coordinate origin for the top view and establishing an x-y plane coordinate system based on the coordinate origin comprises:
Acquiring a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame;
setting the central symmetry point as a coordinate origin, and establishing an x-y plane coordinate system based on the coordinate origin.
3. The method of optimizing camera position variables according to claim 1, wherein the setting a start point on the outer box and collecting path lengths of each camera layer by layer based on the start point comprises:
setting any vertex on the outer box as a starting point;
traversing the outer square frame according to a preset direction and a preset interval, and acquiring the path length from the starting point to each camera layer by layer, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0.
4. A method of optimizing camera position variables according to any one of claims 1-3, wherein the obtaining perimeter information of an outer box, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, and mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for calculating genetic algorithm cross variation instead of the three-dimensional space coordinates, comprises:
Acquiring length information of an outer frame and width information of the outer frame, and calculating perimeter information of the outer frame according to the length information and the width information;
and acquiring the truss layer number corresponding to each camera, calculating the three-dimensional space coordinate of each camera by adopting the x-y plane coordinate system, the perimeter information, the corresponding truss layer number and the path length, mapping and storing the path length and the three-dimensional space coordinate, wherein the path length is used for replacing the three-dimensional space coordinate to calculate the genetic algorithm cross variation.
5. The method of optimizing camera position variables according to claim 4, wherein the obtaining the truss layer number corresponding to each camera, calculating three-dimensional space coordinates of each camera using the x-y plane coordinate system, the perimeter information, the corresponding truss layer number, and the path length, and mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for calculating genetic algorithm cross variation instead of the three-dimensional space coordinates, and includes:
acquiring truss layer numbers corresponding to each camera, reading truss height information corresponding to each camera according to the corresponding truss layer numbers, and setting the truss height information corresponding to each camera as the z coordinate of each camera;
Acquiring the coordinate origin from the x-y plane coordinate system, and calculating an x coordinate and a y coordinate of each camera by adopting the coordinate origin, the perimeter information and the path length;
combining the x coordinate, the y coordinate and the z coordinate into three-dimensional space coordinates of each camera, and mapping and storing the path length and the three-dimensional space coordinates, wherein the path length is used for replacing the three-dimensional space coordinates to calculate genetic algorithm cross variation.
6. An apparatus for optimizing camera position variables, the apparatus for optimizing camera position variables comprising:
the acquisition module is used for acquiring a top view of the optical motion capture scene, wherein the top view comprises an inner square frame and an outer square frame which are symmetrical in center, the inner square frame and the outer square frame are respectively used for indicating a motion area of a performer and at least one layer of truss, and each layer of truss is used for arranging a plurality of cameras;
the setting module is used for setting a coordinate origin for the top view and establishing an x-y plane coordinate system based on the coordinate origin;
the acquisition module is used for setting a starting point on the outer square frame and acquiring the path length of each camera layer by layer based on the starting point;
The calculation module is used for acquiring perimeter information of the outer square frame, calculating three-dimensional space coordinates of each camera according to the x-y plane coordinate system, the perimeter information and the path length, mapping and storing the path length and the three-dimensional space coordinates, and calculating the genetic algorithm cross variation by using the path length instead of the three-dimensional space coordinates.
7. The apparatus for optimizing camera position variables according to claim 6, wherein the setting module is specifically configured to:
acquiring a central symmetry point from the top view, wherein the central symmetry point is positioned in the inner square frame;
setting the central symmetry point as a coordinate origin, and establishing an x-y plane coordinate system based on the coordinate origin.
8. The apparatus for optimizing camera position variables according to claim 6, wherein the acquisition module is specifically configured to:
setting any vertex on the outer box as a starting point;
traversing the outer square frame according to a preset direction and a preset interval, and acquiring the path length from the starting point to each camera layer by layer, wherein the preset direction is clockwise or anticlockwise, and the preset interval is larger than 0.
9. An apparatus for optimizing camera position variables, the apparatus for optimizing camera position variables comprising: a memory and at least one processor, the memory having instructions stored therein, the memory and the at least one processor being interconnected by a line;
the at least one processor invoking the instructions in the memory to cause the device to optimize the camera position variable to perform the method of optimizing the camera position variable of any of claims 1-5.
10. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a method of optimizing a camera position variable according to any of claims 1-5.
CN202010345583.7A 2020-04-27 2020-04-27 Method, device, equipment and storage medium for optimizing camera position variable Active CN111540017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010345583.7A CN111540017B (en) 2020-04-27 2020-04-27 Method, device, equipment and storage medium for optimizing camera position variable

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010345583.7A CN111540017B (en) 2020-04-27 2020-04-27 Method, device, equipment and storage medium for optimizing camera position variable

Publications (2)

Publication Number Publication Date
CN111540017A CN111540017A (en) 2020-08-14
CN111540017B true CN111540017B (en) 2023-05-05

Family

ID=71978989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010345583.7A Active CN111540017B (en) 2020-04-27 2020-04-27 Method, device, equipment and storage medium for optimizing camera position variable

Country Status (1)

Country Link
CN (1) CN111540017B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1059030A (en) * 1991-09-10 1992-02-26 北京理工大学 Rotary orient device of intersection survey system
CN107219849A (en) * 2017-05-23 2017-09-29 北京理工大学 A kind of multipath picks up ball and pitching robot control system
WO2018035347A1 (en) * 2016-08-17 2018-02-22 Google Llc Multi-tier camera rig for stereoscopic image capture
CN107967685A (en) * 2017-12-11 2018-04-27 中交第二公路勘察设计研究院有限公司 A kind of bridge pier and tower crack harmless quantitative detection method based on unmanned aerial vehicle remote sensing
CN108961343A (en) * 2018-06-26 2018-12-07 深圳市未来感知科技有限公司 Construction method, device, terminal device and the readable storage medium storing program for executing of virtual coordinate system
CN109682381A (en) * 2019-02-22 2019-04-26 山东大学 Big visual field scene perception method, system, medium and equipment based on omnidirectional vision
CN110478892A (en) * 2018-05-14 2019-11-22 彼乐智慧科技(北京)有限公司 A kind of method and system of three-dimension interaction
CN110509300A (en) * 2019-09-30 2019-11-29 河南埃尔森智能科技有限公司 Stirrup processing feeding control system and control method based on 3D vision guidance

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855405B2 (en) * 2003-04-30 2014-10-07 Deere & Company System and method for detecting and analyzing features in an agricultural field for vehicle guidance

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1059030A (en) * 1991-09-10 1992-02-26 北京理工大学 Rotary orient device of intersection survey system
WO2018035347A1 (en) * 2016-08-17 2018-02-22 Google Llc Multi-tier camera rig for stereoscopic image capture
CN107219849A (en) * 2017-05-23 2017-09-29 北京理工大学 A kind of multipath picks up ball and pitching robot control system
CN107967685A (en) * 2017-12-11 2018-04-27 中交第二公路勘察设计研究院有限公司 A kind of bridge pier and tower crack harmless quantitative detection method based on unmanned aerial vehicle remote sensing
CN110478892A (en) * 2018-05-14 2019-11-22 彼乐智慧科技(北京)有限公司 A kind of method and system of three-dimension interaction
CN108961343A (en) * 2018-06-26 2018-12-07 深圳市未来感知科技有限公司 Construction method, device, terminal device and the readable storage medium storing program for executing of virtual coordinate system
CN109682381A (en) * 2019-02-22 2019-04-26 山东大学 Big visual field scene perception method, system, medium and equipment based on omnidirectional vision
CN110509300A (en) * 2019-09-30 2019-11-29 河南埃尔森智能科技有限公司 Stirrup processing feeding control system and control method based on 3D vision guidance

Also Published As

Publication number Publication date
CN111540017A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
JP6811296B2 (en) Calibration method of relative parameters of collectors, equipment, equipment and media
US9733094B2 (en) Hybrid road network and grid based spatial-temporal indexing under missing road links
Saltenis et al. Indexing of moving objects for location-based services
CN112017134B (en) Path planning method, device, equipment and storage medium
CN107528904B (en) Method and apparatus for data distributed anomaly detection
CN108595613A (en) GIS local maps edit methods and device
CN112434846A (en) Method and apparatus for optimizing camera deployment
CN106327576B (en) A kind of City scenarios method for reconstructing and system
CN109839119B (en) Method and device for acquiring bridge floor area of bridge of cross-road bridge
CN111540017B (en) Method, device, equipment and storage medium for optimizing camera position variable
Elkhrachy Feature extraction of laser scan data based on geometric properties
CN112598737A (en) Indoor robot positioning method and device, terminal equipment and storage medium
CN111416942B (en) Method, device, equipment and storage medium for limiting camera search range
CN111540018B (en) Score calculation method of symmetrical layout mode of camera and related equipment
TW201243486A (en) Building texture extracting apparatus and method thereof
CN111540019B (en) Method, device, equipment and storage medium for determining camera mounting position
CN111767295B (en) Map data processing method, device, computing equipment and medium
CN112182125B (en) Business gathering area boundary identification system
KR20140111907A (en) Apparatus and method for multidimensional memory resource management and allocation
CN112037328A (en) Method, device, equipment and storage medium for generating road edges in map
CN110163210B (en) Point of interest (POI) information acquisition method, device, equipment and storage medium
Zhang et al. Processing of mass aerial digital images and rapid response based on cluster computer system
KR20180046909A (en) Method for processing 3-dimensional data
CN107516101B (en) Boundary data dividing method and device
CN117830547A (en) Ground height calculation method, ground height calculation system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant