CN112911155A - Visual angle moving method, device, equipment and storage medium of space camera - Google Patents

Visual angle moving method, device, equipment and storage medium of space camera

Info

Publication number
CN112911155A
CN112911155A (application CN202110153517.4A)
Authority
CN
China
Prior art keywords
surface point
point
distance
space camera
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110153517.4A
Other languages
Chinese (zh)
Inventor
洪嘉超
萧豪隽
肖德川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Supergame Network Technology Co ltd
Original Assignee
Xiamen Supergame Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Supergame Network Technology Co ltd filed Critical Xiamen Supergame Network Technology Co ltd
Priority to CN202110153517.4A priority Critical patent/CN112911155A/en
Publication of CN112911155A publication Critical patent/CN112911155A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

The embodiment of the invention provides a method, a device, equipment and a storage medium for moving the visual angle of a space camera, and relates to the technical field of space camera movement. The visual angle moving method comprises the following steps. S1, acquiring a first surface point of the current space position of the space camera and a second surface point of the target position, and calculating a first distance between the first surface point and the second surface point. S2, determining a second distance from the shooting point to the second surface point according to the predetermined visual angle of the space camera, wherein the shooting point is located on the line connecting the first surface point and the second surface point. S3, calculating a third surface point corresponding to the shooting point through an optimal visible distance model according to the first surface point, the second surface point, the first distance and the second distance. S4, moving the space camera from the current space position to the shooting space position corresponding to the third surface point, so that the space camera shoots the target position at the shooting space position with the predetermined visual angle. The resulting movement is smoother.

Description

Visual angle moving method, device, equipment and storage medium of space camera
Technical Field
The invention relates to the technical field of space camera movement, in particular to a method, a device, equipment and a storage medium for moving a visual angle of a space camera.
Background
In the prior art, many 3D scenes are built on three-dimensional earth models. In some existing web frameworks for three-dimensional earth models, such as threeJS and babylonJS, a straight line can be formed between the earth center point (the geocenter) and any spatial position; the intersection of that straight line with the earth's surface is the surface point (expressed in longitude and latitude) corresponding to the spatial position.
A space camera is a virtual camera that shoots within a virtual model space. In a three-dimensional earth model, the position of the space camera is usually represented by a vector, i.e. a vector pointing from the earth's center to the center position of the space camera. The point where the straight line containing this vector intersects the earth's surface is the surface point of the space camera.
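As an illustration of the idea described above, the following is a minimal sketch in three.js of finding the surface point of a spatial position by intersecting the geocenter-to-position ray with an earth mesh; the earthMesh object and the function name are assumptions for the example, not part of the patent.

```javascript
import * as THREE from "three";

// Minimal sketch: the surface point of a spatial position is the intersection of
// the ray from the geocenter (the model origin) through that position with the
// earth mesh. earthMesh is an assumed scene object representing the earth model.
function surfacePointOf(position, earthMesh) {
  const direction = position.clone().normalize();            // geocenter -> position
  const ray = new THREE.Raycaster(new THREE.Vector3(0, 0, 0), direction);
  const hits = ray.intersectObject(earthMesh);
  return hits.length > 0 ? hits[0].point : null;             // point on the earth surface
}
```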
If a space camera is to shoot a target location in the three-dimensional earth model, the prior art usually moves the surface point of the space camera directly onto the surface point of the target location. The space camera can then only see the target location from a top-down view. If the target location needs to be viewed from another angle, an offset must be calculated and converted, and the space camera must be moved again to another position to shoot the target location from that angle. The movement of the space camera therefore usually consists of two segments, which cannot satisfy some specific application scenarios.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for moving a visual angle of a space camera, which aim to solve the problem that the space camera needs to move twice when shooting a target position in the related art.
In a first aspect,
The embodiment of the invention provides a visual angle moving method of a space camera, which comprises the following steps:
s1, acquiring a first surface point of the current space position of the space camera and a second surface point of the target position, and calculating a first distance between the first surface point and the second surface point.
And S2, determining a second distance from the shooting point to the second surface point according to the predetermined visual angle of the space camera. Wherein the shooting point is located on the line connecting the first surface point and the second surface point.
And S3, calculating a third surface point corresponding to the shooting point through an optimal visible distance model according to the first surface point, the second surface point, the first distance and the second distance.
And S4, moving the space camera from the current space position to the shooting space position corresponding to the third surface point, so that the space camera shoots the target position at the shooting space position with the predetermined visual angle.
Optionally, the predetermined visual angle comprises a predetermined pitch angle of the space camera. Step S2 is specifically:
and selecting the second distance corresponding to the pitch angle from a plurality of preset distances according to the preset pitch angle of the space camera.
Optionally, the expression of the optimal visible distance model is:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
Optionally, step S1 specifically includes:
s1a, acquiring the first ground surface point of the current space position of the space camera.
S1b, acquiring the second ground point of the target position.
S1c, calculating the first distance between the two surface points through the haversine formula according to the first surface point and the second surface point. Wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
Optionally, step S1a is specifically:
s1a1, acquiring geocentric vector coordinates of the current space position of the space camera, and converting the geocentric vector coordinates into geodetic reference coordinates through an earth ellipsoid model. Wherein, the expression of the earth ellipsoid model is as follows:
x=rx cosφcosθ,-π/2≤φ≤π/2
y=ry cosφsinθ,-π≤θ≤π
z=rz sinφ
where x, y and z are the three components of the geodetic reference coordinate; rx, ry and rz are the three components of the geocentric vector coordinate; θ is the angle of the plane in which the space camera lies; and φ is the fixed value 0.618.
And S1a2, obtaining the first surface point through a cartesianToCartographic model according to the geodetic reference coordinates.
In a second aspect,
The embodiment of the invention provides a visual angle moving device of a space camera, which comprises the following modules:
the first distance module is used for acquiring a first surface point of the current space position of the space camera and a second surface point of the target position, and calculating a first distance between the first surface point and the second surface point.
And the second distance module is used for determining a second distance from the shooting point to the second surface point according to the predetermined visual angle of the space camera. Wherein the shooting point is located on the line connecting the first surface point and the second surface point.
And the third surface point module is used for calculating a third surface point corresponding to the shooting point through an optimal visible distance model according to the first surface point, the second surface point, the first distance and the second distance.
A moving module for moving the space camera from the first surface point to the third surface point so that the space camera photographs the target position at a predetermined view angle.
Optionally, the predetermined visual angle comprises a predetermined pitch angle of the space camera. The second distance module is specifically configured to:
and selecting the second distance corresponding to the pitch angle from a plurality of preset distances according to the preset pitch angle of the space camera.
Optionally, the expression of the optimal visible distance model is:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
Optionally, the first distance module comprises the following units:
and the first earth surface point unit is used for acquiring the first earth surface point of the current space position of the space camera.
And the second surface point unit is used for acquiring the second surface point of the target position.
And the first distance unit is used for calculating the first distance between the two surface points through the haversine formula according to the first surface point and the second surface point. Wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
Optionally, the first surface point unit comprises the following sub-units:
and the first conversion subunit is used for acquiring geocentric vector coordinates of the current space position of the space camera and converting the geocentric vector coordinates into geodetic reference coordinates through an earth ellipsoid model. Wherein, the expression of the earth ellipsoid model is as follows:
x=rx cosφcosθ,-π/2≤φ≤π/2
y=ry cosφsinθ,-π≤θ≤π
z=rz sinφ
where x, y and z are the three components of the geodetic reference coordinate; rx, ry and rz are the three components of the geocentric vector coordinate; θ is the angle of the plane in which the space camera lies; and φ is the fixed value 0.618.
And the second conversion subunit is used for obtaining the first surface point through a cartesianToCartographic model according to the geodetic reference coordinates.
In a third aspect,
An embodiment of the present invention provides a perspective mobile device of a space camera, which includes a processor, a memory, and a computer program stored in the memory. The computer program is executable by the processor to implement the method of moving a perspective of a space camera as described in any of the paragraphs of the first aspect.
In a fourth aspect,
An embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium includes a stored computer program, wherein when the computer program runs, the apparatus in which the computer-readable storage medium is located is controlled to execute the method for moving the angle of view of a space camera according to any one of the first aspect.
By adopting the technical scheme, the invention can obtain the following technical effects:
according to the embodiment of the invention, the third ground surface point of the optimal shooting space position of the camera is determined through the first ground surface point, the second ground surface point, the first distance and the second distance according to the preset visual angle of the space camera. And then, the space camera is directly moved to the shooting space position corresponding to the third earth surface instead of the target position corresponding to the second earth surface and then is converted. The moving distance of the space camera is shorter, the moving speed is higher, and the moving process is smoother and more visual.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for moving a viewing angle of a space camera according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of the movement of a space camera in the prior art according to a first embodiment of the present invention.
FIG. 3 is a schematic diagram of an earth ellipsoid model according to a first embodiment of the present invention.
Fig. 4 is a schematic moving diagram of the space camera in the moving method according to the first embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a perspective moving apparatus of a space camera according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In the embodiments, the references to "first/second" merely distinguish similar objects and do not represent a specific ordering of the objects, and it is to be understood that "first/second" may be interchanged with a specific order or sequence where permitted. It should be understood that objects distinguished by "first/second" may be interchanged under appropriate circumstances, so that the embodiments described herein can be practiced in sequences other than those illustrated or described herein.
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the first embodiment is as follows:
referring to fig. 1, a method for moving a view angle of a space camera according to a first embodiment of the present invention is provided. Which may be performed by a perspective mobile device of a space camera (hereinafter referred to as perspective mobile device). In particular, execution by one or more processors within the perspective mobile device to implement steps S1-S4 as follows.
S1, acquiring a first surface point of the current space position of the space camera and a second surface point of the target position, and calculating a first distance between the first surface point and the second surface point.
In this embodiment, the perspective mobile device may be a local computer, a cloud computer, a mobile computer, or another intelligent device; the specific type of the perspective mobile device is not limited in the present invention. The perspective mobile device contains a 3D model that includes an earth model.
In the three-dimensional earth model, the position of the space camera is represented by a vector. It will be understood that a vector has a direction and a size (length). In this embodiment, the direction of the space camera's vector points from the earth center of the earth model to the geometric center of the space camera. The size (length) of the space camera's vector is determined by the parameters of the camera (such as focal length, angle and the like) and the height of the photographed object, so the height of the space camera can be adjusted adaptively during the movement of the space camera; how the size (length) of the vector is determined is therefore not relevant to the technical problem to be solved by the invention and is not described further herein.
Of course, in other embodiments, the size of the vector of the space camera may be constant all the time, and may also be manually adjusted according to the operation of the user, which is not limited in the present invention. Preferably, in the present embodiment, the height of the space camera (i.e., the size/length of the vector) can be adaptively adjusted according to the object to be photographed, so as to achieve better photographing effect.
The technical problem to be solved by the invention is that the prior art needs two movements when adjusting the direction of the space camera's vector. As a result, the space camera takes a long time to move, and the moving process suffers from problems such as turning and discontinuity that degrade the visual effect.
In this embodiment, the surface point of any spatial position in the three-dimensional earth model refers to the intersection of the line connecting that spatial position with the earth center and the surface of the three-dimensional earth model; it can be understood as the projection of the spatial position onto the surface of the three-dimensional earth model. With the coordinates of the surface point, the vector of the spatial position can be determined by connecting the surface point with the geocenter.
In this embodiment, the three-dimensional earth model uses the longitude and latitude of the WGS84 to locate the coordinates of the earth's surface. I.e. the coordinates of the surface points are all represented by latitude and longitude. The coordinates of the first surface point a are (lon1, lat1), the coordinates of the second surface point B are (lon2, lat2), and the coordinates of the third surface point C are (lon3, lat 3). Of course, in other embodiments, an existing coordinate system such as Beijing 54 coordinate system or WGS-72 coordinate system may be used, and the present invention is not limited thereto.
On the basis of the above embodiment, in an alternative embodiment of the present invention, step S1 includes step S1a, step S1b and step S1c.
S1a, acquiring a first surface point of the current space position of the space camera.
On the basis of the above embodiment, in an alternative embodiment of the present invention, step S1a includes step S1a1 and step S1a2.
S1a1, acquiring geocentric vector coordinates of the current space position of the space camera, and converting the geocentric vector coordinates into geodetic reference coordinates through an earth ellipsoid model. The expression of the earth ellipsoid model is as follows:
x=rx cosφcosθ,-π/2≤φ≤π/2
y=ry cosφsinθ,-π≤θ≤π
z=rz sinφ
where x, y and z are the three components of the geodetic reference coordinate; rx, ry and rz are the three components of the geocentric vector coordinate; θ is the angle of the plane in which the space camera lies; and φ is the fixed value 0.618.
The geocentric vector coordinates of the space camera (i.e., the vector representing the space camera's spatial position) cannot be converted directly into WGS84-based longitude and latitude. Therefore, in the present embodiment, the geocentric vector coordinates of the space camera are first converted into geodetic reference coordinates, so that the current spatial position of the space camera has a basis for conversion into longitude and latitude.
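The following is a minimal JavaScript transcription of the ellipsoid expression given above, provided only as an illustration; the function name is an assumption, and φ is taken as the fixed value 0.618 stated in the text.

```javascript
// Direct transcription of the earth ellipsoid model above: maps the geocentric
// vector coordinates (rx, ry, rz) and the plane angle theta to the geodetic
// reference coordinates (x, y, z). phi defaults to the fixed value 0.618.
function geodeticReferenceCoords(rx, ry, rz, theta, phi = 0.618) {
  return {
    x: rx * Math.cos(phi) * Math.cos(theta),
    y: ry * Math.cos(phi) * Math.sin(theta),
    z: rz * Math.sin(phi),
  };
}
```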
S1a2, obtaining a first surface point through a cartesianToCartographic model according to the geodetic reference coordinates.
The cartesianToCartographic model is a conversion model that converts a Cartesian coordinate into longitude and latitude; it belongs to the prior art and is not described here again. Through a cartesianToCartographic model, the geodetic reference coordinates of the current shooting position of the space camera can be converted into latitude and longitude, i.e., the coordinates of the first surface point.
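As an illustration, a sketch of this conversion assuming a Cesium-style API (Ellipsoid.WGS84.cartesianToCartographic and Cesium.Math.toDegrees) is shown below; the patent only names a cartesianToCartographic model, so the specific library and function name here are assumptions.

```javascript
import * as Cesium from "cesium";

// Sketch of step S1a2 under the assumption of a Cesium-style API: convert the
// camera's Cartesian position into WGS84 longitude/latitude, i.e. the first
// surface point. cartesianToCartographic returns radians, hence toDegrees.
function surfacePointOfCamera(cameraCartesian) {
  const carto = Cesium.Ellipsoid.WGS84.cartesianToCartographic(cameraCartesian);
  return {
    lon: Cesium.Math.toDegrees(carto.longitude), // first surface point longitude
    lat: Cesium.Math.toDegrees(carto.latitude),  // first surface point latitude
  };
}
```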
And S1b, acquiring a second surface point of the target position.
In this embodiment, the target position is the photographed object, usually a model of a fixed building, plant, or the like, so its longitude and latitude, that is, the second surface point, are known and can be obtained directly.
S1c, calculating a first distance between the two surface points through the haversine formula according to the first surface point and the second surface point. Wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
wherein,
a=(lat1*Math.PI)/180-(lat2*Math.PI)/180
b=(lon1*Math.PI)/180-(lon2*Math.PI)/180
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
In this implementation, the distance between the first surface point and the second surface point can be calculated accurately through the haversine formula based on the WGS84 longitude and latitude coordinate system, which has good practical significance.
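A minimal sketch of this distance calculation is given below; the function name and the 6371000-meter earth radius are assumptions for the example (the radius should match the radius of the earth model actually used).

```javascript
// Haversine great-circle distance between the first and second surface points,
// with longitudes/latitudes given in degrees. R is an assumed earth radius in
// meters; use the radius of the three-dimensional earth model in practice.
function haversineDistance(lon1, lat1, lon2, lat2, R = 6371000) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const a = toRad(lat1) - toRad(lat2);          // latitude difference in radians
  const b = toRad(lon1) - toRad(lon2);          // longitude difference in radians
  const h =
    Math.sin(a / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(b / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));       // the first distance S
}
```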
And S2, determining a second distance from the shooting point to the second surface point according to the predetermined visual angle of the space camera. Wherein the shooting point is located on the line connecting the first surface point and the second surface point.
It should be noted that the rotation of the space camera includes a yaw angle (heading), a pitch angle (pitch), and a roll angle (roll). In the present embodiment, the pitch angle of the space camera affects the straight-line distance from the space camera to the target position. Thus, the predetermined visual angle comprises a predetermined pitch angle of the space camera.
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S2 specifically includes:
and selecting a second distance corresponding to the pitch angle from a plurality of preset distances according to the preset pitch angle of the space camera.
In this embodiment, the pointDistan is dynamically adjusted according to the value of the camera pitch angle. For example:
if the pitch is between -90 and -70, the pointDistan value will automatically adjust to 150 meters;
if the pitch is between -70 and -60, the pointDistan value will automatically adjust to 70 meters;
if the pitch is between -60 and -50, the pointDistan value will automatically adjust to 80 meters;
if the pitch is between -50 and -40, the pointDistan value will automatically adjust to 120 meters;
if the pitch is between -40 and -30, the pointDistan value will automatically adjust to 120 meters;
if the pitch is between -20 and 10, the pointDistan value will automatically adjust to 50 meters.
It is understood that in other embodiments, different pitch angles may correspond to other different pointDistan values depending on the size of the three-dimensional earth model, the intersection of the space cameras, or the modeled size of the three-dimensional model. In addition, the distribution range of the pitch may be further narrowed, so as to obtain a better photographing effect, which is not particularly limited by the present invention. It is within the scope of the present invention to select a corresponding distance from a plurality of predetermined distances according to the pitch angle of the space camera.
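For illustration, the example ranges above can be expressed as a simple lookup; the function name, the exact boundary handling, and the fallback value for pitch values not covered by the example table are assumptions.

```javascript
// Sketch of the pitch-to-pointDistan lookup described above (pitch in degrees,
// result in meters). Boundary handling and the default value are assumptions;
// the patent only gives the example ranges.
function pointDistanForPitch(pitch) {
  if (pitch >= -90 && pitch < -70) return 150;
  if (pitch >= -70 && pitch < -60) return 70;
  if (pitch >= -60 && pitch < -50) return 80;
  if (pitch >= -50 && pitch < -40) return 120;
  if (pitch >= -40 && pitch < -30) return 120;
  if (pitch >= -20 && pitch <= 10) return 50;
  return 100; // assumed default for ranges not covered by the example table
}
```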
In the present embodiment, the predetermined pitch angle is the current pitch angle of the space camera, i.e., the pitch angle of the space camera is unchanged before and after the movement. In other embodiments, a fixed pitch angle may be set, i.e., the pitch angle is adjusted to a fixed value each time the viewing angle is switched. The predetermined pitch angle is not particularly limited in the present invention.
It is understood that, for the shooting space position, its projection onto the three-dimensional earth model is the third surface point, and the point projected onto the line connecting the first surface point and the second surface point is the shooting point.
And S3, calculating a third surface point corresponding to the shooting point through the optimal visible distance model according to the first surface point, the second surface point, the first distance and the second distance.
The distance from the shooting point (on the line connecting the first surface point and the second surface point) to the second surface point is denoted pointDistan.
In this embodiment, the expression of the optimal visible distance model is:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
Specifically, according to the proportional relation between the first distance and the second distance, the longitude and latitude of the third surface point are calculated from the longitude and latitude of the first surface point and the longitude and latitude of the second surface point. The calculation is simple, and the obtained third surface point satisfies the shooting requirement of the predetermined visual angle well, so that simple calculation and clear shooting are both achieved, which has good practical significance.
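The interpolation above amounts to moving from the second surface point toward the first surface point by the fraction pointDistan/S; a minimal sketch is shown below, with the function and parameter names as assumptions.

```javascript
// Optimal visible distance model: linear interpolation in longitude/latitude
// along the line from the second surface point toward the first surface point,
// at a fraction pointDistan / S away from the second (target) surface point.
function thirdSurfacePoint(first, second, S, pointDistan) {
  const t = pointDistan / S;
  return {
    lon: t * (first.lon - second.lon) + second.lon, // lon3
    lat: t * (first.lat - second.lat) + second.lat, // lat3
  };
}
```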
And S4, moving the space camera from the current space position to the shooting space position corresponding to the third surface point, so that the space camera shoots the target position at the shooting space position with the predetermined visual angle.
From the third surface point, the vector of the optimal shooting space position at which the space camera shoots the target position with the predetermined visual angle can be determined, so that the space camera is moved directly from the current space position to the shooting space position and shoots the target position at the predetermined visual angle. The visual angle does not need to be switched during the movement; the continuous visual angle brings a more comfortable experience to the user, and the method can be applied to more scenarios, which has good practical significance.
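A sketch that strings steps S1 to S4 together is given below. It reuses the helper sketches above and assumes a Cesium-style camera.flyTo for the single move; the height argument and the option names are assumptions, not part of the patent.

```javascript
import * as Cesium from "cesium";

// End-to-end sketch of S1-S4 under the assumptions stated above: compute the
// third surface point and fly the camera there in a single movement, keeping
// the predetermined pitch angle. Uses surfacePointOfCamera, haversineDistance,
// pointDistanForPitch and thirdSurfacePoint sketched earlier.
function moveCameraToTarget(viewer, cameraCartesian, target, pitchDeg, height) {
  const first = surfacePointOfCamera(cameraCartesian);                       // S1: first surface point
  const S = haversineDistance(first.lon, first.lat, target.lon, target.lat); // S1: first distance
  const pointDistan = pointDistanForPitch(pitchDeg);                         // S2: second distance
  const third = thirdSurfacePoint(first, target, S, pointDistan);            // S3: third surface point
  viewer.camera.flyTo({                                                      // S4: one continuous move
    destination: Cesium.Cartesian3.fromDegrees(third.lon, third.lat, height),
    orientation: { pitch: Cesium.Math.toRadians(pitchDeg) },
  });
}
```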
According to the embodiment of the invention, the third surface point of the optimal shooting space position of the camera is determined from the first surface point, the second surface point, the first distance and the second distance, according to the predetermined visual angle of the space camera. The space camera is then moved directly to the shooting space position corresponding to the third surface point, instead of first being moved to the target position corresponding to the second surface point and then converted. The moving distance of the space camera is shorter, the moving speed is higher, and the moving process is smoother and more intuitive.
Example two:
the embodiment of the invention provides a visual angle moving device of a space camera, which comprises the following modules:
the first distance module is used for acquiring a first surface point of the current space position of the space camera and a second surface point of the target position, and calculating a first distance between the first surface point and the second surface point.
And the second distance module is used for determining a second distance from the shooting point to the second surface point according to the predetermined visual angle of the space camera. Wherein the shooting point is located on the line connecting the first surface point and the second surface point.
And the third surface point module is used for calculating a third surface point corresponding to the shooting point through the optimal visible distance model according to the first surface point, the second surface point, the first distance and the second distance.
And the moving module is used for moving the space camera from the first surface point to the third surface point, so that the space camera shoots the target position at the predetermined visual angle.
Optionally, the predetermined visual angle comprises a predetermined pitch angle of the space camera. The second distance module is specifically configured to:
and selecting a second distance corresponding to the pitch angle from a plurality of preset distances according to the preset pitch angle of the space camera.
Optionally, the expression of the optimal visible distance model is:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
Optionally, the first distance module comprises the following units:
and the first ground surface point unit is used for acquiring a first ground surface point of the current space position of the space camera.
And the second surface point unit is used for acquiring a second surface point of the target position.
And the first distance unit is used for calculating a first distance between the two surface points through the haversine formula according to the first surface point and the second surface point. Wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
Optionally, the first surface point unit comprises the following sub-units:
and the first conversion subunit is used for acquiring geocentric vector coordinates of the current space position of the space camera and converting the geocentric vector coordinates into geodetic reference coordinates through an earth ellipsoid model. The expression of the earth ellipsoid model is as follows:
x=rx cosφcosθ,-π/2≤φ≤π/2
y=ry cosφsinθ,-π≤θ≤π
z=rz sinφ
where x, y and z are the three components of the geodetic reference coordinate; rx, ry and rz are the three components of the geocentric vector coordinate; θ is the angle of the plane in which the space camera lies; and φ is the fixed value 0.618.
And the second conversion subunit is used for obtaining the first surface point through a cartesianToCartographic model according to the geodetic reference coordinates.
Example three:
an embodiment of the present invention provides a perspective mobile device of a space camera, which includes a processor, a memory, and a computer program stored in the memory. The computer program can be executed by a processor to implement the method of moving the angle of view of a space camera as in any one of the embodiments.
Example four:
an embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium includes a stored computer program, wherein the apparatus in which the computer-readable storage medium is located is controlled to perform the method of moving the angle of view of a space camera according to any one of the embodiments when the computer program is run.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for moving a view angle of a space camera, comprising:
acquiring a first earth surface point of a current space position of a space camera and a second earth surface point of a target position, and calculating a first distance between the first earth surface point and the second earth surface point;
determining a second distance from the shooting point to the second earth surface point according to the predetermined visual angle of the space camera; the shooting point is located on a connecting line of the first earth surface point and the second earth surface point;
calculating a third earth surface point corresponding to the shooting point through an optimal visible distance model according to the first earth surface point, the second earth surface point, the first distance and the second distance;
moving the space camera from a current space position to a shooting space position corresponding to the third earth surface point, so that the space camera shoots the target position at the shooting space position with the predetermined visual angle.
2. The method of claim 1, wherein the predetermined visual angle includes a predetermined pitch angle of the space camera; determining a second distance from the shooting point to the second earth surface point according to the predetermined visual angle of the space camera is specifically:
and selecting the second distance corresponding to the pitch angle from a plurality of preset distances according to the preset pitch angle of the space camera.
3. The method of claim 1, wherein the optimal viewing distance model is expressed by:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third earth surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third earth surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
4. The perspective moving method according to claim 1, wherein a first surface point of a current spatial position of the spatial camera and a second surface point of the target position are obtained, and a first distance between the first surface point and the second surface point is calculated, specifically:
acquiring the first earth surface point of the current space position of the space camera;
acquiring the second ground point of the target position;
calculating the first distance between the two surface points according to the first surface point and the second surface point through the haversine formula; wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
5. The perspective moving method according to claim 4, wherein the obtaining the first surface point of the current spatial position of the spatial camera specifically includes:
acquiring geocentric vector coordinates of the current space position of the space camera, and converting the geocentric vector coordinates into geodetic reference coordinates through an earth ellipsoid model; wherein, the expression of the earth ellipsoid model is as follows:
x=rx cosφcosθ, -π/2≤φ≤π/2
y=ry cosφsinθ, -π≤θ≤π
z=rz sinφ
where x, y and z are the three components of the geodetic reference coordinate; rx, ry and rz are the three components of the geocentric vector coordinate; θ is the angle of the plane in which the space camera lies; and φ is the fixed value 0.618;
and obtaining the first earth surface point through a cartesianToCartographic model according to the geodetic reference coordinates.
6. A device for moving the angle of view of a space camera, comprising:
the first distance module is used for acquiring a first earth surface point of the current space position of the space camera and a second earth surface point of the target position, and calculating a first distance between the first earth surface point and the second earth surface point;
the second distance module is used for determining a second distance from the shooting point to the second earth surface point according to the predetermined visual angle of the space camera; the shooting point is located on a connecting line of the first earth surface point and the second earth surface point;
the third earth surface point module is used for calculating a third earth surface point corresponding to the shooting point through an optimal visible distance model according to the first earth surface point, the second earth surface point, the first distance and the second distance;
a moving module for moving the space camera from the first surface point to the third surface point so that the space camera photographs the target position at a predetermined view angle.
7. The viewing angle shifting apparatus according to claim 5, wherein the expression of the optimal viewing distance model is:
lat3=(pointDistan*(lat1-lat2))/S+lat2
lon3=(pointDistan*(lon1-lon2))/S+lon2
where lon3 is the longitude of the third earth surface point (the surface point corresponding to the shooting point), lat3 is the latitude of the third earth surface point, pointDistan is the second distance, lon1 is the longitude of the first surface point, lat1 is the latitude of the first surface point, lon2 is the longitude of the second surface point, lat2 is the latitude of the second surface point, and S is the first distance.
8. The viewing angle shifting apparatus of claim 5, wherein the first distance module comprises:
the first earth surface point unit is used for acquiring the first earth surface point of the current space position of the space camera;
a second surface point unit for acquiring the second surface point of the target position;
the first distance unit is used for calculating the first distance between the two surface points through the haversine formula according to the first surface point and the second surface point; wherein the expression of the haversine formula is:
S = 2R·arcsin(√(sin²(a/2) + cos(lat1·π/180)·cos(lat2·π/180)·sin²(b/2)))
where S is the first distance, R is the radius of the earth model, a is the difference between the latitudes of the two surface points expressed in radians, lat1 is the latitude of the first surface point, lat2 is the latitude of the second surface point, and b is the difference between their longitudes expressed in radians.
9. A perspective mobile device for a spatial camera comprising a processor, a memory, and a computer program stored in the memory; the computer program is executable by the processor to implement the method of moving the angle of view of a space camera according to any one of claims 1 to 5.
10. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the perspective moving method of the space camera according to any one of claims 1 to 5.
CN202110153517.4A 2021-02-04 2021-02-04 Visual angle moving method, device, equipment and storage medium of space camera Pending CN112911155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110153517.4A CN112911155A (en) 2021-02-04 2021-02-04 Visual angle moving method, device, equipment and storage medium of space camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110153517.4A CN112911155A (en) 2021-02-04 2021-02-04 Visual angle moving method, device, equipment and storage medium of space camera

Publications (1)

Publication Number Publication Date
CN112911155A true CN112911155A (en) 2021-06-04

Family

ID=76122201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110153517.4A Pending CN112911155A (en) 2021-02-04 2021-02-04 Visual angle moving method, device, equipment and storage medium of space camera

Country Status (1)

Country Link
CN (1) CN112911155A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017027172A1 (en) * 2015-08-13 2017-02-16 Google Inc. Systems and methods to transition between viewpoints in a three-dimensional environment
CN105575214A (en) * 2015-12-23 2016-05-11 成都航训科技有限责任公司 Targeted simulation flight method with whole-course guide
CN108492017A (en) * 2018-03-14 2018-09-04 河海大学常州校区 A kind of product quality information transmission method based on augmented reality
CN110248158A (en) * 2019-06-06 2019-09-17 上海秒针网络科技有限公司 The method of adjustment and device of shooting visual angle

Similar Documents

Publication Publication Date Title
US11875537B2 (en) Multi view camera registration
CN107836012B (en) Projection image generation method and device, and mapping method between image pixel and depth value
US11330172B2 (en) Panoramic image generating method and apparatus
EP2807629B1 (en) Mobile device configured to compute 3d models based on motion sensor data
CN105678809A (en) Handheld automatic follow shot device and target tracking method thereof
WO2017045315A1 (en) Method and apparatus for determining location information of tracked target, and tracking apparatus and system
CN104200454B (en) Fisheye image distortion correction method and device
US10235800B2 (en) Smoothing 3D models of objects to mitigate artifacts
CN106027887B (en) For the method, apparatus and electronic equipment of the rifle ball linkage control of rotating mirror holder
CN107851329B (en) Displaying objects based on multiple models
JP2020173795A5 (en)
US20150244930A1 (en) Synthetic camera lenses
CN112004196B (en) Positioning method, positioning device, terminal and computer storage medium
CN110012236A (en) A kind of information processing method, device, equipment and computer storage medium
CN109241233A (en) A kind of coordinate matching method and device
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN112911155A (en) Visual angle moving method, device, equipment and storage medium of space camera
CN112334853A (en) Course adjustment method, ground end equipment, unmanned aerial vehicle, system and storage medium
CN107705307B (en) Shooting composition method and system based on deep learning
KR101825321B1 (en) System and method for providing feedback of real-time optimal shooting composition using mobile camera recognition technology
Sreekaladevi et al. Inferring 3D dimensions of flat objects from a single 2D image
CN109976533A (en) Display control method and device
Maik et al. Automatic top-view transformation for vehicle backup rear-view camera
US11776157B2 (en) Panoramic image based jump position determination method and apparatus and non-transitory computer-readable medium
CN117201705B (en) Panoramic image acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210604)