CN109788201B - Positioning method and device - Google Patents

Positioning method and device

Info

Publication number
CN109788201B
Authority
CN
China
Prior art keywords
observed, target, target camera, camera, sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910113695.7A
Other languages
Chinese (zh)
Other versions
CN109788201A (en)
Inventor
刘永学
袁宇
刘远志
范骏
Current Assignee
Sichuan Grand Wisdom Technology Co ltd
Original Assignee
Sichuan Grand Wisdom Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Grand Wisdom Technology Co ltd filed Critical Sichuan Grand Wisdom Technology Co ltd
Priority to CN201910113695.7A priority Critical patent/CN109788201B/en
Publication of CN109788201A publication Critical patent/CN109788201A/en
Application granted
Publication of CN109788201B publication Critical patent/CN109788201B/en

Landscapes

  • Studio Devices (AREA)

Abstract

The embodiments of the application provide a positioning method and a positioning apparatus. The method includes: determining a position to be observed designated by a user on a geographic information system; determining, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation areas include the position to be observed; and adjusting lens parameters and/or rotation parameters of each target camera according to the relative positional relationship between the position to be observed and that target camera, so that the position to be observed is located at the center of the target camera's picture. The method and apparatus are therefore not limited to the maximum observation area of a single camera: they can position a location anywhere in a larger area and can position the same location from multiple angles.

Description

Positioning method and device
Technical Field
The application relates to the field of security monitoring, in particular to a positioning method and device.
Background
In the related art, when a high-speed dome camera positions a target area, the camera responds to a user framing the target area in its live image: by rotating the pan-tilt head and/or adjusting lens parameters (such as focal length and magnification), a clear image of the target area is brought to the center of the live image. However, this positioning method can only position a target area framed within the maximum observation area of a single high-speed dome camera; it cannot position a target area outside that maximum observation area, nor can it position the same target area from multiple angles through linkage between several high-speed dome cameras.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a positioning method and apparatus, so as to at least partially solve the above problems.
In order to achieve the above purpose, the embodiments of the present application propose the following technical solutions:
in a first aspect, an embodiment of the present application provides a positioning method, where the method includes:
determining a position to be observed designated by a user on a geographic information system;
determining, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation areas include the position to be observed;
and adjusting lens parameters and/or rotation parameters of the target camera according to the relative position relationship between the position to be observed and the target camera, so that the position to be observed is positioned in the center of the picture of the target camera.
In a second aspect, an embodiment of the present application provides a positioning apparatus, including:
a to-be-observed position determining module, configured to determine a position to be observed designated by a user on a geographic information system;
a target camera determining module, configured to determine, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation areas include the position to be observed;
and a parameter adjusting module, configured to adjust lens parameters and/or rotation parameters of the target camera according to the relative positional relationship between the position to be observed and the target camera, so that the position to be observed is located at the center of the target camera's picture.
Compared with the prior art, the beneficial effects of the embodiment of the application include:
the embodiment of the application provides a positioning method and a positioning device, wherein the method comprises the following steps: determining a position to be observed designated by a user on a geographic information system; determining the maximum observation area comprising more than one target camera at the position to be observed from each preset camera according to the recorded maximum observation area of each preset camera in the geographic information system; and adjusting lens parameters and/or rotation parameters of the target camera according to the relative position relationship between the position to be observed and the target camera, so that the position to be observed is positioned in the center of the picture of the target camera. Therefore, the method and the device can not only be limited to the maximum observation area of a single camera, but also be used for positioning the position to be observed in a larger area, and can also be used for positioning the same position to be observed in multiple angles.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; other related drawings can be derived from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a schematic block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a positioning method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the substeps of step S202 in FIG. 2;
FIG. 4 is a schematic diagram of a sub-step of step S203 in FIG. 2;
FIG. 5 is a schematic view of the substeps of obtaining lens parameters and rotation parameters for each sub-region;
FIG. 6 is a schematic diagram of another sub-step of step S203 in FIG. 2;
fig. 7 is a functional block diagram of a positioning apparatus according to an embodiment of the present disclosure.
Icon: 10-a data processing device; 11-a machine-readable storage medium; 12-a processor; 100-a positioning device; 110-a location to be observed determining module; 120-target camera determination module; 130-parameter adjustment module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1, fig. 1 shows a data processing device 10 provided in an embodiment of the present application. The data processing device may be any device having a data processing function and a communication function, such as a server or a personal computer. The server may be a single server or a server cluster composed of servers communicating with each other.
In this embodiment, the data processing device 10 may be a device operating with a geographic information system, for example, any server in a server cluster operating with a geographic information system, and the data processing device 10 may also be another device communicating with a device operating with a geographic information system.
A Geographic Information System (GIS) is a technical system, supported by computer hardware and software, for collecting, storing, managing, computing, analyzing, displaying and describing geographically distributed data over all or part of the space of the earth's surface layer (including the atmosphere). In the geographic information system, the real world is reproduced as a three-dimensional image.
The data processing device 10 may include a processor 12 and a machine-readable storage medium 11. The processor 12 and the machine-readable storage medium 11 may communicate via a system bus. Also, the machine-readable storage medium 11 stores machine-executable instructions, and the processor 12 may perform a positioning method to be described below by reading and executing the machine-executable instructions corresponding to the positioning logic in the machine-readable storage medium 11.
The machine-readable storage medium 11 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium 11 may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid-state drive, any type of storage disc (e.g., an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof.
It should be understood that the configuration shown in FIG. 1 is merely an example, and that data processing device 10 may include more or fewer components than shown in FIG. 1, or may have a completely different configuration than shown in FIG. 1. Wherein the components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2 again, fig. 2 is a schematic flow chart of a positioning method according to an embodiment of the present disclosure. The positioning method may be applied to the data processing device 10, and the individual steps involved in the method are described in detail below.
Step S201, determining the position to be observed appointed by the user on the geographic information system.
The position to be observed may be a center position of an area designated by the user on the geographic information system, and the center position may be a position where a geometric center of the area is located.
In this embodiment, the geographic information system may be built with a geographic coordinate system for defining positions in all or part of the space of the earth's surface layer (including the atmosphere); such coordinate systems include Beijing 54, Xi'an 80, and WGS 84 (World Geodetic System 1984). In this geographic coordinate system, the coordinates of the position to be observed can be determined.
Based on this, the position to be observed can also be determined directly from the coordinates in the geographic coordinate system specified by the user.
The user may designate any position in all or part of the space of the earth's surface layer (including the atmosphere) in the geographic information system as the position to be observed, and the position is then located through the subsequent steps S202 and S203. This is not limited, as in the related art, to designating a position only within the maximum observation area of a single dome camera.
Step S202, according to the recorded maximum observation area of each preset camera in the geographic information system, determining from the preset cameras one or more target cameras whose maximum observation areas include the position to be observed.
In this embodiment, the obtaining manner of the maximum observation area of each preset camera in the geographic information system may include: and aiming at each preset camera, determining the installation position of the preset camera in the geographic information system, then determining the maximum observation distance of the preset camera, and determining the maximum observation area of the preset camera according to the installation position and the maximum observation distance.
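The patent fixes neither the shape of the maximum observation area nor a selection algorithm. As a minimal sketch (all names and the circular-area assumption below are illustrative, not part of the disclosure), with planar GIS coordinates and the maximum observation area modelled as a circle of radius equal to the maximum observation distance centred on the installation position, the selection of step S202 could look like:

```python
import math

def max_observation_area(install_pos, max_distance):
    # Model the maximum observation area as a circle around the
    # installation position (a simplifying assumption; the patent
    # does not fix the area's shape).
    return (install_pos, max_distance)

def contains(area, position):
    # True when the position to be observed lies inside the area.
    (cx, cy), radius = area
    px, py = position
    return math.hypot(px - cx, py - cy) <= radius

def find_target_cameras(preset_cameras, position):
    # Step S202: every preset camera whose maximum observation area
    # includes the position to be observed becomes a target camera.
    return [name for name, area in preset_cameras.items()
            if contains(area, position)]
```

For instance, with two preset cameras installed 300 units apart, each with a maximum observation distance of 100, a position at `(50.0, 0.0)` selects only the first camera, and a position midway between them selects neither.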
In one case, it may be determined, for each preset camera, whether its maximum observation area includes the position to be observed, and every preset camera whose maximum observation area includes the position to be observed is determined to be a target camera. Since the preset cameras are usually distributed in different directions around the area where the position to be observed is located, multi-angle positioning of the same position can thus be realized. In this case, when the position to be observed is the position of a target to be tracked, multi-angle positioning of that position enables multi-angle tracking and shooting of the target. For example, in the scene of apprehending a criminal, after the position of a target identified as a criminal or a suspect (i.e., the target to be tracked) is determined as the position to be observed, multiple preset cameras position that location from multiple angles and then track and shoot the target, obtaining images of the target from multiple angles, which can effectively improve the success rate of the apprehension.
In another case, once one preset camera whose maximum observation area includes the position to be observed has been determined, that preset camera alone may be determined as the target camera. In this case, for the currently determined target camera, before step S203 is executed, the installation position of the target camera in the geographic information system may first be determined, and it may be checked whether an obstacle exists between the installation position and the position to be observed. If such an obstacle exists, the position to be observed would be occluded in the monitoring picture when the target camera shoots it, so the target camera needs to be determined anew. It should be noted that, for the finally determined target camera, the step of determining its installation position in the geographic information system mentioned here and the step of determining that installation position described later may be the same step.
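The patent does not say how the obstacle check between the installation position and the position to be observed is carried out. One plausible sketch is to sample the straight sight line against an elevation model supplied by the GIS; the `terrain_height` callback and the sampling count below are purely hypothetical:

```python
def line_of_sight_clear(install, observed, terrain_height, samples=50):
    # Walk along the 3D segment from the installation position to the
    # position to be observed and compare the sight line's height with
    # the terrain/obstacle height reported by the GIS at each sample.
    (x0, y0, z0), (x1, y1, z1) = install, observed
    for i in range(1, samples):
        t = i / samples
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        sight_z = z0 + t * (z1 - z0)
        if terrain_height(x, y) > sight_z:
            return False  # occluded: the target camera must be re-selected
    return True
```

A finer sampling step (or ray-casting against the GIS's 3D model) would be needed in practice; this only illustrates the decision the text describes.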
Alternatively, since in some scenarios the user desires to observe the position to be observed from a specific direction, step S202 may include the sub-steps shown in fig. 3.
In step S301, a desired viewing direction specified by the user is acquired.
Step S302, determining, from the preset cameras, one or more cameras whose maximum observation areas include the position to be observed and whose installation positions lie in the expected observation direction of the position to be observed, thereby obtaining the one or more target cameras.
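As a hedged illustration of this direction filter (steps S301–S302): with planar coordinates, a bearing measured clockwise from north, and an angular tolerance the patent does not specify (the ±45° default below is an assumption), a camera's installation position can be tested against the expected observation direction like this:

```python
import math

def bearing_deg(from_pos, to_pos):
    # Bearing from one planar position to another, in degrees [0, 360),
    # measured clockwise from north (the +y axis).
    dx = to_pos[0] - from_pos[0]
    dy = to_pos[1] - from_pos[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def in_expected_direction(observed, install, expected_bearing, tolerance=45.0):
    # True when the installation position lies in the expected
    # observation direction of the position to be observed.
    actual = bearing_deg(observed, install)
    diff = abs((actual - expected_bearing + 180.0) % 360.0 - 180.0)
    return diff <= tolerance
```

Combining this predicate with the maximum-observation-area check yields the target cameras of step S302.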
In this embodiment, after the one or more target cameras are determined, each of them is made to position the position to be observed through step S203.
Step S203, adjusting lens parameters and/or rotation parameters of the target camera according to the relative position relationship between the position to be observed and the target camera, so that the position to be observed is located in the center of the picture of the target camera.
The lens parameter may be a focal length or a magnification. The rotational parameter may be a pan-tilt position of the target camera, which may include a horizontal angle and a vertical angle with respect to an initial position. In particular, the pan-tilt position may be a pan-tilt coordinate comprising the horizontal angle and the vertical angle.
In this embodiment, for each of the one or more target cameras, the lens parameters of the target camera are adjusted so that the position to be observed is clearly displayed in the monitoring picture of the target camera, and/or the rotation parameters of the target camera are adjusted so that the position to be observed is displayed at the central position (i.e., the picture center) of the monitoring picture. It will be appreciated that the picture center may be the geometric center of the target camera's monitoring picture, or a position substantially at the geometric center.
The positioning method provided by the embodiment of the application selects, from the plurality of preset cameras, one or more target cameras whose maximum observation areas include the position to be observed, and uses them to position that location. It is therefore not limited to the maximum observation area of a single camera: it can position a location anywhere in a larger area and can position the same location from multiple angles.
In some embodiments, the relative positional relationship includes a positional relationship of the position to be observed and a maximum observation region of the target camera. In detail, step S203 includes the sub-steps shown in fig. 4.
Step S401, determining a sub-region included in the maximum observation region of the target camera according to the recorded sub-regions included in the maximum observation region of each camera.
Step S402, searching a target sub-area where the position to be observed is located from the sub-areas included in the maximum observation area of the target camera.
Step S403, obtaining target lens parameters and target rotation parameters of the target sub-area from the recorded lens parameters and rotation parameters of each sub-area, setting the lens parameters of the target camera as the target lens parameters, and setting the rotation parameters of the target camera as the target rotation parameters.
Alternatively, the lens parameter and the rotation parameter of each sub-region may be obtained by the sub-steps shown in fig. 5.
Step S501, for each preset camera, dividing the maximum observation area of the preset camera in the geographic information system into a plurality of sub-regions;
Step S502, determining the installation position of the preset camera in the geographic information system.
Step S503, determining the central position of each sub-region, and calculating the direction and distance between the installation position and the central position of the sub-region; and calculating, from the direction and the distance, the rotation parameter and lens parameter corresponding to the sub-region, these being the parameters that place the sub-region at the center of the preset camera's picture when the preset camera shoots with them.
Specifically, the rotation parameter corresponding to the sub-region may be calculated from the direction between the installation position and the central position of the sub-region, and the lens parameter corresponding to the sub-region may be calculated from the distance between the installation position and the central position of the sub-region.
It should be noted that the central position of the sub-region may be the position of the geometric center of the sub-region, or may be substantially located at the position of the geometric center of the sub-region.
In this way, when each of the divided sub-regions is sufficiently small, it can be considered that the sub-region as a whole is clearly displayed in the center of the screen of the preset camera when the preset camera performs photographing with the lens parameter and the rotation parameter of the sub-region. In other words, when the target camera uses the lens parameter and the rotation parameter (i.e., the target lens parameter and the target rotation parameter) of the target sub-area where the position to be observed is located, it can be considered that the position to be observed in the target sub-area is clearly displayed in the center of the screen of the target camera.
It is to be understood that the data processing device 10 may obtain the lens parameter and the rotation parameter of each sub-region according to the steps shown in fig. 5 at the time the position to be observed is located, or may obtain and store them in advance, so that when a position to be observed is located, the parameters of the sub-region where it lies (i.e., the target sub-region) can simply be looked up.
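The precompute-and-look-up scheme of figs. 4 and 5 can be sketched end to end. Everything concrete here is an assumption made for illustration: a square, axis-aligned observation area, equal square sub-regions, a bearing-only rotation parameter, and a linear distance-to-focal-length lens model standing in for a real dome camera's zoom curve.

```python
import math

def divide_into_subregions(origin, size, cell):
    # Step S501: divide the (assumed square, axis-aligned) maximum
    # observation area into a grid of sub-regions; return each centre.
    ox, oy = origin
    w, h = size
    centres = []
    for gx in range(int(w // cell)):
        for gy in range(int(h // cell)):
            centres.append((ox + (gx + 0.5) * cell, oy + (gy + 0.5) * cell))
    return centres

def precompute_parameters(install, centres, focal_per_metre=0.05):
    # Step S503: for every sub-region centre, record a rotation
    # parameter (bearing from the installation position) and a lens
    # parameter (focal length, here simply proportional to distance).
    table = {}
    for cx, cy in centres:
        dx, dy = cx - install[0], cy - install[1]
        table[(cx, cy)] = {
            "pan_deg": math.degrees(math.atan2(dx, dy)) % 360.0,
            "focal_mm": focal_per_metre * math.hypot(dx, dy),
        }
    return table

def lookup(table, position):
    # Steps S402-S403: find the target sub-region containing the
    # position to be observed (here: the nearest centre) and return
    # its precomputed target parameters.
    centre = min(table, key=lambda c: math.hypot(c[0] - position[0],
                                                 c[1] - position[1]))
    return table[centre]
```

When the cells are small enough, applying the looked-up parameters places the whole target sub-region, and hence the position to be observed, at the picture center, as the text above argues.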
In other embodiments, the relative positional relationship includes a positional relationship between the position to be observed and an installation position of the target camera in the geographic information system. In detail, step S203 includes the sub-steps shown in fig. 6.
Step S601, determining an installation position of the target camera in the geographic information system.
Step S602, calculating the direction and distance between the position to be observed and the installation position, and calculating the rotation parameters and lens parameters required by the target camera when the position to be observed is located at the center of the picture of the target camera according to the direction and distance.
Step S603, setting the rotation parameter of the target camera as the rotation parameter, and setting the lens parameter of the target camera as the lens parameter.
Similarly to the steps shown in fig. 5, this embodiment may directly calculate the positional relationship (the direction and the distance) between the position to be observed and the installation position of the target camera in the geographic information system, calculate from the direction the rotation parameter required for the position to be observed to be located at the center of the target camera's picture, and calculate from the distance the lens parameter required for the same.
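A minimal sketch of this direct calculation (fig. 6), under the same kind of assumptions as before: planar coordinates plus a height, pan measured clockwise from north, tilt from the horizontal, and a linear distance-to-focal-length stand-in for a real lens model:

```python
import math

def aim_camera(install, observed, focal_per_metre=0.05):
    # Steps S601-S603: compute the rotation parameters (horizontal
    # angle = pan, vertical angle = tilt) and a lens parameter (focal
    # length) that put the position to be observed at the centre of
    # the target camera's picture.
    dx = observed[0] - install[0]
    dy = observed[1] - install[1]
    dz = observed[2] - install[2]
    ground = math.hypot(dx, dy)           # horizontal distance
    return {
        "pan_deg": math.degrees(math.atan2(dx, dy)) % 360.0,
        "tilt_deg": math.degrees(math.atan2(dz, ground)),  # negative = down
        "focal_mm": focal_per_metre * math.hypot(ground, dz),
    }
```

The returned pan/tilt pair corresponds to the pan-tilt coordinate described under step S203, and the focal length to the lens parameter; a real implementation would map both into the camera's own PTZ coordinate space.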
Referring to fig. 7, fig. 7 is a functional block diagram of a positioning apparatus 100 according to the present embodiment. The positioning apparatus 100 includes at least one functional module that can be stored in the form of software in a machine-readable storage medium of the data processing device 10. Functionally divided, the positioning apparatus 100 may include a to-be-observed position determining module 110, a target camera determining module 120, and a parameter adjusting module 130.
The to-be-observed position determination module 110 is configured to determine a user-specified to-be-observed position on the geographic information system.
In this embodiment, the detailed description of step S201 shown in fig. 2 may be referred to for the description of the to-be-observed position determining module 110, that is, step S201 may be performed by the to-be-observed position determining module 110.
The target camera determination module 120 is configured to determine, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation areas include the position to be observed.
In this embodiment, reference may be made to the detailed description of step S202 shown in fig. 2 for the description of the target camera determination module 120, that is, step S202 may be performed by the target camera determination module 120.
The parameter adjusting module 130 is configured to adjust lens parameters and/or rotation parameters of the target camera according to a relative position relationship between the position to be observed and the target camera, so that the position to be observed is located in a center of a picture of the target camera.
In this embodiment, reference may be made to the detailed description of step S203 shown in fig. 2 for the description of the parameter adjustment module 130, that is, step S203 may be executed by the parameter adjustment module 130.
Alternatively, the relative positional relationship may include a positional relationship between the position to be observed and a maximum observation region of the target camera.
The parameter adjustment module 130 may be specifically configured to:
determining the sub-area included by the maximum observation area of the target camera according to the recorded sub-area included by the maximum observation area of each camera;
searching a target sub-area where the position to be observed is located from the sub-areas included in the maximum observation area of the target camera;
and acquiring target lens parameters and target rotation parameters of the target subarea from the recorded lens parameters and rotation parameters of each subarea, setting the lens parameters of the target camera as the target lens parameters, and setting the rotation parameters of the target camera as the target rotation parameters.
Optionally, the positioning apparatus 100 may further include a sub-region parameter obtaining module. The sub-region parameter obtaining module is configured to:
for each preset camera, dividing the maximum observation area of the preset camera in the geographic information system into a plurality of sub-areas;
determining the installation position of the preset camera in the geographic information system;
for each sub-region, determining the central position of the sub-region, and calculating the direction and distance between the installation position and the central position of the sub-region; and calculating, from the direction and the distance, the rotation parameter and lens parameter corresponding to the sub-region, these being the parameters that place the sub-region at the center of the preset camera's picture when the preset camera shoots with them.
Optionally, the relative positional relationship includes a positional relationship between the position to be observed and an installation position of the target camera in the geographic information system.
The parameter adjusting module 130 is specifically configured to:
determining an installation location of the target camera in the geographic information system;
calculating the direction and the distance between the position to be observed and the installation position, and calculating the rotation parameters and the lens parameters required by the target camera when the position to be observed is positioned at the center of the picture of the target camera according to the direction and the distance;
setting the rotation parameter of the target camera as the rotation parameter, and setting the lens parameter of the target camera as the lens parameter.
In summary, the embodiments of the present application provide a positioning method and apparatus. The method includes: determining a position to be observed designated by a user on a geographic information system; determining, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation areas include the position to be observed; and adjusting lens parameters and/or rotation parameters of the target camera according to the relative positional relationship between the position to be observed and the target camera, so that the position to be observed is located at the center of the target camera's picture. The method and apparatus are thus not limited to the maximum observation area of a single camera: they can position a location anywhere in a larger area and can position the same location from multiple angles.
In the embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of positioning, the method comprising:
determining a position to be observed designated by a user on a geographic information system;
determining, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation area includes the position to be observed;
acquiring an expected observation direction specified by a user;
determining, from the preset cameras, one or more cameras whose maximum observation area includes the position to be observed and whose installation position is in the expected observation direction of the position to be observed, to obtain the one or more target cameras;
and adjusting lens parameters and/or rotation parameters of the target camera according to the relative positional relationship between the position to be observed and the target camera, so that the position to be observed is located at the center of the picture of the target camera.
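As a non-authoritative illustration of the two-step selection in claim 1, the sketch below first keeps the cameras whose maximum observation area (modelled here as a simple radius) covers the position to be observed, then keeps those whose installation position lies in the user's expected observation direction. The 45-degree angular tolerance and all data are assumptions introduced for the example.

```python
import math

def bearing_deg(src, dst):
    """Compass-style bearing from src to dst, degrees in [0, 360); 0 = +y axis."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def select_targets(cameras, position, expected_dir_deg, tol_deg=45.0):
    """cameras: name -> (installation position, max observation radius)."""
    targets = []
    for name, (cam_pos, max_radius) in cameras.items():
        # Step 1: maximum observation area must cover the position to be observed.
        if math.hypot(position[0] - cam_pos[0], position[1] - cam_pos[1]) > max_radius:
            continue
        # Step 2: installation position must lie in the expected observation
        # direction of the position, within an assumed angular tolerance.
        diff = abs(bearing_deg(position, cam_pos) - expected_dir_deg) % 360.0
        if min(diff, 360.0 - diff) <= tol_deg:
            targets.append(name)
    return targets
```

For example, with one camera installed due north and one due south of the position, an expected observation direction of 0 degrees (north) selects only the northern camera.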
2. The method according to claim 1, wherein the relative positional relationship includes the positional relationship between the position to be observed and the maximum observation area of the target camera;
adjusting lens parameters and/or rotation parameters of the target camera according to the relative position relationship between the position to be observed and the target camera, including:
determining the sub-areas included in the maximum observation area of the target camera according to the recorded sub-areas included in the maximum observation area of each camera;
searching, among the sub-areas included in the maximum observation area of the target camera, for the target sub-area where the position to be observed is located;
and acquiring the target lens parameters and target rotation parameters of the target sub-area from the recorded lens parameters and rotation parameters of each sub-area, setting the lens parameters of the target camera to the target lens parameters, and setting the rotation parameters of the target camera to the target rotation parameters.
3. The method according to claim 2, wherein the lens parameters and the rotation parameters of each sub-area are obtained by:
for each preset camera, dividing the maximum observation area of the preset camera in the geographic information system into a plurality of sub-areas;
determining the installation position of the preset camera in the geographic information system;
for each sub-area, determining the central position of the sub-area, and calculating the direction and the distance between the installation position and the central position of the sub-area; and calculating, according to the direction and the distance, a rotation parameter and a lens parameter corresponding to the sub-area, the rotation parameter and the lens parameter being such that, when the camera shoots with them, the sub-area is located at the center of the camera's picture.
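A hedged sketch of the precomputation in claim 3: for each sub-area center, record the pan angle (direction) and a zoom value derived from the distance to the camera's installation position. The grid of centers, the parameter names, and the linear distance-to-zoom mapping are illustrative assumptions, not the patent's stored values.

```python
import math

def subarea_parameters(install_pos, centers, zoom_per_metre=0.02):
    """Precompute, per sub-area, the rotation (pan) and lens (zoom) parameters
    that would center that sub-area in the camera picture.

    install_pos: camera installation position in GIS coordinates.
    centers:     list of sub-area central positions.
    The linear zoom model is an assumption for illustration only.
    """
    params = {}
    for idx, (cx, cy) in enumerate(centers):
        dx, dy = cx - install_pos[0], cy - install_pos[1]
        pan_deg = math.degrees(math.atan2(dx, dy)) % 360.0  # direction to center
        dist = math.hypot(dx, dy)                           # distance to center
        params[idx] = {"pan_deg": pan_deg, "zoom": 1.0 + zoom_per_metre * dist}
    return params
```

At lookup time (claim 2), locating the target sub-area of a position to be observed then reduces to reading back the recorded parameters instead of recomputing them.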
4. The method according to claim 1, wherein the relative positional relationship includes the positional relationship between the position to be observed and the installation position of the target camera in the geographic information system;
adjusting lens parameters and/or rotation parameters of the target camera according to the relative position relationship between the position to be observed and the target camera, including:
determining an installation location of the target camera in the geographic information system;
calculating the direction and the distance between the position to be observed and the installation position, and calculating, according to the direction and the distance, the rotation parameters and lens parameters required by the target camera for the position to be observed to be located at the center of the picture of the target camera;
and setting the rotation parameters of the target camera to the calculated rotation parameters, and setting the lens parameters of the target camera to the calculated lens parameters.
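The direct computation of claim 4 could look like the following sketch, which derives pan from the horizontal direction, tilt from an assumed mounting height, and zoom from the distance. The mounting height and the linear zoom model are assumptions for illustration, not the patent's method.

```python
import math

def aim_camera(install_pos, install_height_m, observe_pos, zoom_per_metre=0.02):
    """Compute rotation (pan, tilt) and lens (zoom) parameters that would put
    `observe_pos` at the center of the picture of a camera installed at
    `install_pos` at height `install_height_m`. Illustrative model only."""
    dx = observe_pos[0] - install_pos[0]
    dy = observe_pos[1] - install_pos[1]
    ground_dist = math.hypot(dx, dy)                         # horizontal distance
    pan_deg = math.degrees(math.atan2(dx, dy)) % 360.0       # horizontal direction
    # Negative tilt: the camera looks down from its mounting height.
    tilt_deg = -math.degrees(math.atan2(install_height_m, ground_dist))
    return {"pan_deg": pan_deg, "tilt_deg": tilt_deg,
            "zoom": 1.0 + zoom_per_metre * ground_dist}
```

For a camera 10 m up looking at a point 10 m away on the ground, this yields a 45-degree downward tilt, which matches the simple geometry of the model.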
5. The method according to any one of claims 1 to 4, wherein before adjusting the lens parameters and/or rotation parameters of the target camera according to the relative positional relationship between the position to be observed and the target camera, the method further comprises:
determining the installation position of the currently determined target camera in the geographic information system;
and judging whether an obstacle exists between the installation position and the position to be observed, and if so, re-determining the target camera.
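The obstacle check of claim 5 might be approximated as below, by sampling the straight segment between the installation position and the position to be observed against recorded obstacles, modelled here as circles for simplicity; a production GIS would instead use its own geometry queries. All shapes and the sample count are assumptions.

```python
import math

def has_obstacle(install_pos, observe_pos, obstacles, samples=100):
    """Return True if the straight line of sight from `install_pos` to
    `observe_pos` passes through any obstacle, where each obstacle is an
    assumed circular footprint (x, y, radius)."""
    for i in range(1, samples):
        t = i / samples
        # Point a fraction t of the way along the line of sight.
        px = install_pos[0] + t * (observe_pos[0] - install_pos[0])
        py = install_pos[1] + t * (observe_pos[1] - install_pos[1])
        for (ox, oy, r) in obstacles:
            if math.hypot(px - ox, py - oy) <= r:
                return True  # view is blocked; re-determine the target camera
    return False
```

If the check returns True, the selection step would fall back to another candidate camera, as the claim describes.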
6. The method according to any one of claims 1-4, wherein the determining, from the preset cameras, one or more target cameras whose maximum observation area includes the position to be observed comprises:
acquiring an expected observation direction specified by a user;
and determining, from the preset cameras, one or more cameras whose maximum observation area includes the position to be observed and whose installation position is in the expected observation direction of the position to be observed, to obtain the one or more target cameras.
7. A positioning device, the device comprising:
a to-be-observed position determining module, configured to determine a position to be observed designated by a user on a geographic information system;
a target camera determining module, configured to determine, from the preset cameras and according to the recorded maximum observation area of each preset camera in the geographic information system, one or more target cameras whose maximum observation area includes the position to be observed, and to determine, according to an expected observation direction specified by a user, one or more cameras whose maximum observation area includes the position to be observed and whose installation position is in the expected observation direction of the position to be observed, to obtain the one or more target cameras;
and a parameter adjusting module, configured to adjust lens parameters and/or rotation parameters of the target camera according to the relative positional relationship between the position to be observed and the target camera, so that the position to be observed is located at the center of the picture of the target camera.
8. The apparatus according to claim 7, wherein the relative positional relationship includes the positional relationship between the position to be observed and the maximum observation area of the target camera;
the parameter adjusting module is specifically configured to:
determining the sub-areas included in the maximum observation area of the target camera according to the recorded sub-areas included in the maximum observation area of each camera;
searching, among the sub-areas included in the maximum observation area of the target camera, for the target sub-area where the position to be observed is located;
and acquiring the target lens parameters and target rotation parameters of the target sub-area from the recorded lens parameters and rotation parameters of each sub-area, setting the lens parameters of the target camera to the target lens parameters, and setting the rotation parameters of the target camera to the target rotation parameters.
9. The apparatus of claim 8, further comprising a sub-region parameter obtaining module; the sub-region parameter obtaining module is configured to:
for each preset camera, dividing the maximum observation area of the preset camera in the geographic information system into a plurality of sub-areas;
determining the installation position of the preset camera in the geographic information system;
for each sub-area, determining the central position of the sub-area, and calculating the direction and the distance between the installation position and the central position of the sub-area; and calculating, according to the direction and the distance, a rotation parameter and a lens parameter corresponding to the sub-area, the rotation parameter and the lens parameter being such that, when the camera shoots with them, the sub-area is located at the center of the camera's picture.
10. The apparatus according to claim 7, wherein the relative positional relationship includes the positional relationship between the position to be observed and the installation position of the target camera in the geographic information system;
the parameter adjusting module is specifically configured to:
determining an installation location of the target camera in the geographic information system;
calculating the direction and the distance between the position to be observed and the installation position, and calculating, according to the direction and the distance, the rotation parameters and lens parameters required by the target camera for the position to be observed to be located at the center of the picture of the target camera;
and setting the rotation parameters of the target camera to the calculated rotation parameters, and setting the lens parameters of the target camera to the calculated lens parameters.
CN201910113695.7A 2019-02-14 2019-02-14 Positioning method and device Active CN109788201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910113695.7A CN109788201B (en) 2019-02-14 2019-02-14 Positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910113695.7A CN109788201B (en) 2019-02-14 2019-02-14 Positioning method and device

Publications (2)

Publication Number Publication Date
CN109788201A CN109788201A (en) 2019-05-21
CN109788201B true CN109788201B (en) 2021-04-20

Family

ID=66503519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910113695.7A Active CN109788201B (en) 2019-02-14 2019-02-14 Positioning method and device

Country Status (1)

Country Link
CN (1) CN109788201B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443247A (en) * 2019-08-22 2019-11-12 中国科学院国家空间科学中心 A kind of unmanned aerial vehicle moving small target real-time detecting system and method
CN110807803B (en) * 2019-10-11 2021-02-09 北京文香信息技术有限公司 Camera positioning method, device, equipment and storage medium
CN111046121B (en) * 2019-12-05 2023-10-10 亿利生态大数据有限公司 Environment monitoring method, device and system
CN111586303A (en) * 2020-05-22 2020-08-25 浩鲸云计算科技股份有限公司 Control method and device for dynamically tracking road surface target by camera based on wireless positioning technology

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101313343A (en) * 2005-11-18 2008-11-26 通用电气公司 Methods and systems for operating a video monitoring system
CN103731630A (en) * 2012-10-16 2014-04-16 华为技术有限公司 Video monitoring method, equipment and system

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
KR100961719B1 (en) * 2008-07-07 2010-06-10 한국전자통신연구원 Method and apparatus for controlling camera position using of geographic information system
US8194147B2 (en) * 2008-11-06 2012-06-05 Getac Technology Corporation Image presentation angle adjustment method and camera device using the same
IL201129A (en) * 2009-09-23 2014-02-27 Verint Systems Ltd System and method for automatic camera hand off using location measurements
EP2554434B1 (en) * 2011-08-05 2014-05-21 Harman Becker Automotive Systems GmbH Vehicle surround view system
CN102595105A (en) * 2012-03-07 2012-07-18 深圳市信义科技有限公司 Application method based on geographic information system (GIS) map lens angle information configuration
CN104184995A (en) * 2014-08-26 2014-12-03 天津市亚安科技股份有限公司 Method and system for achieving real-time linkage monitoring of networking video monitoring system
CN104639908A (en) * 2015-02-05 2015-05-20 华中科技大学 Control method of monitoring ball machine
CN105611246A (en) * 2015-12-23 2016-05-25 广东中星电子有限公司 Information display method and video monitoring platform
CN105744226B (en) * 2016-02-22 2018-07-06 北京深博达智能系统有限公司 A kind of 1+N rifle ball interlock methods based on camera coordinate system


Non-Patent Citations (1)

Title
Research on Key Technologies and Algorithms of Intelligent Video Surveillance Systems; Li Xiaobin et al.; Control Engineering of China; 2016-08-20; Vol. 23 (No. S0); pp. 18-22 *

Also Published As

Publication number Publication date
CN109788201A (en) 2019-05-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant