CN114615410A - Natural disaster panoramic helmet and method for determining shooting posture of image thereof - Google Patents


Info

Publication number
CN114615410A
Authority
CN
China
Prior art keywords
image, fisheye, acquiring, search area, helmet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210232180.0A
Other languages
Chinese (zh)
Other versions
CN114615410B (en
Inventor
Zhang Lei (张磊)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202210232180.0A
Publication of CN114615410A
Application granted
Publication of CN114615410B
Legal status: Active
Anticipated expiration


Classifications

    • H04N23/50 Cameras or camera modules comprising electronic image sensors; Control thereof: Constructional details
    • H04N23/695 Control of cameras or camera modules: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/698 Control of cameras or camera modules: Control for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N7/18 Television systems: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The application relates to the technical field of image capture, and in particular to a natural disaster panoramic helmet, a method for determining the shooting attitude of its images, and a computer storage medium. The helmet comprises a helmet body, a panoramic fisheye camera arranged on top of the helmet body, and a magnetometer and a data processing system arranged inside the helmet body; the panoramic fisheye camera captures fisheye images at different moments. The data processing system acquires images in different orientations at those moments and extracts matched feature points between them; it then derives a roll angle and a pitch angle from the positional relationship of the matched feature points, and determines the change in shooting attitude from the two angles. The application allows an operator to conveniently carry out a natural-resource survey in difficult environments while photographing natural disasters at a fixed attitude for the survey record.

Description

Natural disaster panoramic helmet and method for determining shooting posture of image thereof
Technical Field
The application relates to the technical field of shooting images, in particular to a natural disaster panoramic helmet and a method for determining shooting postures of images of the natural disaster panoramic helmet.
Background
Natural disaster scanning observes and monitors, with various measurement means, the dynamic changes of the factors involved in the whole process of a natural disaster, from incubation through occurrence and development to the resulting damage; a natural disaster scanning tool monitors the state of a target area before, during and after the disaster occurs.
In the related art, natural disaster scanning mainly relies on manual inspection or unmanned aerial vehicle (UAV) inspection. Manual inspection requires carrying equipment such as a tablet computer and a GPS receiver and photographing at fixed points; it is inefficient, inspection points are easily missed, and missed locations are hard to verify afterwards. UAV inspection requires mounting various sensing and acquisition devices on the aircraft; constrained by the UAV's maximum payload, its endurance is short and its cost high, so long-duration, large-scale surveys of natural areas are not feasible. Manned and unmanned vehicle-mounted inspection systems are also widely used, but they are limited by road conditions and unsuitable for natural disaster surveys in natural areas with poor roads, such as mountains and forests.
Disclosure of Invention
To allow an operator to conveniently carry out natural-resource scanning in such environments while photographing natural disasters at a fixed attitude for the survey record, the application provides a natural disaster panoramic helmet and a method for determining the shooting attitude of its images.
In a first aspect, the application provides a natural disaster scanning helmet, which adopts the following technical scheme:
a natural disaster scanning helmet, the helmet comprising: the panoramic fish-eye helmet comprises a helmet body, a panoramic fish-eye camera arranged at the top of the helmet body, a magnetometer and a data processing system which are arranged in the helmet body; the panoramic fisheye camera and the magnetometer are respectively and electrically connected with the data processing system, wherein,
the panoramic fisheye camera is used for acquiring a first fisheye image at time t1 and a second fisheye image at time t2;
the magnetometer is used for acquiring first spatial information of the helmet at time t1 and second spatial information of the helmet at time t2, wherein the first spatial information includes orientation information of the helmet at time t1, and the second spatial information includes orientation information of the helmet at time t2;
the data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, wherein the first center image is the projection of the first fisheye image along a preset direction, and the first side image is the projection of the first fisheye image along the orthogonal direction of the preset direction;
the data processing system is further configured to obtain a second center image and a second side image according to the second fisheye image, where the second center image is a projection of the second fisheye image along a preset direction, and the second side image is a projection of the second fisheye image along an orthogonal direction of the preset direction;
the data processing system is further used for extracting at least two first feature points on the first central image and the first side image respectively; extracting at least two second feature points respectively matched with the at least two first feature points on the second center image and the second side image respectively;
the data processing system is further used for acquiring a roll angle according to the positional relationship between a first feature point on the first center image and a second feature point on the second center image; acquiring a pitch angle according to the positional relationship between a first feature point on the first side image and a second feature point on the second side image; and determining the variation of the shooting attitude at time t2 relative to time t1 according to the roll angle and the pitch angle.
In a second aspect, the method for determining the shooting posture of the natural disaster scanning image adopts the following technical scheme:
a shooting posture determining method of a natural disaster scanning image is applied to a natural disaster scanning helmet and comprises the following steps:
acquiring a first fisheye image at time t1 and corresponding first spatial information, wherein the first spatial information comprises azimuth information at time t1;
acquiring a second fisheye image at time t2 and corresponding second spatial information, wherein the second spatial information comprises azimuth information at time t2;
acquiring a first center image and a first side image corresponding to the first fisheye image, wherein the first center image is the projection of the first fisheye image along a preset orientation, the first side image is its projection along the orthogonal direction of the preset orientation, and both the preset orientation and its orthogonal direction are contained in the azimuth information at time t1;
acquiring a second center image and a second side image corresponding to the second fisheye image, wherein the second center image is the projection of the second fisheye image along the preset orientation, the second side image is its projection along the orthogonal direction of the preset orientation, the first side image and the second side image face the same direction, and both the preset orientation and its orthogonal direction are contained in the azimuth information at time t2;
extracting at least two first feature points from each of the first center image and the first side image, and extracting, according to the first feature points, corresponding second feature points from the second center image and the second side image, the first feature points and the second feature points being matched with each other;
acquiring a roll angle according to the positional relationship between a first feature point on the first center image and a second feature point on the second center image;
acquiring a pitch angle according to the positional relationship between a first feature point on the first side image and a second feature point on the second side image;
and determining the variation of the shooting attitude at time t2 relative to time t1 according to the roll angle and the pitch angle.
In some of these embodiments, the preset orientation is determined by the magnetometer and the preset orientation remains the same orientation at any time.
In some of these embodiments, extracting two second feature points on the second center image and the second side image, respectively, includes:
acquiring a first search area by taking the first feature points as centers, wherein each first feature point corresponds to one first search area;
acquiring a second search area on the second center image and the second side image, wherein the second search area is arranged corresponding to the first search area;
and acquiring second characteristic points corresponding to the first characteristic points in the first search area in the second search area.
In some embodiments, obtaining a second feature point corresponding to the first feature point in the first search area in the second search area further includes:
and when a second feature point corresponding to the first feature point cannot be acquired in the second search area, enlarging the first search area and, correspondingly, enlarging the second search area.
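A minimal sketch of this adaptive search-area step, under the assumption of a hypothetical `is_match` predicate that decides whether a candidate pixel matches the first feature point (a real implementation would compare feature descriptors within the second search area):

```python
def find_match(center, is_match, max_radius=64):
    """Scan a square window around `center` (x, y) for a matching
    pixel; double the window half-size when nothing is found, up to
    `max_radius`, mirroring the claim's enlargement of both the first
    and the second search areas."""
    cx, cy = center
    r = 8  # initial half-size of the search window (assumed value)
    while r <= max_radius:
        for y in range(cy - r, cy + r + 1):
            for x in range(cx - r, cx + r + 1):
                if is_match((x, y)):
                    return (x, y)
        r *= 2  # no match: enlarge the search area and retry
    return None  # matching failed even at the largest window
```

For example, `find_match((10, 10), lambda p: p == (12, 9))` returns `(12, 9)` in the initial window, while a target far from the center is only found after the window has been enlarged.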
In some embodiments, obtaining the roll angle according to the position relationship between the first feature point on the first central image and the second feature point on the second central image includes:
connecting the first feature point with the corresponding second feature point to obtain a roll feature line;
and calculating the absolute value of the included angle between the roll feature line and the horizontal direction to obtain the magnitude of the roll angle, with a rightward roll about the longitudinal axis taken as positive.
In some embodiments, obtaining the pitch angle according to the position relationship between the first feature point on the first side image and the second feature point on the second side image includes:
connecting the first feature point with the corresponding second feature point to obtain a pitch feature line;
and calculating the absolute value of the included angle between the pitch feature line and the horizontal direction to obtain the magnitude of the pitch angle, with the upward vertical direction taken as positive.
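The roll and pitch computations can be sketched as follows; the coordinate convention (x rightward, y upward) and the signed-angle form are illustrative assumptions, since the text defines the sign by the roll or pitch direction rather than by a formula:

```python
import math

def feature_line_angle(p1, p2):
    """Angle in degrees between the feature line p1 -> p2 and the
    horizontal axis; p1 is the feature point at t1, p2 its match
    at t2.  Assumes x to the right and y upward."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# Roll comes from the centre-image pair, pitch from the side-image pair.
def attitude_change(center_pair, side_pair):
    return (feature_line_angle(*center_pair),   # roll angle
            feature_line_angle(*side_pair))     # pitch angle
```

A horizontal feature line gives an angle of 0 degrees, i.e. no roll or pitch between the two moments.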
In a third aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium storing a shooting attitude determination method that can be loaded by a processor and executed by any one of the above natural disaster scanning images.
With the natural disaster panoramic helmet and the method for determining the shooting attitude of its images provided by the embodiments of the application, when the scanning route includes places a patrol vehicle cannot pass and a UAV cannot be used because of its payload and battery limits, manual inspection is still needed. Wearing the natural disaster panoramic helmet frees the operator from holding shooting equipment by hand, and the panoramic fisheye camera on the helmet automatically photographs and scans natural disasters. Meanwhile, the shooting attitude is corrected by the method for determining the shooting attitude of images captured during the scan: the panoramic images captured within a preset period are spatially transformed into panoramic images with a preset fixed attitude, which improves the effectiveness of natural disaster scanning work to a certain extent.
Drawings
Fig. 1 is a schematic overall step diagram of a method for determining a shooting posture of a natural disaster scanned image in an embodiment of the present application;
fig. 2 is a schematic diagram of a step of extracting two second feature points on the second center image and the second side image, respectively;
FIG. 3 is a schematic diagram of the steps for obtaining roll angle;
fig. 4 is a schematic diagram illustrating the steps of acquiring the pitch angle.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings. However, it will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In some instances, well known methods, procedures, systems, components, and/or circuits have been described at a higher level without undue detail in order to avoid obscuring aspects of the application with unnecessary detail. It will be apparent to those of ordinary skill in the art that various changes can be made to the embodiments disclosed herein, and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the present application. Thus, the present application is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the scope of the present application as claimed.
Unless defined otherwise, technical or scientific terms referred to herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application, the terms "a," "an," "the," and the like do not denote a limitation of quantity, but rather are used in the singular or the plural. The terms "comprises," "comprising," "has," "having" and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus.
Reference to "a plurality" in this application means two or more. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The terms "system," "engine," "unit," "module," and/or "block" referred to herein are used to distinguish, by level, different components, elements, parts, assemblies, or functions. These terms may be replaced with other expressions capable of achieving the same purpose. In general, reference herein to a "module," "unit," or "block" refers to a collection of logic or software instructions embodied in hardware or firmware. The "modules," "units," or "blocks" described herein may be implemented as software and/or hardware, and in the case of implementation as software, they may be stored in any type of non-volatile computer-readable storage medium or storage device.
In some embodiments, software modules/units/blocks may be compiled and linked into an executable program. It will be appreciated that software modules may be invokable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on a computing device may be provided on a computer-readable storage medium, such as a compact disc, digital video disc, flash drive, magnetic disk, or any other tangible medium, or downloaded as digital (and may be initially stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code may be stored partially or wholly on a storage device of the executing computing device and applied in the operation of the computing device. The software instructions may be embedded in firmware, such as an EPROM. It will also be appreciated that the hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or may be included in programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functions described herein may be implemented as software modules/units/blocks, and may also be represented in hardware or firmware. Generally, the modules/units/blocks described herein may be combined with other modules/units/blocks or, although they are physically organized or stored, may be divided into sub-modules/sub-units/sub-blocks. The description may apply to the system, the engine, or a portion thereof.
It will be understood that when an element, engine, module or block is referred to as being "on," "connected to" or "coupled to" another element, engine, module or block, it can be directly on, connected or coupled to or in communication with the other element, engine, module or block, or intervening elements, engines, modules or blocks may be present, unless the context clearly dictates otherwise. In this application, the term "and/or" may include any one or more of the associated listed items or combinations thereof.
The present application is described in further detail below with reference to figures 1-4.
The embodiment of the application discloses a natural disaster scanning helmet, comprising a helmet body, a panoramic fisheye camera, a magnetometer and a data processing system. The panoramic fisheye camera and the magnetometer are each electrically connected with the data processing system.
The helmet body can be made of a material with a certain rigidity, because during natural disaster scanning, in bad weather such as strong wind or hail, a relatively hard helmet body protects the operator's head to a certain extent.
At the same time, the helmet body can be made light enough that the helmet remains hard yet portable, for example by using aluminum alloy or carbon fiber. A protective pad can be fixed inside the helmet body; when the operator wears the helmet for a long time, the pad relieves the burden of the helmet on the head to a certain extent.
The panoramic fisheye camera can be a common fisheye camera on the market.
The panoramic fisheye camera is used for acquiring a first fisheye image at time t1 and a second fisheye image at time t2.
The magnetometer, also called a geomagnetic sensor or magnetic sensor, can measure the strength and direction of a magnetic field and determine the orientation of the device. Its principle is similar to that of a compass: it can measure the angles between the current device heading and the four directions of north, south, east and west.
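As an illustration of the compass principle described above, a heading can be derived from the horizontal magnetic-field components; `heading_degrees` is a hypothetical helper, and real magnetometer drivers additionally need tilt compensation and calibration:

```python
import math

def heading_degrees(bx: float, by: float) -> float:
    """Compass heading in degrees (0 = magnetic north, clockwise),
    from the horizontal field components bx (north) and by (east)."""
    return math.degrees(math.atan2(by, bx)) % 360.0
```

For example, a field pointing due east yields a heading of 90 degrees.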
The magnetometer is used for acquiring first spatial information of the helmet at time t1 and second spatial information at time t2. The first spatial information comprises orientation information of the helmet at time t1, and the second spatial information comprises orientation information of the helmet at time t2.
The data processing system comprises a memory in which a computer program is stored and a processor arranged to run the computer program so as to perform the method for determining the shooting attitude used in the natural disaster scanning process.
The panoramic fisheye camera and the magnetometer are respectively and electrically connected with the data processing system.
The data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, the first center image is the projection of the first fisheye image along the preset direction, and the first side image is the projection of the first fisheye image along the orthogonal direction of the preset direction.
The data processing system is further used for acquiring a second center image and a second side image according to the second fisheye image, the second center image is the projection of the second fisheye image along the preset direction, and the second side image is the projection of the second fisheye image along the orthogonal direction of the preset direction.
The data processing system is further used for respectively extracting at least two first feature points on the first central image and the first side image, and respectively extracting at least two second feature points matched with the at least two first feature points on the second central image and the second side image.
The data processing system is also used for acquiring a roll angle according to the positional relationship between the first feature point on the first center image and the second feature point on the second center image; acquiring a pitch angle according to the positional relationship between the first feature point on the first side image and the second feature point on the second side image; and determining the variation of the shooting attitude at time t2 relative to time t1 according to the roll angle and the pitch angle.
The helmet further includes a satellite positioning module arranged inside the helmet body; the module can use a GPS device or a BeiDou device, selected according to actual needs.
The satellite positioning module is used for acquiring the latitude and longitude of the helmet at times t1 and t2, which allows the back office to track the helmet and acquire its position in real time.
Furthermore, the panoramic fisheye camera carries a small battery and is externally connected to a replaceable pocket battery compartment. The small battery mainly serves as a UPS: the pocket battery compartment normally powers the helmet, and when it is exhausted the small battery provides short-term endurance, so the compartment can be replaced without interrupting the system's acquisition.
The images gathered by the panoramic fisheye camera can be embedded into an ArcGIS map according to the positioning signal, so the images need to be rectified to a fixed shooting attitude. Unlike the prior art, which obtains the carrier attitude of the panoramic fisheye camera with an acceleration sensor and a magnetometer in order to correct its shooting attitude, the application obtains the carrier attitude information of the panoramic fisheye camera by local matching of feature points between orthogonal projection images.
As shown in fig. 1, an embodiment of the present application further discloses a method for determining a shooting posture of a natural disaster scanning image, which is applied to the natural disaster scanning helmet, and includes:
s100, a first fisheye image at the time t1 and corresponding first spatial information are acquired.
S200, a second fisheye image and corresponding second spatial information at time t2 are obtained.
S300, acquiring a first center image and a first side image corresponding to the first fisheye image.
S400, acquiring a second center image and a second side image corresponding to the second fisheye image.
S500, extracting at least two first feature points from each of the first center image and the first side image, and extracting, according to the first feature points, matched second feature points from the second center image and the second side image.
S600, acquiring a roll angle according to the positional relationship between the first feature point on the first center image and the second feature point on the second center image.
S700, acquiring a pitch angle according to the positional relationship between the first feature point on the first side image and the second feature point on the second side image.
S800, determining the variation of the shooting attitude at time t2 relative to time t1 according to the roll angle and the pitch angle.
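The step sequence S100 to S800 can be sketched as a single function; every helper name below (`project`, `extract_points`, `match_points`, `line_angle`) is a hypothetical stand-in for the projection, extraction, matching and angle steps detailed in the following paragraphs:

```python
def shooting_attitude_change(fisheye_t1, fisheye_t2,
                             project, extract_points,
                             match_points, line_angle):
    # S300/S400: centre and side projections of each fisheye image
    c1, s1 = project(fisheye_t1)
    c2, s2 = project(fisheye_t2)
    # S500: matched feature points on the centre and side images
    pc1 = extract_points(c1)
    pc2 = match_points(pc1, c2)
    ps1 = extract_points(s1)
    ps2 = match_points(ps1, s2)
    # S600/S700: roll from the centre pair, pitch from the side pair
    roll = line_angle(pc1[0], pc2[0])
    pitch = line_angle(ps1[0], ps2[0])
    # S800: attitude change of time t2 relative to time t1
    return roll, pitch
```

The skeleton shows only the data flow; each plugged-in helper corresponds to one of the steps the description expands on below.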
In the embodiments of the present application, time t2 is by default after time t1, and the interval between t1 and t2 is short.
The first spatial information includes the azimuth information and the latitude and longitude at time t1; the azimuth information is acquired by the magnetometer, and the latitude and longitude by the satellite positioning module. Similarly, the second spatial information includes the azimuth information and the latitude and longitude at time t2.
The first center image is the projection of the first fisheye image along the preset orientation, and the first side image is its projection along the orthogonal direction of the preset orientation; both directions are contained in the azimuth information at time t1.
The preset orientation is selected manually after being acquired by the magnetometer. For example, magnetic north may be defined as the preset orientation; the first center image is then the projection of the first fisheye image along magnetic north, and the first side image is its projection along an orthogonal direction of magnetic north, i.e., toward magnetic west or magnetic east. The preset orientation keeps the same direction at all times.
Correspondingly, the second center image is the projection of the second fisheye image along the preset orientation, and the second side image is its projection along the orthogonal direction of the preset orientation; both directions are contained in the azimuth information at time t2. The preset orientation is selected by the same rule, which is not repeated here.
It should be noted that the orientations of the first side image and the second side image are identical; that is, when the preset orientation is magnetic north, the first and second side images both face magnetic west, or both face magnetic east, orthogonal to magnetic north.
Before the fisheye image is projected, it may be corrected, i.e., undistorted, to improve the efficiency of projection. After correction, one fisheye image can be converted into several images that cover the fisheye field of view from different viewing angles; the corrected images are generally considered to undergo only geometric deformation, without distortion.
In essence, the correction of fisheye images is a spatial transformation (a geometric transformation, or geometric operation) of image processing: it only copies points of the image without modifying them. The spatial transformation can be seen as a displacement of image points within the image, i.e., point A(u, v) on the fisheye image is corrected to point A(x, y) on the corrected image.
The mapping method comprises backward mapping and forward mapping.
Backward mapping: using a similarity (interpolation) filling method, the corrected-image coordinates are mapped to the corresponding fisheye-image coordinates (u, v)^T = F(x, y), and the image value at those fisheye coordinates is copied to the corrected image; the value at a sub-pixel coordinate point (on the fisheye image) is obtained for each pixel coordinate point (on the corrected image).
Forward mapping: the corrected-image coordinates are obtained from the fisheye-image coordinates through the mapping transformation (x, y)^T = G(u, v), and the image value at the fisheye coordinates is copied to the corrected coordinates; a sub-pixel coordinate point on the corrected image is computed from each pixel coordinate point on the fisheye image.
In image processing, the mapped inputs are all pixel coordinates (integer coordinates) and the mapped outputs are sub-pixel coordinates (non-integer coordinates).
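As a concrete illustration of backward mapping, the sketch below fills a corrected pinhole image by transforming each of its pixel coordinates into fisheye coordinates and copying the nearest fisheye pixel. The equidistant projection model (r = f·θ), the focal lengths and the nearest-neighbour sampling are illustrative assumptions of this sketch, not prescribed by this application.

```python
import math

def undistort_backward(fish, fish_size, f_fish, out_size, f_pin):
    """Backward mapping: for every pixel (x, y) of the corrected
    (pinhole) image, compute the fisheye coordinates (u, v) = F(x, y)
    and copy the nearest fisheye pixel there.  Assumes an equidistant
    fisheye model r = f * theta."""
    w, h = out_size
    fw, fh = fish_size
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0      # corrected-image centre
    cu, cv = (fw - 1) / 2.0, (fh - 1) / 2.0    # fisheye principal point
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            rp = math.hypot(dx, dy)
            theta = math.atan2(rp, f_pin)      # angle of the viewing ray
            r = f_fish * theta                 # equidistant projection
            if rp > 0:
                u, v = cu + r * dx / rp, cv + r * dy / rp
            else:
                u, v = cu, cv
            ui, vi = int(round(u)), int(round(v))  # nearest-neighbour sample
            if 0 <= ui < fw and 0 <= vi < fh:
                out[y][x] = fish[vi][ui]
    return out
```

A production system would use bilinear interpolation at the sub-pixel fisheye coordinates rather than nearest-neighbour copying, which is exactly why backward mapping is preferred: the interpolation happens on the well-defined source grid.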
At least two first feature points are extracted on each of the first center image and the first side image. It should be noted that when more than two feature points are used, they should not all be collinear, in order to improve accuracy.
Second feature points matched with the first feature points are then extracted on the second center image and the second side image. For example, if the spire of a tower and a window corner of a car on the first center image are taken as first feature points, and a treetop and the tip of a street lamp on the first side image are taken as first feature points, then the same tower spire and car window corner, after the shooting angle has changed over the interval, are extracted from the second center image at time t2 and defined as second feature points, and the treetop and street-lamp tip are extracted from the second side image as second feature points.
The feature points can be extracted by any common method in the related art, such as SIFT, HOG, SURF, ORB, LBP, Haar features, or learned (neural-network) features.
As shown in fig. 2, further, extracting at least two second feature points on the second center image and the second side image respectively further includes:
S510, acquiring a first search area centered on each first feature point, each first feature point corresponding to one first search area.
S520, a second search area is obtained on the second center image and the second side image.
S530, acquiring, in the second search area, a second feature point corresponding to the first feature point in the first search area.
Further, the second search area is set in correspondence with the first search area; in other words, since the first and second center images have the same size and the first and second side images have the same size, the first search area and the second search area occupy the same position on their respective images and would coincide if the images were overlaid.
Setting the first and second search areas further reduces the workload of extracting the first and second feature points. Suppose the first center image is a 1000 × 1000-pixel image: without a search area the whole image must be searched, whereas a first search area centered on a first feature point may be only 50 × 50 or 100 × 100 pixels, which greatly reduces the amount of computation in feature-point extraction.
After the first search area is obtained, because the interval between t1 and t2 is short, the second fisheye image changes little relative to the first fisheye image; and because the second search area is set opposite the first search area, the feature points inside the second search area on the second center image and the second side image correspond to the first feature points, being obtained from them by some amount of translation, rotation and roll. These points are defined as the second feature points.
Further, acquiring, in the second search area, the second feature point corresponding to the first feature point in the first search area further includes: when no second feature point corresponding to the first feature point in the first search area can be acquired in the second search area, increasing the size of the first search area and, correspondingly, increasing the size of the second search area.
The size of a search area refers to the size of the closed region enclosed by its boundary polygon: the larger the region, the more pixels it contains and the larger the extraction range of feature points. The region can be given any shape, and is generally set to be rectangular.
Since the position and size of the second search area correspond to the position and size of the first search area, when the first search area increases, the second search area correspondingly increases.
When the helmet moves over a large range between t1 and t2, so that the second feature point corresponding to the first feature point cannot be extracted from the second search area corresponding to the first search area, the search areas are enlarged appropriately; the second feature point can then be extracted from the enlarged second search area at a moderate computational cost.
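Steps S510 to S530, including the enlargement of the search areas when no match is found, can be sketched as follows. The sum-of-absolute-differences patch comparison, the window sizes and the zero tolerance are toy assumptions chosen for illustration; a practical system would match descriptors such as ORB or SIFT inside the windows.

```python
def patch(img, x, y, r):
    """r-radius square patch around (x, y), flattened row-major."""
    return [img[j][i] for j in range(y - r, y + r + 1)
                      for i in range(x - r, x + r + 1)]

def match_in_window(img1, img2, p1, half=2, win=3, max_win=8, tol=0):
    """Find the point in img2 whose neighbourhood best matches the
    neighbourhood of p1 in img1 (sum of absolute differences).  The
    search window is centred on p1 -- the second search area sits at
    the same position as the first -- and is doubled when no acceptable
    match is found, mirroring the enlargement step above."""
    x1, y1 = p1
    ref = patch(img1, x1, y1, half)
    h, w = len(img2), len(img2[0])
    while win <= max_win:
        best, best_pt = None, None
        for y in range(max(half, y1 - win), min(h - half, y1 + win + 1)):
            for x in range(max(half, x1 - win), min(w - half, x1 + win + 1)):
                cand = patch(img2, x, y, half)
                sad = sum(abs(a - b) for a, b in zip(ref, cand))
                if best is None or sad < best:
                    best, best_pt = sad, (x, y)
        if best is not None and best <= tol:
            return best_pt           # second feature point found
        win *= 2                     # no match: grow both search areas
    return None
```

Because the candidate loop runs only over the window rather than the full image, the cost per feature point is proportional to the window area, which is the workload reduction described above.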
As shown in fig. 3, obtaining the roll angle according to the position relationship between the first feature point on the first center image and the second feature point on the second center image includes the following steps:
S610, connecting each first feature point with its matched second feature point to obtain a roll characteristic line.
S620, calculating the absolute value of the angle between the roll characteristic line and the horizontal direction to obtain the absolute value of the roll angle, a rightward roll about the longitudinal axis being taken as positive.
Connecting the mutually matched first and second feature points yields the roll characteristic line, which is the motion trajectory of the feature point between times t1 and t2. The roll angle is the angle between the transverse axis of the carrier (the helmet) and the horizontal line. Since the roll characteristic line, obtained from the displacement of the horizontal and vertical coordinates of the feature point between t1 and t2, represents the final orientation of the carrier's transverse axis on the center image, the absolute value of the angle between the roll characteristic line and the horizontal line is the absolute value of the roll angle.
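The angle computation of S610 and S620 can be illustrated in a few lines. Measuring the slope of the characteristic line with `atan2` and flipping the sign because image y grows downward are assumptions of this sketch; the same formula applies to the pitch characteristic line of S710 and S720, with the vertically upward direction taken as positive.

```python
import math

def line_angle(p1, p2):
    """Signed angle, in degrees, between the characteristic line
    p1 -> p2 and the horizontal direction.  Image y grows downward,
    so the sign is flipped to make an upward-moving line positive;
    the mapping of this sign to 'rightward roll positive' is an
    assumed convention for illustration."""
    (x1, y1), (x2, y2) = p1, p2
    return -math.degrees(math.atan2(y2 - y1, x2 - x1))
```

Taking the absolute value of the returned angle gives the absolute roll (or pitch) angle described in the steps above.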
As shown in fig. 4, acquiring a pitch angle from a positional relationship between a first feature point on a first side image and a second feature point on a second side image includes the steps of:
S710, connecting each first feature point with its corresponding second feature point to obtain a pitch characteristic line.
S720, calculating the absolute value of the angle between the pitch characteristic line and the horizontal direction to obtain the absolute value of the pitch angle, the vertically upward direction being taken as positive.
Connecting the mutually matched first and second feature points yields the pitch characteristic line, which is the motion trajectory of the feature point between times t1 and t2. The pitch angle is the angle between the longitudinal axis of the carrier (the helmet) and the horizontal line. Since the pitch characteristic line, obtained from the displacement of the horizontal and vertical coordinates of the feature point between t1 and t2, represents the final orientation of the carrier's longitudinal axis on the side image, the absolute value of the angle between the pitch characteristic line and the horizontal direction is the absolute value of the pitch angle.
The change of the carrier's attitude at time t2 relative to time t1 can thus be known from the roll angle and the pitch angle. Once the attitude change of the panoramic fisheye camera from t1 to t2 is obtained, the second fisheye image captured at t2 can be spatially transformed to obtain a panoramic image in a preset fixed attitude.
Further, according to the longitude-latitude information and the orientation information, the corrected panoramic image is embedded into an ArcGIS map to form natural-disaster scanning data.
The embodiment of the application also discloses a computer-readable storage medium storing a program that can be loaded by a processor to execute the above method for determining the shooting posture of a natural disaster scanning image.
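A minimal sketch of this final spatial transformation: undoing the measured roll and pitch on a viewing-direction vector, so that rays of the second fisheye image can be re-projected into the fixed reference attitude. The axis assignment (roll about the longitudinal x-axis, pitch about the transverse y-axis) and the rotation order are assumed conventions, not specified by this application.

```python
import math

def attitude_correction(vec, roll_deg, pitch_deg):
    """Rotate a viewing-direction vector (x, y, z) by the inverse of
    the measured roll (about the longitudinal x-axis) and pitch (about
    the transverse y-axis), mapping directions seen at time t2 back
    into the fixed attitude of time t1."""
    r, p = math.radians(-roll_deg), math.radians(-pitch_deg)
    x, y, z = vec
    # undo roll: rotation about the x-axis
    y, z = (y * math.cos(r) - z * math.sin(r),
            y * math.sin(r) + z * math.cos(r))
    # undo pitch: rotation about the y-axis
    x, z = (x * math.cos(p) + z * math.sin(p),
            -x * math.sin(p) + z * math.cos(p))
    return (x, y, z)
```

Applying this correction to every pixel ray of the second fisheye image, then re-sampling with the backward mapping described earlier, yields the panorama in the preset fixed attitude.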
The above are preferred embodiments of the present application, and the scope of protection of the present application is not limited thereto, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (8)

1. A natural disaster scanning helmet, characterized in that the helmet comprises: a helmet body, a panoramic fisheye camera arranged at the top of the helmet body, and a magnetometer and a data processing system arranged in the helmet body; the panoramic fisheye camera and the magnetometer are respectively electrically connected with the data processing system, wherein,
the panoramic fisheye camera is used for acquiring a first fisheye image at the time t1 and a second fisheye image at the time t 2;
the magnetometer is used for acquiring first spatial information of the helmet at a time t1 and second spatial information of the helmet at a time t2, wherein the first spatial information includes orientation information of the helmet at a time t1, and the second spatial information includes orientation information of the helmet at a time t 2;
the data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, wherein the first center image is the projection of the first fisheye image along a preset direction, and the first side image is the projection of the first fisheye image along the orthogonal direction of the preset direction;
the data processing system is further configured to obtain a second center image and a second side image according to the second fisheye image, where the second center image is a projection of the second fisheye image along a preset direction, and the second side image is a projection of the second fisheye image along an orthogonal direction of the preset direction;
the data processing system is further used for extracting at least two first feature points on the first central image and the first side image respectively; extracting at least two second feature points respectively matched with the at least two first feature points on the second center image and the second side image respectively;
the data processing system is further used for acquiring a roll angle according to the position relation between a first characteristic point on the first central image and a second characteristic point on the second central image; acquiring a pitch angle according to the position relation between a first characteristic point on the first side surface image and a second characteristic point on the second side surface image; and determining the variation of the shooting attitude at the time t2 relative to the time t1 according to the roll angle and the pitch angle.
2. A method for determining a shooting posture of a natural disaster scanning image, which is applied to the natural disaster scanning helmet according to claim 1, comprising:
acquiring a first fisheye image at a time t1 and corresponding first spatial information, wherein the first spatial information comprises azimuth information of the time t 1;
acquiring a second fisheye image at the time t2 and corresponding second spatial information, wherein the second spatial information comprises azimuth information of the time t 2;
acquiring a first center image and a first side image corresponding to a first fisheye image, wherein the first center image is a projection of the first fisheye image along a preset direction, the first side image is a projection of the first fisheye image along an orthogonal direction of the preset direction, and the preset direction and the orthogonal direction of the preset direction are both contained in direction information of the t1 moment;
acquiring a second center image and a second side image corresponding to a second fisheye image, wherein the second center image is a projection of the second fisheye image along a preset direction, the second side image is a projection of the second fisheye image along an orthogonal direction of the preset direction, the first side image and the second side image are located at the same direction, and the preset direction and the orthogonal direction of the preset direction are both contained in direction information of the t 2;
extracting at least two first feature points on the first central image and the first side image respectively, and extracting at least two second feature points on the second central image and the second side image respectively according to the first feature points, wherein the first feature points and the second feature points are respectively matched with each other;
acquiring a roll angle according to the position relation between a first characteristic point on the first central image and a second characteristic point on the second central image;
acquiring a pitch angle according to the position relation between a first characteristic point on the first side surface image and a second characteristic point on the second side surface image;
and determining the variation of the shooting attitude at the t2 moment relative to the t1 moment according to the roll angle and the pitch angle.
3. The method for determining the shooting posture of the natural disaster scanning image as claimed in claim 2, wherein: the preset orientation is determined by the magnetometer and remains the same at all times.
4. The method for determining the shooting posture of the natural disaster scanning image as claimed in claim 2, wherein: extracting at least two second feature points on the second center image and the second side image respectively, including:
acquiring a first search area by taking the first feature points as centers, wherein each first feature point corresponds to one first search area;
acquiring a second search area on the second center image and the second side image, wherein the second search area is arranged corresponding to the first search area;
and acquiring second feature points corresponding to the first feature points in the first search area in the second search area.
5. The method for determining the shooting posture of the natural disaster scanning image as claimed in claim 4, wherein: acquiring a second feature point corresponding to the first feature point in the first search area in the second search area, and further comprising the following steps:
and when a second feature point corresponding to the first feature point in the first search area cannot be acquired in the second search area, increasing the size of the search area of the first search area, and correspondingly, increasing the size of the search area of the second search area.
6. The method for determining the shooting posture of the natural disaster scanning image as claimed in claim 2, wherein: obtaining a roll angle according to a position relation between a first feature point on the first central image and a second feature point on the second central image, comprising the following steps:
connecting the first characteristic points with corresponding second characteristic points to obtain a rolling characteristic line;
and calculating the absolute value of the included angle between the roll characteristic line and the horizontal direction to obtain the absolute value of the roll angle, and taking the roll characteristic line to roll right along the longitudinal axis as positive.
7. The method for determining the shooting posture of the natural disaster scanning image as claimed in claim 2, wherein: acquiring a pitch angle according to the position relation between a first characteristic point on the first side image and a second characteristic point on the second side image, comprising the following steps:
connecting the first characteristic points with the corresponding second characteristic points to obtain a pitching characteristic line;
and calculating the absolute value of the included angle between the pitching characteristic line and the horizontal direction to obtain the absolute value of the pitching angle, and taking the direction along the vertical direction as positive.
8. A computer-readable storage medium, characterized in that: it stores a program that can be loaded by a processor to execute the method for determining the shooting posture of a natural disaster scanning image according to any one of claims 2 to 7.
CN202210232180.0A 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet Active CN114615410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210232180.0A CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210232180.0A CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Publications (2)

Publication Number Publication Date
CN114615410A true CN114615410A (en) 2022-06-10
CN114615410B CN114615410B (en) 2023-05-02

Family

ID=81861875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210232180.0A Active CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Country Status (1)

Country Link
CN (1) CN114615410B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103118230A (en) * 2013-02-28 2013-05-22 腾讯科技(深圳)有限公司 Panorama acquisition method, device and system
JP2014062932A (en) * 2012-09-19 2014-04-10 Sharp Corp Attitude control device, operation device, attitude control system, control method, control program, and recording medium
JP2016005263A (en) * 2014-06-19 2016-01-12 Kddi株式会社 Image generation system, terminal, program, and method that generate panoramic image from plurality of photographed images
CN108052058A (en) * 2018-01-31 2018-05-18 广州市建筑科学研究院有限公司 Construction project site safety of the one kind based on " internet+" patrols business flow system of running affairs
CN109362234A (en) * 2016-04-28 2019-02-19 深圳市大疆创新科技有限公司 System and method for obtaining Spherical Panorama Image
CN109660723A (en) * 2018-12-18 2019-04-19 维沃移动通信有限公司 A kind of panorama shooting method and device
CN109766010A (en) * 2019-01-14 2019-05-17 青岛一舍科技有限公司 A kind of unmanned submersible's control method based on head pose control
CN209862435U (en) * 2019-01-30 2019-12-31 陈华 Intelligent video helmet
CN113124883A (en) * 2021-03-01 2021-07-16 浙江国自机器人技术股份有限公司 Off-line punctuation method based on 3D panoramic camera
CN113273172A (en) * 2020-08-12 2021-08-17 深圳市大疆创新科技有限公司 Panorama shooting method, device and system and computer readable storage medium
WO2021180932A2 (en) * 2020-03-13 2021-09-16 Navvis Gmbh Method and device for precisely selecting a spatial coordinate by means of a digital image
CN114004890A (en) * 2021-11-04 2022-02-01 北京房江湖科技有限公司 Attitude determination method and apparatus, electronic device, and storage medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014062932A (en) * 2012-09-19 2014-04-10 Sharp Corp Attitude control device, operation device, attitude control system, control method, control program, and recording medium
CN103118230A (en) * 2013-02-28 2013-05-22 腾讯科技(深圳)有限公司 Panorama acquisition method, device and system
JP2016005263A (en) * 2014-06-19 2016-01-12 Kddi株式会社 Image generation system, terminal, program, and method that generate panoramic image from plurality of photographed images
CN109362234A (en) * 2016-04-28 2019-02-19 深圳市大疆创新科技有限公司 System and method for obtaining Spherical Panorama Image
CN108052058A (en) * 2018-01-31 2018-05-18 广州市建筑科学研究院有限公司 Construction project site safety of the one kind based on " internet+" patrols business flow system of running affairs
CN109660723A (en) * 2018-12-18 2019-04-19 维沃移动通信有限公司 A kind of panorama shooting method and device
CN109766010A (en) * 2019-01-14 2019-05-17 青岛一舍科技有限公司 A kind of unmanned submersible's control method based on head pose control
CN209862435U (en) * 2019-01-30 2019-12-31 陈华 Intelligent video helmet
WO2021180932A2 (en) * 2020-03-13 2021-09-16 Navvis Gmbh Method and device for precisely selecting a spatial coordinate by means of a digital image
CN113273172A (en) * 2020-08-12 2021-08-17 深圳市大疆创新科技有限公司 Panorama shooting method, device and system and computer readable storage medium
WO2022032538A1 (en) * 2020-08-12 2022-02-17 深圳市大疆创新科技有限公司 Panorama photographing method, apparatus and system, and computer-readable storage medium
CN113124883A (en) * 2021-03-01 2021-07-16 浙江国自机器人技术股份有限公司 Off-line punctuation method based on 3D panoramic camera
CN114004890A (en) * 2021-11-04 2022-02-01 北京房江湖科技有限公司 Attitude determination method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114615410B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN109978755B (en) Panoramic image synthesis method, device, equipment and storage medium
US10488198B2 (en) Surveying system
US10594941B2 (en) Method and device of image processing and camera
US20200092475A1 (en) Apparatus and methods for image alignment
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN103456171B (en) A kind of based on fish-eye vehicle flow detection system, method and method for correcting image
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN111242987B (en) Target tracking method and device, electronic equipment and storage medium
CN116385504A (en) Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration
CN116030194A (en) Air-ground collaborative live-action three-dimensional modeling optimization method based on target detection avoidance
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN117036666B (en) Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN114615410B (en) Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet
CN107767366A (en) A kind of transmission line of electricity approximating method and device
CN116228860A (en) Target geographic position prediction method, device, equipment and storage medium
CN113850905A (en) Panoramic image real-time splicing method for circumferential scanning type photoelectric early warning system
Mirisola et al. Trajectory recovery and 3d mapping from rotation-compensated imagery for an airship
Han et al. A polarized light compass aided place recognition system
CN112215048A (en) 3D target detection method and device and computer readable storage medium
CN116718165B (en) Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method
CN117036932A (en) Near-distance real-time pose monitoring method for offshore target
Jin et al. A study on near-real-time geometric correction system of drones image
CN116817929A (en) Method and system for simultaneously positioning multiple targets on ground plane by unmanned aerial vehicle
CN114454137A (en) Steel structure damage intelligent inspection method and system based on binocular vision and robot
CN115841636A (en) Video frame processing method for photovoltaic panel defect detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant