CN114615410B - Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet - Google Patents

Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Info

Publication number
CN114615410B
CN114615410B
Authority
CN
China
Prior art keywords: image, helmet, acquiring, fisheye, characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210232180.0A
Other languages
Chinese (zh)
Other versions
CN114615410A (en)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202210232180.0A
Publication of CN114615410A
Application granted
Publication of CN114615410B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The application relates to the technical field of image shooting, and in particular to a natural disaster panoramic helmet, a method for determining the shooting attitude of its images, and a computer storage medium. The natural disaster panoramic helmet comprises a helmet body, a panoramic fisheye camera arranged at the top of the helmet body, and a magnetometer and a data processing system arranged in the helmet body. The panoramic fisheye camera acquires fisheye images at different moments; the magnetometer acquires the spatial information of the helmet at different moments; the data processing system extracts matched feature points from the images acquired at different moments, obtains the roll angle and pitch angle from the positional relations of the matched feature points, and determines the variation of the shooting attitude from the roll angle and pitch angle. The method allows an operator to conveniently scan natural resources in a specific environment while shooting natural disasters in a fixed attitude for scanning and recording.

Description

Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet
Technical Field
The application relates to the technical field of shooting images, in particular to a natural disaster panoramic helmet and a shooting gesture determining method of the natural disaster panoramic helmet.
Background
Natural disaster scanning uses various measuring means to observe and monitor the dynamic changes of the relevant factors throughout the gestation, occurrence, development and damage stages of a natural disaster; a natural disaster scanning tool monitors the state of a target area before, during and after the disaster.
In the related art, natural disaster scanning mainly relies on manual inspection or unmanned aerial vehicle (UAV) inspection. Manual inspection requires carrying equipment such as a tablet computer and a GPS receiver; it is inefficient, inspection points are easily missed, and missed places are difficult to verify afterwards. UAV inspection requires mounting various sensing and collecting devices on the aircraft; limited by the maximum payload, it offers short endurance at high cost and cannot perform long-duration, wide-area natural disaster inspection of natural regions. Manned or unmanned vehicle-mounted inspection systems are also widely used, but they are constrained by road conditions and are unsuitable for natural regions with poor roads, such as mountain areas and forest land.
Disclosure of Invention
In order to facilitate an operator in scanning natural resources in a specific environment, while shooting natural disasters in a fixed attitude for scanning and recording, the application provides a natural disaster panoramic helmet and a method for determining the shooting attitude of its images.
In a first aspect, the present application provides a natural disaster scanning helmet, which adopts the following technical scheme:
a natural disaster scanning helmet, the helmet comprising: the helmet comprises a helmet body, a panoramic fisheye camera arranged at the top of the helmet body, and a magnetometer and a data processing system arranged in the helmet body; the panoramic fisheye camera and the magnetometer are respectively and electrically connected with the data processing system, wherein,
the panoramic fisheye camera is used for acquiring a first fisheye image at time t1 and a second fisheye image at time t2;
the magnetometer is configured to obtain first spatial information of the helmet at time t1 and second spatial information of the helmet at time t2, where the first spatial information includes azimuth information of the helmet at time t1, and the second spatial information includes azimuth information of the helmet at time t2;
the data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, wherein the first center image is the projection of the first fisheye image along a preset azimuth, and the first side image is the projection of the first fisheye image along an orthogonal direction of the preset azimuth;
the data processing system is further configured to obtain a second center image and a second side image according to the second fisheye image, where the second center image is a projection of the second fisheye image along a preset azimuth, and the second side image is a projection of the second fisheye image along an orthogonal direction of the preset azimuth;
the data processing system is further used for respectively extracting at least two first characteristic points on the first center image and the first side image; extracting at least two second feature points respectively matched with the at least two first feature points on the second center image and the second side image;
the data processing system is further used for acquiring a roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image; acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image; and determining the variation of the shooting posture at the moment t2 relative to the moment t1 according to the roll angle and the pitch angle.
In a second aspect, the present application provides a method for determining a shooting pose of a natural disaster scanning image, which adopts the following technical scheme:
a shooting gesture determining method of a natural disaster scanning image is applied to a natural disaster scanning helmet and comprises the following steps:
acquiring a first fisheye image at time t1 and corresponding first spatial information, wherein the first spatial information comprises azimuth information at time t1;
acquiring a second fisheye image at time t2 and corresponding second spatial information, wherein the second spatial information comprises azimuth information at time t2;
acquiring a first center image and a first side image corresponding to the first fisheye image, wherein the first center image is a projection of the first fisheye image along a preset azimuth, the first side image is a projection of the first fisheye image along the orthogonal direction of the preset azimuth, and the preset azimuth and its orthogonal direction are both contained in the azimuth information at time t1;
acquiring a second center image and a second side image corresponding to the second fisheye image, wherein the second center image is a projection of the second fisheye image along the preset azimuth, the second side image is a projection of the second fisheye image along the orthogonal direction of the preset azimuth, the first side image and the second side image are oriented in the same azimuth, and the preset azimuth and its orthogonal direction are both contained in the azimuth information at time t2;
respectively extracting at least two first characteristic points on the first center image and the first side image, and respectively extracting, according to the first characteristic points, second characteristic points on the second center image and the second side image, wherein the first characteristic points and the second characteristic points are matched with each other;
acquiring a roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image;
acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image;
and determining the variation of the shooting posture at the moment t2 relative to the moment t1 according to the roll angle and the pitch angle.
In some of these embodiments, the preset azimuth is determined by the magnetometer, and the preset azimuth remains the same azimuth at all times.
In some of these embodiments, extracting two second feature points on the second center image and the second side image, respectively, includes:
acquiring a first search area by taking the first characteristic points as the center, wherein each first characteristic point corresponds to one first search area;
acquiring a second search area on the second center image and the second side image, wherein the second search area is arranged corresponding to the first search area;
and acquiring second characteristic points corresponding to the first characteristic points in the first search area in the second search area.
In some of these embodiments, obtaining, in the second search area, a second feature point corresponding to the first feature point in the first search area, further includes the steps of:
and when the second characteristic points corresponding to the first characteristic points in the first search area cannot be acquired in the second search area, increasing the search area size of the first search area, and correspondingly, increasing the search area size of the second search area.
In some of these embodiments, the step of acquiring the roll angle according to the positional relationship between the first feature point on the first center image and the second feature point on the second center image includes the steps of:
connecting the first characteristic points with the corresponding second characteristic points to obtain a roll characteristic line;
and calculating the absolute value of the included angle between the roll characteristic line and the horizontal direction to obtain the absolute value of the roll angle, with a rightward roll of the roll characteristic line about the longitudinal axis taken as positive.
In some embodiments, the method for acquiring the pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image comprises the following steps:
connecting the first characteristic points with the corresponding second characteristic points to obtain pitching characteristic lines;
and calculating the absolute value of the included angle between the pitch characteristic line and the horizontal direction to obtain the absolute value of the pitch angle, with the upward direction along the vertical taken as positive.
In a third aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium storing a shooting attitude determination method capable of being loaded by a processor and executing any one of the above natural disaster scanning images.
With the natural disaster panoramic helmet and the method for determining the shooting attitude of its images, places that patrol vehicles cannot reach, and where UAVs cannot be used because of payload and battery limits, can still be inspected manually: the operator searches while wearing the natural disaster panoramic helmet, which reduces the inconvenience of having to hold shooting equipment, while the panoramic fisheye camera on the helmet automatically shoots and scans the natural disaster. Meanwhile, the shooting attitude is corrected by the attitude determination method for the images shot during scanning: the panoramic images shot within a given period are spatially transformed into panoramic images shot in a preset fixed attitude, which improves the effectiveness of natural disaster scanning to a certain extent.
Drawings
Fig. 1 is a schematic diagram of overall steps of a shooting gesture determining method of a natural disaster scanning image in an embodiment of the present application;
FIG. 2 is a schematic illustration of steps for extracting two second feature points on a second center image and a second side image, respectively;
FIG. 3 is a schematic diagram of the steps for acquiring roll angle;
fig. 4 is a schematic diagram of a step of acquiring a pitch angle.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and embodiments, in which numerous specific details are set forth to aid understanding. However, it will be apparent to one of ordinary skill in the art that the present application may be practiced without these details. In some instances, well-known methods, procedures, systems, components, and/or circuits are described at a high level so as not to obscure aspects of the present application with unnecessary detail. It will be apparent to those having ordinary skill in the art that various changes can be made to the embodiments disclosed herein, and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the present application. Thus, the present application is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the scope claimed herein.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the terms "a," "an," "the," "these," and the like are not intended to be limiting in number, but rather are singular or plural. The terms "comprising," "including," "having," and any variations thereof, as used in the present application, are intended to cover a non-exclusive inclusion; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the list of steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus.
Reference to "a plurality" in this application means two or more. Typically, the character "/" indicates that the associated object is an "or" relationship. The terms "first," "second," "third," and the like, as referred to in this application, merely distinguish similar objects and do not represent a particular ordering of objects.
The terms "system," "engine," "unit," "module," and/or "block" as used herein are one method for differentiating between different levels of different components, elements, parts, components, assemblies, or functions by level. These terms may be replaced by other expressions which achieve the same purpose. In general, reference herein to "a module," "unit," or "block" refers to a collection of logic or software instructions embodied in hardware or firmware. The "modules," "units," or "blocks" described herein may be implemented as software and/or hardware, and where implemented as software, they may be stored in any type of non-volatile computer-readable storage medium or storage device.
In some embodiments, software modules/units/blocks may be compiled and linked into an executable program. It will be appreciated that a software module may be callable from other modules/units/blocks or from itself, and/or may be invoked in response to a detected event or interrupt. The software modules/units/blocks configured to execute on the computing device may be provided on a computer readable storage medium, such as an optical disk, digital video disk, flash drive, magnetic disk, or any other tangible medium, or as a digital download (and may be initially stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code may be stored in part or in whole on a storage device of the executing computing device and applied in the operation of the computing device. The software instructions may be embedded in firmware, such as EPROM. It will also be appreciated that the hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or may be included in programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functions described herein may be implemented as software modules/units/blocks, and may also be represented in hardware or firmware. In general, the modules/units/blocks described herein may be combined with other modules/units/blocks or, although physically organized or stored, may be divided into sub-modules/sub-units/sub-blocks. The description may apply to a system, an engine, or a portion thereof.
It will be understood that when an element, engine, module or block is referred to as being "on," "connected to" or "coupled to" another element, engine, module or block, it can be directly on, connected or coupled to or in communication with the other element, engine, module or block, or intervening elements, engines, modules or blocks may be present unless the context clearly indicates otherwise. In this application, the term "and/or" may include any one or more of the associated listed items or combinations thereof.
The present application is described in further detail below in conjunction with figures 1-4.
The embodiment of the application discloses a natural disaster scanning helmet, which comprises a helmet body, a panoramic fisheye camera, a magnetometer and a data processing system. The panoramic fisheye camera and the magnetometer are respectively and electrically connected with the data processing system.
The helmet body can be made of a material with sufficient rigidity, so that it can protect the operator's head to a certain extent if severe weather such as strong wind or hail is encountered during natural disaster scanning.
At the same time, the helmet body can be light enough that the helmet is both hard and reasonably portable, for example by using aluminum alloy or carbon fiber. A protective pad can be fixedly connected inside the helmet body; when the operator wears the helmet for a long time, the pad relieves the burden of the helmet on the head to a certain extent.
The panoramic fisheye camera can be a common fisheye camera in the market.
The panoramic fisheye camera is used for acquiring a first fisheye image at time t1 and a second fisheye image at time t2.
A magnetometer, also called a geomagnetic or magnetic sensor, can be used to measure the intensity and direction of a magnetic field and to locate the orientation of a device; it can measure the angles between the current device and the four cardinal directions of east, south, west and north. Its principle is similar to that of a compass.
The magnetometer is used for acquiring first spatial information of the helmet at time t1 and second spatial information of the helmet at time t2, wherein the first spatial information comprises the azimuth information of the helmet at time t1, and the second spatial information comprises the azimuth information of the helmet at time t2.
The data processing system comprises a memory storing a computer program and a processor arranged to run the computer program to perform the method used in the natural disaster scanning process.
The data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, wherein the first center image is the projection of the first fisheye image along the preset azimuth, and the first side image is the projection of the first fisheye image along the orthogonal direction of the preset azimuth.
The data processing system is further used for acquiring a second center image and a second side image according to the second fisheye image, wherein the second center image is the projection of the second fisheye image along the preset azimuth, and the second side image is the projection of the second fisheye image along the orthogonal direction of the preset azimuth.
The data processing system is further used for respectively extracting at least two first characteristic points on the first center image and the first side image, and respectively extracting at least two second characteristic points matched with the at least two first characteristic points on the second center image and the second side image.
The data processing system is also used for acquiring a roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image; acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image; and determining the variation of the shooting attitude at time t2 relative to time t1 according to the roll angle and the pitch angle.
Further, the helmet also comprises a satellite positioning module arranged inside the helmet body; a GPS or BeiDou positioning device can be selected according to actual needs.
The satellite positioning module is used for acquiring the longitude and latitude of the helmet at times t1 and t2, making it convenient for the background to track and locate the helmet and obtain its position in real time.
Furthermore, a small battery is mounted on the panoramic fisheye camera and externally connected to a replaceable pocket battery compartment. The small battery mainly acts as a UPS: the pocket battery compartment normally powers the helmet, and when it is exhausted the small battery provides short-term endurance, so the compartment can be replaced conveniently without interrupting acquisition.
The images collected by the panoramic fisheye camera are embedded into an ArcGIS map according to the positioning signals, so the images must first be corrected to a fixed shooting attitude. Unlike the prior art, which corrects the shooting attitude of a panoramic fisheye camera by obtaining the carrier attitude from an acceleration sensor and a magnetometer, this application obtains the carrier attitude information of the panoramic fisheye camera by locally matching feature points of orthogonal projection images.
As shown in fig. 1, the embodiment of the application also discloses a shooting gesture determining method of the natural disaster scanning image, which is applied to the natural disaster scanning helmet, and includes:
s100, acquiring a first fisheye image at a time t1 and corresponding first spatial information.
S200, acquiring a second fisheye image at the time t2 and corresponding second spatial information.
S300, a first center image and a first side image corresponding to the first fisheye image are acquired.
S400, a second center image and a second side image corresponding to the second fisheye image are acquired.
S500, respectively extracting at least two first characteristic points on the first center image and the first side image, respectively extracting two second characteristic points on the second center image and the second side image according to the first characteristic points, and respectively matching the first characteristic points and the second characteristic points.
S600, acquiring the roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image.
And S700, acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image.
S800, determining the change amount of the shooting attitude at the moment t2 relative to the moment t1 according to the roll angle and the pitch angle.
In this embodiment, time t2 is by default later than time t1, and the time interval between t1 and t2 is short.
The first spatial information comprises azimuth information at time t1 and longitude and latitude information at time t1. The azimuth information is acquired by the magnetometer, and the longitude and latitude information is acquired by the satellite positioning module. Similarly, the second spatial information includes azimuth information at time t2 and longitude and latitude information at time t2.
The first center image is the projection of the first fisheye image along the preset azimuth, and the first side image is the projection of the first fisheye image along the orthogonal direction of the preset azimuth. The preset azimuth and its orthogonal direction are contained in the azimuth information at time t1.
The preset azimuth is selected manually after being acquired by the magnetometer. For example, magnetic north can be defined as the preset azimuth; the first center image is then the projection of the first fisheye image along magnetic north, and the first side image is the projection of the first fisheye image along the direction orthogonal to magnetic north, i.e. magnetic west or magnetic east. The preset azimuth is kept the same at all times.
Correspondingly, the second center image is the projection of the second fisheye image along the preset azimuth, and the second side image is the projection of the second fisheye image along the orthogonal direction of the preset azimuth. The preset azimuth and its orthogonal direction are contained in the azimuth information at time t2. The preset azimuth is selected by the same rule as above, which is not repeated here.
It should be noted that the orientations of the first side image and the second side image are identical; that is, when the preset azimuth is determined to be magnetic north, the first side image and the second side image are both oriented toward magnetic west or both toward magnetic east, orthogonal to magnetic north.
Before the fisheye image is projected, it may be corrected, i.e. de-distorted, to improve the effectiveness of the projection. After correction, one fisheye image can be converted into several images that together cover the field of view of the fisheye image from different viewing angles; the corrected images are generally considered to contain only geometric (perspective) deformation and no fisheye distortion.
Correcting a fisheye image is essentially a spatial transformation (geometric transformation, geometric operation) of image processing: points on the image are only copied, not modified. The spatial transformation can be seen as the movement of image points within the image, i.e. a point a(u, v) on the fisheye image is corrected to a point A(x, y) on the corrected image.
The mapping can be backward or forward.
Backward mapping: for each integer pixel (x, y) of the corrected image, the corresponding fisheye image coordinates are obtained by the mapping (u, v)^T = F(x, y), and the image component at those fisheye coordinates is copied, in a fill-like manner, to the corrected image coordinates; that is, the sub-pixel coordinate point on the fisheye image is found from the pixel coordinate point on the corrected image.
Forward mapping: the corrected image coordinates are obtained from the fisheye image coordinates by the mapping (x, y)^T = G(u, v), and the image component at the fisheye coordinates is copied to the corrected image coordinates; that is, the pixel coordinate point is calculated from the sub-pixel coordinate point.
In image processing, the inputs to the mapping are pixel coordinates (integer coordinates) and the outputs are sub-pixel coordinates (non-integer coordinates).
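As a concrete illustration of backward mapping, a corrected center or side image can be rendered with OpenCV's fisheye module. This is a minimal sketch only: the intrinsic matrix, distortion coefficients and file names are placeholder assumptions from a hypothetical calibration, and the patent does not tie the method to OpenCV or to any particular camera model.

```python
import cv2
import numpy as np

# Placeholder intrinsics K and fisheye distortion coefficients D (k1..k4),
# assumed to come from a prior calibration.
K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 640.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.001, 0.0])

fisheye_img = cv2.imread("fisheye_t1.png")  # example file name
h, w = fisheye_img.shape[:2]

# R orients the virtual pinhole view: identity for the center image
# (preset azimuth), a 90-degree yaw for the orthogonal side image.
R_center = np.eye(3)
yaw = np.pi / 2
R_side = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])

for name, R in (("center_t1.png", R_center), ("side_t1.png", R_side)):
    P = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), R)
    # map1/map2 hold, for each integer pixel (x, y) of the corrected
    # image, the sub-pixel fisheye coordinates (u, v) to copy from:
    # the backward mapping (u, v)^T = F(x, y).
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, R, P, (w, h), cv2.CV_32FC1)
    cv2.imwrite(name, cv2.remap(fisheye_img, map1, map2, cv2.INTER_LINEAR))
```

Backward mapping is the natural choice here because iterating over the corrected image's integer pixels leaves no holes, whereas forward mapping can leave unfilled pixels in the output.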
At least two first feature points are extracted from each of the first center image and the first side image. It should be noted that when more than two feature points are used, the feature points should not be collinear with one another, in order to improve accuracy.
Second feature points matching the first feature points are extracted on the second center image and the second side image. For example, if the tip of a tall tower and the window corner of an automobile on the first center image are taken as first feature points, and a treetop and the top of a street lamp on the first side image are taken as first feature points, then the same tower tip and window corner, after the shooting angle has changed over the interval, are extracted on the second center image at time t2 and defined as second feature points, and the treetop and street-lamp top are extracted on the second side image as second feature points.
The feature points can be extracted by methods commonly used in the related art, such as SIFT, HOG, SURF, ORB, LBP, HAAR, or neural network learning.
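As an illustration, matched first and second feature points could be obtained with ORB, one of the methods listed above. This is a minimal sketch with example file names; the patent does not prescribe a specific extractor or matcher.

```python
import cv2

# Grayscale center images at t1 and t2 (example file names).
img1 = cv2.imread("center_t1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("center_t2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking keeps
# only mutual best matches, i.e. the "matched" first/second feature points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Take the two strongest matches as the (at least two) feature points.
first_points = [kp1[m.queryIdx].pt for m in matches[:2]]
second_points = [kp2[m.trainIdx].pt for m in matches[:2]]
```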
As shown in fig. 2, further, extracting two second feature points on the second center image and the second side image respectively further includes:
s510, acquiring a first search area by taking the first feature points as the center, wherein each first feature point corresponds to one first search area.
And S520, acquiring a second search area on the second center image and the second side image.
S530, acquiring second feature points corresponding to the first feature points in the first search area in the second search area.
Further, the second search area is arranged corresponding to the first search area; in other words, for a first center image and a second center image of the same size (and likewise for first and second side images of the same size), the positions of the first search area and the second search area on the image are identical and would coincide if overlaid.
Setting the first search area and the second search area reduces the workload of extracting the first and second feature points. Suppose no search area were set and the first center image were a 1000×1000-pixel image; after a first search area is acquired centered on the first feature point, its size may be an area of only 50×50 pixels, or perhaps 100×100 pixels, which greatly reduces the amount of calculation in feature point extraction.
After the first search area is acquired: because the time interval between t1 and t2 is short, the change of the second fisheye image relative to the first fisheye image is small, so the feature points in the second search areas on the second center image and the second side image are, with high probability, identical to the first feature points, possibly after some translation, rotation and rolling; and because the second search area corresponds to the first search area, these points are defined as second feature points. In other words, a second feature point can be obtained from the first feature point by a certain amount of translation, rotation and rolling.
Further, acquiring, in the second search area, a second feature point corresponding to the first feature point in the first search area further includes: when no second feature point corresponding to the first feature point can be acquired in the second search area, the search area size of the first search area is increased, and the search area size of the second search area is increased correspondingly.
The search area size refers to the size of the closed region enclosed by a polygon: the larger the region, the more pixel points it contains and the larger the extraction range for the feature points. The region can be given any shape and is generally set to a rectangle.
Since the position and size of the second search area correspond to those of the first search area, when the first search area increases, the second search area increases accordingly.
When the movement during the period from t1 to t2 is large and no second feature point corresponding to the first feature point can be extracted in the second search area corresponding to the first search area, the search area size is appropriately enlarged; at the cost of some extra calculation, the second feature point corresponding to the first feature point can then be extracted in the enlarged second search area.
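A minimal sketch of this windowed search with adaptive enlargement follows. The template size, match threshold and growth factor are illustrative assumptions, not values from the patent, and the point is assumed to lie away from the image border.

```python
import cv2

def match_in_window(img1, img2, pt, win=50, grow=2, max_win=400):
    """Find, in grayscale img2, the second feature point matching the first
    feature point pt of img1, searching only a window at the same image
    position and enlarging both windows when no confident match is found."""
    x, y = int(pt[0]), int(pt[1])
    tmpl = img1[y - 8:y + 9, x - 8:x + 9]  # small patch around the point
    while win <= max_win:
        x0, y0 = max(x - win, 0), max(y - win, 0)
        roi = img2[y0:y + win, x0:x + win]  # the second search area
        res = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:  # confident match: second feature point found
            return (x0 + loc[0] + 8, y0 + loc[1] + 8)
        win *= grow  # enlarge the first and second search areas together
    return None  # no second feature point even in the largest window

# Example usage on the grayscale center images at t1 and t2:
# pt2 = match_in_window(center_t1, center_t2, (320.0, 240.0))
```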
As shown in fig. 3, the roll angle is obtained according to the positional relationship between the first feature point on the first center image and the second feature point on the second center image, and includes the steps of:
and S610, connecting the first characteristic points with the second characteristic points corresponding to the first characteristic points in a matching way to obtain the roll characteristic line.
S620, calculating the absolute value of the included angle between the transverse rolling characteristic line and the horizontal direction, obtaining the absolute value of the transverse rolling angle, and taking the right rolling of the transverse rolling characteristic line along the vertical axis as positive.
Connecting the mutually matched first and second feature points yields the roll characteristic line, which is the motion trajectory of the feature point from time t1 to time t2. The roll angle is the angle between the transverse axis of the carrier and the horizontal. The roll characteristic line, formed by the horizontal and vertical coordinate movements in the feature point's trajectory from t1 to t2, represents the final change of the carrier's transverse axis on the center image; therefore the absolute value of the included angle between the roll characteristic line and the horizontal is the absolute value of the roll angle.
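A minimal sketch of the angle computation; the coordinates are illustrative, and the same helper serves the pitch characteristic line on the side images, since only the sign convention differs. Note that image coordinates have the y axis pointing down, so the sign convention is applied by the caller.

```python
import math

def feature_line_angle_abs(p1, p2):
    """Absolute included angle (degrees) between the characteristic line
    p1 -> p2 and the horizontal; the sign (rightward roll about the
    longitudinal axis positive, upward pitch positive) is applied by
    the caller."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# Example: a center-image feature point at (100, 200) at t1 matched to
# (150, 210) at t2 gives a roll characteristic line about 11.3 degrees
# from the horizontal.
roll_abs = feature_line_angle_abs((100.0, 200.0), (150.0, 210.0))
print(f"|roll| ~ {roll_abs:.1f} degrees")
```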
As shown in fig. 4, the pitch angle is obtained according to the positional relationship between the first feature point on the first side image and the second feature point on the second side image, and includes the steps of:
and S710, connecting the first characteristic points with the corresponding second characteristic points to obtain pitching characteristic lines.
S720, calculating the absolute value of the included angle between the pitch characteristic line and the horizontal direction to obtain the absolute value of the pitch angle, with the upward direction along the vertical taken as positive.
Connecting the correspondingly matched first and second feature points yields the pitch characteristic line, which is the motion trajectory of the feature point from time t1 to time t2. The pitch angle is the angle between the longitudinal axis of the carrier and the horizontal. The pitch characteristic line, formed by the horizontal and vertical coordinate movements in the feature point's trajectory from t1 to t2, represents the final change of the carrier's longitudinal axis on the side image; therefore the absolute value of the included angle between the pitch characteristic line and the horizontal direction is the absolute value of the pitch angle.
The change of the carrier attitude at time t2 relative to time t1 can thus be obtained from the roll angle and the pitch angle. Once the attitude change of the panoramic fisheye camera from t1 to t2 is known, the second fisheye image shot at t2 can be spatially transformed to obtain a panoramic image in the preset fixed attitude.
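As an illustration, the measured attitude change can be packed into a rotation matrix and used to re-render the t2 image in the fixed attitude. The axis conventions below (pitch about x, roll about z) are assumptions for illustration; the patent only states that the t2 panorama is spatially transformed into the preset fixed attitude.

```python
import numpy as np

def attitude_rotation(roll_deg, pitch_deg):
    """Rotation matrix for the measured roll and pitch (degrees)."""
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p), np.cos(p)]])   # pitch about the x axis
    Rz = np.array([[np.cos(r), -np.sin(r), 0.0],
                   [np.sin(r), np.cos(r), 0.0],
                   [0.0, 0.0, 1.0]])               # roll about the z axis
    return Rz @ Rx

# The transpose (inverse) of the measured attitude change can be supplied
# as the rotation R of the fisheye re-projection (see the mapping sketch
# above) to re-render the t2 image as if shot in the fixed t1 attitude.
R_correct = attitude_rotation(roll_deg=3.0, pitch_deg=-1.5).T
```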
Further, the corrected panoramic image is embedded into the ArcGIS map according to the longitude, latitude and azimuth information to form natural disaster scanning data.
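For illustration, one simple way to hand the corrected panorama to a GIS such as ArcGIS is to package its position and azimuth as a GeoJSON point. The field names and coordinates below are examples; the patent does not specify an exchange format.

```python
import json

# Illustrative only: tag a corrected panorama with its longitude, latitude
# and preset azimuth so a GIS can place it on the map.
record = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [116.39, 39.91]},
    "properties": {
        "image": "panorama_t2_corrected.jpg",
        "azimuth_deg": 0.0,  # preset azimuth (e.g. magnetic north)
        "timestamp": "t2",
    },
}
with open("scan_point.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": [record]}, f)
```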
The embodiment of the application also discloses a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the above method for determining the shooting attitude of a natural disaster scanning image.
The foregoing are all preferred embodiments of the present application and do not limit its scope of protection in any way; therefore, all equivalent changes made according to the structure, shape and principle of this application shall be covered by the protection scope of this application.

Claims (8)

1. A natural disaster scanning helmet, characterized in that: the helmet comprises: the helmet comprises a helmet body, a panoramic fisheye camera arranged at the top of the helmet body, and a magnetometer and a data processing system arranged in the helmet body; the panoramic fisheye camera and the magnetometer are respectively and electrically connected with the data processing system, wherein,
the panoramic fisheye camera is used for acquiring a first fisheye image at time t1 and a second fisheye image at time t2;
the magnetometer is configured to obtain first spatial information of the helmet at time t1 and second spatial information of the helmet at time t2, where the first spatial information includes azimuth information of the helmet at time t1, and the second spatial information includes azimuth information of the helmet at time t2;
the data processing system is used for acquiring a first center image and a first side image according to the first fisheye image, wherein the first center image is the projection of the first fisheye image along a preset azimuth, and the first side image is the projection of the first fisheye image along an orthogonal direction of the preset azimuth;
the data processing system is further configured to obtain a second center image and a second side image according to the second fisheye image, where the second center image is a projection of the second fisheye image along a preset azimuth, and the second side image is a projection of the second fisheye image along an orthogonal direction of the preset azimuth;
the data processing system is further used for respectively extracting at least two first characteristic points on the first center image and the first side image; extracting at least two second feature points respectively matched with the at least two first feature points on the second center image and the second side image;
the data processing system is further used for acquiring a roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image; acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image; and determining the variation of the shooting posture at the moment t2 relative to the moment t1 according to the roll angle and the pitch angle.
2. A method for determining shooting postures of natural disaster scanning images, which is applied to the natural disaster scanning helmet as claimed in claim 1, and is characterized by comprising the following steps:
acquiring a first fisheye image at time t1 and corresponding first spatial information, wherein the first spatial information comprises azimuth information at time t1;
acquiring a second fisheye image at time t2 and corresponding second spatial information, wherein the second spatial information comprises azimuth information at time t2;
acquiring a first center image and a first side image corresponding to the first fisheye image, wherein the first center image is a projection of the first fisheye image along a preset azimuth, the first side image is a projection of the first fisheye image along the orthogonal direction of the preset azimuth, and the preset azimuth and its orthogonal direction are both contained in the azimuth information at time t1;
acquiring a second center image and a second side image corresponding to the second fisheye image, wherein the second center image is a projection of the second fisheye image along the preset azimuth, the second side image is a projection of the second fisheye image along the orthogonal direction of the preset azimuth, the first side image and the second side image are oriented in the same azimuth, and the preset azimuth and its orthogonal direction are both contained in the azimuth information at time t2;
respectively extracting at least two first characteristic points on the first center image and the first side image, and respectively extracting, according to the first characteristic points, second characteristic points on the second center image and the second side image, wherein the first characteristic points and the second characteristic points are matched with each other;
acquiring a roll angle according to the position relation between the first characteristic point on the first center image and the second characteristic point on the second center image;
acquiring a pitch angle according to the position relation between the first characteristic point on the first side image and the second characteristic point on the second side image;
and determining the variation of the shooting posture at the moment t2 relative to the moment t1 according to the roll angle and the pitch angle.
3. The method for determining the shooting attitude of a natural disaster scanning image according to claim 2, wherein: the preset azimuth is determined by the magnetometer, and the preset azimuth remains the same azimuth at all times.
4. The method for determining the shooting attitude of a natural disaster scanning image according to claim 2, wherein: extracting two second feature points on the second center image and the second side image respectively, including:
acquiring a first search area by taking the first characteristic points as the center, wherein each first characteristic point corresponds to one first search area;
acquiring a second search area on the second center image and the second side image, wherein the second search area is arranged corresponding to the first search area;
and acquiring second characteristic points corresponding to the first characteristic points in the first search area in the second search area.
5. The method for determining the shooting attitude of a natural disaster scanning image according to claim 4, wherein: acquiring a second feature point corresponding to the first feature point in the first search area in the second search area, and further comprising the following steps:
and when the second characteristic points corresponding to the first characteristic points in the first search area cannot be acquired in the second search area, increasing the search area size of the first search area, and correspondingly, increasing the search area size of the second search area.
6. The method for determining the shooting attitude of a natural disaster scanning image according to claim 2, wherein: acquiring a roll angle according to the position relation between a first characteristic point on the first center image and a second characteristic point on the second center image, wherein the roll angle comprises the following steps:
connecting the first characteristic points with the corresponding second characteristic points to obtain a roll characteristic line;
and calculating the absolute value of the included angle between the roll characteristic line and the horizontal direction to obtain the absolute value of the roll angle, with a rightward roll of the roll characteristic line about the longitudinal axis taken as positive.
7. The method for determining the shooting attitude of a natural disaster scanning image according to claim 2, wherein: acquiring a pitch angle according to the position relation between a first characteristic point on the first side image and a second characteristic point on the second side image, wherein the pitch angle comprises the following steps:
connecting the first characteristic points with the corresponding second characteristic points to obtain pitching characteristic lines;
and calculating the absolute value of the included angle between the pitch characteristic line and the horizontal direction to obtain the absolute value of the pitch angle, with the upward direction along the vertical taken as positive.
8. A computer-readable storage medium, characterized in that: it stores a computer program that can be loaded by a processor to execute the method for determining the shooting attitude of a natural disaster scanning image as claimed in any one of claims 2 to 7.
CN202210232180.0A 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet Active CN114615410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210232180.0A CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210232180.0A CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Publications (2)

Publication Number Publication Date
CN114615410A CN114615410A (en) 2022-06-10
CN114615410B true CN114615410B (en) 2023-05-02

Family

ID=81861875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210232180.0A Active CN114615410B (en) 2022-03-09 2022-03-09 Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet

Country Status (1)

Country Link
CN (1) CN114615410B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014062932A (en) * 2012-09-19 2014-04-10 Sharp Corp Attitude control device, operation device, attitude control system, control method, control program, and recording medium
CN103118230A (en) * 2013-02-28 2013-05-22 腾讯科技(深圳)有限公司 Panorama acquisition method, device and system
JP2016005263A (en) * 2014-06-19 2016-01-12 Kddi株式会社 Image generation system, terminal, program, and method that generate panoramic image from plurality of photographed images
CN109362234A (en) * 2016-04-28 2019-02-19 深圳市大疆创新科技有限公司 System and method for obtaining Spherical Panorama Image
CN108052058A (en) * 2018-01-31 2018-05-18 广州市建筑科学研究院有限公司 Construction project site safety of the one kind based on " internet+" patrols business flow system of running affairs
CN109660723A (en) * 2018-12-18 2019-04-19 维沃移动通信有限公司 A kind of panorama shooting method and device
CN109766010A (en) * 2019-01-14 2019-05-17 青岛一舍科技有限公司 A kind of unmanned submersible's control method based on head pose control
CN209862435U (en) * 2019-01-30 2019-12-31 陈华 Intelligent video helmet
WO2021180932A2 (en) * 2020-03-13 2021-09-16 Navvis Gmbh Method and device for precisely selecting a spatial coordinate by means of a digital image
CN113273172A (en) * 2020-08-12 2021-08-17 深圳市大疆创新科技有限公司 Panorama shooting method, device and system and computer readable storage medium
WO2022032538A1 (en) * 2020-08-12 2022-02-17 深圳市大疆创新科技有限公司 Panorama photographing method, apparatus and system, and computer-readable storage medium
CN113124883A (en) * 2021-03-01 2021-07-16 浙江国自机器人技术股份有限公司 Off-line punctuation method based on 3D panoramic camera
CN114004890A (en) * 2021-11-04 2022-02-01 北京房江湖科技有限公司 Attitude determination method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114615410A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
US11054258B2 (en) Surveying system
CN111289533A (en) Fan blade inspection method and device, unmanned aerial vehicle and storage medium
CN111966133A (en) Visual servo control system of holder
KR102217549B1 (en) Method and system for soar photovoltaic power station monitoring
JP6479296B2 (en) Position / orientation estimation apparatus and position / orientation estimation method
CN111247389B (en) Data processing method and device for shooting equipment and image processing equipment
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN113624231A (en) Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN111242987B (en) Target tracking method and device, electronic equipment and storage medium
CN116385504A (en) Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration
CN114071008A (en) Image acquisition device and image acquisition method
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN114615410B (en) Natural disaster panoramic helmet and shooting gesture determining method for images of natural disaster panoramic helmet
CN116228860A (en) Target geographic position prediction method, device, equipment and storage medium
CN115379390A (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN111460898B (en) Skyline acquisition method based on monocular camera image of lunar surface inspection tour device
Han et al. A polarized light compass aided place recognition system
CN113850905A (en) Panoramic image real-time splicing method for circumferential scanning type photoelectric early warning system
Mirisola et al. Trajectory recovery and 3d mapping from rotation-compensated imagery for an airship
CN116839595B (en) Method for creating unmanned aerial vehicle route
CN116228598B (en) Geometric distortion correction device for remote sensing image of mountain unmanned aerial vehicle and application
CN116718165B (en) Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method
CN116817929A (en) Method and system for simultaneously positioning multiple targets on ground plane by unmanned aerial vehicle
CN114454137A (en) Steel structure damage intelligent inspection method and system based on binocular vision and robot
CN116152679A (en) Cognitive positioning method based on AI vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant