CN117425046A - Method for multi-target high-speed searching and cutting in video - Google Patents

Method for multi-target high-speed searching and cutting in video

Info

Publication number
CN117425046A
Authority
CN
China
Prior art keywords: target, searched, longitude, latitude, video
Legal status: Pending (the status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202311734596.3A
Other languages
Chinese (zh)
Inventor
王宣
左羽佳
徐芳
白冠冰
刘成龙
Current Assignee: Changchun Institute of Optics Fine Mechanics and Physics of CAS (listed assignee may be inaccurate)
Original Assignee: Changchun Institute of Optics Fine Mechanics and Physics of CAS
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202311734596.3A
Publication of CN117425046A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/36Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of target retrieval, and in particular to a method for fast multi-target retrieval and cropping in video. By automatically retrieving the time periods during which each target appears in the video, the method shortens the processing time of aerial footage, improves processing efficiency, and markedly reduces the workload of aerial-data processing personnel.

Description

Method for multi-target high-speed searching and cutting in video
Technical Field
The invention relates to the technical field of target retrieval, and in particular to a method for fast multi-target retrieval and cropping in video.
Background
A long-endurance unmanned aerial vehicle (UAV) can remain airborne for more than 72 hours, and after it returns, a massive volume of high-definition aerial video is stored in the onboard data recorder. For the data-processing staff, manually scanning 72 hours of aerial video to find targets with known longitude and latitude, and computing the positioning accuracy of each one by hand, is tedious; the manual search takes so long that the reporting of disasters or other critical information is delayed. When processing large volumes of aerial video, manual target search and manual computation of positioning accuracy should therefore be avoided. To cut the data-processing time, a method is needed that can automatically retrieve the time periods during which multiple targets of known longitude and latitude appear in the aerial video and automatically compute the positioning accuracy.
Disclosure of Invention
The invention provides a method for fast multi-target retrieval and cropping in video that can automatically retrieve the time periods during which multiple targets of known longitude and latitude appear in large volumes of aerial video, and automatically compute the positioning accuracy.
The method comprises the following steps:
S1: during exposure of the visible-light payload, write the UTC time, the altitude measurements, and the longitude/latitude measurements of all target points into the SEI data segments of the H.264 file, and store the file in the onboard data recorder;
S2: search for the headers of the SEI data segments in the H.264 file, and store the UTC time, longitude/latitude measurements, and altitude measurements of each video frame separately;
S3: input the true longitude/latitude and true altitude of each target to be retrieved, and compute the distance between the spatial position of the target to be retrieved and the measured position of each target point; when the distance between a target point's measured position and the spatial position of a target to be retrieved is less than or equal to a judgment threshold, mark that target point as a target to be cropped, and obtain the positioning accuracy from the distance;
the spatial position of a target to be retrieved is determined by its true longitude/latitude and true altitude, while the measured position of a target point is determined by its altitude measurement and longitude/latitude measurements;
S4: crop out, in UTC-time order, every video frame containing a target to be cropped.
Preferably, an unmanned aerial vehicle carries the visible-light payload. After receiving a recording signal, the UAV acquires the UTC time, its own longitude/latitude and altitude, and its azimuth, pitch, and roll angles; synchronously triggers the visible-light payload to expose; performs laser ranging on all target points; and acquires the azimuth and pitch angles of the visible-light payload.
Preferably, in S2, the headers of the SEI data segments in the H.264 file are searched and a POS array is established; each time a header is found, its file-pointer position is stored in its own row of the POS array.
A Data array is then established: for each header position read from the POS array, the UTC time of that video frame is read from the H.264 file, and the UTC time, altitude measurement, and longitude/latitude measurements are stored together in one row of the Data array.
Preferably, in S3, a TURE array is established, and the longitude and latitude true value and the altitude true value of the target to be retrieved are input and stored in the TURE array.
Preferably, the TURE array includes N rows, each storing a true longitude and latitude value and a true altitude value of the target to be retrieved.
Preferably, the distance between the spatial position of the target to be retrieved and the measured position of a target point is computed as follows:
the radius of curvature N1 of the prime-vertical circle of the Earth ellipsoid at the spatial position of the target to be retrieved is computed as:

N1 = a / sqrt(1 - e^2 * sin^2(M1))

where a is the semi-major axis of the Earth ellipsoid, e is the first eccentricity of the Earth ellipsoid, and M1 is the true latitude of the target to be retrieved;
the radius of curvature N2 of the prime-vertical circle of the Earth ellipsoid at the measured position of the target point is computed as:

N2 = a / sqrt(1 - e^2 * sin^2(M2))

where M2 is the measured latitude of the target point;
the target to be retrieved and the target point are converted from the Earth-surface (geodetic) coordinate system to the spatial coordinate system;
in the spatial coordinate system, the coordinates X1, Y1, Z1 of the target to be retrieved are:

X1 = (N1 + H1) * cos(M1) * cos(L1)
Y1 = (N1 + H1) * cos(M1) * sin(L1)
Z1 = (N1 * (1 - e^2) + H1) * sin(M1)

where L1 is the true longitude and H1 is the true altitude of the target to be retrieved;
in the spatial coordinate system, the coordinates X2, Y2, Z2 of the target point are:

X2 = (N2 + H2) * cos(M2) * cos(L2)
Y2 = (N2 + H2) * cos(M2) * sin(L2)
Z2 = (N2 * (1 - e^2) + H2) * sin(M2)

where L2 is the measured longitude and H2 is the measured altitude of the target point;
the distance between the spatial position of the target to be retrieved and the measured position of the target point is:

distance = sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2)
preferably, the judgment threshold is 20 meters.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the target video is obtained through unmanned aerial vehicle aerial photography, the time period of the target with known longitude and latitude in the aerial video can be quickly searched, and the video segment containing the target is cut, so that the problem that the prior art can only screen by naked eyes is solved, the retrieval speed is not influenced by the video time length, the retrieval speed is extremely high, the data processing time is obviously reduced, and important information can be reported in the first time when dealing with disaster or other sudden accidents.
The invention can import the video data into a ground computer for processing, reducing the load on the onboard computer; the target-retrieval process is computationally light, and the positioning accuracy is computed automatically at the same time.
Drawings
Fig. 1 is a flowchart of a method for multi-target fast retrieval and cropping in video according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
As shown in Fig. 1, the method for fast multi-target retrieval and cropping in video provided by this embodiment automatically retrieves one or more targets with known longitude/latitude information in large volumes of video and crops out the video segments containing them. The specific steps are as follows:
S1: A UAV carrying a visible-light payload films the designated area and airspace. The payload is a high-resolution visible-light camera; the UAV is also fitted with a GPS receiver, a laser-ranging device, and an angle-acquisition system, all under the unified control of a main control board configured with one input trigger channel and three output trigger channels. When the UAV receives the recording signal, it records its own longitude, latitude, and altitude together with its flight azimuth, pitch, and roll angles. The GPS receiver sends a pulse-per-second signal to the main control board, which synchronously outputs three trigger signals from this input, specifically:
the first path of trigger signal is output to the visible light camera, the visible light camera is triggered to expose, video data is recorded, and UTC time of each frame of video is recorded.
The second path of trigger signal is output to the laser ranging device, and the laser ranging device measures laser ranging values between the unmanned aerial vehicle and all targets.
And outputting the third trigger signal to an angle acquisition system to acquire the azimuth angle and the pitch angle of the visible light camera on the unmanned aerial vehicle.
All of these data are acquired at the same instant. From the UTC time; the aircraft's longitude, latitude, altitude, azimuth, pitch, and roll; and the camera's azimuth, pitch, and laser-ranging value, the longitude/latitude and altitude measurements of each target point can be computed in real time and stored in buffer A. This computation is a conventional geolocation algorithm and can be carried out on existing UAVs.
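The patent describes this target-geolocation step as a conventional algorithm and gives no formulas. Purely as an illustration, a minimal flat-Earth sketch might look as follows; the angle conventions (total azimuth = platform yaw + camera azimuth, camera pitch taken as depression below the horizon) are assumptions, and the platform's roll and pitch are ignored:

```python
import math

WGS84_A = 6378137.0  # Earth semi-major axis (m)

def geolocate_target(uav_lat, uav_lon, uav_alt,
                     uav_yaw, cam_az, cam_pitch, laser_range):
    """Estimate a target's geodetic position from the platform pose,
    the camera pointing angles (degrees), and the laser-ranging value
    (meters). Hypothetical conventions: azimuth clockwise from north,
    cam_pitch is the depression angle below the horizon."""
    az = math.radians(uav_yaw + cam_az)
    el = math.radians(cam_pitch)
    east = laser_range * math.cos(el) * math.sin(az)
    north = laser_range * math.cos(el) * math.cos(az)
    down = laser_range * math.sin(el)
    # Flat-Earth conversion of the local ENU offset to geodetic increments.
    dlat = math.degrees(north / WGS84_A)
    dlon = math.degrees(east / (WGS84_A * math.cos(math.radians(uav_lat))))
    return uav_lat + dlat, uav_lon + dlon, uav_alt - down
```

A full implementation would additionally apply the platform's roll/pitch rotation and an ellipsoidal Earth model.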
The aerial video is fed into an H.264 compressor, which outputs an H.264 compressed bitstream containing VLC data segments (variable-length-coded data: the compressed representation of the video frames). The data information is read from buffer A, and the UTC time, altitude measurements, and longitude/latitude measurements of all target points in each video frame are written into the SEI data segments of the H.264 bitstream to obtain the H.264 file, in the format: UTC time, target longitude/latitude measurements, altitude measurement. The layout of the SEI data segment and its combination with the VLC data segment are shown in Table 1; the header of the SEI data segment is the byte sequence 0, 0, 0, 1, 0, 22, 5.
Table 1. SEI data segment and VLC data segment

Segment   Field                           Length     Content
SEI       Frame header                    1 byte     0
SEI       Frame header                    1 byte     0
SEI       Frame header                    1 byte     0
SEI       Frame header                    1 byte     1
SEI       Frame header                    1 byte     0
SEI       Frame header                    1 byte     22
SEI       Frame header                    1 byte     5
SEI       UTC time                        10 bytes
SEI       Target longitude measurement    4 bytes
SEI       Target latitude measurement     4 bytes
SEI       Target altitude measurement     4 bytes
VLC       Frame header                    1 byte     0
VLC       Frame header                    1 byte     0
VLC       Frame header                    1 byte     0
VLC       Frame header                    1 byte     1
VLC       Reserved                        1 byte
VLC       Reserved                        1 byte
VLC       H.264 video stream              variable
The compressed data are then stored in the onboard data recorder.
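As an illustrative sketch, one SEI data segment of Table 1 could be assembled as follows; the field lengths follow Table 1, while the ASCII UTC string and little-endian float32 encodings are assumptions, since the patent fixes only the lengths:

```python
import struct

# 7-byte SEI data-segment header from Table 1: 0, 0, 0, 1, 0, 22, 5.
SEI_HEADER = bytes([0, 0, 0, 1, 0, 22, 5])

def pack_sei_segment(utc: bytes, lon: float, lat: float, alt: float) -> bytes:
    """Pack one SEI data segment: header, 10-byte UTC time, then the
    4-byte longitude, latitude, and altitude measurements (assumed
    float32, little-endian)."""
    if len(utc) != 10:
        raise ValueError("UTC time field must be exactly 10 bytes")
    return SEI_HEADER + utc + struct.pack("<fff", lon, lat, alt)
```

Each packed segment would be written immediately before its frame's VLC data when multiplexing the H.264 file.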
As a preferred embodiment, the data in the onboard data recorder can either be processed directly by the onboard computer and the results transmitted to a ground computer for viewing, or be exported to a ground computer for processing after the UAV lands.
S2: The UAV lands after completing its filming task; existing long-endurance UAVs can loiter for more than 72 hours. After landing, the data in the onboard data recorder are exported to a ground computer, where batch automatic retrieval is performed for all the targets to be retrieved. Specifically, the computer creates an empty POS array and scans the H.264 file sequentially for the SEI data-segment header bytes given in Table 1. Each time a header is found, its file-pointer position is stored in a new row of the POS array, until the entire H.264 file has been scanned. The data format of the POS array is shown in Table 2.
Table 2. Data format of the POS array

Row   Content
1     File-pointer position of the 1st SEI segment header in the file
2     File-pointer position of the 2nd SEI segment header in the file
3     File-pointer position of the 3rd SEI segment header in the file
...
n     File-pointer position of the nth SEI segment header in the file
A Data array is then established. For each frame-header position read from the POS array, the UTC time of the frame and the altitude and longitude/latitude measurements of the target point are read from the H.264 bitstream; together with the file-pointer position of the SEI header, each frame's information is stored in a new row of the Data array. The data format of the Data array is shown in Table 3.
Table 3. Data format of the Data array

Row   Content
1     UTC time, target longitude/latitude/altitude measurements, and file-pointer position of the 1st SEI segment header in the file
2     UTC time, target longitude/latitude/altitude measurements, and file-pointer position of the 2nd SEI segment header in the file
3     UTC time, target longitude/latitude/altitude measurements, and file-pointer position of the 3rd SEI segment header in the file
...
n     UTC time, target longitude/latitude/altitude measurements, and file-pointer position of the nth SEI segment header in the file
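A sketch of this S2 scan, building the POS array (Table 2) and Data array (Table 3) from the raw bytes; the 10-byte ASCII UTC string and little-endian float32 field encodings are assumptions, since the patent fixes only the field lengths:

```python
import struct

SEI_HEADER = bytes([0, 0, 0, 1, 0, 22, 5])  # SEI header bytes from Table 1

def build_pos_and_data(stream: bytes):
    """Scan an H.264 byte stream for SEI headers. Returns the POS array
    (header file-pointer positions) and the Data array, whose rows hold
    (utc, lon, lat, alt, header_pos) for each video frame."""
    pos, data = [], []
    i = stream.find(SEI_HEADER)
    while i != -1:
        pos.append(i)                      # POS array: header offset
        payload = stream[i + 7 : i + 29]   # 10-byte UTC + 3 x 4-byte floats
        utc = payload[:10].decode("ascii", errors="replace")
        lon, lat, alt = struct.unpack("<fff", payload[10:22])
        data.append((utc, lon, lat, alt, i))
        i = stream.find(SEI_HEADER, i + 1)
    return pos, data
```

Because the scan is a plain byte search over the file, its cost grows only linearly with file size, which matches the claim that retrieval speed is essentially unaffected by video duration.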
S3: after the data set of the aerial video is processed, interaction is carried out through a computer, and a longitude and latitude true value and a height true value of a target to be retrieved are input. For example, a video clip of a building is required to be searched, and the actual longitude, latitude and altitude of the building are input into a computer, so that the search can be started.
Specifically, a TURE array is established; each row of the TURE array stores the true longitude L1, true latitude M1, and true altitude H1 of one target to be retrieved. In this embodiment there are N targets to be retrieved, so the TURE array is set to N rows. The data format of the TURE array is shown in Table 4.
Table 4. Data format of the TURE array

Row   Content
1     True longitude, latitude, and altitude of the 1st target to be retrieved
2     True longitude, latitude, and altitude of the 2nd target to be retrieved
3     True longitude, latitude, and altitude of the 3rd target to be retrieved
...
k     True longitude, latitude, and altitude of the kth target to be retrieved
Note that a true value is the counterpart of a measured value: a true value is the objective, real longitude, latitude, or altitude, while a measured value is obtained by measurement. The spatial position of a target to be retrieved is true-value data, determined by its true longitude/latitude and true altitude; the measured position of a target point is measurement data, determined by its altitude and longitude/latitude measurements.
Because many target points appear in the aerial video, the target points must be screened and matched so that each input target to be retrieved is identified among them. In this embodiment, screening is done by computing the distance between the spatial position of each target to be retrieved and the measured position of each target point and comparing it with a judgment threshold, which is set to 20 meters.
Firstly, since both the measured values and the true values are referenced to the Earth's surface, the target to be retrieved and the target point must be converted from the Earth-surface (geodetic) coordinate system to the spatial coordinate system. The specific process is as follows.

Compute the radius of curvature N1 of the prime-vertical circle of the Earth ellipsoid at the spatial position of the target to be retrieved:

N1 = a / sqrt(1 - e^2 * sin^2(M1))

where a is the semi-major axis of the Earth ellipsoid, a = 6378137 (meters), and M1 is the true latitude of the target to be retrieved;

e is the first eccentricity of the Earth ellipsoid, computed as:

e = sqrt(a^2 - b^2) / a

where b is the semi-minor axis of the Earth ellipsoid, b = 6356752 (meters).

Compute the radius of curvature N2 of the prime-vertical circle of the Earth ellipsoid at the measured position of the target point:

N2 = a / sqrt(1 - e^2 * sin^2(M2))

where M2 is the measured latitude of the target point.

Convert the target to be retrieved and the target point from the geodetic coordinate system to the spatial coordinate system.

In the spatial coordinate system, the coordinates X1, Y1, Z1 of the target to be retrieved are:

X1 = (N1 + H1) * cos(M1) * cos(L1)
Y1 = (N1 + H1) * cos(M1) * sin(L1)
Z1 = (N1 * (1 - e^2) + H1) * sin(M1)

where L1 is the true longitude and H1 is the true altitude of the target to be retrieved.

In the spatial coordinate system, the coordinates X2, Y2, Z2 of the target point are:

X2 = (N2 + H2) * cos(M2) * cos(L2)
Y2 = (N2 + H2) * cos(M2) * sin(L2)
Z2 = (N2 * (1 - e^2) + H2) * sin(M2)

where L2 is the measured longitude and H2 is the measured altitude of the target point.

The distance between the spatial position of the target to be retrieved and the measured position of the target point is then computed in the spatial coordinate system as:

distance = sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2)
and (3) performing target screening according to the relation between the distance between the spatial position of the target to be searched and the measuring position of the target point and the judging threshold, and when the distance is smaller than or equal to 20 meters, determining the target point as the corresponding target to be searched, calibrating the target point as the target to be cut, and obtaining the positioning accuracy according to the distance, wherein the distance is the distance deviation.
For each target to be cropped found by the screening, a targetK array is created: target 1 gets array target1, target 2 gets array target2, and so on. The UTC time, the longitude/latitude and altitude measurements, the SEI-header file-pointer position, and the distance of every target point that satisfies the judgment threshold are stored in the corresponding targetK array. Because the same target appears in many video frames, the information of different frames of the same target occupies different rows of the same targetK array. For any target to be cropped, the UTC time of the first frame in which it appears is its starting UTC time, and the UTC time of the last frame in which it appears is its ending UTC time.
S4: The starting and ending UTC times of each target to be cropped are read from its targetK array, and the target video segments are cut quickly from the H.264 bitstream as follows:
Determine the SEI-header file-pointer position of the first row of the targetK array (the first frame in which the target appears) and that of the last row (the last frame in which the target appears). Read the binary bitstream out of the H.264 file and, in UTC-time order, cut out the video segment of each target to be cropped from its starting UTC time to the ending UTC time at which it disappears, saving each segment as a file targetK. Each targetK array is also saved as a table, producing the K reports shown in Table 5; this completes the fast multi-target retrieval and cropping in video.
Table 5. targetK array (report) of the kth target to be cropped

Row   Content
1     UTC time, target longitude/latitude/altitude measurements, file-pointer position, and distance of the 1st SEI data segment retrieved for the kth target with distance under 20 meters
2     UTC time, target longitude/latitude/altitude measurements, file-pointer position, and distance of the 2nd SEI data segment retrieved for the kth target with distance under 20 meters
3     UTC time, target longitude/latitude/altitude measurements, file-pointer position, and distance of the 3rd SEI data segment retrieved for the kth target with distance under 20 meters
...
n     UTC time, target longitude/latitude/altitude measurements, file-pointer position, and distance of the nth SEI data segment retrieved for the kth target with distance under 20 meters
While embodiments of the present invention have been illustrated and described above, it will be appreciated that the above described embodiments are illustrative and should not be construed as limiting the invention. Variations, modifications, alternatives and variations of the above-described embodiments may be made by those of ordinary skill in the art within the scope of the present invention.
The above embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (7)

1. A method for multi-target high-speed retrieval and cropping in video, characterized by comprising the following steps:
S1: during exposure of the visible-light payload, writing the UTC time, the altitude measurements, and the longitude/latitude measurements of all target points into the SEI data segments of the H.264 file, and storing the file in the onboard data recorder;
S2: searching for the headers of the SEI data segments in the H.264 file, and storing the UTC time, longitude/latitude measurements, and altitude measurements of each video frame separately;
S3: inputting the true longitude/latitude and true altitude of each target to be retrieved, and computing the distance between the spatial position of the target to be retrieved and the measured position of each target point; when the distance between a target point's measured position and the spatial position of a target to be retrieved is less than or equal to a judgment threshold, marking that target point as a target to be cropped, and obtaining the positioning accuracy from the distance;
wherein the spatial position of a target to be retrieved is determined by its true longitude/latitude and true altitude, and the measured position of a target point is determined by its altitude measurement and longitude/latitude measurements;
S4: cropping out, in UTC-time order, every video frame containing a target to be cropped.
2. The method for multi-target high-speed retrieval and cropping in video according to claim 1, wherein an unmanned aerial vehicle carries the visible-light payload; after receiving a recording signal, the UAV acquires the UTC time, its own longitude/latitude and altitude, and its azimuth, pitch, and roll angles, synchronously triggers the visible-light payload to expose, performs laser ranging on all target points, and acquires the azimuth and pitch angles of the visible-light payload.
3. The method for multi-target retrieval and cropping in video according to claim 2, wherein in S2, the frame header of the SEI data segment in the h.264 file is searched, and a POS array is established, and each time a frame header is searched, the file pointer position of the frame header is separately stored in one row in the POS array;
establishing a Data array, reading the file pointer position of each frame head in the POS array, reading the UTC time of each frame of video from the H.264 file, and independently storing the UTC time, the height measurement value and the longitude and latitude measurement value into one row in the Data array.
4. The method for multi-target fast searching and cropping in video according to claim 1, wherein in S3, a TURE array is established, and longitude and latitude true values and height true values of the target to be searched are input and stored in the TURE array.
5. The method of multi-target fast retrieval and cropping in video according to claim 4, wherein the TURE array comprises N rows, each row storing a true value of longitude and latitude and a true value of altitude of an object to be retrieved.
6. The method for multi-target high-speed searching and cutting in video according to claim 4, wherein the distance between the spatial position of the target to be retrieved and the measured position of the target point is calculated as follows:
the radius of curvature $N_1$ in the prime vertical of the Earth ellipsoid at the spatial position of the target to be retrieved is:
$N_1 = \dfrac{a}{\sqrt{1 - e^2 \sin^2 B_1}}$
wherein $a$ denotes the length of the semi-major axis of the Earth ellipsoid, $e$ denotes the first eccentricity of the Earth ellipsoid, and $B_1$ denotes the true latitude of the target to be retrieved;
the radius of curvature $N_2$ in the prime vertical of the Earth ellipsoid at the measured position of the target point is:
$N_2 = \dfrac{a}{\sqrt{1 - e^2 \sin^2 B_2}}$
wherein $B_2$ denotes the measured latitude of the target point;
the target to be retrieved and the target point are converted from the geodetic coordinate system to the space rectangular coordinate system;
in the space rectangular coordinate system, the spatial coordinates $X_1$, $Y_1$, $Z_1$ of the target to be retrieved are:
$X_1 = (N_1 + H_1)\cos B_1 \cos L_1$
$Y_1 = (N_1 + H_1)\cos B_1 \sin L_1$
$Z_1 = \left(N_1(1 - e^2) + H_1\right)\sin B_1$
wherein $L_1$ denotes the true longitude of the target to be retrieved and $H_1$ denotes the true height of the target to be retrieved;
in the space rectangular coordinate system, the spatial coordinates $X_2$, $Y_2$, $Z_2$ of the target point are:
$X_2 = (N_2 + H_2)\cos B_2 \cos L_2$
$Y_2 = (N_2 + H_2)\cos B_2 \sin L_2$
$Z_2 = \left(N_2(1 - e^2) + H_2\right)\sin B_2$
wherein $L_2$ denotes the measured longitude of the target point and $H_2$ denotes the measured height of the target point;
the distance between the spatial position of the target to be retrieved and the measured position of the target point is:
$D = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2}$
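The distance computation of claim 6 (prime-vertical radius of curvature, geodetic-to-ECEF conversion, then Euclidean distance) can be sketched as follows. This assumes the WGS-84 ellipsoid constants, which the patent does not specify; angles are taken in degrees and heights in metres.

```python
import math

# WGS-84 ellipsoid constants (assumption; the patent only says "the Earth ellipsoid")
A = 6378137.0            # semi-major axis a, metres
E2 = 6.69437999014e-3    # first eccentricity squared, e^2

def prime_vertical_radius(lat_deg: float) -> float:
    """N = a / sqrt(1 - e^2 sin^2(B)): radius of curvature in the prime vertical."""
    b = math.radians(lat_deg)
    return A / math.sqrt(1.0 - E2 * math.sin(b) ** 2)

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float):
    """Convert geodetic (B, L, H) to space rectangular coordinates (X, Y, Z)."""
    b, l = math.radians(lat_deg), math.radians(lon_deg)
    n = prime_vertical_radius(lat_deg)
    x = (n + h) * math.cos(b) * math.cos(l)
    y = (n + h) * math.cos(b) * math.sin(l)
    z = (n * (1.0 - E2) + h) * math.sin(b)
    return x, y, z

def target_distance(true_pos, measured_pos) -> float:
    """Euclidean distance between a true and a measured (lat, lon, height) position."""
    return math.dist(geodetic_to_ecef(*true_pos), geodetic_to_ecef(*measured_pos))

# Example: a measured point a few tens of metres from the true position;
# claim 7 uses a 20 m threshold to decide a match.
d = target_distance((43.88, 125.35, 210.0), (43.8801, 125.3501, 212.0))
print(f"{d:.1f} m")
```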
7. The method for multi-target high-speed searching and cutting in video according to claim 1 or 6, wherein the distance judgment threshold is 20 meters.
CN202311734596.3A 2023-12-18 2023-12-18 Method for multi-target high-speed searching and cutting in video Pending CN117425046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311734596.3A CN117425046A (en) 2023-12-18 2023-12-18 Method for multi-target high-speed searching and cutting in video

Publications (1)

Publication Number Publication Date
CN117425046A true CN117425046A (en) 2024-01-19

Family

ID=89528690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311734596.3A Pending CN117425046A (en) 2023-12-18 2023-12-18 Method for multi-target high-speed searching and cutting in video

Country Status (1)

Country Link
CN (1) CN117425046A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156199A * 2015-04-22 2016-11-23 清华大学 A video surveillance image storage and retrieval method
CN108287924A * 2018-02-28 2018-07-17 福建师范大学 A method for acquiring, organizing and retrieving locatable video data
CN108680143A * 2018-04-27 2018-10-19 南京拓威航空科技有限公司 Target positioning method and device based on long-range ranging, and unmanned aerial vehicle
CN109640057A * 2018-12-30 2019-04-16 广东电网有限责任公司 A power transmission line video monitoring method and related apparatus
CN114200387A (en) * 2022-02-15 2022-03-18 北京航空航天大学东营研究院 Flight verification and evaluation method for TACAN space signal field pattern
CN116797716A (en) * 2022-03-17 2023-09-22 武汉地大信息工程股份有限公司 Real-time acquisition method, device and system for mapping model

Similar Documents

Publication Publication Date Title
US8064640B2 (en) Method and apparatus for generating a precision fires image using a handheld device for image based coordinate determination
CN108562279B (en) Unmanned aerial vehicle surveying and mapping method
Peterman Landslide activity monitoring with the help of unmanned aerial vehicle
George et al. JOANNE: Joint dropsonde Observations of the Atmosphere in tropical North atlaNtic meso-scale Environments
CN110675448B (en) Ground lamplight remote sensing monitoring method, system and storage medium based on civil airliner
US20140133824A1 (en) System and method for simulataneous display of multiple geo-tagged videos of a particular geographical location
CN111083309B (en) Time alignment method of multi-sensor data and data acquisition equipment
US6803878B2 (en) Methods and apparatus for terrain correlation
US20040236535A1 (en) Method, apparatus and program for determining growth of trees
WO2011024116A2 (en) System and method for virtual range estimation
CA2485707A1 (en) Methods and apparatus for radar data processing
JP6828448B2 (en) Information processing equipment, information processing systems, information processing methods, and information processing programs
CN117425046A (en) Method for multi-target high-speed searching and cutting in video
CN116592850B (en) Method for correcting sky measurement precision by using star view velocity in seamless spectrum observation
KR102260240B1 (en) Terrain following flight method
CN112949411A (en) Spectral image correction method and device
CN108920553B (en) Data recording method for airborne multi-sensor platform
CN109141466B (en) Comprehensive display processing device for ship equipment test data
CN112995524A (en) High-precision acquisition vehicle, and photo exposure information generation system, method and synchronization device thereof
CN110906922A (en) Unmanned aerial vehicle pose information determining method and device, storage medium and terminal
CN113298113B (en) Rail-following environment classification method based on train-mounted satellite positioning observation data
CN113989659A (en) Ship target rapid detection method facing GEO remote sensing satellite
CN110646792B (en) Radar search window setting method based on observation whistle digital telescope
KR102183867B1 (en) System for providing fishing point information
CN110771183B (en) Information processing method, aircraft, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination