CN115131717B - Early warning method and system based on image analysis - Google Patents


Publication number
CN115131717B
CN115131717B (application CN202211043803.6A)
Authority
CN
China
Prior art keywords
images
offset
characteristic
image
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211043803.6A
Other languages
Chinese (zh)
Other versions
CN115131717A (en)
Inventor
张奇 (Zhang Qi)
刘军 (Liu Jun)
程佐斌 (Cheng Zuobin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xiangyi Aviation Technology Co Ltd
Original Assignee
Zhuhai Xiangyi Aviation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xiangyi Aviation Technology Co Ltd
Priority application: CN202211043803.6A
Publication of application: CN115131717A
Application granted; publication of grant: CN115131717B
Legal status: Active

Classifications

    • G06V 20/40 — Scenes; scene-specific elements in video content
    • G06N 3/02, 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/16 — Image acquisition using multiple overlapping images; image stitching
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 20/58 — Recognition of moving objects or obstacles exterior to a vehicle, e.g. vehicles or pedestrians
    • G06V 20/588 — Recognition of the road, e.g. of lane markings
    • G08G 5/0073 — Traffic control systems for aircraft [ATC]; surveillance aids

Abstract

The application relates to an early warning method and system based on image analysis, in the technical field of aircraft. The method comprises: acquiring, in real time, a running video of a flight device travelling on a runway; performing image stitching based on the running video to obtain a plurality of moving images; inputting the moving images into a pre-trained first deep learning model and screening out a plurality of feature images that include a first feature object and a second feature object; inputting all the feature images into a pre-trained second deep learning model to obtain a first labeling frame containing the first feature object and a second labeling frame containing the second feature object; obtaining the pixel spacing between a first feature point in the first labeling frame and a second feature point in the second labeling frame; obtaining the offset of the flight device on the runway based on the pixel spacing; and sending an early warning signal when the offset is greater than a preset first value. The method and system automatically analyze whether the aircraft deviates from the center of the runway.

Description

Early warning method and system based on image analysis
Technical Field
The application relates to the technical field of aircraft, and in particular to an early warning method and system based on image analysis.
Background
The minimum operating speed is the lowest speed at which, should the critical engine suddenly fail during takeoff or approach, control of the aircraft can be recovered using the aerodynamic control system alone, and flight can be continued safely using normal piloting skill and a rudder force not exceeding 150 pounds. If the minimum operating speed is too high, the nose wheel tends to leave the ground too early during takeoff; if it is too low, the aircraft tends to deviate from the centerline of the runway.
In the related art, a video of the aircraft during takeoff is recorded, and whether the aircraft deviates from the center of the runway is judged by manually analyzing the footage, from which it is then judged whether the minimum operating speed meets the requirement. How to automatically analyze whether the aircraft deviates from the center of the runway is a technical difficulty to be overcome.
Disclosure of Invention
In order to automatically analyze whether the aircraft deviates from the center of the runway, the application provides an early warning method and system based on image analysis.
In a first aspect, the application provides an early warning method based on image analysis, which adopts the following technical scheme.
An early warning method based on image analysis comprises the following steps:
acquiring, in real time, a running video of the flight device travelling on a runway; the running video includes a first feature object on the flight device and a second feature object on the runway;
performing image stitching based on the running video to obtain a plurality of moving images;
inputting the plurality of moving images into a pre-trained first deep learning model, and screening out a plurality of feature images that include the first feature object and the second feature object;
inputting all the feature images into a pre-trained second deep learning model to obtain a first labeling frame containing the first feature object and a second labeling frame containing the second feature object;
obtaining the pixel spacing between a first feature point in the first labeling frame and a second feature point in the second labeling frame;
obtaining the offset of the flight device on the runway based on the pixel spacing; and,
sending an early warning signal when the offset is greater than a preset first value.
By adopting this technical scheme, an offset greater than the preset first value indicates that the flight device has deviated from its track, and the processor sends the early warning signal to the external device.
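The steps above can be sketched as a minimal processing loop. Every callable here (screening, labeling, spacing) is a placeholder injected for illustration — they are not APIs named in the source, and the meters-per-pixel scale is an assumed calibration constant:

```python
def early_warning(moving_images, screen, label, spacing, scale_m_per_px, first_value):
    """Skeleton of the pipeline: screen the feature images, label the two
    feature objects, convert the pixel spacing into a runway offset, and
    collect every offset that should trigger an early warning signal."""
    warnings = []
    for image in moving_images:
        if not screen(image):             # first deep learning model: screening
            continue
        frame_a, frame_b = label(image)   # second model: the two labeling frames
        offset = spacing(frame_a, frame_b) * scale_m_per_px
        if offset > first_value:          # offset exceeds the preset first value
            warnings.append(offset)
    return warnings
```

With a screen that accepts every image and a fixed labeling, the loop reduces to thresholding the scaled pixel spacing; the real models would supply those callables.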
Optionally, performing image stitching based on the running video to obtain the moving images includes:
sequentially acquiring one frame of image of the running video;
setting a tracking area of the image; the tracking area includes the first feature object and the second feature object;
tracking the tracking area to obtain the movement amount of the first feature object between two adjacent frames;
obtaining matched images to be stitched based on the movement amount;
after a preset number of images to be stitched are obtained, stitching them to obtain one of the moving images; and,
after one of the moving images is obtained, obtaining the next moving image.
Optionally, stitching the preset number of images to be stitched includes:
storing each obtained image to be stitched;
after the preset number of images to be stitched have been obtained, obtaining the total memory occupied by them;
allocating a memory space based on the total occupied memory; the allocated memory space is not less than the total; and,
copying the images to be stitched into the memory space in sequence, in the order of the frames, and stitching the images.
Optionally, the method further includes:
obtaining the offset average value and the maximum offset of the current N groups of offsets;
obtaining the number of times n that the offset average value lies between two adjacent groups of data in the N groups of offset data;
when the number of times n is greater than a preset number of times, the maximum offset is greater than a preset offset threshold value, and the offset average value is greater than a preset first offset value, determining that the flight device is in a first driving state; and,
sending the information that the flight device is in the first driving state to the external device.
Optionally, after acquiring one frame of image of the running video, the method further includes:
performing distortion correction on the acquired frame image based on the intrinsic and extrinsic parameters of the image acquisition device and the lens distortion parameters.
Optionally, after performing image stitching based on the running video to obtain the moving images, the method further includes: configuring time attribute information for each moving image according to its generation time,
and the method further comprises:
generating an offset curve based on the offsets and their corresponding time attribute information;
comparing the current offset curve with a plurality of offset curves in an offset curve library to obtain the corresponding matching degrees;
weighting and summing all the matching degrees to obtain the matching feature value of the current offset curve; and,
when the matching feature value is greater than a preset value, storing the offset curve in the offset curve library and sending it to an external terminal.
In a second aspect, the application provides an early warning system based on image analysis, which adopts the following technical scheme.
An image analysis based early warning system comprising:
a first acquisition module, configured to acquire, in real time, a running video of the flight device travelling on a runway; the running video includes a first feature object on the flight device and a second feature object on the runway;
a first processing module, configured to perform image stitching based on the running video to obtain a plurality of moving images;
a second processing module, configured to input the moving images into a pre-trained first deep learning model and screen out a plurality of feature images that include the first feature object and the second feature object;
a third processing module, configured to input all the feature images into a pre-trained second deep learning model to obtain a first labeling frame containing the first feature object and a second labeling frame containing the second feature object;
a fourth processing module, configured to obtain the pixel spacing between a first feature point in the first labeling frame and a second feature point in the second labeling frame;
a fifth processing module, configured to obtain the offset of the flight device on the runway based on the pixel spacing; and,
a sixth processing module, configured to send an early warning signal when the offset is greater than a preset first value.
Optionally, the first processing module includes:
a first processing submodule, configured to sequentially acquire one frame of image of the running video;
a second processing submodule, configured to set a tracking area of the image; the tracking area includes the first feature object and the second feature object;
a third processing submodule, configured to track the tracking area so as to obtain the movement amount of the first feature object between two adjacent frames;
a fourth processing submodule, configured to obtain matched images to be stitched based on the movement amount; and,
a fifth processing submodule, configured to stitch the preset number of images to be stitched, after they are obtained, to obtain one of the moving images;
wherein, after one of the moving images is obtained, the next moving image is acquired.
In a third aspect, the application discloses a computer device, comprising a memory and a server, the memory storing a computer program that can be loaded by the server to execute any one of the methods described above.
In a fourth aspect, the application discloses a computer-readable storage medium storing a computer program that can be loaded by a server to execute any one of the methods described above.
Drawings
FIG. 1 is a flow chart of one embodiment of an early warning method based on image analysis according to the present application;
FIG. 2 is a flow chart of another embodiment of a warning method based on image analysis according to the present application;
FIG. 3 is a flow chart of another embodiment of a method for image analysis based early warning according to the present application;
FIG. 4 is a flow chart of another embodiment of a method for image analysis based early warning according to the present application;
fig. 5 is a system block diagram of an early warning system based on image analysis according to an embodiment of the present application;
In the figure: 501, first acquisition module; 502, first processing module; 503, second processing module; 504, third processing module; 505, fourth processing module; 506, fifth processing module; 507, sixth processing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to fig. 1-5 and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiment of the application discloses an early warning method based on image analysis. Referring to fig. 1, as an embodiment of an early warning method based on image analysis, an early warning method based on image analysis includes the steps of:
step 101, acquiring a running video of the flight equipment running in the runway in real time.
The driving video comprises a first characteristic object in the flight equipment and a second characteristic object in the runway.
Specifically, a running video of the flight device running on the runway can be collected through a high-speed camera mounted on the lower belly of the body of the flight device, the high-speed camera can be communicated with the processor, and the communication mode can be wireless or wired. The high-speed camera sends the shot video image to the processor in real time, and then the processor can obtain the running video of the flight equipment running in the runway in real time. The first feature may be a grounding point of a main landing gear of the flight device and the second feature may be a pattern of markings affixed or painted in the runway.
Step 102, performing image stitching based on the running video to obtain a plurality of moving images.
Specifically, because the flight device moves fast, analyzing the individual frame images of the running video directly yields a high error rate; the running video is therefore processed into a plurality of moving images to improve the accuracy of the recognition result.
Step 103, inputting the plurality of moving images into a pre-trained first deep learning model and screening out a plurality of feature images that include the first feature object and the second feature object.
Specifically, deep learning is a branch of machine learning: an approach that performs high-level abstraction of data using multiple processing layers containing complex structures, or composed of multiple non-linear transformations. Training the first deep learning model requires a large number of images containing the first feature object and the second feature object for training and correction; when the accuracy of the first deep learning model reaches a preset requirement, it can be judged that training is complete. The moving images are input into the pre-trained first deep learning model, which screens out and outputs the feature images that include both the first feature object and the second feature object.
Step 104, inputting all the feature images into a pre-trained second deep learning model to obtain a first labeling frame containing the first feature object and a second labeling frame containing the second feature object.
Specifically, the first deep learning model and the second deep learning model may be two sub-parts of one overall learning model, two different deep learning models, or the same deep learning model. The processor inputs all the feature images into the pre-trained second deep learning model, which after processing outputs the first labeling frame containing the first feature object and the second labeling frame containing the second feature object.
Step 105, obtaining the pixel spacing between the first feature point in the first labeling frame and the second feature point in the second labeling frame.
Specifically, the first feature point may be the center point of the first labeling frame, and the second feature point the center point of the second labeling frame; the pixel spacing between them is obtained by reading the pixel position of each feature point in the feature image.
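As a minimal sketch of this step — assuming axis-aligned labeling frames given as (x1, y1, x2, y2) pixel coordinates, a convention the source does not specify — the pixel spacing between the two center points can be computed as:

```python
import math

def box_center(box):
    """Center point of an axis-aligned labeling frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixel_spacing(first_frame, second_frame):
    """Euclidean pixel distance between the centers of the two labeling frames."""
    ax, ay = box_center(first_frame)
    bx, by = box_center(second_frame)
    return math.hypot(ax - bx, ay - by)
```

A deployment might instead use only the lateral (cross-runway) component of this distance; the source leaves that choice open.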
Step 106, obtaining the offset of the flight device on the runway based on the pixel spacing.
Specifically, because the position of the high-speed camera is fixed, the images captured each time have a uniform size, so the offset of the flight device on the runway can be obtained from the pixel spacing. For example, before the flight device takes off, a worker measures in advance the real distance between the center of the first feature object and the center of the second feature object, then measures or calculates the pixel distance between those two centers in the captured image, thereby obtaining the ratio between real distance and pixel distance. After measuring several times and processing the errors, the corresponding ratio is obtained. The pixel spacing between the first feature point in the first labeling frame and the second feature point in the second labeling frame then yields the offset between the first feature object and the second feature object, and hence the offset of the flight device on the runway.
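The calibration just described can be sketched as follows. The linear meters-per-pixel model is an assumption made for illustration; a real setup would average several measurements and account for perspective, as the paragraph above notes:

```python
def calibrate_scale(real_distance_m, measured_pixel_spacing):
    """Meters represented by one pixel, from a pre-takeoff ground-truth
    measurement of the distance between the two feature-object centers."""
    return real_distance_m / measured_pixel_spacing

def runway_offset(pixel_spacing_px, scale_m_per_px):
    """Offset of the flight device on the runway, in meters."""
    return pixel_spacing_px * scale_m_per_px
```

For instance, a 3 m ground-truth distance that spans 600 px gives a 0.005 m/px scale, so a measured spacing of 1200 px corresponds to a 6 m offset.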
Step 107, sending an early warning signal when the offset is greater than the preset first value.
Specifically, the preset first value is set in advance by the system; it may be 9 m or another value, and can be configured differently according to different specifications and requirement coefficients. When the offset is greater than the preset first value, the flight device has deviated from its track, and the processor sends an early warning signal to the external device. The external device may be the central control of the flight device, of a ground command center, or of a test center.
Referring to fig. 2, as another embodiment of the early warning method based on image analysis, obtaining the moving images by image stitching based on the running video includes:
Step 201, sequentially acquiring one frame of image of the running video.
Step 202, setting a tracking area of the image; the tracking area includes the first feature object and the second feature object.
Step 203, tracking the tracking area to obtain the movement amount of the first feature object between two adjacent frames.
Step 204, obtaining matched images to be stitched based on the movement amount.
Step 205, after a preset number of images to be stitched are obtained, stitching them to obtain one of the moving images.
Step 206, after one of the moving images is obtained, obtaining the next moving image.
Specifically, the running video consists of a sequence of frame images, and the processor acquires them in frame order. The tracking area may be set as follows: a tracking area is first delimited manually, and background modeling is performed on the areas where the first feature object and the second feature object are located. For subsequent frames, image difference processing against the background model, followed by image binarization, determines the tracking area in the corresponding image. With this processing, the resulting moving images are clearer, and the results obtained when they are input into the deep learning models are more accurate.
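A minimal numpy sketch of the difference-and-binarize step described above; production code would typically use OpenCV, and the threshold value here is an assumption:

```python
import numpy as np

def motion_mask(frame, background, thresh=25):
    """Binarized difference between a grayscale frame and the background model.

    Pixels that differ from the background by more than `thresh` are
    marked 1 (candidate tracking area); all others are marked 0.
    """
    # widen to a signed type so the subtraction cannot wrap around
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The connected region of 1-pixels overlapping the previous tracking area would then be taken as the tracking area of the new frame.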
Referring to fig. 3, as another embodiment of the early warning method based on image analysis, stitching the preset number of images to be stitched includes:
Step S301, storing each obtained image to be stitched; after the preset number of images to be stitched have been obtained, obtaining the total memory they occupy;
Step S302, allocating a memory space based on the total occupied memory; the allocated memory space is not less than the total; and,
Step S303, copying the images to be stitched into the memory space in sequence, in the order of the frames, and stitching the images.
Specifically, after each image to be stitched is obtained it is only stored, not stitched immediately; stitching is performed once the preset number of images has been obtained. On the one hand, memory does not have to be requested for every stitch, saving occupied memory; on the other hand, since requesting memory takes time, stitching efficiency is greatly improved.
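A sketch of the single-allocation stitching, assuming equal-sized frames placed side by side (the source does not specify the stitching geometry):

```python
import numpy as np

def stitch_frames(frames):
    """Copy the stored frames, in frame order, into one pre-allocated canvas.

    The canvas is allocated once, sized to hold all frames, instead of
    re-allocating on every stitch.
    """
    h, w = frames[0].shape[:2]
    canvas = np.empty((h, w * len(frames)) + frames[0].shape[2:],
                      dtype=frames[0].dtype)
    for i, f in enumerate(frames):
        canvas[:, i * w:(i + 1) * w] = f  # copy in the sequence of the frames
    return canvas
```

The `np.empty` call is the one-time memory request of step S302; the slice assignments in the loop are the sequential copies of step S303.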
Referring to fig. 4, as another embodiment of the early warning method based on image analysis, the method further includes:
Step S401, obtaining the offset average value and the maximum offset of the current N groups of offsets;
Step S402, obtaining the number of times n that the offset average value lies between two adjacent groups of data in the N groups of offset data;
Step S403, when the number of times n is greater than a preset number of times, the maximum offset is greater than a preset offset threshold value, and the offset average value is greater than a preset first offset value, determining that the flight device is in a first driving state; and,
Step S404, sending the information that the flight device is in the first driving state to the external device.
Specifically, the first driving state may mean that the flight device is being affected by external weather, for example a strong airflow. When the number of times n is greater than the preset number of times, the maximum offset is greater than the preset offset threshold, and the offset average value is greater than the preset first offset value, the processor determines that the flight device is in the first driving state and sends this information to the external device, so that the external device can decide whether the current data should be used as a reference for judging the degree of offset of the flight device.
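The decision logic of steps S401–S403 can be sketched as follows. Interpreting "the average lies between two adjacent groups" as the average being bracketed by the min and max of each adjacent pair is an assumption, and the three thresholds are illustrative:

```python
def in_first_driving_state(offsets, count_limit, offset_threshold, first_offset):
    """Decide whether the flight device is in the first driving state.

    offsets: the current N groups of offset values, in time order.
    """
    mean = sum(offsets) / len(offsets)
    # number of adjacent pairs whose two values bracket the average
    n = sum(1 for a, b in zip(offsets, offsets[1:])
            if min(a, b) <= mean <= max(a, b))
    return (n > count_limit
            and max(offsets) > offset_threshold
            and mean > first_offset)
```

A rapidly alternating offset series (e.g. oscillating around the centerline in gusty wind) crosses its own average often, so all three conditions together flag turbulence-like behavior rather than a steady drift.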
After acquiring one frame of image of the running video, the method further includes:
performing distortion correction on the acquired frame image based on the intrinsic and extrinsic parameters of the image acquisition device and the lens distortion parameters.
Specifically, to make the image captured by the high-speed camera more faithful to the real scene, distortion correction is applied to each acquired frame image based on the intrinsic parameters, extrinsic parameters, and lens distortion parameters of the image acquisition device.
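In practice this correction is done with the calibrated camera model (for example OpenCV's undistortion routines). Purely to illustrate the underlying lens model, the following sketches a two-coefficient radial (Brown) distortion in normalized image coordinates, with assumed coefficients; correction inverts this mapping, typically by iteration:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Forward radial distortion of a point (x, y) given in normalized
    image coordinates, using two radial coefficients k1 and k2."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

The intrinsic parameters map pixel coordinates to these normalized coordinates and back; the extrinsic parameters relate the camera to the runway.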
As another embodiment of the early warning method based on image analysis, after performing image stitching based on the running video to obtain the moving images, the method further includes configuring time attribute information for each moving image according to its generation time, and further:
generating an offset curve based on the offsets and their corresponding time attribute information;
comparing the current offset curve with a plurality of offset curves in an offset curve library to obtain the corresponding matching degrees;
weighting and summing all the matching degrees to obtain the matching feature value of the current offset curve; and,
when the matching feature value is greater than the preset value, storing the offset curve in the offset curve library and sending it to the external terminal.
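A sketch of this matching step. The similarity measure (inverse mean absolute difference) and the assumption of equal-length curves are both illustrative choices, since the source does not specify how a matching degree is computed:

```python
def matching_degree(curve_a, curve_b):
    """Similarity in (0, 1]: 1 for identical curves, smaller as they diverge."""
    mad = sum(abs(a - b) for a, b in zip(curve_a, curve_b)) / len(curve_a)
    return 1.0 / (1.0 + mad)

def matching_feature_value(curve, curve_library, weights):
    """Weighted sum of the matching degrees against every library curve."""
    return sum(w * matching_degree(curve, ref)
               for w, ref in zip(weights, curve_library))
```

Curves whose weighted similarity to the library exceeds the preset value are treated as representative and added to the library.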
Referring to fig. 5, the application further provides an early warning system based on image analysis, including:
a first acquisition module 501, configured to acquire, in real time, a running video of the flight device travelling on a runway; the running video includes a first feature object on the flight device and a second feature object on the runway;
a first processing module 502, configured to perform image stitching based on the running video to obtain a plurality of moving images;
a second processing module 503, configured to input the moving images into a pre-trained first deep learning model and screen out a plurality of feature images that include the first feature object and the second feature object;
a third processing module 504, configured to input all the feature images into a pre-trained second deep learning model to obtain a first labeling frame containing the first feature object and a second labeling frame containing the second feature object;
a fourth processing module 505, configured to obtain the pixel spacing between a first feature point in the first labeling frame and a second feature point in the second labeling frame;
a fifth processing module 506, configured to obtain the offset of the flight device on the runway based on the pixel spacing; and,
a sixth processing module 507, configured to send an early warning signal when the offset is greater than the preset first value.
As one embodiment of the first processing module 502, the first processing module 502 includes:
a first processing submodule, configured to sequentially acquire one frame of image of the running video;
a second processing submodule, configured to set a tracking area of the image; the tracking area includes the first feature object and the second feature object;
a third processing submodule, configured to track the tracking area so as to obtain the movement amount of the first feature object between two adjacent frames;
a fourth processing submodule, configured to obtain matched images to be stitched based on the movement amount; and,
a fifth processing submodule, configured to stitch the preset number of images to be stitched, after they are obtained, to obtain one of the moving images;
wherein, after one of the moving images is obtained, the next moving image is acquired.
As one embodiment of the fourth processing submodule, the fourth processing submodule includes a stitching submodule, configured to: store each obtained image to be stitched;
after the preset number of images to be stitched have been obtained, obtain the total memory they occupy;
allocate a memory space based on the total occupied memory, the allocated memory space being not less than the total; and,
copy the images to be stitched into the memory space in sequence, in the order of the frames, and stitch the images.
As one embodiment of the early warning system based on image analysis, the system further includes a seventh processing module, configured to: obtain the offset average value and the maximum offset of the current N groups of offsets;
obtain the number of times n that the offset average value lies between two adjacent groups of data in the N groups of offset data;
when the number of times n is greater than the preset number of times, the maximum offset is greater than the preset offset threshold value, and the offset average value is greater than the preset first offset value, judge that the flight device is in the first driving state; and,
send the information that the flight device is in the first driving state to the external device.
As another embodiment of the early warning system based on image analysis, after acquiring one frame of image of the running video, the system further performs distortion correction on the acquired frame image based on the intrinsic and extrinsic parameters of the image acquisition device and the lens distortion parameters.
As another embodiment of the early warning system based on image analysis, after image splicing is carried out based on the driving video to obtain a plurality of moving images, time attribute information is configured for each moving image according to its generation time,
and the system further comprises an eighth processing module configured to: generate an offset curve based on the offsets and their corresponding time attribute information; compare the current offset curve with a plurality of offset curves in an offset curve library to obtain the corresponding matching degrees; weight and sum all the matching degrees to obtain the matching characteristic value of the current offset curve; and, when the matching characteristic value is greater than a preset value, store the offset curve into the offset curve library and send it to an external terminal.
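The eighth processing module's curve matching might be sketched as follows. Cosine similarity stands in for the unspecified matching-degree measure, and the weights are placeholders:

```python
import numpy as np

def match_feature_value(current_curve, curve_library, weights):
    """Compare the current offset curve with each curve in the library,
    then weight and sum the matching degrees. Cosine similarity is an
    assumed stand-in for the patent's unspecified matching measure."""
    degrees = []
    for ref in curve_library:
        num = float(np.dot(current_curve, ref))
        den = float(np.linalg.norm(current_curve) * np.linalg.norm(ref))
        degrees.append(num / den if den else 0.0)
    # Weighted summation of all matching degrees gives the
    # matching characteristic value of the current offset curve.
    return sum(w * d for w, d in zip(weights, degrees))
```

Curves whose characteristic value exceeds the preset value would then be appended to the library, so the library gradually accumulates distinctive offset patterns for the external terminal to review.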
The embodiment of the application also discloses computer equipment.
Specifically, the device comprises a memory and a server, wherein the memory stores a computer program that can be loaded by the server to execute any one of the above flight early warning methods based on image analysis.
The embodiment of the application also discloses a computer readable storage medium.
Specifically, the computer-readable storage medium stores a computer program that can be loaded by a server to execute any one of the image-analysis-based flight early warning methods described above, and includes, for example, various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is a preferred embodiment of the present application and is not intended to limit the scope of the application in any way, and any features disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.

Claims (5)

1. An early warning method based on image analysis is characterized by comprising the following steps:
acquiring a driving video of the flight equipment running in a runway in real time, wherein the driving video comprises a first characteristic object on the flight equipment and a second characteristic object in the runway;
performing image splicing based on the driving video to obtain a plurality of moving images;
inputting a plurality of moving images into a first deep learning model trained in advance, and screening out a plurality of characteristic images comprising a first characteristic object and a second characteristic object;
inputting all the characteristic images into a pre-trained second deep learning model to obtain a first labeling frame containing a first characteristic object and a second labeling frame containing a second characteristic object;
obtaining, based on a first characteristic point in the first labeling frame and a second characteristic point in the second labeling frame, the pixel spacing between the first characteristic point and the second characteristic point;
obtaining an offset of the flight equipment in the runway based on the pixel spacing; and,
when the offset is larger than a preset first value, sending an early warning signal;
the driving video of the flight equipment running in the runway is collected by a high-speed camera mounted on the lower belly of the fuselage of the flight equipment; the high-speed camera communicates with a processor and sends the captured video images to the processor in real time, so that the processor acquires the driving video of the flight equipment running in the runway in real time; the first characteristic object is a grounding point of a main landing gear of the flight equipment, and the second characteristic object is a marking pattern pasted or painted on the runway;
step 202, tracking the tracking area to obtain the movement amount of the first characteristic object between two adjacent frames;
step 203, obtaining matched images to be spliced based on the movement amount;
step 204, after a preset number of images to be spliced are obtained, splicing the preset number of images to be spliced to obtain one moving image;
step 205, after one of the moving images is obtained, obtaining the next moving image;
the driving video consists of a plurality of frame images, and the processor acquires the images of the driving video sequentially in frame order; the method for setting the tracking area comprises: first, manually delimiting a tracking area and performing background modeling on the areas where the first characteristic object and the second characteristic object are located; for the images of subsequent frames, performing image difference processing and image binarization processing between each image and the background-modeled image to determine the tracking area in the corresponding image;
step S301, storing the obtained images to be spliced, and after the preset number of images to be spliced are obtained, obtaining the sum of the memory occupied by the preset number of images to be spliced;
step S302, applying for a memory space based on the total occupied memory, the applied memory space being not less than the sum of the memory; and,
step S303, copying the images to be spliced into the memory space in frame order and splicing the images;
after each image to be spliced is obtained, it is only stored rather than spliced immediately; image splicing is carried out only after the preset number of images to be spliced have been obtained;
step S401, obtaining the offset average value and the maximum offset of the current N groups of offsets;
step S402, obtaining, from the current N groups of offsets, the number of times n that the change between two adjacent groups of offset data exceeds the offset average value;
step S403, when the number of times n is greater than a preset number of times, the maximum offset is greater than a preset offset threshold, and the offset average value is greater than a preset first offset, determining that the flight equipment is in a first running state; and,
step S404, sending the information that the flight equipment is in the first running state to the external equipment;
the first running state indicates that the flight equipment is affected by outside weather with strong airflow; when the number of times n is greater than the preset number of times, the maximum offset is greater than the preset offset threshold, and the offset average value is greater than the preset first offset, the processor determines that the flight equipment is in the first running state, and then sends the first running state information to the external equipment so that the external equipment can decide whether to adopt the current data when judging the offset degree of the flight equipment;
and performing distortion correction on each acquired frame image based on the intrinsic and extrinsic parameters of the image acquisition equipment and the lens distortion parameters.
2. The early warning method based on image analysis according to claim 1, wherein after the plurality of moving images are obtained by image splicing based on the driving video, the method further comprises: configuring time attribute information for each moving image according to its generation time,
the method further comprises the following steps:
generating an offset curve based on the offsets and their corresponding time attribute information;
comparing the current offset curve with a plurality of offset curves in an offset curve library to obtain the corresponding matching degrees;
weighting and summing all the matching degrees to obtain the matching characteristic value of the current offset curve;
and when the matching characteristic value is larger than a preset value, storing the offset curve into the offset curve library and sending it to an external terminal.
3. An early warning system based on image analysis, comprising:
the first acquisition module is used for acquiring a driving video of the flight equipment running in the runway in real time, wherein the driving video comprises a first characteristic object on the flight equipment and a second characteristic object in the runway;
the first processing module is used for carrying out image splicing on the basis of the running video to obtain a plurality of moving images;
the second processing module is used for inputting the moving images into a first deep learning model trained in advance to screen out a plurality of characteristic images comprising a first characteristic object and a second characteristic object;
the third processing module is used for inputting all the characteristic images into a pre-trained second deep learning model to obtain a first labeling frame containing a first characteristic object and a second labeling frame containing a second characteristic object;
the fourth processing module is used for obtaining, based on a first characteristic point in the first labeling frame and a second characteristic point in the second labeling frame, the pixel spacing between the first characteristic point and the second characteristic point;
a fifth processing module, configured to obtain an offset of the flight equipment in the runway based on the pixel spacing;
the sixth processing module is used for sending an early warning signal when the offset is greater than a preset first value;
the driving video of the flight equipment running in the runway is collected by a high-speed camera mounted on the lower belly of the fuselage of the flight equipment; the high-speed camera communicates with a processor and sends the captured video images to the processor in real time, so that the processor acquires the driving video of the flight equipment running in the runway in real time; the first characteristic object is a grounding point of a main landing gear of the flight equipment, and the second characteristic object is a marking pattern pasted or painted on the runway;
step 202, tracking the tracking area to obtain the movement amount of the first characteristic object between two adjacent frames;
step 203, obtaining matched images to be spliced based on the movement amount;
step 204, after a preset number of images to be spliced are obtained, splicing the preset number of images to be spliced to obtain one moving image;
step 205, after one of the moving images is obtained, obtaining the next moving image;
the driving video consists of a plurality of frame images, and the processor acquires the images of the driving video sequentially in frame order; the method for setting the tracking area comprises: first, manually delimiting a tracking area and performing background modeling on the areas where the first characteristic object and the second characteristic object are located; for the images of subsequent frames, performing image difference processing and image binarization processing between each image and the background-modeled image to determine the tracking area in the corresponding image;
step S301, storing the obtained images to be spliced, and after the preset number of images to be spliced are obtained, obtaining the sum of the memory occupied by the preset number of images to be spliced;
step S302, applying for a memory space based on the total occupied memory, the applied memory space being not less than the sum of the memory; and,
step S303, copying the images to be spliced into the memory space in frame order and splicing the images;
after each image to be spliced is obtained, it is only stored rather than spliced immediately; image splicing is carried out only after the preset number of images to be spliced have been obtained;
step S401, obtaining the offset average value and the maximum offset of the current N groups of offsets;
step S402, obtaining, from the current N groups of offsets, the number of times n that the change between two adjacent groups of offset data exceeds the offset average value;
step S403, when the number of times n is greater than a preset number of times, the maximum offset is greater than a preset offset threshold, and the offset average value is greater than a preset first offset, determining that the flight equipment is in a first running state; and,
step S404, sending the information that the flight equipment is in the first running state to the external equipment;
the first running state indicates that the flight equipment is affected by outside weather with strong airflow; when the number of times n is greater than the preset number of times, the maximum offset is greater than the preset offset threshold, and the offset average value is greater than the preset first offset, the processor determines that the flight equipment is in the first running state, and then sends the first running state information to the external equipment so that the external equipment can decide whether to adopt the current data when judging the offset degree of the flight equipment;
and performing distortion correction on each acquired frame image based on the intrinsic and extrinsic parameters of the image acquisition equipment and the lens distortion parameters.
4. A computer device, characterized by comprising a memory and a server, the memory storing a computer program which, when loaded and executed by the server, performs the method according to any one of claims 1 to 2.
5. A computer-readable storage medium, in which a computer program is stored that can be loaded by a server to execute the method according to any one of claims 1 to 2.
CN202211043803.6A 2022-08-30 2022-08-30 Early warning method and system based on image analysis Active CN115131717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211043803.6A CN115131717B (en) 2022-08-30 2022-08-30 Early warning method and system based on image analysis


Publications (2)

Publication Number Publication Date
CN115131717A CN115131717A (en) 2022-09-30
CN115131717B true CN115131717B (en) 2022-12-20

Family

ID=83387576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211043803.6A Active CN115131717B (en) 2022-08-30 2022-08-30 Early warning method and system based on image analysis

Country Status (1)

Country Link
CN (1) CN115131717B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276286A (en) * 2019-06-13 2019-09-24 中国电子科技集团公司第二十八研究所 A kind of embedded panoramic video splicing system based on TX2
CN114715168A (en) * 2022-05-18 2022-07-08 新疆大学 Vehicle yaw early warning method and system under road marking missing environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311739B2 (en) * 2015-01-13 2019-06-04 Guangzhou Xaircraft Technology Co., Ltd Scheduling method and system for unmanned aerial vehicle, and unmanned aerial vehicle
CN108446630B (en) * 2018-03-20 2019-12-31 平安科技(深圳)有限公司 Intelligent monitoring method for airport runway, application server and computer storage medium



Similar Documents

Publication Publication Date Title
CN111666855B (en) Animal three-dimensional parameter extraction method and system based on unmanned aerial vehicle and electronic equipment
CN108109437B (en) Unmanned aerial vehicle autonomous route extraction and generation method based on map features
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
CN102073846B (en) Method for acquiring traffic information based on aerial images
CN109344878B (en) Eagle brain-like feature integration small target recognition method based on ResNet
CN110580475A (en) line diagnosis method based on unmanned aerial vehicle inspection, electronic device and storage medium
CN112215860A (en) Unmanned aerial vehicle positioning method based on image processing
CN114281093B (en) Defect detection system and method based on unmanned aerial vehicle power inspection
CN113296537B (en) Electric power unmanned aerial vehicle inspection method and system based on electric power pole tower model matching
CN110008919A (en) The quadrotor drone face identification system of view-based access control model
US20220237908A1 (en) Flight mission learning using synthetic three-dimensional (3d) modeling and simulation
CN114445467A (en) Specific target identification and tracking system of quad-rotor unmanned aerial vehicle based on vision
CN115131717B (en) Early warning method and system based on image analysis
Singh et al. An efficient approach for instance segmentation of railway track sleepers in low altitude UAV images using mask R-CNN
CN115980742B (en) Radar detection method and device for unmanned aerial vehicle
CN117115252A (en) Bionic ornithopter space pose estimation method based on vision
CN111104965A (en) Vehicle target identification method and device
CN111523392A (en) Deep learning sample preparation method and recognition method based on satellite ortho-image full-attitude
Antsev et al. UAV landing system simulation model software system
CN113204246A (en) Unmanned aerial vehicle running state detection method
CN110826432A (en) Power transmission line identification method based on aerial picture
CN111310695B (en) Forced landing method and device and electronic equipment
CN111666959A (en) Vector image matching method and device
CN114842296A (en) Attitude identification method and device based on flight instrument
CN117710844B (en) Building safety monitoring method based on unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant