CN114640853A - Unmanned aerial vehicle cruise image processing system - Google Patents


Info

Publication number
CN114640853A
Authority
CN
China
Prior art keywords
area, image, gray, gray level, frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210536115.7A
Other languages
Chinese (zh)
Other versions
CN114640853B (en)
Inventor
王新亮 (Wang Xinliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Binzhou Civil Air Defense Engineering And Command Support Center
Original Assignee
Binzhou Civil Air Defense Engineering And Command Support Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Binzhou Civil Air Defense Engineering And Command Support Center filed Critical Binzhou Civil Air Defense Engineering And Command Support Center
Priority to CN202210536115.7A priority Critical patent/CN114640853B/en
Publication of CN114640853A publication Critical patent/CN114640853A/en
Application granted granted Critical
Publication of CN114640853B publication Critical patent/CN114640853B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Abstract

The invention discloses an unmanned aerial vehicle cruise image processing system in the field of image recognition. The system mainly comprises: an image acquisition module, which uses the unmanned aerial vehicle to acquire a multi-frame gray image of the target area at the current moment; an image processing module, which sequentially performs a difference operation on adjacent frames of the multi-frame gray image and superposes the difference results to obtain a superposed image, divides the superposed image into regions, and obtains the weight of each pixel in each region from the frequency of the gray levels of that region; a transmission module, which performs Huffman coding on the superposed image according to the weight of each pixel in each region and sends the coded data to a monitoring center; and the monitoring center, which receives the coded data sent by the transmission module, decodes it to recover the gray image, and sends feedback information to the image acquisition module and the image processing module respectively. The embodiment of the invention can obtain an image containing more effective information in an emergency.

Description

Unmanned aerial vehicle cruise image processing system
Technical Field
The application relates to the field of image recognition, and in particular to an unmanned aerial vehicle cruise image processing system.
Background
At present, image recognition during unmanned aerial vehicle cruising mainly relies on mobile network equipment to transmit the images. In an emergency, however, if the original transmission mode is kept and the original images are transmitted unchanged, the monitoring center cannot receive images containing more information within the effective time. Meanwhile, because the image processing speed and the image storage space are limited, the transmitted images may contain redundant data, so the effective information in the images cannot be delivered to the monitoring center in time.
Therefore, an unmanned aerial vehicle cruise image processing system is needed that can recognize the effective information in an image in an emergency and transmit an image containing more of that effective information.
Disclosure of Invention
In view of the above technical problems, an embodiment of the present invention provides an unmanned aerial vehicle cruise image processing system. The system adaptively adjusts the compression degree of different regions of the image to be transmitted according to the frequency with which the image processing module receives feedback information from the monitoring center, and determines the image acquisition frequency according to the frequency with which the image acquisition module receives the feedback information. The compressed transmission image is thus provided to the monitoring center adaptively, so that more of the effective information in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
The embodiment of the invention provides an unmanned aerial vehicle cruise image processing system, which comprises:
and the image acquisition module is used for acquiring the multi-frame gray level image of the target area at the current moment by using the unmanned aerial vehicle.
And the image processing module is used for sequentially carrying out differential operation on adjacent frame gray level images in the multi-frame gray level images and superposing differential operation results to obtain superposed images. And carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
And the transmission module is used for carrying out Huffman coding on the superposed image according to the weight of each pixel point in each region and sending the coded data to the monitoring center.
And the monitoring center is used for receiving the coded data sent by the transmission module, carrying out Hoffman decoding on the received coded data to obtain a gray image, and respectively sending feedback information to the image acquisition module and the image processing module.
Further, in the unmanned aerial vehicle cruising image processing system, the image processing module is further configured to determine the number of frames of the gray scale image acquired at the next moment based on the frequency of receiving the feedback information from the monitoring center at the current moment.
Further, in the unmanned aerial vehicle cruise image processing system, dividing the superposed image into regions in the image processing module and obtaining the weight of each pixel in each region from the frequency of the gray levels of each region comprises:
dividing the superposed image into a first region and a second region, wherein the first region is the connected domain with the largest area in the superposed image;
adjusting the number of gray levels in the first region to a first gray level number and the number of gray levels in the second region to a second gray level number by linear mapping, wherein the second gray level number is determined by the frequency of receiving the feedback information, the sum of the first and second gray level numbers is a preset total number of gray levels, and the first gray level number is greater than the second gray level number;
assigning weights to the gray levels in the second region in order of their frequency from low to high, wherein the higher the frequency of a gray level in the second region, the larger its weight;
determining, on the basis of the weights of the gray levels in the second region, the weight of each gray level in the first region according to the adjusted frequency of the gray levels in the first region, so as to obtain the weight of each pixel in the first region.
Further, in the unmanned aerial vehicle cruise image processing system, adjusting the number of gray levels in the first region to a first gray level number and the number of gray levels in the second region to a second gray level number by linear mapping comprises:
determining the second gray level number based on the frequency of receiving the feedback information, wherein the lower the frequency of receiving the feedback information, the smaller the second gray level number;
compressing the number of gray levels in the second region from 256 to the second gray level number by linear mapping;
determining the first gray level number from the second gray level number and the preset total number of gray levels;
compressing the number of gray levels in the first region from 256 to the first gray level number by linear mapping.
Further, in the unmanned aerial vehicle cruise image processing system, determining, on the basis of the weights of the gray levels in the second region, the weight of each gray level in the first region according to the adjusted frequency of the gray levels in the first region, so as to obtain the weight of each pixel in the first region, comprises:
determining the maximum of the weights of the gray levels in the second region and adding 1 to it to obtain the initial weight of the first region;
assigning weights to the gray levels in the first region in order of their frequency from low to high, wherein the higher the frequency of a gray level in the first region, the larger its weight, and the gray level with the lowest frequency in the first region is given the initial weight;
taking the weight of the gray level of each pixel in the first region as the weight of that pixel.
Further, in the unmanned aerial vehicle cruise image processing system, before the difference operations are performed sequentially on adjacent frames of the multi-frame gray level image of the current moment in the image processing module, each gray level image of the multi-frame gray level image is denoised by median filtering.
Further, in the unmanned aerial vehicle cruise image processing system, the image acquisition module is further configured to acquire position information when acquiring the multi-frame gray level images of the target area, and to send the position information to the monitoring center.
Further, in the unmanned aerial vehicle cruise image processing system, when the frequency with which the image processing module receives the feedback information from the monitoring center is lower than a preset frequency threshold, rescue workers or equipment are notified to go to the position given by the position information to carry out search and rescue.
Compared with the prior art, the unmanned aerial vehicle cruise image processing system provided by the embodiment of the invention has the following beneficial effects: the compression degree of different regions of the image to be transmitted is adaptively adjusted according to the frequency with which the image processing module receives feedback information from the monitoring center, and the image acquisition frequency is determined according to the frequency with which the image acquisition module receives the feedback information. The compressed transmission image is thus provided to the monitoring center adaptively, so that more of the effective information in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle cruise image processing system according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the operation of the image processing module and the transmission module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a process of obtaining weights of pixels in a superimposed image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a first region and a second region in an overlay image according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention provides an unmanned aerial vehicle cruise image processing system, as shown in fig. 1, comprising: an image acquisition module 100, an image processing module 200, a transmission module 300 and a monitoring center 400.
As shown in fig. 2, the operation process of the image processing module and the transmission module in the embodiment of the present invention may include the following contents:
step S101, the image acquisition module 100 acquires a multi-frame grayscale image of the target area at the current time by using the drone.
Step S102, the image processing module 200 sequentially performs difference operation on adjacent frame gray level images in the multi-frame gray level image, and superposes the difference operation results to obtain a superposed image; and carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
Step S103, the transmission module 300 performs huffman coding on the superimposed image according to the weight of each pixel in each region, and sends the coded data to the monitoring center 400.
In step S104, the image capturing module 100 and the image processing module 200 receive feedback information sent from the monitoring center 400.
And the monitoring center 400 is configured to receive the encoded data sent by the transmission module 300, perform huffman decoding on the received encoded data to obtain a gray image, and send feedback information to the image acquisition module 100 and the image processing module 200, respectively.
It should be noted that the feedback information in the embodiment of the present invention refers to feedback information sent by the monitoring center 400 to the image acquisition module 100 or the image processing module 200 for feeding back the received encoded data.
The embodiment of the invention mainly aims to acquire images of the target area with the unmanned aerial vehicle, perform different data compression coding according to different emergency levels, and transmit the communication information by the unmanned aerial vehicle.
Further, in step S101, the image acquisition module 100 acquires a multi-frame grayscale image of the target area at the current time by using the drone. The method specifically comprises the following steps:
Optionally, after the unmanned aerial vehicle in this scene transmits an image, the transmitted image may be temporarily cached in a register under a stack-like scheme. Specifically, a storage upper limit is set and the transmitted images are arranged in ascending time order; when the feedback information sent by the monitoring center is received, the image corresponding to that feedback information is deleted from the memory, thereby freeing storage space.
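A minimal Python sketch of the stack-like register cache described above. The class and method names are hypothetical, and the drop-oldest policy when the storage limit is hit is an assumption (the text only specifies deletion on feedback):

```python
from collections import OrderedDict

class TransmitCache:
    """Cache of transmitted images kept in ascending time order up to a
    storage upper limit; an entry is evicted when the monitoring center
    acknowledges it via feedback."""

    def __init__(self, max_items):
        self.max_items = max_items       # storage upper limit
        self.images = OrderedDict()      # timestamp -> image, insertion = time order

    def store(self, timestamp, image):
        if len(self.images) >= self.max_items:
            self.images.popitem(last=False)   # drop the oldest entry (assumed policy)
        self.images[timestamp] = image

    def on_feedback(self, timestamp):
        # feedback from the monitoring center frees the corresponding slot
        self.images.pop(timestamp, None)
```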
To realize emergency communication of the unmanned aerial vehicle in an emergency scene, the scene can be graded according to the frequency, i.e. the interval duration, of the feedback information that the sending end receives from the monitoring center. Pixels carrying effective information in the image are distinguished by combining historical data from scenes of different emergency grades, and the Huffman code is dynamically adjusted according to the emergency grade, so that the codes carried by those pixels are shortened as much as possible and the transmission becomes more efficient.
In the embodiment of the present invention, the image acquisition module 100 may change the number of frames of the multi-frame gray level image acquired at the next moment based on the frequency of receiving the feedback information from the monitoring center 400. The higher the frequency of receiving the feedback information, the more timely and efficiently the monitoring center is receiving the coded data, and the number of frames acquired at the next moment can be reduced accordingly. Conversely, the lower the frequency of receiving the feedback information, the more likely it is that the monitoring center 400 cannot receive the coded data transmitted by the transmission module in time, or that the transmission module is about to lose its image link to the monitoring center; the number of frames acquired by the image acquisition module 100 at the next moment then needs to be increased, so that images containing more effective information are acquired and transmitted to the monitoring center 400 in time.
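The inverse relation between feedback frequency and the next frame count might be sketched as follows; the constants and the exact mapping are illustrative assumptions, since the text only fixes the monotonic direction:

```python
def frames_for_next_capture(feedback_hz, base_frames=4, min_frames=2, max_frames=16):
    """Monotonically decrease the number of frames captured at the next
    moment as the feedback frequency rises (all constants are illustrative)."""
    if feedback_hz <= 0:
        return max_frames                 # no feedback at all: capture as much as possible
    frames = round(base_frames / feedback_hz)
    return max(min_frames, min(max_frames, frames))
```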
Step S102, the image processing module 200 sequentially performs difference operation on adjacent frame gray level images in the multi-frame gray level image, and superposes the difference operation results to obtain a superposed image; and carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
Firstly, difference operations are sequentially performed on adjacent frames of the multi-frame gray level image, and the difference results are superposed to obtain a superposed image.
When the difference operations are performed sequentially on adjacent frames of the multi-frame gray level images of the target area collected at the current moment, the frame interval used in the difference operation can be determined from the frequency with which the sending end receives feedback information from the monitoring center, so that effective information is captured while information redundancy is avoided.
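A minimal sketch of the differencing-and-superposition step on plain 2-D lists of gray values. The use of absolute differences and the clamping of the sum back to 255 are assumptions not fixed by the text:

```python
def superimpose_differences(frames, interval=1):
    """Difference frames at the given frame interval and sum the absolute
    differences into one superposed image; the interval would be chosen
    from the feedback frequency as described above."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0] * w for _ in range(h)]
    for a, b in zip(frames, frames[interval:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(b[i][j] - a[i][j])
    # clamp back into the 8-bit gray range (assumed)
    return [[min(255, v) for v in row] for row in acc]
```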
Secondly, the superposed image is divided into regions, and the weight of each pixel in each region is obtained from the frequency of the gray levels of each region.
Further, as shown in fig. 3, dividing the superposed image into regions and obtaining the weight of each pixel in each region from the frequency of the gray levels of each region may include steps S1021 to S1024.
Further, in step S1021, the superposed image is divided into a first region and a second region, wherein the first region is the connected domain with the largest area in the superposed image.
First, the connected domains in the superposed image are obtained by connected domain analysis (also called connected domain labelling), i.e. finding and labelling the connected domains in the image. Fig. 4 is a schematic diagram of the first region and the second region in the embodiment of the present invention: as shown in fig. 4, the connected domain with the largest area is taken as the first region, and the part of the superposed image outside the first region is taken as the second region.
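The region split could be sketched with a standard 4-connected flood-fill labelling. Treating every non-zero pixel of the superposed image as foreground is an assumption, and the function name is hypothetical:

```python
from collections import deque

def split_first_second_region(image):
    """Label 4-connected components of the non-zero pixels, take the
    largest as the first region, everything else as the second region.
    Returns a mask with True for first-region pixels."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if image[si][sj] != 0 and labels[si][sj] == 0:
                next_label += 1
                labels[si][sj] = next_label
                q = deque([(si, sj)])
                count = 0
                while q:                       # breadth-first flood fill
                    i, j = q.popleft()
                    count += 1
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w \
                                and image[ni][nj] != 0 and labels[ni][nj] == 0:
                            labels[ni][nj] = next_label
                            q.append((ni, nj))
                sizes[next_label] = count
    biggest = max(sizes, key=sizes.get) if sizes else -1
    return [[labels[i][j] == biggest for j in range(w)] for i in range(h)]
```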
It should be noted that, in the embodiment of the present invention, the first region contains the most information, while the second region is usually the background. The two regions can therefore be gray-scale-compressed to different degrees, so that the effective information is retained to the maximum extent while the irrelevant background information is compressed as much as possible.
Further, in step S1022, the number of gray levels in the first region is adjusted to a first gray level number and the number of gray levels in the second region to a second gray level number by linear mapping, where the second gray level number is determined by the frequency of receiving the feedback information, the sum of the first and second gray level numbers is the preset total number of gray levels, and the first gray level number is greater than the second gray level number.
First, the second gray level number is determined from the frequency of receiving the feedback information: the lower the frequency, the smaller the second gray level number. Note that the lower the frequency of receiving the feedback information, the more strongly the second region needs to be gray-scale-compressed, so that the information contained in the first region is retained as far as possible while the amount of transmitted data is reduced.
Secondly, the number of gray levels in the second region is compressed from 256 to the second gray level number by linear mapping; that is, the original gray range [0, 255], corresponding to 256 gray levels, is reduced to the second gray level number of levels. For example, when the second gray level number is 32, the gray values of pixels in the range [0, 8] before the mapping are all mapped to 4, those in the range [9, 16] are all mapped to 12, and so on until the gray values of all pixels in [0, 255] have been mapped.
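The linear mapping can be sketched as equal-width binning, each pixel being replaced by its bin's rounded midpoint. The exact bin boundaries used here ([0, 7] rather than the text's [0, 8]) are an assumption, since the example endpoints in the text overlap:

```python
def compress_gray_levels(region_pixels, target_levels):
    """Linearly map the 256 input gray levels onto `target_levels`
    equal-width bins; e.g. target_levels=32 gives bins of width 8, so
    pixels in the first bin map to 4 and those in the second to 12."""
    width = 256 / target_levels
    out = []
    for g in region_pixels:
        bin_idx = min(int(g / width), target_levels - 1)   # guard g == 255
        out.append(int(bin_idx * width + width / 2))       # bin midpoint
    return out
```

With the preset total of 256 levels split as in the text, the first region would then be compressed with `compress_gray_levels(first_pixels, 224)`.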
Then, since the first gray level number is greater than the second and their sum is the preset total number of gray levels, the first gray level number can be determined from the second gray level number and the preset total: for example, when the preset total number of gray levels is 256 and the second gray level number is 32, the first gray level number is 224.
The number of gray levels in the first region is then compressed from 256 to the first gray level number by linear mapping, where the linear mapping results must be positive integers.
Further, in step S1023, weights are assigned to the gray levels in the second region in order of their frequency from low to high; the higher the frequency of a gray level in the second region, the larger its weight.
For example, since after the linear mapping the second region contains the second gray level number of gray levels, weights can be assigned in order of gray level frequency from low to high: the gray level with the smallest frequency is given weight 1, and the gray level with the largest frequency is given a weight equal to the second gray level number.
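The rank-based weighting for the second region (and, via `start_weight`, for the first region as in step S1024) might look like the following sketch; breaking frequency ties by gray value is an assumption:

```python
from collections import Counter

def rank_weights(gray_values, start_weight=1):
    """Give each gray level a weight equal to its frequency rank: the
    least frequent level gets `start_weight`, the most frequent gets
    the highest weight."""
    freq = Counter(gray_values)
    ordered = sorted(freq, key=lambda g: (freq[g], g))   # low frequency first
    return {g: start_weight + rank for rank, g in enumerate(ordered)}
```

For the first region, the initial weight would then be one more than the largest second-region weight, e.g. `rank_weights(first_vals, start_weight=max(w2.values()) + 1)`.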
Further, in step S1024, on the basis of the weights of the gray levels in the second region, the weight of each gray level in the first region is determined according to the adjusted frequency of the gray levels in the first region, so as to obtain the weight of each pixel in the first region.
First, the maximum of the weights of the gray levels in the second region is determined and 1 is added to it to obtain the initial weight of the first region. The initial weight in the embodiment of the present invention is the weight of the gray level with the smallest frequency in the first region.
Then, weights are assigned to the gray levels in the first region in order of their frequency from low to high: the higher the frequency of a gray level in the first region, the larger its weight, and the gray level with the lowest frequency is given the initial weight.
The weight of the gray level of each pixel in the first region is then taken as the weight of that pixel.
In this way the weight of every pixel in the superposed image is obtained. The weights can be normalized so that they sum to one; coded data is then obtained by Huffman coding and transmitted to the monitoring center.
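Once normalized, the weights act as symbol probabilities for a Huffman construction. The following is a textbook sketch of that construction, not the patent's exact procedure:

```python
import heapq
from collections import defaultdict

def huffman_codes(weights):
    """Build a Huffman code from per-gray-level weights (the normalized
    weights play the role of symbol probabilities). Returns a dict
    mapping gray level -> bit string."""
    total = sum(weights.values())
    heap = [(w / total, idx, [g]) for idx, (g, w) in enumerate(sorted(weights.items()))]
    heapq.heapify(heap)
    codes = defaultdict(str)
    if len(heap) == 1:                    # degenerate single-symbol case
        codes[heap[0][2][0]] = "0"
    counter = len(heap)                   # unique tie-breaker for merged nodes
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, s2 = heapq.heappop(heap)
        for g in s1:
            codes[g] = "0" + codes[g]     # prepend branch bits
        for g in s2:
            codes[g] = "1" + codes[g]
        counter += 1
        heapq.heappush(heap, (w1 + w2, counter, s1 + s2))
    return dict(codes)
```

Higher-weight (more frequent) gray levels receive shorter bit strings, which is what shortens the codes carried by the effective-information pixels.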
Further, in step S103, the transmission module 300 performs Huffman coding on the superposed image according to the weight of each pixel in each region and sends the coded data to the monitoring center 400. In this way, the monitoring center receives coded data containing more information and can decompress it into an image containing more effective information.
Further, in step S104, the image acquisition module 100 and the image processing module 200 receive the feedback information sent from the monitoring center 400. The feedback information in the embodiment of the invention is the information sent by the monitoring center to the image acquisition module or the image processing module to acknowledge the received coded data. Specifically:
Optionally, the image acquisition module may further acquire position information when acquiring the multi-frame gray level images of the target area, and send it to the monitoring center.
Further, when the frequency with which the image processing module receives the feedback information from the monitoring center is lower than the preset frequency threshold, rescue workers or equipment can be notified to go to the position given by the position information to carry out search and rescue.
It should be noted that, in the embodiment of the present invention, the monitoring center may further analyze the decoded gray image to obtain the information it contains. For example, when the decoded image is a forest image captured by the unmanned aerial vehicle, the actual situation of the forest can be analyzed and judged from it.
In summary, the embodiment of the present invention provides an unmanned aerial vehicle cruise image processing system that adaptively adjusts the compression degree of different regions of the image to be transmitted according to the frequency with which the image processing module receives feedback information from the monitoring center, and determines the image acquisition frequency according to the frequency with which the image acquisition module receives the feedback information. The compressed transmission image is thus provided to the monitoring center adaptively, so that more of the effective information in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
The use of words such as "including," "comprising," "having," and the like in this disclosure is open-ended, meaning "including, but not limited to," and is used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that, in the methods and systems of the present invention, the various components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations are to be regarded as equivalent schemes of the present disclosure.
The above-mentioned embodiments are merely examples given to clearly illustrate the present invention and do not limit its scope. It will be apparent to those skilled in the art that other variations and modifications may be made on the basis of the foregoing description; it is neither necessary nor possible to exhaustively enumerate all embodiments here. All designs identical or similar to the present invention fall within its scope of protection.

Claims (8)

1. An unmanned aerial vehicle cruise image processing system, characterized by comprising:
the image acquisition module is used for acquiring a multi-frame gray image of the target area at the current moment by using the unmanned aerial vehicle;
the image processing module is used for sequentially carrying out differential operation on adjacent frame gray level images in the multi-frame gray level images and superposing differential operation results to obtain superposed images; dividing the superposed image into areas, and obtaining the weight of each pixel point in each area by using the frequency of the gray level of each area;
the transmission module is used for carrying out Huffman coding on the superposed image according to the weight of each pixel point in each region and sending coded data to the monitoring center;
the monitoring center is used for receiving the coded data sent by the transmission module, performing Huffman decoding on the received coded data to obtain a gray level image, and sending feedback information to the image acquisition module and the image processing module respectively.
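The differencing-and-superposition step in claim 1 can be sketched as below, assuming the frames are aligned numpy arrays of equal shape. The use of absolute differences and uint8 clipping are assumptions; the claim only says "differential operation" and "superposing":

```python
# Sketch of claim 1's differencing step: difference each pair of adjacent
# grayscale frames and sum the results into one superimposed image.
# Absolute differences and clipping to uint8 are illustrative assumptions.
import numpy as np

def superimpose_differences(frames):
    """Accumulate |frame[i+1] - frame[i]| over all adjacent frame pairs."""
    acc = np.zeros(frames[0].shape, dtype=np.int32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        acc += np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return np.clip(acc, 0, 255).astype(np.uint8)
```

Regions with motion between frames accumulate large values, which is what makes the subsequent first/second region split meaningful.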
2. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein the image processing module is further configured to determine the number of frames of gray level images to be acquired at the next moment based on the frequency at which feedback information is received from the monitoring center at the current moment.
3. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein in the image processing module, the area division is performed on the superimposed image, and the weight of each pixel point in each area is obtained by using the frequency of the gray level of each area, including:
dividing the superposed image into a first area and a second area, wherein the first area is a connected area with the largest area in the superposed image;
adjusting the gray level number in the first area to a first gray level number and the gray level number in the second area to a second gray level number in a linear mapping mode, wherein the first gray level number is based on the frequency of received feedback information, the sum of the first gray level number and the second gray level number is a preset gray level number, and the first gray level number is greater than the second gray level number;
according to the order from low frequency to high frequency of the gray levels in the second area, weights are sequentially given to all the gray levels in the second area, and the larger the frequency of the gray levels in the second area is, the larger the corresponding weight is;
and determining the weight corresponding to each gray level in the first area according to the adjusted frequency of each gray level in the first area on the basis of each weight corresponding to each gray level in the second area so as to obtain the weight of each pixel point in the first area.
4. The unmanned aerial vehicle cruise image processing system according to claim 3, wherein adjusting the number of gray levels in the first region to a first number of gray levels and the number of gray levels in the second region to a second number of gray levels in a linear mapping manner comprises:
determining the second gray scale number based on the frequency of receiving the feedback information, wherein the lower the frequency of receiving the feedback information is, the smaller the second gray scale number is;
compressing the gray level number in the second area from 256 to a second gray level number in a linear mapping mode;
determining a first gray scale number according to the second gray scale number and a preset gray scale bit number;
the number of gray levels in the first region is compressed from 256 to the first number of gray levels by means of linear mapping.
5. The unmanned aerial vehicle cruise image processing system according to claim 3, wherein on the basis of the weights corresponding to the gray levels in the second region, the weights corresponding to the gray levels in the first region are determined according to the adjusted frequency of the gray levels in the first region, so as to obtain the weights of the pixels in the first region, and the method includes:
determining the maximum value of the weights corresponding to the gray levels in the second area, and adding 1 to the maximum value to obtain the initial weight for the first area;
sequentially giving weights to all gray levels in the first area according to the sequence of the frequency numbers of the gray levels in the first area from low to high, wherein the larger the frequency number of the gray level in the first area is, the larger the corresponding weight is, and the weight of the gray level with the lowest frequency number in the first area is given as the initial weight;
and respectively taking the weights corresponding to the gray levels of the pixels in the first area as the weights of the pixels in the first area.
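The weighting scheme of claims 3 and 5 can be sketched as below, assuming each region is represented as a flat sequence of (already remapped) gray values. Integer weights assigned by ascending frequency, with the first region continuing from the second region's maximum weight plus 1, follow the claims; the tie-breaking order among equally frequent levels is an assumption:

```python
# Sketch of claims 3 and 5: weight gray levels by ascending frequency;
# the first region's weights continue from max(second-region weights) + 1.
from collections import Counter

def region_weights(values, start=1):
    """Assign integer weights start, start+1, ... to gray levels in order
    of ascending frequency: the most frequent level gets the largest weight."""
    freq = Counter(values)
    ordered = sorted(freq, key=lambda g: freq[g])  # low frequency first
    return {g: start + i for i, g in enumerate(ordered)}

def two_region_weights(second_vals, first_vals):
    w2 = region_weights(second_vals, start=1)
    start1 = max(w2.values()) + 1  # claim 5: second region's max weight + 1
    w1 = region_weights(first_vals, start=start1)
    return w1, w2
```

Every pixel then inherits the weight of its gray level, so pixels in the first (largest connected, i.e. most informative) region always outweigh pixels in the second region.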
6. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein the image processing module is further configured to, before sequentially performing the differential operation on adjacent frames of the multi-frame gray level image at the current moment, perform median filtering denoising on each gray level image in the multi-frame gray level image.
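The per-frame median denoising in claim 6 can be sketched in plain numpy (in practice a library routine such as scipy.ndimage.median_filter does the same job). The 3x3 window and edge-replication padding are assumptions; the claim only says "median filtering denoising":

```python
# Sketch of claim 6: 3x3 median filter over a grayscale frame.
# Window size and edge padding are illustrative assumptions.
import numpy as np

def median_filter_3x3(img):
    padded = np.pad(img, 1, mode='edge')
    # stack the nine 3x3-neighborhood shifts, take the pixelwise median
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)
```

Median filtering suppresses isolated noise pixels before the frame differencing, so spurious single-pixel differences do not inflate the superimposed image.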
7. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein the image acquisition module is further configured to acquire position information when acquiring a multi-frame grayscale image of a target area, and send the position information to a monitoring center.
8. The unmanned aerial vehicle cruise image processing system according to claim 7, wherein, when the frequency at which the image processing module receives the feedback information from the monitoring center is lower than a preset frequency threshold, rescue personnel or equipment are notified to proceed to the position indicated by the position information to perform search and rescue.
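For reference, the Huffman coding applied by the transmission module can be driven directly by the per-level weights of claims 3 and 5. The construction below is the textbook heap-based Huffman algorithm, not a procedure taken from the patent; heavier (more frequent) symbols receive shorter codes:

```python
# Textbook Huffman code construction from a {symbol: weight} dict.
# Illustrative only; the patent specifies the weights, not the builder.
import heapq

def huffman_codes(weights):
    """Return a prefix-free {symbol: bitstring} code; larger weights
    yield shorter codes."""
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    tick = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, tick, merged])
        tick += 1
    return heap[0][2]
```

Because first-region gray levels carry strictly larger weights than second-region levels, they end up with shorter codewords, which is the mechanism by which the scheme preserves the informative region at lower bit cost.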
CN202210536115.7A 2022-05-18 2022-05-18 Unmanned aerial vehicle image processing system that cruises Active CN114640853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210536115.7A CN114640853B (en) 2022-05-18 2022-05-18 Unmanned aerial vehicle image processing system that cruises


Publications (2)

Publication Number Publication Date
CN114640853A true CN114640853A (en) 2022-06-17
CN114640853B CN114640853B (en) 2022-07-29

Family

ID=81952898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210536115.7A Active CN114640853B (en) 2022-05-18 2022-05-18 Unmanned aerial vehicle image processing system that cruises

Country Status (1)

Country Link
CN (1) CN114640853B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0730768A (en) * 1993-07-12 1995-01-31 Fujitsu Ltd Image data transmission processing system
US5805228A (en) * 1996-08-09 1998-09-08 U.S. Robotics Access Corp. Video encoder/decoder system
JP2002232680A (en) * 2001-01-30 2002-08-16 Fuji Photo Film Co Ltd Portable device, portable telephone, image transmission system, and image transmission method
US20040228410A1 (en) * 2003-05-12 2004-11-18 Eric Ameres Video compression method
CN1633177A (en) * 2004-12-31 2005-06-29 大唐微电子技术有限公司 Method of frame rate adjusting for video communication system
US20050248776A1 (en) * 2004-05-07 2005-11-10 Minoru Ogino Image transmission device and image reception device
CN101340575A (en) * 2007-07-03 2009-01-07 英华达(上海)电子有限公司 Method and terminal for dynamically regulating video code
CN101345862A (en) * 2008-07-25 2009-01-14 深圳市迈进科技有限公司 Image transmission method for real-time grasp shoot of network monitoring system
CN102484748A (en) * 2009-06-16 2012-05-30 高通股份有限公司 Managing video adaptation algorithms
CN102595093A (en) * 2011-01-05 2012-07-18 腾讯科技(深圳)有限公司 Video communication method for dynamically changing video code and system thereof
CN105208335A (en) * 2015-09-22 2015-12-30 成都时代星光科技有限公司 High-power zoom unmanned aerial vehicle aerial high-definition multi-dimension real-time investigation transmitting system
KR20160119431A (en) * 2016-09-30 2016-10-13 에스케이플래닛 주식회사 Terminal Device, method for streaming UI, and storage medium thereof
CN109524015A (en) * 2017-09-18 2019-03-26 杭州海康威视数字技术股份有限公司 Audio coding method, coding/decoding method, device and audio coding and decoding system
CN110320926A (en) * 2019-07-24 2019-10-11 北京中科利丰科技有限公司 A kind of power station detection method and power station detection system based on unmanned plane
US20200280739A1 (en) * 2017-11-21 2020-09-03 Immersive Robotics Pty Ltd Image Compression For Digital Reality
US20210311476A1 (en) * 2018-12-05 2021-10-07 Bozhon Precision Industry Technology Co., Ltd. Patrol robot and patrol robot management system
WO2021253961A1 (en) * 2020-06-15 2021-12-23 北京世纪瑞尔技术股份有限公司 Intelligent visual perception system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
M. Abdat et al.: "Combining Gray coding and JBIG for lossless image compression", Proceedings of 1st International Conference on Image Processing *
WANG Xinliang et al.: "Design of a flight control software test and simulation platform based on the 1553B bus", Computer Measurement & Control *
CHEN Yang: "Research on new lossless compression algorithms for on-board hyperspectral images", China Masters' Theses Full-text Database (Electronic Journal) *

Also Published As

Publication number Publication date
CN114640853B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN108780499B (en) System and method for video processing based on quantization parameters
CN102523367B (en) Real time imaging based on many palettes compresses and method of reducing
CN111726633B (en) Compressed video stream recoding method based on deep learning and significance perception
CN105391972A (en) Image communication apparatus, image transmission apparatus, and image reception apparatus
CN104205813A (en) Level signaling for layered video coding
CN116405574B (en) Remote medical image optimization communication method and system
CN111491167B (en) Image encoding method, transcoding method, device, equipment and storage medium
CN103327335B (en) For the FPGA coded method of unmanned plane image transmission, system
US20210166434A1 (en) Signal processing apparatus and signal processing method
CN111131828B (en) Image compression method and device, electronic equipment and storage medium
CN111713107A (en) Image processing method and device, unmanned aerial vehicle and receiving end
CN109474824B (en) Image compression method
CN114640853B (en) Unmanned aerial vehicle image processing system that cruises
EP3178224B1 (en) Apparatus and method for compressing color index map
CN110636334B (en) Data transmission method and system
US20220375022A1 (en) Image Compression/Decompression in a Computer Vision System
US10430665B2 (en) Video communications methods using network packet segmentation and unequal protection protocols, and wireless devices and vehicles that utilize such methods
KR102324724B1 (en) Apparatus for compressing and transmitting image using parameters of modem and network and operating method thereof
CN110784620A (en) Equipment data intercommunication method
CN112104872A (en) Image transmission method and device
CN114093051B (en) Communication line inspection method, equipment and system and computer readable storage medium
CN116684003B (en) Quantum communication-based railway line air-ground comprehensive monitoring method and system
WO2023047485A1 (en) Communication apparatus and data communication method
WO2023199696A1 (en) Video compression device, video compression method, and computer program
CN115913984A (en) Model-based information service providing method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant