CN105141924A - Wireless image monitoring system based on 4G technology - Google Patents


Info

Publication number
CN105141924A
Authority
CN
China
Prior art keywords
image
wireless communication
video
communication module
processor
Prior art date
Legal status
Granted
Application number
CN201510594468.2A
Other languages
Chinese (zh)
Other versions
CN105141924B (en)
Inventor
彭彦平
张万宁
Current Assignee
CHENGDU TIMES TECH Co Ltd
Original Assignee
CHENGDU TIMES TECH Co Ltd
Priority date
Filing date
Publication date
Application filed by CHENGDU TIMES TECH Co Ltd filed Critical CHENGDU TIMES TECH Co Ltd
Priority to CN201510594468.2A
Publication of CN105141924A
Application granted
Publication of CN105141924B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a wireless image monitoring system based on 4G technology. The system comprises an image monitoring device mounted on an unmanned aerial vehicle (UAV) and a video transmission device installed at a ground central station. The image monitoring device comprises a monitoring processor, a satellite navigation unit, a high-definition motion camera, and an airborne 4G wireless communication module, all mounted on the UAV. The monitoring processor is built around an SoC single-chip multimedia processor and is equipped with a video input/output interface, an audio input/output interface, a video analog-to-digital/digital-to-analog converter, a memory, and a network communication interface. The video transmission device comprises a station-side 4G wireless communication module, a central-site image processing module, and a display terminal; the station-side 4G wireless communication module receives the image signals of the airborne 4G wireless communication module. By transmitting video over a 4G wireless communication network, the system increases the freedom and flexibility of video monitoring, improves work efficiency, keeps system cost low, and provides high security.

Description

Wireless image monitoring system based on 4G technology
Technical field
The present invention relates to the field of image monitoring, and in particular to a wireless image monitoring system based on 4G technology.
Background technology
Video captured by current unmanned aerial vehicles (UAVs) is generally relayed to a ground station system through an image transmission device, so that an observer at the ground base station can view the video in real time. However, because of the limitations of the image transmission device and its antenna, the ground base station must remain within a certain distance of the UAV, and the observer must therefore stay within that range as well; once outside it, the UAV's video can no longer be viewed in real time. This greatly restricts the application of such systems.
The key to practical UAV video transmission lies in the wireless transmission link. Current wireless transmission technologies mainly include 3G networks (CDMA2000, WCDMA, TD-SCDMA), 4G networks (TD-LTE and FDD-LTE), wireless local area networks (WiFi), satellite links, and microwave links.
Satellite and microwave links are the traditional means of wireless video transmission. The great advantages of satellite communication are its wide service coverage, strong capability, and flexible use: it is largely unaffected by terrain and other external conditions, and in particular it is immune to the external electromagnetic environment. However, the cost of both technologies remains high; their expensive initial construction and communication fees deter most users and prevent wide adoption.
Building a wireless metropolitan-area network with technologies such as WiMAX/WiFi to provide wide-area video coverage requires the builder to deploy a large number of base stations. On the one hand, the construction cost of these base stations is enormous and beyond the means of ordinary users; on the other hand, even when an organization has built such a network, it is usually unwilling to share it with other users because of the huge initial investment, which results in a considerable waste of social resources.
The fourth-generation mobile communication standard, abbreviated 4G, comprises the TD-LTE and FDD-LTE standards. (Strictly speaking, LTE is 3.9G: although marketed as a 4G wireless standard, it was not accepted by 3GPP as the next-generation wireless communication standard IMT-Advanced described by the International Telecommunication Union, and therefore does not meet the 4G standard in the strict sense; only the upgraded LTE-Advanced satisfies the ITU's requirements for 4G.) 4G integrates 3G and WLAN and can rapidly transmit data as well as high-quality audio, video, and images. It supports download speeds above 100 Mbps, about 25 times faster than the 4 Mbps home ADSL broadband in common use, and can satisfy nearly all users' requirements for wireless services. In addition, 4G can be deployed in places not covered by DSL or cable modem service and then extended to entire districts. Clearly, 4G enjoys incomparable advantages.
Summary of the invention
The invention provides a wireless image monitoring system based on 4G technology. The system supports visual navigation, image recognition, and obstacle avoidance; it uses a 4G wireless communication network for video transmission, which increases the freedom and flexibility of video monitoring, greatly reduces installation and wiring work, improves work efficiency, lowers system cost, solves the high-speed exchange of large volumes of image data, and offers high security.
To achieve these goals, the invention provides a wireless image monitoring system based on 4G technology, the system comprising:
an image monitoring device mounted on a UAV and a video transmission device installed at a ground central station;
wherein the image monitoring device comprises:
a monitoring processor, a satellite navigation unit, a high-definition motion camera, and an airborne 4G wireless communication module, all mounted on the UAV;
the monitoring processor is built around an SoC single-chip multimedia processor and is equipped with a video input/output interface, an audio input/output interface, a video analog-to-digital/digital-to-analog converter, a memory, and a network communication interface;
the video transmission device comprises:
a station-side 4G wireless communication module, a central-site image processing module, and a display terminal;
the station-side 4G wireless communication module receives the image signals of the airborne 4G wireless communication module.
Preferably, the SoC single-chip multimedia processor is connected to the airborne 4G wireless communication module over a USB bus for 4G wireless transmission of the video images.
Preferably, the monitoring processor uses the SoC single-chip multimedia processor i.MX27 as its core processor, which employs an ARM926 core IP and runs the Linux real-time operating system.
Preferably, the SoC single-chip multimedia processor connects an external SDRAM data memory over an SDR bus, an external NAND Flash program memory over an EMI bus, the high-definition motion camera over a CSI interface, and an external audio A/D converter over an I2S bus.
Preferably, the central-site image processing module comprises:
an acquiring unit for obtaining a frame transmitted from the station-side 4G wireless communication module, i.e. obtaining the image that the frame represents;
a denoising unit for removing noise data from the image according to a predetermined noise-removal rule;
a recognition unit for identifying a target object in the denoised image according to a predetermined object recognition rule;
a tagging unit for adding a label to the frame, the label expressing a predetermined characteristic of the target object semantically;
a storage unit for storing the label corresponding to the frame.
The present invention has the following advantages and beneficial effects: (1) it supports real-time transmission of high-definition digital images back to the ground, meeting high-definition digital transmission requirements, and supports visual navigation, obstacle avoidance, and image target recognition and tracking, meeting the requirements of new technology development; (2) the predetermined algorithms of the central-site image processing module express high-level semantic information that people can intuitively understand, on which basis the video monitoring image data are classified and tagged, enabling fast and efficient retrieval of video monitoring images.
Brief description of the drawings
Fig. 1 is a block diagram of a wireless image monitoring system based on 4G technology according to the present invention.
Fig. 2 shows a wireless image monitoring method based on 4G technology according to the present invention.
Detailed description of the embodiments
Fig. 1 shows a wireless image monitoring system based on 4G technology according to the present invention. The system comprises an image monitoring device 1 mounted on a UAV and a video transmission device 2 installed at a ground central station.
The image monitoring device 1 comprises a monitoring processor 11, a satellite navigation unit 13, a high-definition motion camera 12, an airborne 4G wireless communication module 14, and a vision computer 15, all mounted on the UAV.
The monitoring processor 11 further embeds an Ethernet switching chip (LAN switch), which is connected with the flight control computer (ARM) over a local area network (LAN).
The monitoring processor 11 is built around an SoC single-chip multimedia processor and is equipped with a video input/output interface, an audio input/output interface, a video analog-to-digital/digital-to-analog converter, a memory, and a network communication interface.
The video transmission device 2 comprises a station-side 4G wireless communication module 21, a central-site image processing module 22, and a display terminal 23; the station-side 4G wireless communication module 21 receives the image signals of the airborne 4G wireless communication module 14.
Preferably, the SoC single-chip multimedia processor is connected to the airborne 4G wireless communication module over a USB bus for 4G wireless transmission of the video images.
Preferably, the monitoring processor uses the SoC single-chip multimedia processor i.MX27 as its core processor, which employs an ARM926 core IP and runs the Linux real-time operating system.
Preferably, the SoC single-chip multimedia processor connects an external SDRAM data memory over an SDR bus, an external NAND Flash program memory over an EMI bus, the high-definition motion camera over a CSI interface, and an external audio A/D converter over an I2S bus.
The vision computer 15 internally contains a DSP processor and an ARM processor and runs the Linux operating system. It is connected to the flight control computer through a 100 Mbps Ethernet port, receives the pictures returned by the high-definition flight camera over the Ethernet switched bus expanded by the monitoring processor's Ethernet switching chip (LAN switch), analyzes and parses the images, fuses them with the data of the optical flow sensor, the ultrasonic sensor, and the inertial measurement unit, and performs visual navigation, obstacle avoidance, and image target recognition and tracking.
The high-definition motion camera 12 is connected directly through its Ethernet interface to the Ethernet switched bus expanded by the monitoring processor 11 and supports the forwarding of multiple video streams; through the Ethernet switching chip (LAN switch), the HD video data are passed to the vision computer (DSP + ARM) for image computation.
The satellite navigation unit 13 consists of a GPS/BeiDou receiver chip, a magnetic compass, and a single-chip microcomputer, and presents a CAN bus for connection to the flight control computer (ARM). It supports GPS and BeiDou positioning, supports attitude solution from the magnetometer, and fuses its data with the inertial measurement unit (IMU); the monitoring processor 11 finally resolves the aircraft attitude and position.
The video transmission device 2 comprises a station-side 4G wireless communication module 21, a multichannel distribution module 22, a central-site image processing module 23, and a display terminal 24. The station-side 4G wireless communication module 21 receives, through the satellite network or the mobile communication network, the image signals transmitted by the airborne 4G wireless communication module 14. The multichannel distribution module 22 consists of a video compression encoder, a multichannel communication distributor, communication equipment, and gateway equipment; the communication equipment includes wired transmission equipment, short-range wireless communication equipment, mobile communication equipment, and satellite communication equipment. The central image processing system consists of decoding equipment and an image display.
Preferably, the central-site image processing module comprises:
an acquiring unit for obtaining a frame transmitted from the station-side 4G wireless communication module, i.e. obtaining the image that the frame represents;
a denoising unit for removing noise data from the image according to a predetermined noise-removal rule. In the process of acquisition, transmission, and storage, an image is usually degraded by the interference and influence of various kinds of noise. To obtain a high-quality digital image, it is necessary to denoise the image, removing useless information from the signal while preserving the integrity of the original information as far as possible. Given that video monitoring systems mostly monitor moving target objects, in one embodiment of the application the static background, which does not need monitoring or emphasis, is separated from the moving foreground; that is, the background part of the acquired monitoring video is removed as part of the noise data;
a recognition unit for identifying a target object in the denoised image according to a predetermined object recognition rule. To identify the target object in a retrieved image, the features of the target object must first be extracted, and the object is then identified by those features; one of the main problems of image retrieval is therefore the extraction of low-level image features. In this embodiment, the target object is identified by extracting features from the denoised image;
a tagging unit for adding a label to the frame, the label expressing a predetermined characteristic of the target object semantically. Once the target object has been identified, a label can be attached to it, based on high-level semantic information that people intuitively understand;
a storage unit for storing the label corresponding to the frame.
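The unit chain above (acquire, denoise, recognize, tag, store) can be sketched as a minimal illustrative model. Every class name, method name, and threshold below is an assumption of this sketch, not anything specified by the patent:

```python
# Illustrative sketch of the central-site image processing pipeline:
# acquire -> denoise (background removal) -> recognize -> tag -> store.

class CentralSiteImageProcessor:
    def __init__(self):
        self.label_store = {}  # frame id -> semantic label (the "storage unit")

    def acquire(self, frame):
        # Acquiring unit: take a frame received from the station-side 4G module.
        return frame["pixels"]

    def denoise(self, pixels, background):
        # Denoising unit: treat the static background as noise; keep only
        # pixels that differ from the reference background.
        return [p if p != b else 0 for p, b in zip(pixels, background)]

    def recognize(self, pixels, threshold=100):
        # Recognition unit: a stand-in rule -- any sufficiently bright
        # foreground pixel counts as part of a detected target.
        return [i for i, p in enumerate(pixels) if p >= threshold]

    def tag_and_store(self, frame_id, targets):
        # Tagging unit + storage unit: attach a semantic label to the frame.
        label = "target-present" if targets else "empty"
        self.label_store[frame_id] = label
        return label


proc = CentralSiteImageProcessor()
frame = {"id": 7, "pixels": [10, 10, 200, 10]}
background = [10, 10, 10, 10]
fg = proc.denoise(proc.acquire(frame), background)
label = proc.tag_and_store(frame["id"], proc.recognize(fg))
print(label)  # -> target-present
```

The stored labels can later be queried by frame id, which is what enables the fast retrieval of monitoring images described in the advantages above.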
Fig. 2 shows a wireless image monitoring method based on 4G technology according to the present invention. The method comprises the following steps:
S1. The monitoring processor starts the flight control program, and the satellite navigation unit starts the GPS navigation program.
S2. The high-definition motion camera captures video images along the track of the flight control program, and the vision computer processes the images.
S3. The airborne 4G wireless communication module and the station-side 4G wireless communication module cooperate to transmit and receive the image signals wirelessly.
S4. The central-site image processing module processes the received image signals and displays them on the display terminal.
Preferably, step S1 further comprises the following navigation and positioning steps:
The monitoring processor 11 judges the positioning data transmitted by the satellite navigation unit 13:
If the positioning data are within the normal range, the monitoring processor 11 stores the received positioning data in the memory.
Positioning data within the normal range means: the longitude, latitude, and altitude values of two adjacent sampling points are compared pairwise; if the difference in longitude between the two adjacent sampling points does not exceed 0.0002 degrees, the difference in latitude does not exceed 0.00018 degrees, and the difference in altitude does not exceed 20 meters, the positioning data are judged to be in the normal range.
If the positioning data become abnormal, the monitoring processor 11 recalls the positioning data stored in the memory and returns the UAV to its starting position along the historical track.
Abnormal positioning data means: the longitude, latitude, and altitude values of two adjacent sampling points are compared pairwise; if the difference in longitude exceeds 0.0002 degrees, or the difference in latitude exceeds 0.00018 degrees, or the difference in altitude exceeds 20 meters, the positioning data are judged to be abnormal.
Preferably, the positioning data are the set of the UAV's longitude x, latitude y, and altitude z at each time point, denoted {(x_t, y_t, z_t)}, where:
(x_1, y_1, z_1) is the UAV's longitude, latitude, and altitude at the 1st time point;
(x_2, y_2, z_2) is the UAV's longitude, latitude, and altitude at the 2nd time point;
and so on: (x_{t-1}, y_{t-1}, z_{t-1}) is the UAV's longitude, latitude, and altitude at time point t-1, and (x_t, y_t, z_t) at time point t.
The interval between two adjacent time points is 0.5 to 5.0 seconds, and every historical positioning fix is stored in the memory of the monitoring processor 11.
The positioning data of time point t are compared with those of time point t-1:
If |x_t - x_{t-1}| < 0.0002 degrees, and |y_t - y_{t-1}| < 0.00018 degrees, and |z_t - z_{t-1}| < 20 meters — that is, the differences in longitude, latitude, and altitude are all within their limits — the positioning data of time point t are judged to be in the normal range and are stored in the memory of the monitoring processor 11.
If |x_t - x_{t-1}| >= 0.0002 degrees, or |y_t - y_{t-1}| >= 0.00018 degrees, or |z_t - z_{t-1}| >= 20 meters — that is, any one of the three differences exceeds the normal range — the positioning data of time point t are judged to be abnormal, which also means the UAV's flight is judged to be abnormal.
The monitoring processor 11 then reads out from the memory, in turn, the positioning data of time point t-1, time point t-2, ..., down to the 2nd time point and the 1st time point, and controls the UAV to return to its departure point along the original track.
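The positioning check and return-to-home readback described above can be sketched as follows, using the stated thresholds (0.0002 degrees longitude, 0.00018 degrees latitude, 20 meters altitude). The function names are illustrative assumptions:

```python
# Sketch of the per-fix anomaly check and the reverse readback of stored
# fixes that forms the return-to-home track.

LON_MAX, LAT_MAX, ALT_MAX = 0.0002, 0.00018, 20.0

def fix_is_normal(prev, cur):
    """prev, cur: (longitude, latitude, altitude) tuples for adjacent samples."""
    return (abs(cur[0] - prev[0]) < LON_MAX and
            abs(cur[1] - prev[1]) < LAT_MAX and
            abs(cur[2] - prev[2]) < ALT_MAX)

def monitor(fixes):
    """Store normal fixes; on the first abnormal fix, return the stored
    history newest-first (the track back to the departure point)."""
    memory = [fixes[0]]
    for prev, cur in zip(fixes, fixes[1:]):
        if fix_is_normal(prev, cur):
            memory.append(cur)
        else:
            return list(reversed(memory))  # return-to-home track
    return None  # flight completed without anomaly

track = monitor([
    (104.0000, 30.00000, 500.0),
    (104.0001, 30.00010, 510.0),   # within all three thresholds
    (104.0010, 30.00010, 510.0),   # longitude jump >= 0.0002 deg: abnormal
])
print(track[0])  # newest stored fix first
```

In a real system the sampled fixes would arrive every 0.5 to 5.0 seconds, as stated above, rather than as a prebuilt list.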
Preferably, in step S1, the flight control program comprises an application layer program, a real-time task scheduler and external interrupt handler, a hardware initialization program, hardware drivers, a CAN communication protocol program, and a LAN (TCP/IP) communication protocol program. The application layer program is connected with the real-time task scheduler and the external interrupt handler; the real-time task scheduler and the external interrupt handler are connected with the hardware initialization program; and the hardware initialization program is connected with the hardware drivers.
Preferably, the application layer program comprises an application layer interface program, a power management and battery monitoring program, a flight indicator light control program, a safety control program, a visual control program, a flight track control program, an augmentation (stability enhancement) control program, a remote control decoding program, and a communication processing program.
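The layering described above — application-layer tasks dispatched by a real-time task scheduler — might be modeled very roughly as follows. All names are illustrative assumptions; the patent specifies only the layering, not an implementation:

```python
# Toy sketch of the flight-control program structure: application-layer
# tasks are registered with a scheduler, which runs them each round.

class Scheduler:
    def __init__(self):
        self.tasks = []

    def register(self, name, fn):
        self.tasks.append((name, fn))

    def run_once(self):
        # One scheduling round: run every registered application-layer task.
        return {name: fn() for name, fn in self.tasks}

def power_monitor():
    return "battery-ok"   # stand-in for power management / battery monitoring

def track_control():
    return "on-track"     # stand-in for flight track control

sched = Scheduler()
sched.register("power", power_monitor)
sched.register("track", track_control)
print(sched.run_once())  # -> {'power': 'battery-ok', 'track': 'on-track'}
```

A real scheduler would of course be preemptive and interrupt-driven, as the structure above implies; this sketch only shows the registration-and-dispatch shape.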
Preferably, step S2 comprises the following sub-steps:
S21. The video file splitter of the vision computer 15 splits the video file.
S22. The video compression encoder of the vision computer 15 compresses the split files.
S23. The encryption device of the vision computer 15 encrypts the compressed video files.
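Steps S21–S23 can be sketched as a split → compress → encrypt pipeline. In this sketch, zlib stands in for the video compression encoder, and the XOR keystream is a placeholder only — the patent names an encryption device but no algorithm:

```python
# Sketch of S21-S23: split the stream into segments, compress each segment,
# then "encrypt" it. The XOR step is purely illustrative; a real system
# would use an authenticated cipher.
import zlib

def split(data: bytes, segment_size: int):
    return [data[i:i + segment_size] for i in range(0, len(data), segment_size)]

def compress(segment: bytes) -> bytes:
    return zlib.compress(segment)

def encrypt(segment: bytes, key: bytes) -> bytes:
    # Placeholder cipher: XOR with a repeating key (applying it twice
    # restores the input).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(segment))

raw = b"frame-data-" * 100
segments = [encrypt(compress(s), b"secret") for s in split(raw, 256)]

# Round-trip check: decrypt (XOR again) and decompress each segment.
restored = b"".join(zlib.decompress(encrypt(s, b"secret")) for s in segments)
print(restored == raw)  # -> True
```

Splitting before compression lets each segment be transmitted and decoded independently, which suits the lossy 4G link described above.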
Preferably, in step S4, the video images can be processed by the following method:
S41: Obtain a frame of the video, i.e. obtain the image that the frame represents.
S42: Remove the noise data from the image according to the predetermined noise-removal rule.
In the process of acquisition, transmission, and storage, an image is usually degraded by the interference and influence of various kinds of noise. To obtain a high-quality digital image, it is necessary to denoise the image, removing useless information from the signal while preserving the integrity of the original information as far as possible.
The ultimate purpose of video image denoising is to improve the given image and to remedy the degradation of image quality caused by noise interference. Denoising effectively improves image quality, increases the signal-to-noise ratio, and better presents the information carried by the original image.
Current image denoising methods fall broadly into two classes: spatial-domain methods and transform-domain methods. The former operate directly on the original image data, processing pixel grey values; common spatial-domain denoising algorithms include neighborhood averaging, median filtering, and low-pass filtering. The latter operate on a transform of the image: the image is converted from the spatial domain to a transform domain, the transform coefficients are processed, and an inverse transform brings the image back to the spatial domain, thereby removing the noise. The Fourier transform and the wavelet transform are common transforms used for image denoising. Since these denoising methods are mature technology, the embodiments of the present application may freely choose among them according to the actual situation, and the choice does not limit the application.
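As a concrete instance of the spatial-domain methods mentioned above, a 3×3 median filter can be sketched in a few lines of dependency-free code; it removes an isolated salt-noise pixel while leaving uniform regions untouched:

```python
# Minimal 3x3 median filter over a grayscale image given as a list of rows.

def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 3x3 neighborhood
    return out

noisy = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],   # isolated salt-noise pixel
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
clean = median_filter_3x3(noisy)
print(clean[1][1])  # -> 10 (the outlier is removed)
```

Median filtering is a good fit for impulse noise precisely because a single outlier never reaches the middle of the sorted window.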
Given that video monitoring systems mostly monitor moving target objects, in one embodiment of the application the static background, which does not need monitoring or emphasis, is separated from the moving foreground; that is, the background part of the acquired monitoring video is removed as part of the noise data.
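The background-removal idea above can be sketched as simple differencing against a reference background frame, assuming a fixed camera; the tolerance value is an assumption of this sketch:

```python
# Background/foreground separation by differencing against a reference
# frame: pixels close to the background are discarded as noise, leaving
# only the moving foreground for later target recognition.

def remove_background(frame, background, tol=15):
    """Zero out every pixel within `tol` grey levels of the reference
    background; what remains is the moving foreground."""
    return [[p if abs(p - b) > tol else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[50, 50, 50], [50, 50, 50]]
frame      = [[52, 50, 200], [48, 180, 50]]  # two moving-object pixels
fg = remove_background(frame, background)
print(fg)  # -> [[0, 0, 200], [0, 180, 0]]
```

A deployed system would maintain the reference background adaptively (e.g. a running average) rather than as a single fixed frame.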
S43: Identify the target object in the denoised image according to the predetermined object recognition rule.
To identify the target object in a retrieved image, the features of the target object must first be extracted, and the object is then identified by those features; one of the main problems of image retrieval is therefore the extraction of low-level image features.
The low-level image features that the application can extract include color, texture, shape, and depth of field.
1. Color
Color is a very important visual property of an object's surface and one of the main perceptual features by which people recognize images. Compared with shape and texture descriptors, color is the most basic visual feature in content-based image retrieval (CBIR) and the most direct method used in image representation and retrieval; the main reasons are that color features are simple to compute and that their information correlates strongly with the specific objects and scene types in the image. In addition, color features depend relatively little on the size, orientation, and viewing angle of the image itself. In practice, however, the same object may show color differences when captured by different cameras owing to differences in illumination intensity, shooting angle, imaging characteristics, and object distance. To address this problem and obtain a stable, unique expression of the target's features, a color transfer method or a color conversion method can be used to eliminate the color differences and improve the robustness of the color features.
Before the color transfer or color conversion method is used to eliminate color differences, the collected video monitoring images can first be given an enhancement pre-treatment.
Research shows that the human visual system perceives the illumination intensity of objects in a highly nonlinear way, whereas the imaging process of a video camera is comparatively simple. In general, camera imaging differs from direct human perception, and this difference becomes more obvious when the dynamic range of the scene is large. Dynamic range is the ratio between the brightest and the darkest object luminance in a scene. Because it employs regional adaptation, the human visual system can perceive a dynamic range greater than 1000:1, while an ordinary display can show only about 100:1. When the scene's dynamic range exceeds what the display can show, the image's dynamic range must be compressed to suit the display. Simple tone mapping methods use a global logarithmic function, gamma correction, or a sigmoid function to compress the image's dynamic range, which easily loses local detail. More advanced tone mapping methods adopt regionally adaptive approaches, of which Retinex-based methods are one class.
Retinex theory, proposed by Land in 1963, is a model of how human vision adapts to and perceives the color and lightness of objects; its basic idea is that the illumination a person perceives at a point depends not on the absolute illumination value at that point alone but also on the illumination values around it. Retinex enhancement improves color constancy, compresses the image's dynamic range, improves contrast, and effectively reveals details submerged in shadow regions. As applied in the embodiments of the present application, the Retinex method first estimates the illumination of the collected video monitoring image and then subtracts the illumination from the image in the log domain, suppressing the influence of illumination variation and yielding the enhanced image.
After the collected video monitoring image has been enhanced with the Retinex algorithm, color transfer or color conversion is applied to the enhanced image to eliminate color differences and improve the robustness of the color features. Eliminating color differences in the video monitoring image is an important part of removing image noise.
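The log-domain illumination subtraction described above can be sketched as a single-scale Retinex on a 1-D signal, with a local average standing in for the Gaussian surround to keep the example dependency-free. This is a sketch of the idea, not the patent's implementation:

```python
# Single-scale Retinex sketch: reflectance = log(I) - log(L), where the
# illumination L is estimated by local smoothing of the signal I.
import math

def retinex_1d(signal, radius=2):
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        illumination = sum(signal[lo:hi]) / (hi - lo)  # local smoothing
        out.append(math.log(signal[i]) - math.log(illumination))
    return out

# The same edge pattern under bright and under shadowed illumination:
lit      = [100, 100, 200, 200, 200]
shadowed = [10, 10, 20, 20, 20]   # identical scene, 10x darker illumination
r_lit = retinex_1d(lit)
r_shadow = retinex_1d(shadowed)

# After log-domain illumination removal the two responses coincide,
# illustrating the color/lightness constancy claimed above:
print(all(abs(a - b) < 1e-9 for a, b in zip(r_lit, r_shadow)))  # -> True
```

The same computation extends to 2-D images by replacing the windowed average with a Gaussian blur of the image.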
2. Texture
Texture is an intrinsic characteristic shared by all object surfaces and a reflection of the spatial structure and attributes of the imaged surface; it is a visual feature of regional homogeneity in an image that does not depend on color or brightness. Texture features carry important information about the structural organization of object surfaces; they appear as regularities in the distribution of grey levels or colors across the image and are therefore usually regarded as a local property of the image, or as a measure of the relations between pixels within a local region.
Common image texture features include the co-occurrence matrix, wavelet textures, and Tamura texture features. Among these, Haralick et al. described image texture with the co-occurrence matrix method: from a mathematical standpoint they studied the spatial dependence of grey levels in an image and recorded this dependence statistically in matrix form. The grey-level co-occurrence matrix captures the spatial distribution of color intensities: a co-occurrence matrix is constructed from the distances and orientations between image pixels, and meaningful statistics are extracted from it as the texture descriptors.
A characteristic of video monitoring images is that the target is often in motion. In non-rigid object tracking or long-term target tracking, global features such as the object contour may change greatly, whereas local features remain well consistent, so local feature point methods are a good choice.
Embodiments of the application can use the local binary pattern (LBP) descriptor to detect faces, improving the retrieval precision and retrieval speed for pedestrians.
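A minimal 3×3 LBP code, as mentioned above, can be computed as follows; the clockwise neighbor ordering is an assumption of this sketch (LBP variants differ on it):

```python
# Basic 3x3 local binary pattern: each pixel is encoded by comparing its 8
# neighbors with the center, producing an 8-bit code; histograms of these
# codes over a region form the texture feature used for matching.

def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x), neighbors read clockwise
    from the top-left."""
    c = img[y][x]
    neighbors = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
                 img[y][x+1],   img[y+1][x+1], img[y+1][x],
                 img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:
            code |= 1 << bit
    return code

img = [
    [9, 9, 9],
    [1, 5, 1],
    [1, 1, 1],
]
print(lbp_code(img, 1, 1))  # -> 7: only the three top neighbors reach 5
```

Because the code depends only on sign comparisons, LBP is invariant to monotonic grey-level changes, which suits the varying illumination of outdoor monitoring video.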
3. Shape
Shape features describe the objects and regions segmented from the image and are important in image expression and image understanding. Intuitively, people are insensitive to translation, rotation, and scaling of an object's shape, so shape features extracted with the corresponding invariances are effective descriptors of image region shape.
4. Depth of field
For visual attention, the depth of field extracted from a single image is a general, top-down feature: a target placed outside the camera's focal zone exhibits defocus blur.
The depth-of-field feature extraction of embodiments of the present application may comprise two main steps. First, the degree of blur at the edges of the single image is estimated. Then, the edge blur values are Gaussian-weighted to obtain the relative depth of each fundamental region. The depth map of a single image is computed concretely as follows:
First, the image is re-blurred with a Gaussian kernel of standard deviation σ1. Then, at each image edge, the ratio T of the gradient of the original image to the gradient of the re-blurred image is computed. The degree of blur σ at the image edge can then be computed as σ = σ1 / √(T² − 1).
In the present application, Canny edge detection extracts the image edges, and the standard deviation of the secondary Gaussian blur is set to σ1 = 1. The blur values σ of all image edges are then normalized to the interval [0, 1].
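The edge-blur estimate can be sketched numerically. For a step edge blurred by a Gaussian of unknown std σ, the peak gradient is proportional to 1/σ; after re-blurring with σ1 it is proportional to 1/√(σ² + σ1²), so the gradient ratio T = √(σ² + σ1²)/σ recovers σ = σ1/√(T² − 1). The function name and the round-trip check below are illustrative:

```python
import math

def edge_blur_sigma(T, sigma1=1.0):
    """Recover the blur sigma at an edge from the gradient ratio
    T = |grad original| / |grad re-blurred|.

    T = sqrt(sigma**2 + sigma1**2) / sigma implies
    sigma = sigma1 / sqrt(T**2 - 1).
    """
    if T <= 1.0:
        raise ValueError("gradient ratio must exceed 1 at an edge")
    return sigma1 / math.sqrt(T * T - 1.0)

# Round-trip check: an edge with sigma = 0.5, re-blurred with sigma1 = 1
true_sigma = 0.5
T = math.sqrt(true_sigma ** 2 + 1.0) / true_sigma
print(round(edge_blur_sigma(T), 6))  # recovers 0.5
```

Note that T approaches 1 for already-blurry edges, so in practice the ratio is only evaluated at edges detected by Canny, where the gradient is reliable.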
Subsequently, the relative depth Wm of a fundamental region m is defined as the Gaussian-weighted average of all edge blur values of the image:
Wm = Σ(i,j) G(i, j) · σ(i, j) / Σ(i,j) G(i, j)
where (i, j) are the coordinates of the pixels of fundamental region m, σ(i, j) is the blur of edge pixel M(i, j), and the Gaussian weight, taken over the neighborhood Vij of (i, j), is defined as
G(i, j) = exp(−d(M(i, j), m)² / (2σW²))
with d(M(i, j), m) the distance between edge pixel M(i, j) and region m. Here σW is the standard deviation of the secondary Gaussian of the relative depth; it suppresses the sensitivity of the depth estimate to the distance between edge pixel M(i, j) and fundamental region m. The value of σW has a considerable influence on the depth feature: too large a value drives all depths toward the same value, while too small a value amplifies local blur. In embodiments of the present application, σW is set to σW = 0.15.
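A sketch of the weighted average Wm described above; the exact neighborhood and distance term are not spelled out in the text, so this assumes a Gaussian of the Euclidean distance between each region pixel and each edge pixel, with coordinates normalized to [0, 1] to match σW = 0.15:

```python
import math

def region_relative_depth(region_pixels, edge_points, sigma_w=0.15):
    """Gaussian-weighted average of edge blur values for one region.

    region_pixels: list of (i, j) coordinates of the region's pixels
    edge_points:   list of ((i, j), blur) with blur normalized to [0, 1]
    Nearby edges dominate; sigma_w controls how fast the weight decays.
    """
    num = den = 0.0
    for (ri, rj) in region_pixels:
        for (ei, ej), blur in edge_points:
            d2 = (ri - ei) ** 2 + (rj - ej) ** 2
            w = math.exp(-d2 / (2.0 * sigma_w ** 2))
            num += w * blur
            den += w
    return num / den if den else 0.0

region = [(0.50, 0.50)]
edges = [((0.52, 0.50), 0.2),   # nearby edge, small blur (in focus)
         ((0.95, 0.95), 0.9)]   # distant edge, heavy blur
print(region_relative_depth(region, edges) < 0.5)  # nearby edge dominates
```

The behaviour matches the text: with a larger sigma_w both edges contribute almost equally and all regions drift toward the same depth, while a very small sigma_w makes the estimate track only the closest (possibly noisy) edge.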
The low-level features mentioned above (color, texture, shape and depth of field) are global features. Global features are of course not limited to these four; they may also include, for example, the GIST feature and fractal features, which are not detailed here. In addition, an image may also have local features, such as SIFT features.
The aim of content-based image retrieval (CBIR) is to find related images in an image library on the basis of extracted low-level visual features. The content features of an image comprise low-level features and high-level semantic features. The extracted color, texture, shape and depth-of-field features represent the image at the low level; high-level semantic features, obtained by training on and combining selected low-level features, better model a person's direct perception of the image, making it convenient to map the low-level visual features of an image to its high-level semantics.
To facilitate later retrieval, the acquired video surveillance images may first be classified according to the extracted low-level features. The recognition of each semantic class is treated as an independent two-class classification problem. Suppose all video surveillance images fall into m classes, denoted L = {A1, A2, ..., Am}, and the number of images belonging to semantic class Ai is Ni. The m-class classification problem is converted into m two-class problems: for any class Ai, the positive training examples are all images of that class, and the negative examples are all images in the training set belonging to the other classes; that is, the positive examples of class Ai total Ni and the negative examples total the sum of the other classes' image counts, Σj≠i Nj.
For a given semantic class A ∈ L, the training set of its two-class problem is T = {(x1, y1), (x2, y2), ..., (xl, yl)}, where each (xi, yi) comes from a group of images given in advance and annotated with semantic labels. Here xi ∈ Rn is an image vector representing the image by features such as color, texture, shape and depth of field, and yi ∈ {+1, −1}: yi = +1 indicates xi ∈ A, i.e. the image represented by vector xi belongs to semantic class A, and yi = −1 indicates that it does not.
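The one-vs-rest decomposition above can be sketched as follows (the class dictionary, placeholder image vectors and helper name are illustrative, not from the patent):

```python
def one_vs_rest_sets(images_by_class):
    """Convert an m-class image set into m two-class training sets.

    images_by_class: dict mapping class name Ai -> list of image vectors.
    Returns a dict mapping Ai -> list of (x, y) with y = +1 for images
    of Ai and y = -1 for images of every other class.
    """
    sets = {}
    for a, positives in images_by_class.items():
        t = [(x, +1) for x in positives]
        for b, others in images_by_class.items():
            if b != a:
                t.extend((x, -1) for x in others)
        sets[a] = t
    return sets

data = {"A1": ["x1", "x2"], "A2": ["x3"], "A3": ["x4", "x5"]}
sets = one_vs_rest_sets(data)
# For A1: N1 = 2 positives and (N - N1) = 3 negatives
print(sum(1 for _, y in sets["A1"] if y == +1),
      sum(1 for _, y in sets["A1"] if y == -1))  # 2 3
```

Any binary classifier (an SVM is the usual choice for this construction) can then be trained once per class on its set, and a new image is assigned to the class whose classifier responds positively.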
S44: a label is added to the frame, the label being capable of expressing a predetermined characteristic of the target object on the basis of semantics.
After the recognition of the target object is completed, a label can be attached to the recognized target object; the attached label can be based on high-level semantic information matching a person's intuitive understanding. The characteristics expressed by these attached labels are all high-level semantic information that is easy to grasp intuitively.
S45: the label is stored in correspondence with the frame, forming a tag library convenient for later retrieval.
In embodiments of the present application, the extracted low-level visual features are mapped, according to a predetermined algorithm, to high-level semantic information that people grasp intuitively, and on this basis the video surveillance image data are classified and annotated. This expresses the semantics of the video surveillance image data well, reduces or even removes the "semantic gap" between low-level image features and rich human semantic content, and enables fast and efficient retrieval of video surveillance images.
S46: a query request is received, the query request carrying a keyword.
When the target object needs to be queried, a query request is received; the request carries a keyword defined in advance for the target object.
S47: the stored labels are searched for the keyword, and the frames corresponding to the labels identical to the keyword are obtained.
S48: the obtained frames are arranged in chronological order.
All obtained frames containing the target object are arranged in chronological order; further, temporally continuous frames are combined into a video, while discontinuous frames are kept as individual images. This can, to a certain extent, eliminate the interruptions of the target object in time and space, providing direct and objective information for analyzing the target object's motion trajectory.
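As an illustrative sketch of step S48 (the frame timestamps, gap threshold and function name are assumptions, not from the patent), temporally continuous frames can be grouped into clips while isolated frames are kept as stills:

```python
def group_frames(frame_times, max_gap=1):
    """Split chronologically sorted frame timestamps into runs.

    Runs of consecutive frames (gap <= max_gap) become video clips;
    isolated frames are kept as single images.
    """
    frame_times = sorted(frame_times)
    runs, current = [], [frame_times[0]]
    for t in frame_times[1:]:
        if t - current[-1] <= max_gap:
            current.append(t)
        else:
            runs.append(current)
            current = [t]
    runs.append(current)
    videos = [r for r in runs if len(r) > 1]
    stills = [r[0] for r in runs if len(r) == 1]
    return videos, stills

videos, stills = group_frames([1, 2, 3, 7, 9, 10])
print(videos, stills)  # [[1, 2, 3], [9, 10]] [7]
```

In a real system the timestamps would come from the frames retrieved in step S47, and max_gap would be chosen from the capture frame interval.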
As described above, although embodiments have been illustrated with reference to specific examples and the accompanying drawings, a person of ordinary skill in the art can make various modifications and variations based on the above description. For example, suitable results may also be achieved if the described techniques are performed in a different order, and/or if the components of the described systems, structures, devices and circuits are combined in a different manner, or replaced or supplemented by other components or their equivalents. For those of ordinary skill in the art to which the invention belongs, any equivalent substitution or obvious modification made without departing from the inventive concept, with identical performance or use, shall be regarded as falling within the protection scope of the present invention.

Claims (5)

1. A wireless image monitoring system based on 4G technology, the system comprising:
an image monitoring device mounted in an unmanned aerial vehicle and a video transmission device arranged in a ground central station;
wherein the image monitoring device comprises:
a monitoring processor, a satellite navigation unit, a high-definition motion camera and a vehicle-terminal 4G wireless communication module, all mounted on the unmanned aerial vehicle;
the monitoring processor has an SOC single-chip multimedia processor as its core and is provided with a video input/output interface, an audio input/output interface, a video analog-to-digital/digital-to-analog converter, a memory and a network communication interface;
the video transmission device comprises:
a station-terminal 4G wireless communication module, a central-station image processing module and a display terminal;
the station-terminal 4G wireless communication module receives the image signal of the vehicle-terminal 4G wireless communication module.
2. The system as claimed in claim 1, characterized in that the SOC single-chip multimedia processor is connected to the vehicle-terminal 4G wireless communication module via a USB bus for 4G wireless transmission of the video image.
3. The system as claimed in claim 2, characterized in that the monitoring processor adopts the SOC single-chip multimedia processor i.MX27, built around an ARM926 core IP, as its core processor, and runs the real-time operating system Linux.
4. The system as claimed in claim 3, characterized in that the SOC single-chip multimedia processor is externally connected to an SDRAM data memory via an SDR bus and to a NAND Flash program memory via an EMI bus, is connected to the high-definition motion camera via a CSI interface, and is externally connected to an audio analog-to-digital converter via an I2S bus.
5. The system as claimed in claim 1, characterized in that the central-station image processing module comprises:
an acquiring unit for acquiring a frame transmitted from the station-terminal 4G wireless communication module, that is, acquiring the image the frame represents;
a denoising unit for removing noise data in the image according to a predetermined noise-removal rule;
a recognition unit for recognizing a target object in the denoised image according to a predetermined object recognition rule;
an adding unit for adding a label to the frame, the label being capable of expressing a predetermined characteristic of the target object on the basis of semantics;
a storage unit for storing the label in correspondence with the frame.
CN201510594468.2A 2015-09-17 2015-09-17 Wireless image monitoring system based on 4G technologies Active CN105141924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510594468.2A CN105141924B (en) 2015-09-17 2015-09-17 Wireless image monitoring system based on 4G technologies

Publications (2)

Publication Number Publication Date
CN105141924A true CN105141924A (en) 2015-12-09
CN105141924B CN105141924B (en) 2018-03-30

Family ID: 54727120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510594468.2A Active CN105141924B (en) 2015-09-17 2015-09-17 Wireless image monitoring system based on 4G technologies

Country Status (1)

Country Link
CN (1) CN105141924B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578143A (en) * 2015-12-29 2016-05-11 成都移动魔方科技有限公司 Intelligent remote monitoring system
CN106412500A (en) * 2016-09-14 2017-02-15 芜湖扬展新材料科技服务有限公司 Embedded Web network monitoring system based on RTSP protocol
CN106452556A (en) * 2016-09-14 2017-02-22 芜湖扬展新材料科技服务有限公司 Data transmission system for aircraft based on 4G network
CN106568777A (en) * 2016-11-16 2017-04-19 王金鹏 Pest and disease monitoring system
CN106645147A (en) * 2016-11-16 2017-05-10 王金鹏 Method for pest and disease damage monitoring
CN106843251A (en) * 2017-02-20 2017-06-13 上海大学 Crowded crowd promptly dredges unmanned plane
CN107831784A (en) * 2017-11-13 2018-03-23 广州纳飞智能技术有限公司 A kind of UAV Flight Control device based on CPU sizes framework and Linux system
CN107888884A (en) * 2017-11-23 2018-04-06 深圳市智璟科技有限公司 Unmanned plane command dispatching system platform and method based on 4G networks
CN108280981A (en) * 2018-03-01 2018-07-13 四川智慧鹰航空科技有限公司 A kind of miniature drone wireless control system
CN108391047A (en) * 2018-01-23 2018-08-10 倪惠芳 A kind of indoor unmanned plane multi-angled shooting control system of electronic information technical field
CN108447237A (en) * 2018-03-01 2018-08-24 四川智慧鹰航空科技有限公司 A kind of signal transmitting apparatus for unmanned plane remote control

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102183955A (en) * 2011-03-09 2011-09-14 南京航空航天大学 Transmission line inspection system based on multi-rotor unmanned aircraft
CN102238367A (en) * 2010-04-23 2011-11-09 上海东嵌科技发展有限公司 Video monitoring system based on 3G mobile communication technology
US20110311099A1 (en) * 2010-06-22 2011-12-22 Parrot Method of evaluating the horizontal speed of a drone, in particular a drone capable of performing hovering flight under autopilot
CN101778260B (en) * 2009-12-29 2012-01-04 公安部第三研究所 Method and system for monitoring and managing videos on basis of structured description
CN102620736A (en) * 2012-03-31 2012-08-01 贵州贵航无人机有限责任公司 Navigation method for unmanned aerial vehicle
CN102999926A (en) * 2012-11-12 2013-03-27 北京交通大学 Low-level feature integration based image vision distinctiveness computing method
CN203773717U (en) * 2013-11-12 2014-08-13 武汉大学 Remote visual touch screen control system for unmanned plane

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
柳明: "GPS失效下的无人机组合导航系统", 《济南大学学报(自然科学版)》 *

Also Published As

Publication number Publication date
CN105141924B (en) 2018-03-30

Similar Documents

Publication Publication Date Title
CN105120237B (en) Wireless image monitoring method based on 4G technologies
CN105141924B (en) Wireless image monitoring system based on 4G technologies
CN110674746B (en) Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium
US11393212B2 (en) System for tracking and visualizing objects and a method therefor
CN112417953B (en) Road condition detection and map data updating method, device, system and equipment
CN104484814B (en) A kind of advertising method and system based on video map
CN105120230A (en) Unmanned plane image monitoring and transmitting system
CN113192646B (en) Target detection model construction method and device for monitoring distance between different targets
EP4116938A1 (en) Image generating device, image generating method, recording medium generating method, learning model generating device, learning model generating method, learning model, data processing device, data processing method, inferring method, electronic instrument, generating method, program, and non-transitory computer-readable medium
CN110659391A (en) Video detection method and device
CN105049790A (en) Video monitoring system image acquisition method and apparatus
CN106682592A (en) Automatic image recognition system and method based on neural network method
CN104486585B (en) A kind of city magnanimity monitor video management method and system based on GIS
CN105120232A (en) Image monitoring and transmitting method for unmanned plane
CN110113560A (en) The method and server of video intelligent linkage
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
CN108132054A (en) For generating the method and apparatus of information
CN114255407A (en) High-resolution-based anti-unmanned aerial vehicle multi-target identification and tracking video detection method
CN114038193A (en) Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking
Mansourifar et al. Gan-based satellite imaging: A survey on techniques and applications
CN114299230A (en) Data generation method and device, electronic equipment and storage medium
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
CN112668675B (en) Image processing method and device, computer equipment and storage medium
Hongquan et al. Video scene invariant crowd density estimation using geographic information systems
CN112926415A (en) Pedestrian avoiding system and pedestrian monitoring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant