CN113436205A - Remote sensing image rapid interpretation method based on sight tracking - Google Patents

Remote sensing image rapid interpretation method based on sight tracking

Info

Publication number
CN113436205A
Authority
CN
China
Prior art keywords
image
sight
remote sensing
interpretation
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110667778.8A
Other languages
Chinese (zh)
Inventor
孙康
陈金勇
李方方
王敏
帅通
王士成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN202110667778.8A priority Critical patent/CN113436205A/en
Publication of CN113436205A publication Critical patent/CN113436205A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a rapid remote sensing image interpretation method based on sight tracking, belonging to the field of remote sensing image processing. The method uses sight tracking equipment to acquire the user's gaze information, obtains the image area the user is attending to by constructing a visual attention model, displays different types of images with an optimized visualization strategy during manual interpretation, and on this basis achieves rapid interpretation of the images with an object-oriented method. The method has extremely low computational complexity: by capturing and analyzing eye gaze information it can quickly produce an interpretation of the user's region of interest, and is therefore highly efficient. It also has a wide application range, being suitable for panchromatic, multispectral, hyperspectral and SAR images and able to interpret many types of ground objects. Test results show that the method greatly improves image interpretation efficiency and achieves higher interpretation precision on small targets and ground-object edges.

Description

Remote sensing image rapid interpretation method based on sight tracking
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a remote sensing image rapid interpretation method based on sight tracking.
Background
With the rapid development of aerospace technology, the spatial, spectral and temporal resolution of remote sensing data keeps improving, yielding wide-area, high-precision, multi-level dynamic observations of the ground. The growth of data causes information redundancy and increases processing complexity, posing new challenges for the rapid and effective detection and identification of remote sensing targets. Practical applications show that much remote sensing data is of little value, including areas covered by cloud, large water bodies, dense vegetation and the like. In many cases, industry users of remote sensing data care only about key-area information within wide-area imagery. Especially in disaster relief, emergency response and similar situations, the key problem of remote sensing data application is how to quickly find and extract the task-relevant, effective part of massive remote sensing image data, that is, selective processing of the data.
Disclosure of Invention
The invention aims to provide a rapid remote sensing image interpretation method based on sight tracking which, by capturing and analyzing human eye gaze, realizes fast interpretation of wide-area remote sensing images at extremely low computational complexity and with high interpretation precision.
To achieve this aim, the invention adopts the following technical scheme:
a remote sensing image fast interpretation method based on sight tracking comprises the following steps:
step 1, arranging sight tracking equipment, wherein the sight tracking equipment is used for collecting visual information of a display observed by human eyes;
step 2, carrying out enhanced visualization on the remote sensing image on a display; specifically, the panchromatic or SAR image is automatically optimized and stretched, and the multispectral or hyperspectral image is subjected to maximum signal-to-noise ratio enhancement display;
step 3, completing sight tracking and analysis by adopting a pupil-cornea reflection method to obtain sight direction information and determining a primary attention area of the sight;
step 4, partitioning the primary attention area, calculating the visual attention of each partition, and taking the partition with the maximum visual attention as the visual attention area;
step 5, based on the visual attention area obtained in step 4, carrying out image analysis within the visual attention area with an object-oriented segmentation method and completing image interpretation.
Furthermore, the sight tracking device is arranged at the bottom of the display and comprises an active illumination point light source and an image acquisition module.
Further, the specific manner of step 2 is as follows:
for a panchromatic or SAR image, counting the gray values of the whole image, sorting them from large to small, and finding the pixel value v_T at position 0.02N from the top and the pixel value v_B at position 0.02N from the bottom, where N is the number of pixels in the whole image; each pixel value is then mapped according to the following relation (a standard 2% linear stretch to the display range [0, 255]; the original formula is given only as an image):

$$v_i' = \begin{cases} 0, & v_i \le v_B \\ 255\,\dfrac{v_i - v_B}{v_T - v_B}, & v_B < v_i < v_T \\ 255, & v_i \ge v_T \end{cases}$$

where v_i is the pixel value before mapping and v_i' is the pixel value after mapping;
for a multispectral or hyperspectral image, first calculating the covariance matrix K of the image:

$$K = \begin{pmatrix} k_{11} & k_{12} & \cdots & k_{1L} \\ k_{21} & k_{22} & \cdots & k_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ k_{L1} & k_{L2} & \cdots & k_{LL} \end{pmatrix}$$

where k_ij is the inner product of the i-th band and the j-th band of the multispectral or hyperspectral image, and L is the number of bands of the image;
then performing an eigenvalue decomposition of K, taking the eigenvectors e_1, e_2, e_3 corresponding to the three largest eigenvalues, and calculating the transformed images I_1, I_2, I_3 according to:

$$I_i = \sum_{j=1}^{L} e_{ij} M_j, \qquad i = 1, 2, 3$$

where e_ij is the j-th element of eigenvector e_i and M_j is the j-th band of the multispectral or hyperspectral image; I_1, I_2, I_3 are mapped to the red, green and blue bands of the displayed image, respectively, for visualization.
Further, the specific manner of step 3 is as follows:
the eyeball is actively illuminated by the point light source, forming a Purkinje spot on the cornea; the image acquisition module captures an eye image, from which the Purkinje spot is extracted and the pupil center is located, and the sight direction is estimated from the positional relation between the Purkinje spot and the pupil.
Further, in step 4 the size of the blocks is 50 × 500 pixels, and the visual attention is calculated from the following quantities (the combining formula is given only as an image in the original):

[Equation: visual attention as a function of t, f, s, d and m]

where t is the fixation time, namely the visual dwell time when the sight line stays within a 10 × 10 pixel range longer than a threshold of 200 ms; f is the number of fixations, namely the number of fixation points within the block whose fixation time exceeds 200 ms; s is the number of eye jumps, namely how often the sight line jumps between different fixation points; d is the pupil size; and m is the sight line movement mode, calculated as (formula given only as an image in the original):

[Equation: m in terms of the relative fitting error ε and the farthest point pair]

where, during sight line movement, (x_i, y_i), 1 ≤ i ≤ n, is the fixation coordinate sequence, n is the number of fixation points, (x_r, y_r) and (x_l, y_l) are the two farthest-apart points of the sight line movement, and ε is the relative fitting error of a degree-2 polynomial fit to the fixation coordinate sequence.
Further, the specific manner of step 5 is as follows:
step 5a, performing Gaussian filtering on the visual attention area obtained in step 4, with the filtering kernel w chosen as a standard 2-D Gaussian (the original formula is given only as an image):

$$w(x, y) = \frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

step 5b, performing Canny edge detection on the filtered area image to obtain edge points;
step 5c, taking the edge points as starting points, performing 8-neighborhood image segmentation with the mean shift method, and using the 3σ criterion during segmentation to judge whether a pixel belongs to the current region;
step 5d, generating vector segmentation lines for the current segmentation result.
Further, after step 5 the method further comprises:
step 6, marking the image interpretation result on the image and prompting the user to confirm the result.
The invention has the following advantages:
(1) The method has extremely low computational complexity; by capturing and analyzing eye gaze information it can quickly produce an interpretation of the user's region of interest, and is therefore highly efficient.
(2) The method has a wide application range: it is suitable for panchromatic, multispectral, hyperspectral and SAR images, and because it uses object-oriented image segmentation it can interpret many types of ground objects, including vegetation, water bodies, buildings, roads, airplanes, ships, etc.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic layout of the sight tracking device.
Detailed Description
The technical solution of the present invention will be further described with reference to the accompanying drawings and specific embodiments.
In the proposed rapid remote sensing image interpretation method based on sight tracking, sight tracking equipment acquires the user's gaze information; a visual attention model is constructed to obtain the image area the user is attending to; different optimized display strategies are applied to different image types during manual interpretation; and on this basis image interpretation is realized with an object-oriented method.
As shown in fig. 1, the method comprises the steps of:
Step 1, connect and lay out the sight tracking equipment and calibrate the sight on this basis. As shown in fig. 2, the setup mainly consists of a workstation (with a display) and the sight tracking device; the sight tracking device comprises an active illumination module and an image acquisition module and is arranged near the display, flush with its bottom edge. While the human eyes observe the software interface on the display, the equipment collects eye image information and uploads the data to the computer software for analysis, obtaining the visual information of the eyes;
step 2, enhancing and visualizing the remote sensing image on a display, automatically optimizing and stretching the panchromatic (or SAR) image, and performing maximum signal-to-noise ratio enhancement display on the multispectral (or hyperspectral) image;
Step 3, complete sight tracking and analysis with the pupil-cornea reflection method. The sight tracking device actively illuminates the eyeball with a near-infrared point light source, forming a reflection point (Purkinje spot) on the cornea; a CCD camera captures eye images, the recorded image data are analyzed, and finally the positional relation between the Purkinje spot and the pupil yields the sight direction information, from which the primary area of visual attention is obtained. This mainly comprises iris area positioning, Purkinje spot extraction, pupil center positioning and sight direction estimation, as sketched below;
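The patent does not spell out how the Purkinje-spot/pupil relation is turned into on-screen coordinates, so the following is only a minimal sketch of one common realization of the pupil-cornea reflection method: the pupil-to-glint offset vector is mapped to screen coordinates through a quadratic polynomial fitted during the step-1 calibration. All function names here (poly_features, calibrate, gaze_point) are illustrative, not from the patent.

```python
import numpy as np

def poly_features(v):
    """Quadratic feature vector for a pupil-glint offset (vx, vy)."""
    vx, vy = v
    return np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])

def calibrate(offsets, screen_points):
    """Least-squares fit of two quadratic polynomials mapping
    pupil-glint offsets (from a calibration session where the user
    fixates known screen points) to screen x and y coordinates."""
    A = np.stack([poly_features(v) for v in offsets])          # shape (n, 6)
    screen_points = np.asarray(screen_points, dtype=np.float64)
    coeff_x, *_ = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)
    return coeff_x, coeff_y

def gaze_point(pupil_center, purkinje_center, coeff_x, coeff_y):
    """Estimate the on-screen gaze point from one eye image's
    pupil center and Purkinje (corneal glint) center."""
    v = np.asarray(pupil_center) - np.asarray(purkinje_center)
    f = poly_features(v)
    return float(f @ coeff_x), float(f @ coeff_y)
```

In practice the calibration pairs would come from the fixation grid collected while the user looks at known points in step 1.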
Step 4, construct a visual attention model from factors such as fixation time, number of fixations, number of eye jumps, pupil size and sight line movement mode, and on the basis of the primary area of visual attention obtained in step 3, acquire the precise range of the sight within the image, i.e. the visual attention area;
Step 5, perform image interpretation: based on the visual attention area obtained in step 4, carry out image analysis within the visual attention area with an object-oriented segmentation method, completing the image interpretation;
Step 6, mark the image interpretation result on the image and prompt the user to confirm it. In addition, the user can edit the results, including adding, deleting and moving annotation objects and the vertices of object regions.
The remote sensing image enhanced display in step 2 proceeds as follows:
Step 2a, if the image to be processed is a panchromatic (or SAR) image, count the gray values of the whole image, sort them from large to small, and find the pixel value v_T at position 0.02N from the top and the pixel value v_B at position 0.02N from the bottom, where N is the number of pixels in the whole image; each pixel value is then mapped according to the following relation (a standard 2% linear stretch to the display range [0, 255]; the original formula is given only as an image):

$$v_i' = \begin{cases} 0, & v_i \le v_B \\ 255\,\dfrac{v_i - v_B}{v_T - v_B}, & v_B < v_i < v_T \\ 255, & v_i \ge v_T \end{cases}$$

where v_i is the pixel value before mapping and v_i' is the pixel value after mapping;
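As a concrete illustration of step 2a, here is a short numpy sketch of the 2% linear stretch (assuming the stretched values are mapped to a 0-255 display range, which the patent does not state explicitly):

```python
import numpy as np

def percent_clip_stretch(img, clip=0.02, out_max=255.0):
    """2% linear stretch as in step 2a: find the pixel values at the
    top and bottom 0.02*N positions of the sorted gray values and map
    the range [v_B, v_T] linearly onto [0, out_max]."""
    flat = np.sort(img.ravel())             # ascending order
    n = flat.size
    v_b = flat[int(clip * n)]               # value 0.02*N from the bottom
    v_t = flat[int((1.0 - clip) * n) - 1]   # value 0.02*N from the top
    scale = out_max / max(float(v_t - v_b), 1e-12)   # guard flat images
    stretched = (img.astype(np.float64) - v_b) * scale
    return np.clip(stretched, 0.0, out_max)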
Step 2b, if the image to be processed is a multispectral (or hyperspectral) image, first calculate the covariance matrix K of the image:

$$K = \begin{pmatrix} k_{11} & k_{12} & \cdots & k_{1L} \\ k_{21} & k_{22} & \cdots & k_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ k_{L1} & k_{L2} & \cdots & k_{LL} \end{pmatrix}$$

where k_ij is the inner product of the i-th band and the j-th band of the multispectral (or hyperspectral) image and L is the number of bands; then perform an eigenvalue decomposition of K, take the eigenvectors e_1, e_2, e_3 corresponding to the three largest eigenvalues, and calculate the transformed images I_1, I_2, I_3 according to:

$$I_i = \sum_{j=1}^{L} e_{ij} M_j, \qquad i = 1, 2, 3$$

where e_ij is the j-th element of eigenvector e_i and M_j is the j-th band of the multispectral (or hyperspectral) image; I_1, I_2, I_3 are mapped to the red, green and blue bands of the displayed image, respectively, for visualization.
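The band transform of step 2b can be sketched directly from the formulas above; note that k_ij is taken literally as the band inner product (no mean removal), matching the text:

```python
import numpy as np

def top3_band_transform(cube):
    """Step-2b style enhancement for a multispectral/hyperspectral
    cube of shape (L, H, W): build K from band inner products, take
    the eigenvectors of the 3 largest eigenvalues, and project the
    bands onto them to get the R, G, B display channels."""
    L = cube.shape[0]
    M = cube.reshape(L, -1).astype(np.float64)   # each row is one band
    K = M @ M.T                                  # k_ij = <band_i, band_j>
    eigvals, eigvecs = np.linalg.eigh(K)         # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :3].T              # rows e_1, e_2, e_3
    I = top @ M                                  # I_i = sum_j e_ij * M_j
    return I.reshape(3, *cube.shape[1:])         # R, G, B planes
```

The three output channels would then be stretched for display, e.g. with the percent_clip_stretch sketch above.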
In step 4, the primary attention area is partitioned into blocks of 50 × 500 pixels, the visual attention of each block is calculated, and the block with the maximum visual attention is taken as the visual attention area.
The visual attention is calculated from the following factors (the combining formula is given only as an image in the original):

[Equation: visual attention as a function of t, f, s, d and m]
The symbols have the following meanings:
Fixation time t: the visual dwell time while the user's sight line stays within a 10 × 10 pixel range; according to repeated tests, the user is judged to have started attending when t ≥ 200 ms.
Number of fixations f: the number of fixation points within a given area whose fixation time exceeds 200 ms; generally f ≥ 3, and a smaller f indicates lower attention to the area.
Number of eye jumps s: rapid saccades between fixation points; through eye jumps, people bring visual information of interest onto the fovea of the retina for full processing, so a larger s indicates higher visual attention to the area.
Pupil size d: the size the pupil adapts to, which is also related to a person's cognitive load and emotion. When an individual performs cognitive activities (such as perception, recall, recognition or calculation), cognitive load increases and the pupil dilates, so pupil size d is positively correlated with visual attention.
Sight line movement mode m: the user's sight either searches without a goal or moves according to some rule, and the sight-track data of these two modes differ markedly from the continuous tracking that follows once a target of interest is found; regular movement raises the visual attention of the area. Let (x_i, y_i), 1 ≤ i ≤ n, be the fixation coordinate sequence of the observer's sight, fitted with a degree-2 polynomial with relative fitting error ε; then m is calculated as (formula given only as an image in the original):

[Equation: m in terms of ε and the farthest point pair]

where (x_r, y_r) and (x_l, y_l) are the two farthest-apart points on the sight path. Thus a larger m indicates a more irregular movement mode; its ingredients can be computed as sketched below.
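Since the exact formula for m survives only as an image, the sketch below computes just its stated ingredients: the relative fitting error ε of a degree-2 polynomial fit to the fixation sequence, and the two farthest-apart fixation points (x_r, y_r) and (x_l, y_l). How these combine into m is left open here, and treating y as a polynomial in x is one plausible reading of "fitting the focal coordinate sequence".

```python
import numpy as np
from itertools import combinations

def movement_pattern_terms(points):
    """Ingredients of the movement-pattern measure m: the relative
    error of a degree-2 polynomial fit to the fixation sequence
    (needs n >= 3 points) and the two farthest-apart fixations."""
    pts = np.asarray(points, dtype=np.float64)   # shape (n, 2)
    x, y = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(x, y, 2)                 # degree-2 fit y ~ p(x)
    resid = y - np.polyval(coeffs, x)
    eps = np.linalg.norm(resid) / max(np.linalg.norm(y), 1e-12)
    # brute-force farthest pair; fine for the small n of a gaze track
    p_r, p_l = max(combinations(pts, 2),
                   key=lambda pair: np.linalg.norm(pair[0] - pair[1]))
    return eps, p_r, p_l
```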
The specific mode of step 5 is as follows:
Step 5a, apply Gaussian filtering to the area image acquired in step 4, with the filtering kernel w chosen as a standard 2-D Gaussian (the original formula is given only as an image):

$$w(x, y) = \frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

Step 5b, perform Canny edge detection on the filtered image to obtain the edge detection result;
Step 5c, taking the edge points as starting points, perform 8-neighborhood image segmentation with the mean shift (MeanShift) method, using the 3σ criterion during segmentation to judge whether a pixel belongs to the current region;
Step 5d, generate vector segmentation lines for the current segmentation result.
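Steps 5a-5d can be sketched as follows. cv2.GaussianBlur and cv2.Canny cover steps 5a-5b directly; the region growing below is a deliberately simplified stand-in for the mean shift segmentation of step 5c (the 3σ membership test is kept, with a small floor on σ so a fresh one-pixel region can grow), and step 5d is left to cv2.findContours on the per-label masks.

```python
import cv2
import numpy as np
from collections import deque

def segment_attention_area(area, sigma=1.0, canny_lo=50, canny_hi=150):
    """Sketch of steps 5a-5d: Gaussian filtering, Canny edge
    detection, then 8-neighborhood region growing from the edge
    points with a 3-sigma membership test."""
    img = cv2.GaussianBlur(area.astype(np.float32), (5, 5), sigma)   # step 5a
    edges = cv2.Canny(cv2.convertScaleAbs(img), canny_lo, canny_hi)  # step 5b
    labels = np.zeros(img.shape, dtype=np.int32)
    label = 0
    for seed in zip(*np.nonzero(edges)):                             # step 5c
        if labels[seed]:
            continue
        label += 1
        labels[seed] = label
        region = [float(img[seed])]
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            mu = np.mean(region)
            sd = max(np.std(region), 2.0)  # floor so a new region can grow
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                            and labels[rr, cc] == 0
                            and abs(float(img[rr, cc]) - mu) <= 3 * sd):
                        labels[rr, cc] = label
                        region.append(float(img[rr, cc]))
                        queue.append((rr, cc))
    return labels  # step 5d: trace each label's boundary, e.g. cv2.findContours
```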
The method realizes rapid interpretation of wide-area remote sensing images. Specifically, sight tracking equipment acquires the visual information of a user observing a remote sensing image; a visual attention model is constructed from the analyzed parameters, and on this basis the user's region of interest is rapidly extracted. An object-oriented analysis method then completes the rapid interpretation of the region of interest, the interpretation result is marked on the image, and the user can confirm and edit the result.
The effect of the present method can be further illustrated by the following tests:
1. Test conditions.
The computer was configured with an Intel Core i7-3770 CPU at 3.4 GHz and 64 GB of memory; the operating system was Windows 7 64-bit Professional, and the software environment was MATLAB 2017.
2. Test method.
The proposed method was used to interpret remote sensing images, while a comparison group interpreted them by manual labeling; several remote sensing images were interpreted in succession, comparing mainly the time consumed and the interpretation precision.
3. Test contents and results.
In the test, 3 scenes of Gaofen-1 (GF-1) imagery and 5 scenes of Gaofen-2 (GF-2) imagery were selected; the 8 scenes total about 12.6 GB and cover about 5525 square kilometers. The method was first used for assisted interpretation, then unaided manual interpretation was performed, and the interpretation efficiency and precision of the two cases were compared. The interpretation objects mainly comprised vegetation, water bodies, buildings, roads, airplanes, ships, etc.
The test results are as follows: interpretation of the 8 scenes was completed in 12.4 minutes with the method, versus 53.2 minutes unaided; that is, the method improves image interpretation efficiency by about 4.3 times.
Regarding interpretation precision, since the final results are confirmed manually the overall difference is small, but the method achieves higher precision on small targets and ground-object edges, because it adopts an optimized display strategy. In manual interpretation the visualization strategy is particularly critical: the invention applies corresponding optimized display to panchromatic (SAR) and multispectral (hyperspectral) images, which highlights ground-object information and helps users interpret remote sensing images faster and more accurately.
Test results show that the method can greatly improve the image interpretation efficiency and has higher interpretation precision aiming at smaller targets and ground object edges.
In summary, the method uses sight tracking equipment to acquire the user's gaze information, obtains the image area the user is attending to by constructing a visual attention model, displays different types of images with an optimized visualization strategy during manual interpretation, and on this basis realizes rapid interpretation of the images with an object-oriented method. The method has extremely low computational complexity; by capturing and analyzing eye gaze information it quickly produces an interpretation of the user's region of interest, and is therefore highly efficient. It also has a wide application range: it is suitable for panchromatic, multispectral, hyperspectral and SAR images and, using object-oriented image segmentation, can interpret many types of ground objects including vegetation, water bodies, buildings, roads, airplanes, ships, etc.

Claims (7)

1. A remote sensing image fast interpretation method based on sight tracking is characterized by comprising the following steps:
step 1, arranging sight tracking equipment, wherein the sight tracking equipment is used for collecting visual information of a display observed by human eyes;
step 2, carrying out enhanced visualization on the remote sensing image on a display; specifically, the panchromatic or SAR image is automatically optimized and stretched, and the multispectral or hyperspectral image is subjected to maximum signal-to-noise ratio enhancement display;
step 3, completing sight tracking and analysis by adopting a pupil-cornea reflection method to obtain sight direction information and determining a primary attention area of the sight;
step 4, partitioning the primary attention area, calculating the visual attention of each partition, and taking the partition with the maximum visual attention as the visual attention area;
step 5, based on the visual attention area obtained in step 4, carrying out image analysis within the visual attention area with an object-oriented segmentation method and completing image interpretation.
2. The method for rapidly interpreting remote sensing images based on sight tracking as claimed in claim 1, wherein the sight tracking device is arranged at the bottom of the display and comprises an active illumination point light source and an image acquisition module.
3. The method for quickly interpreting remote sensing images based on sight line tracking according to claim 1, wherein the specific mode of the step 2 is as follows:
for a panchromatic or SAR image, counting the gray values of the whole image, sorting them from large to small, and finding the pixel value v_T at position 0.02N from the top and the pixel value v_B at position 0.02N from the bottom, where N is the number of pixels in the whole image; each pixel value is then mapped according to the following relation (a standard 2% linear stretch to the display range [0, 255]; the original formula is given only as an image):

$$v_i' = \begin{cases} 0, & v_i \le v_B \\ 255\,\dfrac{v_i - v_B}{v_T - v_B}, & v_B < v_i < v_T \\ 255, & v_i \ge v_T \end{cases}$$

where v_i is the pixel value before mapping and v_i' is the pixel value after mapping;
for a multispectral or hyperspectral image, first calculating the covariance matrix K of the image:

$$K = \begin{pmatrix} k_{11} & k_{12} & \cdots & k_{1L} \\ k_{21} & k_{22} & \cdots & k_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ k_{L1} & k_{L2} & \cdots & k_{LL} \end{pmatrix}$$

where k_ij is the inner product of the i-th band and the j-th band of the multispectral or hyperspectral image, and L is the number of bands of the image;
then performing an eigenvalue decomposition of K, taking the eigenvectors e_1, e_2, e_3 corresponding to the three largest eigenvalues, and calculating the transformed images I_1, I_2, I_3 according to:

$$I_i = \sum_{j=1}^{L} e_{ij} M_j, \qquad i = 1, 2, 3$$

where e_ij is the j-th element of eigenvector e_i and M_j is the j-th band of the multispectral or hyperspectral image; I_1, I_2, I_3 are mapped to the red, green and blue bands of the displayed image, respectively, for visualization.
4. The method for quickly interpreting remote sensing images based on sight line tracking according to claim 2, wherein the specific mode of the step 3 is as follows:
the eyeball is actively illuminated by the point light source, forming a Purkinje spot on the cornea; the image acquisition module captures an eye image, from which the Purkinje spot is extracted and the pupil center is located, and the sight direction is estimated from the positional relation between the Purkinje spot and the pupil.
5. The method for rapidly interpreting remote sensing images based on sight tracking according to claim 1, wherein in step 4 the size of the blocks is 50 × 500 pixels, and the visual attention is calculated from the following quantities (the combining formula is given only as an image in the original):

[Equation: visual attention as a function of t, f, s, d and m]

wherein t is the fixation time, namely the visual dwell time when the sight line stays within a 10 × 10 pixel range longer than a threshold of 200 ms; f is the number of fixations, namely the number of fixation points within the block whose fixation time exceeds 200 ms; s is the number of eye jumps, namely how often the sight line jumps between different fixation points; d is the pupil size; and m is the sight line movement mode, calculated as (formula given only as an image in the original):

[Equation: m in terms of the relative fitting error ε and the farthest point pair]

wherein, during sight line movement, (x_i, y_i), 1 ≤ i ≤ n, is the fixation coordinate sequence, n is the number of fixation points, (x_r, y_r) and (x_l, y_l) are the two farthest-apart points of the sight line movement, and ε is the relative fitting error of a degree-2 polynomial fit to the fixation coordinate sequence.
6. The method for quickly interpreting remote sensing images based on sight line tracking according to claim 5, wherein the specific mode of the step 5 is as follows:
step 5a, performing Gaussian filtering on the visual attention area obtained in step 4, with the filtering kernel w chosen as a standard 2-D Gaussian (the original formula is given only as an image):

$$w(x, y) = \frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

step 5b, performing Canny edge detection on the filtered area image to obtain edge points;
step 5c, taking the edge points as starting points, performing 8-neighborhood image segmentation with the mean shift method, and using the 3σ criterion during segmentation to judge whether a pixel belongs to the current region;
step 5d, generating vector segmentation lines for the current segmentation result.
7. The method for quickly interpreting remote sensing images based on sight line tracking according to claim 1, characterized by further comprising the following steps after the step 5:
step 6, marking the image interpretation result on the image and prompting the user to confirm the result.
CN202110667778.8A 2021-06-16 2021-06-16 Remote sensing image rapid interpretation method based on sight tracking Pending CN113436205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110667778.8A CN113436205A (en) 2021-06-16 2021-06-16 Remote sensing image rapid interpretation method based on sight tracking


Publications (1)

Publication Number Publication Date
CN113436205A (en) 2021-09-24

Family

ID=77756306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110667778.8A Pending CN113436205A (en) 2021-06-16 2021-06-16 Remote sensing image rapid interpretation method based on sight tracking

Country Status (1)

Country Link
CN (1) CN113436205A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036295A (en) * 2014-06-18 2014-09-10 西安电子科技大学 Road center line auto-detection method employing multispectral remote sensing images
CN107506705A (en) * 2017-08-11 2017-12-22 西安工业大学 A kind of pupil Purkinje image eye tracking is with watching extracting method attentively
CN107730447A (en) * 2017-10-31 2018-02-23 北京信息科技大学 A kind of Hyperspectral imagery processing method, apparatus and system
CN110310249A (en) * 2019-05-20 2019-10-08 西北工业大学 Visual enhancement method for remote sensing images
CN110345815A (en) * 2019-07-16 2019-10-18 吉林大学 A kind of creeper truck firearms method of sight based on Eye-controlling focus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
冯少康 et al., "Satellite observation image segmentation technology based on gaze tracking" (基于视线追踪的卫星观测图像分割技术), Digital Technology and Application (《数字技术与应用》) *
张婷婷, Introduction to Remote Sensing Technology (《遥感技术概论》), Yellow River Water Conservancy Press (黄河水利出版社), 31 July 2011 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210924