CN114519799A - Real-time detection method and system for multi-feature seat state


Info

Publication number: CN114519799A
Authority: CN (China)
Prior art keywords: seat, image, calculating, pixel, real
Legal status: Pending (an assumption, not a legal conclusion)
Application number: CN202210142265.XA
Other languages: Chinese (zh)
Inventors: 王崧玉, 冯瑞
Current assignee: Fudan University
Original assignee: Fudan University
Priority date: 2022-02-16
Filing date: 2022-02-16
Publication date: 2022-05-20
Application filed by Fudan University

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/25: Fusion techniques
    • G06F 18/254: Fusion techniques of classification results, e.g. of results related to same input data
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time detection method for multi-feature seat states, characterized by the following steps: step 1, preprocessing the seat image to be detected, establishing a pixel background model of the scene, and screening out and extracting the seat images whose state has changed; step 2, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images; step 3, classifying the three visual features with support vector machine classifiers to obtain classification results; and step 4, fusing the classification results to finally obtain the seat states in the current scene. The invention also discloses a real-time multi-feature seat state detection system, comprising a preprocessing part and a target detection processing part.

Description

Real-time detection method and system for multi-feature seat state
Technical Field
The invention relates to the field of computer vision, and in particular to a real-time detection method and system for multi-feature seat states.
Background
Seat state monitoring is an important means of recording meeting information in large venues. As meetings, reports and conferences become more frequent, the utilization of large venues keeps rising, and recording venue meeting information at high frequency and large scale becomes ever harder. In a venue seating thousands or even tens of thousands of people, the venue manager needs to know the state of every seat in real time, whether to record the seating of participants or to spot emergencies promptly. Current venue management systems offer only basic functions such as recording meeting video and logging people entering and leaving; monitoring seat states still costs a great deal of manpower, and even then the seat states obtained are neither timely nor accurate.
In recent years, with the continuous development of machine learning, and especially the excellent performance of neural networks in pattern recognition, more and more target detection tasks can be automated efficiently. Some studies have applied neural networks to the task of monitoring seat states in real time.
However, current machine-learning-based target detection methods are computationally expensive. Although combining HOG features with an SVM classifier achieves good person detection results, it cannot run in real time on real venue video data. Moreover, in a practical venue management system a single camera typically covers 200 to 400 seats, so a single seat occupies only about 80 × 60 pixels; on such small, difficult samples it is hard for a generic seat state monitoring model to achieve strong generalization and high-precision recognition.
Disclosure of Invention
The present invention has been made to solve the above problems, and its object is to provide a method and a system for detecting multi-feature seat states in real time.
The invention provides a real-time detection method for multi-feature seat states, characterized by the following steps: step 1, preprocessing the seat image to be detected, establishing a pixel background model of the scene, and screening out and extracting the seat images whose state has changed; step 2, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images; step 3, classifying the three visual features with support vector machine classifiers to obtain classification results; and step 4, fusing the classification results to finally obtain the seat states in the current scene.
The real-time detection method for multi-feature seat states provided by the invention may also have the following feature, wherein in step 1 the preprocessing comprises the following steps: step 1-1, initializing a ViBe background difference model on the first frame of the seat image to obtain a background sampling model, then going to step 1-2; step 1-2, computing the distances between the current pixel value and the n sampling points in the background sampling model, then going to step 1-3; step 1-3, defining a radius R as the distance threshold to the current pixel and determining whether the current pixel belongs to the background: if the current pixel belongs to the moving foreground, going to step 1-5, otherwise going to step 1-4; step 1-4, first replacing a randomly chosen sample of the current background sampling model with the current pixel value, then randomly choosing one of the 8 pixels adjacent to the current pixel and replacing a random sample of its background sampling model, thereby updating the background sampling model, then going to step 1-5; and step 1-5, repeating steps 1-2 to 1-4 on the next pixel until all pixels have been processed.
The real-time detection method for multi-feature seat states provided by the invention may also have the following feature, wherein step 2 comprises the following steps: step 2-1, computing the HOG visual features; step 2-2, computing the GIST visual features; and step 2-3, computing the SIFT visual features.
The real-time detection method for multi-feature seat states provided by the invention may also have the following feature, wherein in step 2-1 the HOG visual features are computed as follows: step 2-1-1, applying Gamma color correction to adjust the contrast of the seat image and thereby normalize it; step 2-1-2, computing the gradient in each color channel of the captured RGB image and taking the maximum over the three channel gradients as the gradient of the pixel; step 2-1-3, weighting each block in the image with a Gaussian weight window, and accumulating a gradient histogram for each cell according to the pixel gradient directions; step 2-1-4, for each block, collecting the gradient histograms over all directions within the block and normalizing them; and step 2-1-5, gathering the normalized gradient-direction histograms of all blocks and concatenating them into the HOG feature.
The real-time detection method for multi-feature seat states provided by the invention may also have the following feature, wherein in step 2-2 the GIST visual features are computed as follows:
step 2-2-1, convolving the seat region image with a Gabor filter to obtain the filtered image; and step 2-2-2, dividing the convolved image into a 4 × 4 grid, taking the average within each region, and concatenating the regional averages to obtain the GIST feature.
The real-time detection method for multi-feature seat states provided by the invention may also have the following feature, wherein in step 2-3 the SIFT visual features are computed as follows:
step 2-3-1, in the scale space built with the DoG operator, comparing each point with its 8 neighbours in the same scale and the 9 neighbours at the corresponding positions in each of the adjacent scales above and below, 26 points in total, and selecting the point as an extreme point if it is larger than all 26 or smaller than all 26; step 2-3-2, filtering out extreme points with low contrast or close to edges; step 2-3-3, computing the principal direction of each extreme point from a histogram of oriented gradients; and step 2-3-4, taking the extreme point as the center, rotating the coordinate axes to its principal direction, taking the 8 × 8 region around the keypoint, dividing it into 2 × 2 sub-regions, accumulating a direction histogram in each sub-region, and normalizing to generate the SIFT feature descriptor.
The invention also provides a real-time multi-feature seat state detection system, characterized by comprising: a preprocessing part, which preprocesses the seat image to be detected, establishes a pixel background model of the scene, and screens out and extracts the seat images whose state has changed; and a target detection processing part, which computes the HOG, GIST and SIFT visual features in parallel on the changed seat images, classifies the three visual features with support vector machine classifiers to obtain classification results, and fuses the classification results to finally obtain the seat states in the current scene.
Action and effects of the invention
According to the real-time detection method for multi-feature seat states, the detection steps are as follows: step 1, preprocessing the seat image to be detected, establishing a pixel background model of the scene, and screening out and extracting the seat images whose state has changed; step 2, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images; step 3, classifying the three visual features with support vector machine classifiers to obtain classification results; and step 4, fusing the classification results to finally obtain the seat states in the current scene.
Therefore, the real-time detection method for multi-feature seat states converts the seat state monitoring problem into a human target detection problem, selects the HOG, GIST and SIFT features according to the actual conditions of the scene, and fuses the three feature classification results, thereby ensuring the accuracy and robustness of the seat state monitoring result. Image preprocessing based on ViBe background differencing screens out the seats whose state has changed, which reduces the amount of computation and meets the requirement of real-time monitoring. In addition, the multi-feature real-time seat state monitoring strategy is simple to implement, can be applied to various conventional convolutional neural network models, and is simple, convenient and fast.
Drawings
FIG. 1 is a schematic illustration of a meeting room seating distribution in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for real-time multi-feature seat status detection in an embodiment of the present invention;
FIG. 3 is a flow chart of HOG feature extraction in an embodiment of the present invention; and
FIG. 4 is a flow diagram of multi-feature fusion in an embodiment of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the present invention easy to understand, the following embodiment describes the multi-feature seat state real-time detection method and system of the present invention in detail with reference to the accompanying drawings.
This embodiment provides a real-time detection method for multi-feature seat states that achieves accurate real-time monitoring of seat states even for seat distributions that are difficult to recognize.
In this embodiment, the real-time multi-feature seat state detection method is implemented on a computer; the computer requires a graphics card for GPU-accelerated model training, and the trained seat state monitoring model and the image recognition pipeline are stored on the computer as executable code.
Fig. 1 is a schematic illustration of a meeting place seating distribution in an embodiment of the present invention.
As shown in fig. 1, the data set in this embodiment consists of real-time video of a large meeting place recorded during multiple conferences. The meeting place contains 3000 seats covered by 16 high-definition cameras, each responsible for 100 to 300 seats; the captured images have a resolution of 1920 × 1080, the frame rate exceeds 25 frames per second, and the captured video totals 600 minutes. Seat region coordinates were obtained on the captured video data set by manual annotation. Sample frames were collected uniformly along the time axis and seats were sampled randomly within the sample frames, ensuring the randomness of the sampled seats as far as possible, to generate the seat sample set. After the seat sample set was obtained, each seat image was manually labeled, and blurred or occluded samples were removed. In the final seat sample set, 1230 occupied seat samples form the positive set, 820 unoccupied seat samples form the negative set, and the remaining seat samples form the test set.
Fig. 2 is a flow chart of a method for real-time detection of multi-feature seat status in an embodiment of the invention.
As shown in fig. 2, the real-time multi-feature seat state detection method according to the present embodiment comprises the following steps:
Step S1, preprocess the seat image to be detected, establish a pixel background model of the scene, and screen out and extract the seat images whose state has changed. Moving object detection regions are extracted from the video images; the detection regions within each camera's field of view are manually annotated in advance.
In step S1, the preprocessing comprises the following steps:
Step S1-1, initialize a ViBe background difference model on the first frame of the seat image to obtain a background sampling model, then go to step S1-2.
Step S1-2, compute the distances between the current pixel value and the n sampling points in the background sampling model, then go to step S1-3.
Step S1-3, define a radius R as the distance threshold to the current pixel and determine whether the current pixel belongs to the background; if the current pixel belongs to the moving foreground, go to step S1-5, otherwise go to step S1-4.
Step S1-4, first replace a randomly chosen sample of the current background sampling model with the current pixel value, then randomly choose one of the 8 pixels adjacent to the current pixel and replace a random sample of its background sampling model, thereby updating the background sampling model; then go to step S1-5.
Step S1-5, repeat steps S1-2 to S1-4 on the next pixel until all pixels have been processed.
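For concreteness, the per-pixel loop of steps S1-1 to S1-5 can be sketched as follows. This is a minimal NumPy sketch and not the patent's implementation: the parameter values (n = 20 samples, radius R = 20, 2 matches to count as background, 1-in-16 update subsampling) are the usual ViBe defaults assumed here, and all function and variable names are illustrative.

```python
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16  # assumed ViBe defaults

def vibe_init(first_frame):
    """Step S1-1: build the background sampling model from the first grayscale frame."""
    model = np.repeat(first_frame[np.newaxis].astype(np.int16), N_SAMPLES, axis=0)
    # Jitter the copies slightly so the n samples per pixel are not identical.
    model += np.random.randint(-10, 11, size=model.shape, dtype=np.int16)
    return np.clip(model, 0, 255)

def vibe_step(model, frame, rng=None):
    """Steps S1-2 to S1-5 for one grayscale frame; returns the foreground mask."""
    if rng is None:
        rng = np.random.default_rng()
    dist = np.abs(model - frame.astype(np.int16))        # S1-2: distance to each sample
    matches = (dist < RADIUS).sum(axis=0)                # S1-3: samples within radius R
    foreground = matches < MIN_MATCHES
    # S1-4: update the model only at background pixels, with 1-in-16 subsampling.
    ys, xs = np.where(~foreground)
    keep = rng.integers(0, SUBSAMPLE, size=ys.size) == 0
    ys, xs = ys[keep], xs[keep]
    model[rng.integers(0, N_SAMPLES, size=ys.size), ys, xs] = frame[ys, xs]
    # ...and propagate the pixel value into a random sample of a random 3x3 neighbour.
    ny = np.clip(ys + rng.integers(-1, 2, size=ys.size), 0, frame.shape[0] - 1)
    nx = np.clip(xs + rng.integers(-1, 2, size=xs.size), 0, frame.shape[1] - 1)
    model[rng.integers(0, N_SAMPLES, size=ys.size), ny, nx] = frame[ys, xs]
    return foreground
```

A seat whose annotated region contains a sufficient fraction of foreground pixels would then be flagged as "state changed" and passed on to the feature extraction stage.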
Step S2, perform per-seat feature extraction on the preprocessed image obtained in step S1, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images.
Step S2 comprises the following steps:
Step S2-1, compute the HOG visual features.
Fig. 3 is a flowchart of HOG feature extraction in an embodiment of the present invention.
As shown in fig. 3, wherein, in step S2-1, calculating the HOG visual characteristics includes the following steps:
and step S2-1-1, inputting an image and performing sliding window processing through the detection window. And performing Gamma color correction on the processed image, and adjusting the contrast of the seat image to normalize the seat image.
And S2-1-2, calculating image gradient of the color correction image obtained by the processing of the step S2-1-1, calculating gradient in each color channel of the acquired RGB image, and finally taking the maximum value in 3 channel gradients of the pixel as the gradient of the pixel.
And step S2-1-3, weighting each block in the image by using a Gaussian weight window, and counting a gradient histogram for each cell according to the gradient direction of the pixel.
And step S2-1-4, for each block, calculating a gradient histogram in the block in each direction, and then normalizing the gradient histogram.
And step S2-1-5, collecting histograms of gradient directions after each block is normalized, and connecting and combining the histograms into HOG characteristics.
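Steps S2-1-1 to S2-1-5 correspond closely to the standard HOG pipeline, so a hedged sketch can delegate to scikit-image. The cell and block sizes below are illustrative choices for roughly 80 × 60 seat crops and are not fixed by the patent; `transform_sqrt` stands in for the Gamma correction of step S2-1-1, and note that skimage omits the Gaussian block weighting of step S2-1-3.

```python
from skimage.feature import hog

def hog_feature(seat_rgb):
    """Steps S2-1-1..S2-1-5: normalise, take per-channel gradients (skimage keeps
    the channel with the largest gradient magnitude, as in S2-1-2), build per-cell
    orientation histograms, block-normalise, and concatenate."""
    return hog(
        seat_rgb,
        orientations=9,            # 9 gradient-direction bins per cell
        pixels_per_cell=(8, 8),    # assumed cell size for ~80x60 crops
        cells_per_block=(2, 2),    # 2x2 cells per block
        block_norm='L2-Hys',       # S2-1-4: per-block normalisation
        transform_sqrt=True,       # S2-1-1: gamma-style compression
        channel_axis=-1,           # RGB input
        feature_vector=True,       # S2-1-5: one concatenated HOG vector
    )
```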
Step S2-2, compute the GIST visual features, as follows:
Step S2-2-1, convolve the seat region image with a Gabor filter to obtain the filtered image.
Step S2-2-2, divide the convolved image into a 4 × 4 grid, take the average within each region, and concatenate the regional averages to obtain the GIST feature.
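The Gabor-bank GIST of steps S2-2-1 and S2-2-2 could be sketched with OpenCV as below. The bank size (4 scales × 8 orientations) and the Gabor parameters are common GIST choices assumed here, since the patent only specifies a Gabor filter and the 4 × 4 grid.

```python
import cv2
import numpy as np

def gist_feature(seat_gray, scales=(5, 9, 13, 17), n_orient=8, grid=4):
    """S2-2-1: convolve with a bank of Gabor filters; S2-2-2: average each
    response over a grid x grid mosaic and concatenate the block means."""
    feats = []
    for ksize in scales:
        for i in range(n_orient):
            theta = np.pi * i / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = np.abs(cv2.filter2D(seat_gray.astype(np.float32), cv2.CV_32F, kern))
            h, w = resp.shape
            for by in range(grid):
                for bx in range(grid):
                    block = resp[by * h // grid:(by + 1) * h // grid,
                                 bx * w // grid:(bx + 1) * w // grid]
                    feats.append(block.mean())
    # 4 scales x 8 orientations x 16 blocks = 512-D descriptor
    return np.asarray(feats, dtype=np.float32)
```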
Step S2-3, compute the SIFT visual features, as follows:
Step S2-3-1, in the scale space built with the DoG operator, compare each point with its 8 neighbours in the same scale and the 9 neighbours at the corresponding positions in each of the adjacent scales above and below, 26 points in total; if the point is larger than all 26 or smaller than all 26, select it as an extreme point.
Step S2-3-2, filter out extreme points with low contrast or close to edges.
Step S2-3-3, compute the principal direction of each extreme point from a histogram of oriented gradients.
Step S2-3-4, taking the extreme point as the center, rotate the coordinate axes to its principal direction, take the 8 × 8 region around the keypoint, divide it into 2 × 2 sub-regions, accumulate a direction histogram in each sub-region, and normalize to generate the SIFT feature descriptor.
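OpenCV's SIFT implements the whole DoG pipeline of steps S2-3-1 to S2-3-4, with its contrast and edge thresholds performing the filtering of step S2-3-2, so a sketch can simply delegate to it. Mean-pooling the per-keypoint descriptors into one fixed-length vector is our own assumption: the SVM stage needs a fixed-size input, and the patent does not spell out the aggregation.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10)  # S2-3-2 thresholds

def sift_feature(seat_gray):
    """Steps S2-3-1..S2-3-4 via OpenCV: DoG extrema, contrast/edge filtering,
    orientation assignment and descriptor generation (seat_gray: uint8 image)."""
    _, desc = sift.detectAndCompute(seat_gray, None)
    if desc is None:                 # small crops may yield no keypoints at all
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)         # assumed mean-pooling to one fixed 128-D vector
```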
Step S3, classify the three visual features with support vector machine (SVM) classifiers: the per-seat features obtained in step S2 are fed into the pre-trained seat recognition feature models to obtain the feature classification results.
In this embodiment, the feature models are obtained by pre-training and stored on the computer; the computer calls them through executable code and processes multiple seat images in batch simultaneously, obtaining and outputting a seat classification result for each image.
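As a sketch of this classification stage, one SVM per feature channel can be trained with scikit-learn on the positive/negative seat crops described above; the RBF kernel, feature scaling and probability outputs are illustrative choices, not mandated by the patent.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_feature_svm(X, y):
    """Fit one SVM on one feature channel (HOG, GIST or SIFT vectors in X,
    seated=1 / empty=0 labels in y). probability=True lets the fusion stage
    combine the three classifiers by confidence rather than by hard votes."""
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', probability=True))
    clf.fit(X, y)
    return clf

# One classifier per visual feature, trained on the labelled seat sample set:
# svm_hog, svm_gist, svm_sift = (train_feature_svm(X, y) for X in (X_hog, X_gist, X_sift))
```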
FIG. 4 is a flow chart of multi-feature fusion in an embodiment of the present invention.
As shown in fig. 4, in step S4 the classification results are fused to obtain the seat states in the current scene. Regional seat state fusion is applied to the fused feature results: for seats where motion was detected, the newly computed state is used, and it is combined with the unchanged state of motionless seats to form the seat state of the whole region. The processed regional seat states are output to produce an overall seat state map of the meeting place.
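The patent leaves the fusion operator open; a soft majority vote over the three per-feature classifiers, combined with the cached state of motionless seats, is one straightforward reading of step S4 and is sketched below under that assumption.

```python
import numpy as np

def fuse_classifications(p_hog, p_gist, p_sift, threshold=0.5):
    """Average the three seated-probabilities per seat and threshold;
    equivalent to a soft majority vote over the HOG/GIST/SIFT results."""
    return (p_hog + p_gist + p_sift) / 3.0 >= threshold

def region_seat_state(changed, fused, previous):
    """Step S4 region fusion: seats flagged by the ViBe stage take the newly
    fused state, while motionless seats keep their previous state."""
    state = previous.copy()
    state[changed] = fused[changed]
    return state
```

With 16 cameras, each region's state vector can be updated independently in this way and concatenated into the overall meeting place seat map.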
In this embodiment, the trained model achieves a classification accuracy of 99.5% on the seat states of the test set.
This embodiment also provides a real-time multi-feature seat state detection system, comprising:
a preprocessing part, which performs preprocessing by the method of step S1 of this embodiment; and
a target detection processing part, which performs real-time detection by the methods of steps S2 to S4 of this embodiment and obtains the classification results.
Action and effects of the embodiment
According to the real-time detection method for multi-feature seat states in this embodiment, the detection steps are as follows: step 1, preprocessing the seat image to be detected, establishing a pixel background model of the scene, and screening out and extracting the seat images whose state has changed; step 2, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images; step 3, classifying the three visual features with support vector machine classifiers to obtain classification results; and step 4, fusing the classification results to finally obtain the seat states in the current scene.
Therefore, the multi-feature seat state real-time detection method and system convert the seat state monitoring problem into a human target detection problem, select the HOG, GIST and SIFT features according to the actual conditions of the scene, and fuse the three feature classification results, thereby ensuring the accuracy and robustness of the seat state monitoring result. Image preprocessing based on ViBe background differencing screens out the seats whose state has changed, which reduces the amount of computation and meets the requirement of real-time monitoring. In addition, the multi-feature real-time seat state monitoring strategy is simple to implement, can be applied to various conventional convolutional neural network models, and is simple, convenient and fast.
The above embodiment is a preferred example of the present invention and is not intended to limit the scope of protection of the present invention.

Claims (7)

1. A real-time detection method for multi-feature seat states, characterized by comprising the following steps:
step 1, preprocessing the seat image to be detected, establishing a pixel background model of the scene, and screening out and extracting the seat images whose state has changed;
step 2, computing the HOG, GIST and SIFT visual features in parallel on the changed seat images;
step 3, classifying the three visual features with support vector machine classifiers to obtain classification results;
and step 4, fusing the classification results to finally obtain the seat states in the current scene.
2. The real-time detection method for multi-feature seat states according to claim 1, wherein in step 1 the preprocessing comprises the following steps:
step 1-1, initializing a ViBe background difference model on the first frame of the seat image to obtain a background sampling model, then going to step 1-2;
step 1-2, computing the distances between the current pixel value and the n sampling points in the background sampling model, then going to step 1-3;
step 1-3, defining a radius R as the distance threshold to the current pixel and determining whether the current pixel belongs to the background: if the current pixel belongs to the moving foreground, going to step 1-5, otherwise going to step 1-4;
step 1-4, first replacing a randomly chosen sample of the current background sampling model with the current pixel value, then randomly choosing one of the 8 pixels adjacent to the current pixel and replacing a random sample of its background sampling model, thereby updating the background sampling model, then going to step 1-5;
and step 1-5, repeating steps 1-2 to 1-4 on the next pixel until all pixels have been processed.
3. The real-time detection method for multi-feature seat states according to claim 1, wherein step 2 comprises the following steps:
step 2-1, computing the HOG visual features;
step 2-2, computing the GIST visual features;
and step 2-3, computing the SIFT visual features.
4. The real-time detection method for multi-feature seat states according to claim 3, wherein in step 2-1 the HOG visual features are computed as follows:
step 2-1-1, applying Gamma color correction to adjust the contrast of the seat image and thereby normalize it;
step 2-1-2, computing the gradient in each color channel of the captured RGB image and taking the maximum over the three channel gradients as the gradient of the pixel;
step 2-1-3, weighting each block in the image with a Gaussian weight window, and accumulating a gradient histogram for each cell according to the pixel gradient directions;
step 2-1-4, for each block, collecting the gradient histograms over all directions within the block and normalizing them;
and step 2-1-5, gathering the normalized gradient-direction histograms of all blocks and concatenating them into the HOG feature.
5. The real-time detection method for multi-feature seat states according to claim 3, wherein in step 2-2 the GIST visual features are computed as follows:
step 2-2-1, convolving the seat region image with a Gabor filter to obtain the filtered image;
and step 2-2-2, dividing the convolved image into a 4 × 4 grid, taking the average within each region, and concatenating the regional averages to obtain the GIST feature.
6. The real-time detection method for multi-feature seat states according to claim 1, wherein in step 2-3 the SIFT visual features are computed as follows:
step 2-3-1, in the scale space built with the DoG operator, comparing each point with its 8 neighbours in the same scale and the 9 neighbours at the corresponding positions in each of the adjacent scales above and below, 26 points in total, and selecting the point as an extreme point if it is larger than all 26 or smaller than all 26;
step 2-3-2, filtering out extreme points with low contrast or close to edges;
step 2-3-3, computing the principal direction of each extreme point from a histogram of oriented gradients;
and step 2-3-4, taking the extreme point as the center, rotating the coordinate axes to its principal direction, taking the 8 × 8 region around the keypoint, dividing it into 2 × 2 sub-regions, accumulating a direction histogram in each sub-region, and normalizing to generate the SIFT feature descriptor.
7. A real-time multi-feature seat state detection system, characterized by comprising:
a preprocessing part, which preprocesses the seat image to be detected, establishes a pixel background model of the scene, and screens out and extracts the seat images whose state has changed; and
a target detection processing part, which computes the HOG, GIST and SIFT visual features in parallel on the changed seat images, classifies the three visual features with support vector machine classifiers to obtain classification results, and fuses the classification results to finally obtain the seat states in the current scene.
CN202210142265.XA (priority date 2022-02-16, filing date 2022-02-16): Real-time detection method and system for multi-feature seat state. Status: Pending. Publication: CN114519799A (en).

Priority Applications (1)

CN202210142265.XA (priority date 2022-02-16, filing date 2022-02-16): Real-time detection method and system for multi-feature seat state

Applications Claiming Priority (1)

CN202210142265.XA (priority date 2022-02-16, filing date 2022-02-16): Real-time detection method and system for multi-feature seat state

Publications (1)

CN114519799A (en), published 2022-05-20

Family

Family ID: 81599895

Family Applications (1)

CN202210142265.XA (priority date 2022-02-16, filing date 2022-02-16): Real-time detection method and system for multi-feature seat state, pending

Country Status (1)

CN: CN114519799A (en)


Cited By (2)

CN117173104A (山东大学/Shandong University, priority 2023-08-04, published 2023-12-05): Low-altitude unmanned aerial vehicle image change detection method and system
CN117173104B (山东大学/Shandong University, priority 2023-08-04, granted 2024-04-16): Low-altitude unmanned aerial vehicle image change detection method and system


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination