CN105678278A - Scene recognition method based on single-hidden-layer neural network - Google Patents
Scene recognition method based on single-hidden-layer neural network
- Publication number
- CN105678278A CN105678278A CN201610069804.6A CN201610069804A CN105678278A CN 105678278 A CN105678278 A CN 105678278A CN 201610069804 A CN201610069804 A CN 201610069804A CN 105678278 A CN105678278 A CN 105678278A
- Authority
- CN
- China
- Prior art keywords
- scene
- neural networks
- image
- hidden layer
- recognition method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
Abstract
The invention provides a scene recognition method based on a single-hidden-layer neural network, comprising a training stage and a recognition stage. The training stage comprises: preprocessing a pre-collected sample image set for training; extracting the local gradient statistical features of the preprocessed sample image set; feeding the local gradient statistical features and the corresponding scene class labels to single-hidden-layer neural network classifiers for hierarchical supervised learning; obtaining the optimal parameters of a plurality of different multi-class single-hidden-layer neural networks; and building a multi-level scene classifier from these optimal parameters. The recognition stage comprises: preprocessing the image set to be recognized; extracting the local gradient statistical features of the preprocessed image set; and inputting the local gradient statistical feature vector into the multi-level scene classifier for recognition to obtain the class label of the scene. The method achieves high-precision scene recognition.
Description
Technical field
The present invention relates to a scene recognition method based on a single-hidden-layer neural network.
Background technology
Scene recognition refers to identifying the scene in a picture from the content of the scene image, such as recognizing scenes with similar color features; the goal is to mine the scene characteristics of an image by imitating human perception, and thereby automatically identify the scene to which the image belongs. In the scene recognition process, the whole image is judged as a whole, without reference to specific targets, because specific targets serve only as evidence for the category in scene classification and are not necessarily perfectly correlated with the scene category. Scene recognition is a basic preprocessing step in computer vision and robotics, and plays an important role in computer intelligence fields such as image retrieval, pattern recognition and machine learning.
In recent years, scene recognition research has made considerable progress, and many scene-category modeling methods have emerged. According to the modeling approach, existing scene recognition methods fall into four classes:
(1) Scene recognition methods based on global features
Scene recognition methods based on global features mostly describe the scene through global visual features of the image such as color, texture and shape, and have been successfully applied to outdoor scene recognition. Comparatively, color features yield good recognition results under changes of scene scale, viewing angle and image rotation, while texture and shape features correspond to the structural and directional information of the image, to which the human visual system is sensitive, so texture and shape features agree well with human visual perception. However, methods based on global features typically need to scan all pixels of the image without considering the spatial relations among pixels, and therefore have poor real-time performance and generality.
(2) Target-based scene recognition methods
A locality can be located accurately by a set of highly representative targets, and on this principle most target-based scene recognition methods identify the scene of an image from the results of target recognition within it. Such methods therefore need to pass through stages such as image segmentation, feature combination and target recognition. When a target is far from the viewpoint, it may be hidden in background information of little segmentation value and be discarded in the segmentation stage, so that the target cannot be recognized. In addition, in order to simplify the description of a concrete scene, a group of targets that can represent the scene must be chosen, and selecting these reliable and stable representative targets becomes another bottleneck restricting target-based scene recognition.
(3) Region-based scene recognition methods
Given the limitations of target-based methods, some researchers use the regions obtained by segmentation, instead of targets, to represent the scene, and combine features according to the structural relations of these regions to form scene marks. The challenge of such methods is how to obtain a reliable region segmentation algorithm. There are many ways to represent the region information: local and global cues can be combined, i.e. the global statistical features inside a region are extracted; a region can also be characterized by the local invariant features extracted from it; or the region information can be characterized with a bag-of-words model.
(4) Scene recognition methods based on biologically inspired features
Considering the gap that still exists between the best current computer vision systems and the visual systems of humans and other animals in the real-time performance and efficiency required for scene recognition, and in view of the superior scene recognition ability of humans and animals, scene recognition methods based on biologically inspired features have emerged; they realize scene recognition by simulating the processing mechanisms of the biological visual cortex. The basic idea is to extend research on a particular biological vision mechanism or class of biological visual characteristics and, through careful analysis, build an effective computational model, thereby obtaining satisfactory results. For example, methods based on the human visual attention mechanism select image regions that easily attract human attention as objects of priority processing; this selective mechanism can greatly improve the efficiency with which a scene recognition method processes, analyzes and recognizes visual information.
Existing scene recognition faces several difficulties that lower classification accuracy: the same scene changes dynamically, pictures of the same scene vary, images of different classes may share many similarities, and images of different scenes may overlap. The present invention therefore provides a scene recognition method based on a single-hidden-layer neural network. It performs scene recognition from global features, judging the whole scene image as a whole without reference to specific targets, and can achieve a higher scene image recognition rate.
Summary of the invention
The technical problem to be solved by the present invention is to provide a scene recognition method based on a single-hidden-layer neural network that improves scene recognition accuracy.
The present invention is realized as follows: a scene recognition method based on a single-hidden-layer neural network, comprising a training stage and a recognition stage;
The training stage comprises: preprocessing a pre-collected sample image set for training; extracting the local gradient statistical features of the preprocessed sample image set; feeding the local gradient statistical features and the corresponding scene class labels to single-hidden-layer neural network classifiers for hierarchical supervised learning to obtain the optimal parameters of a plurality of different multi-class single-hidden-layer neural networks; and building a multi-level scene classifier from the optimal parameters;
The recognition stage comprises: preprocessing the image set to be recognized; extracting the local gradient statistical features of the preprocessed image set; feeding the extracted local gradient statistical feature vector of the image to be recognized into the multi-level scene classifier for recognition; and obtaining the class label of the scene it belongs to.
Further, the preprocessing comprises image contrast normalization and Gamma correction, which, by adjusting the contrast of the image, alleviate the effects of shadows and local illumination changes and suppress noise interference.
Further, the image contrast normalization specifically comprises: converting the image from the RGB color space to the YUV color space and performing global and local contrast normalization in the YUV color space. The normalization operates on the Y channel only, leaving the other two channels unchanged; the global normalization normalizes pixel values around the mean pixel value of the image, and the local normalization is an edge enhancement. Normalizing the image contrast markedly alleviates the effects of shadows and local illumination changes.
Further, the extraction of the local gradient statistical features proceeds as follows:
The image is split into the Y, U and V channels, and the first-order gradient of each channel is computed;
The image is divided into non-overlapping cell units, and the gradient histogram of each cell unit is computed;
Adjacent 2 × 2 cell units form overlapping blocks, with a block stride equal to the size of one cell unit; the gradient histograms inside each block are two-norm normalized, and the histogram information of all blocks is stacked to obtain the feature vectors of the Y, U and V channels;
The feature vectors of the three channels are stacked to obtain the final local gradient statistical feature. This feature extraction effectively fuses the color information into the final feature and improves recognition accuracy.
Further, the first-order gradient is computed as follows: the Sobel operator is convolved with the original image to obtain the gradient component Gx(x, y) in the X (horizontal) direction and the gradient component Gy(x, y) in the Y (vertical) direction, from which the gradient magnitude G(x, y) and direction θ(x, y) of each pixel are obtained: G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²), θ(x, y) = arctan(Gy(x, y) / Gx(x, y)).
Further, the optimal parameters are obtained as follows: for each multi-class single-hidden-layer neural network, the parameters to learn are the regularization coefficient and the number of hidden nodes, tuned one at a time: first the number of hidden nodes is set randomly and the optimal regularization coefficient is learned; then the regularization coefficient is fixed at that optimal value and the optimal number of hidden nodes is learned, giving the optimal regularization coefficient and hidden node count, i.e. the optimal parameters.
Further, the number of levels of the multi-level scene classifier is divided according to the membership relations of the scene's own attributes, and each level comprises at least one multi-class single-hidden-layer neural network.
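The patent does not specify how a level's prediction is routed to the next level; one plausible reading is a tree of classifiers in which each coarse label optionally selects a child classifier that refines it. A minimal sketch under that assumption (the per-node `predict` interface is hypothetical, in the style of scikit-learn):

```python
import numpy as np

class HierarchicalSceneClassifier:
    """Sketch of the multi-level classifier: levels follow the scene
    taxonomy, each node holds one multi-class classifier, and the label
    predicted at one level selects which child classifier (if any)
    refines the result at the next level."""

    def __init__(self, clf, children=None):
        self.clf = clf
        self.children = children or {}   # coarse label -> child node

    def predict_one(self, x):
        label = int(self.clf.predict(x[None, :])[0])
        path = [label]
        child = self.children.get(label)
        if child is not None:
            path.extend(child.predict_one(x))
        return path                       # coarse-to-fine label path
```

The returned path lists the class label chosen at each level, coarse to fine.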
Further, the single-hidden-layer neural network model comprises three layers: an input layer, a hidden layer and an output layer. The input layer receives the extracted image feature vector, the data is processed through the hidden layer and output from the output layer, and the type of the current data is determined from the output.
The present invention has the following advantages:
1. The feature extraction copes with the dynamic changes of scene pictures, attends to invariances of the scene picture beyond the effects of color, illumination and angle, and improves recognition precision;
2. The preprocessing markedly alleviates the effects of shadows and local illumination changes;
3. The scene recognition of the present invention is based on a single-hidden-layer neural network; to address the variability of scene pictures, many different groups of data are added to the experiments, which mitigates the influence of picture variability on the classification result. The single-hidden-layer neural network adopted has good classification performance and learns extremely fast, so it can satisfy the real-time and precision requirements of recognition.
Brief description of the drawings
The present invention is further illustrated below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is the execution flow chart of the method of the invention.
Fig. 2 is a schematic diagram of the processing procedure of the method of the invention.
Fig. 3 is the feature extraction flow chart of the method of the invention.
Fig. 4 is a schematic diagram of the hierarchical scene classifier model built on single-hidden-layer neural networks.
Fig. 5 is a schematic diagram of the single-hidden-layer neural network model.
Detailed description of the invention
As shown in Figs. 1 to 5, a scene recognition method based on a single-hidden-layer neural network comprises a training stage and a recognition stage;
The training stage comprises:
Step 1: preprocess the sample image set collected in advance for training the hierarchical scene classifier based on single-hidden-layer neural networks. The sample images should cover as many different patterns as possible, and the scene images of the different classes should be kept as balanced as possible, in order to learn the scene classifier parameters better. The preprocessing comprises image contrast normalization and Gamma correction. The image contrast normalization specifically comprises: converting the image from the RGB color space to the YUV color space and performing global and local contrast normalization in the YUV color space. The normalization operates on the Y channel only, leaving the other two channels unchanged; the global normalization normalizes pixel values around the mean pixel value of the image, and the local normalization is an edge enhancement. Normalizing the image contrast markedly alleviates the effects of shadows and local illumination changes;
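The description leaves the exact formulas of step 1 open; a minimal sketch under common assumptions (BT.601 YUV weights, zero-mean/unit-variance global normalization of the Y channel, square-root Gamma compression — all assumptions, not stated in the patent) might look like:

```python
import numpy as np

def rgb_to_yuv(img):
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to YUV (BT.601 weights)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

def normalize_y_channel(yuv, eps=1e-8):
    """Global contrast normalization of the Y channel only: center pixel
    values around the channel mean and scale by the standard deviation;
    U and V are left unchanged, as the description requires."""
    out = yuv.copy()
    y = out[..., 0]
    out[..., 0] = (y - y.mean()) / (y.std() + eps)
    return out

def gamma_correct(img, gamma=0.5):
    """Gamma correction (here square-root compression) to damp illumination effects."""
    return np.clip(img, 0, None) ** gamma
```

The local (edge-enhancing) normalization is omitted here because the patent gives no formula for it.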
Step 2: extract the local gradient statistical features of the preprocessed sample image set:
The sample image is split into the Y, U and V channels and the first-order gradient of each channel is computed. The first-order gradient of each channel is obtained as follows: the Sobel operator is convolved with the original image to obtain the gradient component Gx(x, y) in the X (horizontal) direction and the gradient component Gy(x, y) in the Y (vertical) direction, from which the gradient magnitude G(x, y) and direction θ(x, y) of each pixel are:
G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²), θ(x, y) = arctan(Gy(x, y) / Gx(x, y))
The Sobel operator is simple to apply, yet its results are more effective than those of other, more complicated operators.
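A minimal NumPy sketch of this Sobel gradient computation (edge-replicated padding is an assumption; the description does not specify border handling):

```python
import numpy as np

def sobel_gradients(channel):
    """Per-pixel first-order gradient of one channel via the 3x3 Sobel
    kernels, applied through shifted slices of an edge-padded array.
    Returns the gradient magnitude G and direction theta (radians)."""
    p = np.pad(channel, 1, mode="edge")
    # horizontal kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gx = (-p[:-2, :-2] + p[:-2, 2:]
          - 2 * p[1:-1, :-2] + 2 * p[1:-1, 2:]
          - p[2:, :-2] + p[2:, 2:])
    # vertical kernel [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    gy = (-p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:]
          + p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:])
    mag = np.sqrt(gx**2 + gy**2)
    theta = np.arctan2(gy, gx)
    return mag, theta
```

On a horizontal intensity ramp the interior magnitude is constant (the kernel weights sum to 4 on each side) and the direction is 0, which makes the function easy to sanity-check.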
The sample image is divided into non-overlapping cell units and the gradient histogram of each cell unit is computed. Each sliding window is composed of several blocks, and each block is further divided into several cell units (each cell unit consists of multiple pixels); the dimension of the one-dimensional feature vector of a window equals the number of blocks per window times the number of cell units per block times the length of the feature vector of each cell unit. The local image region is encoded by adding the gradient directions of all pixels in each cell unit into the corresponding histogram direction intervals with weighting, forming the initial feature. For example, the angular range 0–180 degrees is divided into 8 equal parts, i.e. the 180-degree gradient direction range of a cell unit is divided into 8 direction bins; the gradient direction of each pixel in the cell unit is projected into the histogram (mapped to a fixed angular interval) with its gradient magnitude as the projection weight, which yields the gradient orientation histogram of the cell unit;
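The cell-histogram construction above (8 unsigned orientation bins over 0–180 degrees, magnitude-weighted votes) can be sketched as follows; hard assignment to a single bin is one possible reading, since the description does not say whether votes are interpolated between neighbouring bins:

```python
import numpy as np

def cell_histograms(mag, theta, cell=8, bins=8):
    """Magnitude-weighted gradient-orientation histograms over
    non-overlapping cell units. The unsigned angle range 0-180 degrees
    is split into `bins` equal intervals; each pixel votes into its
    interval with weight equal to its gradient magnitude."""
    h, w = mag.shape
    ang = np.rad2deg(theta) % 180.0                       # unsigned orientation
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    return hist
```

The cell size of 8 pixels is an illustrative default; the patent does not fix it.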
Adjacent 2 × 2 cell units form overlapping blocks, with a block stride equal to the size of one cell unit; the gradient histograms inside each block are two-norm normalized, and the histogram information of all blocks is stacked to obtain the feature vectors of the Y, U and V channels. For example, the histograms inside all overlapping blocks are combined in top-to-bottom or left-to-right order to form the final feature vector of a channel. The overlapping blocks ensure that each cell unit appears several times in the final vector, so the gradient intensities need to be normalized; because illumination and foreground-background contrast make the gradient intensities vary over a very wide range, an effective local normalization method can reduce these effects and improve performance;
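The block stacking and two-norm normalization can be sketched as follows, taking 2 × 2 cells per block and a one-cell stride as stated above:

```python
import numpy as np

def block_features(hist, eps=1e-8):
    """Group 2x2 neighbouring cell histograms into overlapping blocks
    (stride of one cell), L2-normalize each block, and concatenate all
    blocks top-to-bottom, left-to-right into one channel descriptor."""
    ch, cw, bins = hist.shape
    blocks = []
    for i in range(ch - 1):
        for j in range(cw - 1):
            v = hist[i:i+2, j:j+2].ravel()        # 4 * bins values per block
            blocks.append(v / (np.linalg.norm(v) + eps))
    return np.concatenate(blocks)
```

Running this on each of the Y, U and V channel histogram grids and concatenating the three results gives the final local gradient statistical feature described in the next step.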
The feature vectors of the three channels are stacked to obtain the final local gradient statistical feature;
Step 3: feed the local gradient statistical features and the corresponding scene class labels to single-hidden-layer neural network classifiers for hierarchical supervised learning, obtain the optimal parameters of a plurality of different multi-class single-hidden-layer neural networks, and build the multi-level scene classifier from these optimal parameters. The number of levels of the multi-level scene classifier is divided according to the membership relations of the scene's own attributes; each level comprises at least one multi-class single-hidden-layer neural network, and the different multi-class single-hidden-layer neural networks can be trained separately. For each multi-class single-hidden-layer neural network, the parameters to learn are the regularization coefficient and the number of hidden nodes, tuned one at a time: first the number of hidden nodes is set randomly and the optimal regularization coefficient is learned; then the regularization coefficient is fixed at that optimal value and the optimal number of hidden nodes is learned, giving the optimal regularization coefficient and hidden node count.
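The two-pass parameter tuning of step 3 can be sketched as a coordinate search; the candidate grids and the `train_fn` callback (returning a validation score for a given regularization coefficient and hidden-node count) are hypothetical, not part of the patent:

```python
def tune_alternating(train_fn, reg_grid, node_grid, init_nodes=100):
    """Two-pass coordinate search: fix an initial hidden-node count,
    pick the regularization coefficient with the best validation score,
    then fix that coefficient and pick the best hidden-node count."""
    best_reg = max(reg_grid, key=lambda r: train_fn(r, init_nodes))
    best_nodes = max(node_grid, key=lambda n: train_fn(best_reg, n))
    return best_reg, best_nodes
```

This one-pass-per-parameter scheme is cheaper than a full grid search over both parameters, at the cost of assuming the two parameters interact only weakly.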
The recognition stage comprises:
Step 4: preprocess the image set to be recognized. The preprocessing comprises image contrast normalization and Gamma correction. The image contrast normalization specifically comprises: converting the image from the RGB color space to the YUV color space and performing global and local contrast normalization in the YUV color space. The normalization operates on the Y channel only, leaving the other two channels unchanged; the global normalization normalizes pixel values around the mean pixel value of the image, and the local normalization is an edge enhancement. Normalizing the image contrast markedly alleviates the effects of shadows and local illumination changes.
Step 5: extract the local gradient statistical features of the preprocessed image set to be recognized; this extraction is identical to the extraction of local gradient statistical features in step 2:
The image to be recognized is split into the Y, U and V channels and the first-order gradient of each channel is computed. The first-order gradient of each channel is obtained as follows: the Sobel operator is convolved with the original image to obtain the gradient component Gx(x, y) in the X (horizontal) direction and the gradient component Gy(x, y) in the Y (vertical) direction, from which the gradient magnitude G(x, y) and direction θ(x, y) of each pixel are:
G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²), θ(x, y) = arctan(Gy(x, y) / Gx(x, y));
The Sobel operator is simple to apply, yet its results are more effective than those of other, more complicated operators;
The image to be recognized is divided into non-overlapping cell units and the gradient histogram of each cell unit is computed. Each sliding window is composed of several blocks, and each block is further divided into several cell units (each cell unit consists of multiple pixels); the dimension of the one-dimensional feature vector of a window equals the number of blocks per window times the number of cell units per block times the length of the feature vector of each cell unit. The local image region is encoded by adding the gradient directions of all pixels in each cell unit into the corresponding histogram direction intervals with weighting, forming the initial feature. For example, the angular range 0–180 degrees is divided into 8 equal parts, i.e. the 180-degree gradient direction range of a cell unit is divided into 8 direction bins; the gradient direction of each pixel in the cell unit is projected into the histogram (mapped to a fixed angular interval) with its gradient magnitude as the projection weight, which yields the gradient orientation histogram of the cell unit;
Adjacent 2 × 2 cell units form overlapping blocks, with a block stride equal to the size of one cell unit; the gradient histograms inside each block are two-norm normalized, and the histogram information of all blocks is stacked to obtain the feature vectors of the Y, U and V channels. For example, the histograms inside all overlapping blocks are combined in top-to-bottom or left-to-right order to form the final feature vector of a channel. The overlapping blocks ensure that each cell unit appears several times in the final vector, so the gradient intensities need to be normalized; because illumination and foreground-background contrast make the gradient intensities vary over a very wide range, an effective local normalization method can reduce these effects and improve performance;
The feature vectors of the three channels are stacked to obtain the final local gradient statistical feature; this feature extraction effectively fuses the color information into the final feature and improves recognition accuracy;
Step 6: feed the extracted local gradient statistical feature vector of the image to be recognized into the multi-level scene classifier for recognition and obtain the class label of the scene it belongs to, thereby realizing scene recognition.
The single-hidden-layer neural network model comprises three layers: an input layer, a hidden layer and an output layer. The input layer receives the extracted image feature vector, the data is processed through the hidden layer and output from the output layer, and the type of the current data is determined from the output. The hidden layer comprises L hidden neurons, L generally being much smaller than the number of samples, and the output layer outputs an m-dimensional vector. Ignoring the input-to-hidden mapping and considering only the outputs of the hidden-layer neurons and the output layer, the final optimization not only minimizes the error but also minimizes the norm of the hidden-layer output weights, which gives the model the best generalization ability. During training, the parameters of the model are estimated from the training samples (i.e. a group of observations) of training data with known classes; during recognition, the feature vector of the current scene image to be recognized is fed into the trained single-hidden-layer neural network model, and the type of the current test data is finally determined from the model's output, its class being given by the highest output node.
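A network trained by jointly minimizing the error and the norm of the hidden-layer output weights, with the class given by the highest output node, can be sketched in the style of a regularized extreme learning machine. This is a reading of the description, not a confirmed implementation: the patent does not name the hidden-layer activation or the input-weight scheme, so random tanh hidden units and a ridge solution for the output weights are assumptions here.

```python
import numpy as np

class SingleHiddenLayerNet:
    """Sketch of the single-hidden-layer classifier: random fixed
    input weights, tanh hidden units, and output weights obtained by
    ridge-regularized least squares (minimizing error plus the norm
    of the output weights, as the description states)."""

    def __init__(self, n_hidden=100, reg=1.0, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y, n_classes):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)                       # N x L hidden outputs
        T = np.eye(n_classes)[y]                  # one-hot targets, N x m
        # ridge solution: beta = (H'H + reg*I)^-1 H'T
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        # the class is given by the highest output node
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

Because only the output weights are solved for, training reduces to one linear solve, which is consistent with the "extremely fast learning" claimed among the advantages.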
Although specific embodiments of the present invention are described above, those familiar with the art should understand that the described specific embodiments are merely illustrative and do not limit the scope of the present invention; equivalent modifications and changes made by those of ordinary skill in the art according to the spirit of the present invention shall be encompassed within the scope of protection claimed by the present invention.
Claims (8)
1. A scene recognition method based on a single-hidden-layer neural network, characterized by comprising a training stage and a recognition stage;
The training stage comprises: preprocessing a pre-collected sample image set for training; extracting the local gradient statistical features of the preprocessed sample image set; feeding the local gradient statistical features and the corresponding scene class labels to single-hidden-layer neural network classifiers for hierarchical supervised learning to obtain the optimal parameters of a plurality of different multi-class single-hidden-layer neural networks; and building a multi-level scene classifier from the optimal parameters;
The recognition stage comprises: preprocessing the image set to be recognized; extracting the local gradient statistical features of the preprocessed image set; feeding the extracted local gradient statistical feature vector of the image to be recognized into the multi-level scene classifier for recognition; and obtaining the class label of the scene it belongs to.
2. The scene recognition method based on a single-hidden-layer neural network according to claim 1, characterized in that the preprocessing comprises image contrast normalization and Gamma correction.
3. The scene recognition method based on a single-hidden-layer neural network according to claim 2, characterized in that the image contrast normalization specifically comprises: converting the image from the RGB color space to the YUV color space and performing global and local contrast normalization in the YUV color space, wherein the normalization operates on the Y channel only, leaving the other two channels unchanged, the global normalization normalizes pixel values around the mean pixel value of the image, and the local normalization is an edge enhancement.
4. The scene recognition method based on a single-hidden-layer neural network according to claim 1, characterized in that the extraction of the local gradient statistical features proceeds as follows:
The image is split into the Y, U and V channels, and the first-order gradient of each channel is computed;
The image is divided into non-overlapping cell units, and the gradient histogram of each cell unit is computed;
Adjacent 2 × 2 cell units form overlapping blocks, with a block stride equal to the size of one cell unit; the gradient histograms inside each block are two-norm normalized, and the histogram information of all blocks is stacked to obtain the feature vectors of the Y, U and V channels;
The feature vectors of the three channels are stacked to obtain the final local gradient statistical feature.
5. The scene recognition method based on a single-hidden-layer neural network according to claim 4, characterized in that the first-order gradient is computed as follows: the Sobel operator is convolved with the original image to obtain the gradient component Gx(x, y) in the X direction and the gradient component Gy(x, y) in the Y direction, from which the gradient magnitude G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²) and direction θ(x, y) = arctan(Gy(x, y) / Gx(x, y)) of each pixel are obtained.
6. The scene recognition method based on a single-hidden-layer neural network according to claim 1, characterized in that the optimal parameters are obtained as follows: for each multi-class single-hidden-layer neural network, the parameters to learn are the regularization coefficient and the number of hidden nodes, tuned one at a time: first the number of hidden nodes is set randomly and the optimal regularization coefficient is learned; then the regularization coefficient is fixed at that optimal value and the optimal number of hidden nodes is learned, giving the optimal regularization coefficient and hidden node count, i.e. the optimal parameters.
7. The scene recognition method based on a single-hidden-layer neural network according to claim 1, characterized in that the number of levels of the multi-level scene classifier is divided according to the membership relations of the scene's own attributes, and each level comprises at least one multi-class single-hidden-layer neural network.
8. The scene recognition method based on a single-hidden-layer neural network according to claim 1, characterized in that the single-hidden-layer neural network model comprises three layers: an input layer, a hidden layer and an output layer, wherein the input layer receives the extracted image feature vector, the data is processed through the hidden layer and output from the output layer, and the type of the current data is determined from the output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610069804.6A CN105678278A (en) | 2016-02-01 | 2016-02-01 | Scene recognition method based on single-hidden-layer neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610069804.6A CN105678278A (en) | 2016-02-01 | 2016-02-01 | Scene recognition method based on single-hidden-layer neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105678278A true CN105678278A (en) | 2016-06-15 |
Family
ID=56303402
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610069804.6A Pending CN105678278A (en) | 2016-02-01 | 2016-02-01 | Scene recognition method based on single-hidden-layer neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105678278A (en) |
2016-02-01: CN application CN201610069804.6A, publication CN105678278A (en), legal status: active, pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376326A (en) * | 2014-11-02 | 2015-02-25 | 吉林大学 | Feature extraction method for image scene recognition |
Non-Patent Citations (3)
Title |
---|
Fu Yi et al., "A Fast Global Scene Classification Algorithm", Infrared and Laser Engineering *
Zha Yufei et al., "Video Object Tracking Methods", 31 July 2015, National Defense Industry Press *
Jin Peiyuan et al., "A Golden Section Optimized Extreme Learning Machine Algorithm", Journal of China Jiliang University *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408085A (en) * | 2016-08-31 | 2017-02-15 | 天津南大通用数据技术股份有限公司 | BP neural network classification method for solving nonlinear problem through single hidden layer |
CN106547880A (en) * | 2016-10-26 | 2017-03-29 | 重庆邮电大学 | A kind of various dimensions geographic scenes recognition methodss of fusion geographic area knowledge |
CN106547880B (en) * | 2016-10-26 | 2020-05-12 | 重庆邮电大学 | Multi-dimensional geographic scene identification method fusing geographic area knowledge |
CN106886754A (en) * | 2017-01-17 | 2017-06-23 | 华中科技大学 | Object identification method and system under a kind of three-dimensional scenic based on tri patch |
CN106886754B (en) * | 2017-01-17 | 2019-07-09 | 华中科技大学 | Object identification method and system under a kind of three-dimensional scenic based on tri patch |
CN106845440A (en) * | 2017-02-13 | 2017-06-13 | 山东万腾电子科技有限公司 | A kind of augmented reality image processing method and system |
CN106845440B (en) * | 2017-02-13 | 2020-04-10 | 山东万腾电子科技有限公司 | Augmented reality image processing method and system |
CN107133647A (en) * | 2017-05-04 | 2017-09-05 | 湘潭大学 | A kind of quick Manuscripted Characters Identification Method |
CN107316035A (en) * | 2017-08-07 | 2017-11-03 | 北京中星微电子有限公司 | Object identifying method and device based on deep learning neutral net |
US11430209B2 (en) | 2017-10-13 | 2022-08-30 | Huawei Technologies Co., Ltd. | Image signal processing method, apparatus, and device |
WO2019072057A1 (en) * | 2017-10-13 | 2019-04-18 | 华为技术有限公司 | Image signal processing method, apparatus and device |
CN109688351A (en) * | 2017-10-13 | 2019-04-26 | 华为技术有限公司 | A kind of image-signal processing method, device and equipment |
CN107941397A (en) * | 2017-11-09 | 2018-04-20 | 包云清 | Base station support force detection alarm system |
CN107967457A (en) * | 2017-11-27 | 2018-04-27 | 全球能源互联网研究院有限公司 | A kind of place identification for adapting to visual signature change and relative positioning method and system |
CN107967457B (en) * | 2017-11-27 | 2024-03-19 | 全球能源互联网研究院有限公司 | Site identification and relative positioning method and system adapting to visual characteristic change |
CN108596195A (en) * | 2018-05-09 | 2018-09-28 | 福建亿榕信息技术有限公司 | A kind of scene recognition method based on sparse coding feature extraction |
CN110659541A (en) * | 2018-06-29 | 2020-01-07 | 深圳云天励飞技术有限公司 | Image recognition method, device and storage medium |
CN109165682A (en) * | 2018-08-10 | 2019-01-08 | 中国地质大学(武汉) | A kind of remote sensing images scene classification method merging depth characteristic and significant characteristics |
US11682193B2 (en) | 2018-11-15 | 2023-06-20 | Lingdong Technology (Beijing) Co. Ltd. | System and method for real-time supervised machine learning in on-site environment |
CN110945449A (en) * | 2018-11-15 | 2020-03-31 | 灵动科技(北京)有限公司 | Real-time supervision type machine learning system and method for field environment |
CN109583502A (en) * | 2018-11-30 | 2019-04-05 | 天津师范大学 | A kind of pedestrian's recognition methods again based on confrontation erasing attention mechanism |
CN109583502B (en) * | 2018-11-30 | 2022-11-18 | 天津师范大学 | Pedestrian re-identification method based on anti-erasure attention mechanism |
CN109740618B (en) * | 2019-01-14 | 2022-11-04 | 河南理工大学 | Test paper score automatic statistical method and device based on FHOG characteristics |
CN109740618A (en) * | 2019-01-14 | 2019-05-10 | 河南理工大学 | Network paper score method for automatically counting and device based on FHOG feature |
CN110929663B (en) * | 2019-11-28 | 2023-12-29 | Oppo广东移动通信有限公司 | Scene prediction method, terminal and storage medium |
CN110929663A (en) * | 2019-11-28 | 2020-03-27 | Oppo广东移动通信有限公司 | Scene prediction method, terminal and storage medium |
CN114970654A (en) * | 2021-05-21 | 2022-08-30 | 华为技术有限公司 | Data processing method and device and terminal |
CN113378911A (en) * | 2021-06-08 | 2021-09-10 | 北京百度网讯科技有限公司 | Image classification model training method, image classification method and related device |
CN113378911B (en) * | 2021-06-08 | 2022-08-26 | 北京百度网讯科技有限公司 | Image classification model training method, image classification method and related device |
CN117333929A (en) * | 2023-12-01 | 2024-01-02 | 贵州省公路建设养护集团有限公司 | Method and system for identifying abnormal personnel under road construction based on deep learning |
CN117333929B (en) * | 2023-12-01 | 2024-02-09 | 贵州省公路建设养护集团有限公司 | Method and system for identifying abnormal personnel under road construction based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105678278A (en) | Scene recognition method based on single-hidden-layer neural network | |
CN111275688B (en) | Small target detection method based on context feature fusion screening of attention mechanism | |
CN112818903B (en) | Small sample remote sensing image target detection method based on meta-learning and cooperative attention | |
CN104809187B (en) | A kind of indoor scene semanteme marking method based on RGB D data | |
CN106127749A (en) | The target part recognition methods of view-based access control model attention mechanism | |
CN107451602A (en) | A kind of fruits and vegetables detection method based on deep learning | |
CN109583425A (en) | A kind of integrated recognition methods of the remote sensing images ship based on deep learning | |
CN106778835A (en) | The airport target by using remote sensing image recognition methods of fusion scene information and depth characteristic | |
CN108009509A (en) | Vehicle target detection method | |
CN107133569A (en) | The many granularity mask methods of monitor video based on extensive Multi-label learning | |
CN109785344A (en) | The remote sensing image segmentation method of binary channel residual error network based on feature recalibration | |
CN107016357A (en) | A kind of video pedestrian detection method based on time-domain convolutional neural networks | |
CN104517122A (en) | Image target recognition method based on optimized convolution architecture | |
CN105825511A (en) | Image background definition detection method based on deep learning | |
CN104915972A (en) | Image processing apparatus, image processing method and program | |
CN106446933A (en) | Multi-target detection method based on context information | |
CN105069774B (en) | The Target Segmentation method of optimization is cut based on multi-instance learning and figure | |
CN104036293B (en) | Rapid binary encoding based high resolution remote sensing image scene classification method | |
CN114092697B (en) | Building facade semantic segmentation method with attention fused with global and local depth features | |
CN110334584B (en) | Gesture recognition method based on regional full convolution network | |
CN109886267A (en) | A kind of soft image conspicuousness detection method based on optimal feature selection | |
CN110807485A (en) | Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image | |
CN108596195A (en) | A kind of scene recognition method based on sparse coding feature extraction | |
CN110287798B (en) | Vector network pedestrian detection method based on feature modularization and context fusion | |
CN104008374B (en) | Miner's detection method based on condition random field in a kind of mine image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2016-06-15 |