CN110046549A - Method for removing occlusion in range hood smoke recognition - Google Patents

Method for removing occlusion in range hood smoke recognition

Info

Publication number
CN110046549A
CN110046549A
Authority
CN
China
Prior art keywords
network
convolution
feature map
range hood
oil stain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910178201.3A
Other languages
Chinese (zh)
Inventor
陈小平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Viomi Electrical Technology Co Ltd
Original Assignee
Foshan Viomi Electrical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Viomi Electrical Technology Co Ltd filed Critical Foshan Viomi Electrical Technology Co Ltd
Priority to CN201910178201.3A priority Critical patent/CN110046549A/en
Publication of CN110046549A publication Critical patent/CN110046549A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

A method for removing occlusion in range hood smoke recognition, with the following specific steps. Step A: build a generator network and a discriminator network. Step B: input the images captured by the range hood camera into the generator network for de-occlusion processing, obtaining images free of oil-stain occlusion. Step C: input the de-occluded images obtained in step B into the discriminator network for judgment, and output a real/fake label. The present invention proposes a method for removing occlusion in range hood smoke recognition that applies neural-network de-occlusion processing to the images captured by the camera, restores the true smoke scene as faithfully as possible, and reduces the influence of lens oil stains on smoke recognition.

Description

Method for removing occlusion in range hood smoke recognition
Technical field
The present invention relates to the technical field of range hoods, and in particular to a method for removing occlusion in range hood smoke recognition.
Background technique
With the development of vision technology, household appliances increasingly incorporate camera modules to improve the user experience. In a range hood, a camera module can recognize the smoke produced during cooking and feed the smoke level back to the hood, allowing the hood to automatically adjust its fan power and thereby improve the user experience. During use, however, oil stains gradually adhere to the camera lens surface and occlude the view. Because the material adhering to the lens is oil mist, which resembles the smoke itself, the captured picture is heavily distorted in the occluded regions and cannot truthfully reflect the actual amount of smoke; the captured picture therefore deviates from the real scene, and conventional camera and visual-recognition techniques perform poorly in this setting.
Summary of the invention
The present invention proposes a method for removing occlusion in range hood smoke recognition to solve the problems described in the background. By applying neural-network de-occlusion processing to the images captured by the camera, the method restores the true smoke scene as faithfully as possible and reduces the influence of lens oil stains on smoke recognition.
To this end, the present invention adopts the following technical scheme:
A method for removing occlusion in range hood smoke recognition, with the following specific steps:
Step A: build a generator network and a discriminator network;
Step B: input the images captured by the range hood camera into the generator network for de-occlusion processing, obtaining images free of oil-stain occlusion;
Step C: input the de-occluded images obtained in step B into the discriminator network for judgment, and output a real/fake label.
Preferably, step B includes dividing the images captured by the range hood camera into oil-stain-free pictures and oil-stain-occluded pictures, and splitting both sets into a training set and a test set at a ratio of 4:1;
The training set is used to train the generator network and the discriminator network;
The test set is used as the input of the generator network for de-occlusion processing.
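The 4:1 split described above can be sketched as follows; the file names and the shuffle seed are illustrative assumptions, not part of the patent:

```python
import random

def split_4_to_1(paths, seed=0):
    """Split a list of image paths into a training set and a test set
    at the 4:1 ratio described in the text."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    cut = len(paths) * 4 // 5          # 4/5 for training, 1/5 for testing
    return paths[:cut], paths[cut:]

# Hypothetical file lists for the two picture classes.
clean_paths = [f"clean_{i}.jpg" for i in range(100)]
occluded_paths = [f"occluded_{i}.jpg" for i in range(100)]

clean_train, clean_test = split_4_to_1(clean_paths)
occl_train, occl_test = split_4_to_1(occluded_paths)
print(len(clean_train), len(clean_test))  # 80 20
```

Each class is split independently, so both the training set and the test set contain clean and occluded pictures in the same 4:1 proportion.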
Preferably, the method includes cascading the discriminator network onto the generator network, which specifically includes:
fixing the parameters of the generator network while training the discriminator network;
fixing the parameters of the discriminator network while training the generator network.
Preferably, in step A, the specific steps of building the generator network and the discriminator network are as follows:
Step A1: apply image preprocessing to the oil-stain-occluded pictures and the oil-stain-free pictures respectively to obtain feature maps;
Step A2: apply a first convolution to the feature maps obtained in step A1 to obtain first-convolution feature maps, then output them;
Step A3: apply a first pooling and down-sampling to the first-convolution feature maps;
Step A4: apply a second convolution to the first-convolution feature maps after the first pooling and down-sampling in step A3, obtaining second-convolution feature maps;
Step A5: apply a second pooling and down-sampling to the second-convolution feature maps;
Step A6: build the generator network and the discriminator network according to step A5.
Preferably, the specific steps of training the generator network and the discriminator network are as follows:
Step 1: compute the state and activation value of each layer of the multilayer perceptrons in the generator network and the discriminator network, up to the last layer;
Step 2: compute the error of each layer of the multilayer perceptrons in the generator network and the discriminator network;
Step 3: update the weight parameters.
Preferably, step A2 includes convolving the feature maps with multiple convolution kernels respectively; the first convolution is computed as follows:
v_ij^(xyz) = tanh( b_ij + Σ_m Σ_{p=0..P-1} Σ_{q=0..Q-1} Σ_{r=0..R-1} w_ijm^(pqr) · v_(i-1)m^((x+p)(y+q)(z+r)) )
where: v is the input before convolution; b is a bias; the convolution kernel size is P*Q*R; m is the index of the feature maps of the previous layer connected to the input; and w is the weight between the neuron at position (p, q, r) on the j-th feature map after convolution and the m-th feature map before convolution.
Preferably, the number of convolution feature maps obtained after convolution changes together with the image size; it is computed as follows:
number of convolution feature maps = number of original feature maps - 3 + 1;
Preferably, the image size of a convolution feature map is computed as follows:
convolution feature map size = [(original feature map size - 3D convolution kernel size) / convolution stride] + 1.
Preferably, after pooling and down-sampling are applied to the first-convolution feature maps, their image size changes while their number remains unchanged.
Preferably, after the first pooling and down-sampling, the image size of the first-convolution feature maps is computed as follows:
image size of the first-convolution feature maps after the change = image size of the first-convolution feature maps before pooling / pooling size.
Brief description of the drawings
Fig. 1 is the flow chart of occlusion removal in range hood smoke recognition of the present invention;
Fig. 2 is the framework diagram of the generator network and the discriminator network of the present invention.
Specific embodiments
The technical scheme of the present invention is further illustrated below with reference to the accompanying drawings and specific embodiments.
The method for removing occlusion in range hood smoke recognition of this embodiment, as shown in Fig. 1, comprises the following specific steps:
Step A: build a generator network and a discriminator network;
Step B: input the images captured by the range hood camera into the generator network for de-occlusion processing, obtaining images free of oil-stain occlusion;
Step C: input the de-occluded images obtained in step B into the discriminator network for judgment, and output a real/fake label.
Preferably, step B includes dividing the images captured by the range hood camera into oil-stain-free pictures and oil-stain-occluded pictures, and splitting both sets into a training set and a test set at a ratio of 4:1;
The training set is used to train the generator network and the discriminator network;
The test set is used as the input of the generator network for de-occlusion processing.
Preferably, the method includes cascading the discriminator network onto the generator network, which specifically includes:
fixing the parameters of the generator network while training the discriminator network;
fixing the parameters of the discriminator network while training the generator network.
As shown in Fig. 2, in practice the smoke scene of the hood is captured simultaneously with a clean lens and with an oil-stain-occluded lens, and these pictures are split into a training set and a test set at a ratio of 4:1. A generator network is built whose input is a picture captured through the oil-stain-occluded lens and whose output is a clear picture with the oil-stain occlusion removed. A discriminator network is built whose input is either a picture captured through the clean lens or a clear picture produced by the generator network, and whose output is a real/fake label. The generator network parameters are fixed and the discriminator network is trained; the discriminator network is then cascaded onto the generator network, the discriminator network parameters are fixed, and the generator network is trained. These two steps are repeated until training is complete. A picture from the test set captured through the oil-stain-occluded lens is then input into the generator network, which outputs the clear, de-occluded picture.
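The alternating scheme in the paragraph above (fix the generator and train the discriminator, then cascade the discriminator onto the generator, fix it, and train the generator) can be sketched with stub networks; the `Net` class and its `train_step` are placeholders standing in for the actual generator and discriminator, which the patent does not specify in code:

```python
class Net:
    """Stub network holding a 'frozen' flag instead of real parameters."""
    def __init__(self, name):
        self.name = name
        self.frozen = False
        self.steps_trained = 0

    def train_step(self):
        # Placeholder for one forward/backward/update pass;
        # a frozen network keeps its parameters fixed.
        if not self.frozen:
            self.steps_trained += 1

generator = Net("generator")
discriminator = Net("discriminator")

def train_adversarially(n_rounds):
    for _ in range(n_rounds):
        # Phase 1: fix the generator's parameters, train the discriminator.
        generator.frozen, discriminator.frozen = True, False
        discriminator.train_step()
        # Phase 2: cascade the discriminator onto the generator,
        # fix the discriminator's parameters, train the generator.
        generator.frozen, discriminator.frozen = False, True
        generator.train_step()

train_adversarially(10)
print(generator.steps_trained, discriminator.steps_trained)  # 10 10
```

The loop mirrors standard generative-adversarial training: each network is only updated while the other is held fixed.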
In step A, the specific steps of building the generator network and the discriminator network are as follows:
Step A1: apply image preprocessing to the oil-stain-occluded pictures and the oil-stain-free pictures respectively to obtain feature maps;
Image preprocessing retains the most important pixel features of the original image while removing image information that is useless to the neural network, which facilitates subsequent processing;
Step A2: apply a first convolution to the feature maps obtained in step A1 to obtain first-convolution feature maps, then output them;
Step A3: apply a first pooling and down-sampling to the first-convolution feature maps;
Step A4: apply a second convolution to the first-convolution feature maps after the first pooling and down-sampling in step A3, obtaining second-convolution feature maps;
Step A5: apply a second pooling and down-sampling to the second-convolution feature maps;
Step A6: build the generator network and the discriminator network according to step A5.
The specific steps of training the generator network and the discriminator network are as follows:
Step 1: compute the state and activation value of each layer of the multilayer perceptrons in the generator network and the discriminator network, up to the last layer;
Step 2: compute the error of each layer of the multilayer perceptrons in the generator network and the discriminator network;
Step 3: update the weight parameters.
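The three training steps above can be illustrated on a minimal one-neuron "network"; the sigmoid activation, squared-error loss, and learning rate are illustrative choices the text does not specify:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A one-neuron "multilayer perceptron" trained on a single sample.
w, b = 0.5, 0.0          # weight parameters
x, target = 1.0, 1.0     # input and desired output
lr = 0.1                 # illustrative learning rate

for _ in range(100):
    # Step 1: compute the state (pre-activation) and the activation value.
    state = w * x + b
    a = sigmoid(state)
    # Step 2: compute the error (gradient of squared error wrt the state).
    delta = (a - target) * a * (1.0 - a)
    # Step 3: update the weight parameters by gradient descent.
    w -= lr * delta * x
    b -= lr * delta

print(round(sigmoid(w * x + b), 3))  # output moves toward the target 1.0
```

In the actual method the same forward/error/update cycle runs over every layer of both networks, but the per-layer mechanics are the same.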
In step A2, the feature maps are convolved with multiple convolution kernels respectively; the first convolution is computed as follows:
v_ij^(xyz) = tanh( b_ij + Σ_m Σ_{p=0..P-1} Σ_{q=0..Q-1} Σ_{r=0..R-1} w_ijm^(pqr) · v_(i-1)m^((x+p)(y+q)(z+r)) )
where: v is the input before convolution; b is a bias; the convolution kernel size is P*Q*R; m is the index of the feature maps of the previous layer connected to the input; and w is the weight between the neuron at position (p, q, r) on the j-th feature map after convolution and the m-th feature map before convolution.
For example, the first convolution is applied to the feature maps with two 3D convolution kernels of size 7*7*3 (7*7 is the pixel size of the convolution window, and 3 means each convolution spans 3 frames) and a convolution stride of 1.
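Applying the feature-map count and size formulas of the text to this 7*7*3 example, with an assumed input clip of 7 frames of 60*40 pixels (the input size is an illustrative assumption, not from the patent):

```python
def n_feature_maps(n_original_frames):
    # number of convolution feature maps = original count - 3 + 1
    # (the kernel spans 3 frames)
    return n_original_frames - 3 + 1

def conv_size(original, kernel, stride=1):
    # feature map size = [(original size - kernel size) / stride] + 1
    return (original - kernel) // stride + 1

frames, height, width = 7, 60, 40   # assumed input clip
print(n_feature_maps(frames))                      # 5 feature maps per kernel
print(conv_size(height, 7), conv_size(width, 7))   # 54 34
```

With two kernels, each produces its own series of 5 feature maps of 54*34 pixels, matching the "multiple series" described next.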
After the first convolution, multiple series are obtained, each with its corresponding first-convolution feature maps; the number and image size of the first-convolution feature maps of each series are then computed;
The number of convolution feature maps obtained after convolution changes together with the image size; it is computed as follows:
number of convolution feature maps = number of original feature maps - 3 + 1;
The image size of a convolution feature map is computed as follows:
convolution feature map size = [(original feature map size - 3D convolution kernel size) / convolution stride] + 1.
After pooling and down-sampling are applied to the first-convolution feature maps, their image size changes while their number remains unchanged.
After the first pooling and down-sampling, the image size of the first-convolution feature maps is computed as follows:
image size of the first-convolution feature maps after the change = image size of the first-convolution feature maps before pooling / pooling size.
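A worked example of the pooling-size formula, using an assumed 2*2 pooling size on assumed 54*34 first-convolution feature maps (both values are illustrative, not from the patent):

```python
def pooled_size(size, pool):
    # image size after pooling = image size before pooling / pooling size
    return size // pool

height, width, n_maps = 54, 34, 5   # assumed first-convolution feature maps
pooled = (pooled_size(height, 2), pooled_size(width, 2))
print(pooled, n_maps)  # (27, 17) 5 -- size halves, count stays the same
```

This shows the property stated above: pooling shrinks each feature map but leaves the number of feature maps unchanged.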
The technical principle of the present invention has been described above with reference to specific embodiments. These descriptions are merely intended to explain the principle of the present invention and shall not be construed in any way as limiting its scope of protection. Based on the explanations herein, those skilled in the art can conceive of other specific embodiments of the present invention without creative labor, and all such embodiments fall within the scope of protection of the present invention.

Claims (10)

1. A method for removing occlusion in range hood smoke recognition, characterized in that the specific steps are as follows:
Step A: build a generator network and a discriminator network;
Step B: input the images captured by the range hood camera into the generator network for de-occlusion processing, obtaining images free of oil-stain occlusion;
Step C: input the de-occluded images obtained in step B into the discriminator network for judgment, and output a real/fake label.
2. The method for removing occlusion in range hood smoke recognition according to claim 1, characterized in that:
in step B, the images captured by the range hood camera are divided into oil-stain-free pictures and oil-stain-occluded pictures, and both sets are split into a training set and a test set at a ratio of 4:1;
the training set is used to train the generator network and the discriminator network;
the test set is used as the input of the generator network for de-occlusion processing.
3. The method for removing occlusion in range hood smoke recognition according to claim 1, characterized in that:
the discriminator network is cascaded onto the generator network, which specifically includes:
fixing the parameters of the generator network while training the discriminator network;
fixing the parameters of the discriminator network while training the generator network.
4. The method for removing occlusion in range hood smoke recognition according to claim 3, characterized in that:
in step A, the specific steps of building the generator network and the discriminator network are as follows:
Step A1: apply image preprocessing to the oil-stain-occluded pictures and the oil-stain-free pictures respectively to obtain feature maps;
Step A2: apply a first convolution to the feature maps obtained in step A1 to obtain first-convolution feature maps, then output them;
Step A3: apply a first pooling and down-sampling to the first-convolution feature maps;
Step A4: apply a second convolution to the first-convolution feature maps after the first pooling and down-sampling in step A3, obtaining second-convolution feature maps;
Step A5: apply a second pooling and down-sampling to the second-convolution feature maps;
Step A6: build the generator network and the discriminator network according to step A5.
5. The method for removing occlusion in range hood smoke recognition according to claim 2, characterized in that:
the specific steps of training the generator network and the discriminator network are as follows:
Step 1: compute the state and activation value of each layer of the multilayer perceptrons in the generator network and the discriminator network, up to the last layer;
Step 2: compute the error of each layer of the multilayer perceptrons in the generator network and the discriminator network;
Step 3: update the weight parameters.
6. The method for removing occlusion in range hood smoke recognition according to claim 4, characterized in that:
in step A2, the feature maps are convolved with multiple convolution kernels respectively, and the first convolution is computed as follows:
v_ij^(xyz) = tanh( b_ij + Σ_m Σ_{p=0..P-1} Σ_{q=0..Q-1} Σ_{r=0..R-1} w_ijm^(pqr) · v_(i-1)m^((x+p)(y+q)(z+r)) )
where: v is the input before convolution; b is a bias; the convolution kernel size is P*Q*R; m is the index of the feature maps of the previous layer connected to the input; and w is the weight between the neuron at position (p, q, r) on the j-th feature map after convolution and the m-th feature map before convolution.
7. The method for removing occlusion in range hood smoke recognition according to claim 4, characterized in that:
the number of convolution feature maps obtained after convolution changes together with the image size, and is computed as follows:
number of convolution feature maps = number of original feature maps - 3 + 1.
8. The method for removing occlusion in range hood smoke recognition according to claim 7, characterized in that:
the image size of a convolution feature map is computed as follows:
convolution feature map size = [(original feature map size - 3D convolution kernel size) / convolution stride] + 1.
9. The method for removing occlusion in range hood smoke recognition according to claim 4, characterized in that:
after pooling and down-sampling are applied to the first-convolution feature maps, their image size changes while their number remains unchanged.
10. The method for removing occlusion in range hood smoke recognition according to claim 9, characterized in that:
after the first pooling and down-sampling, the image size of the first-convolution feature maps is computed as follows:
image size of the first-convolution feature maps after the change = image size of the first-convolution feature maps before pooling / pooling size.
CN201910178201.3A 2019-03-08 2019-03-08 Method for removing occlusion in range hood smoke recognition Pending CN110046549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910178201.3A CN110046549A (en) 2019-03-08 2019-03-08 Method for removing occlusion in range hood smoke recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910178201.3A CN110046549A (en) 2019-03-08 2019-03-08 Method for removing occlusion in range hood smoke recognition

Publications (1)

Publication Number Publication Date
CN110046549A true CN110046549A (en) 2019-07-23

Family

ID=67274605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910178201.3A Pending CN110046549A (en) 2019-03-08 2019-03-08 Method for removing occlusion in range hood smoke recognition

Country Status (1)

Country Link
CN (1) CN110046549A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532897A (en) * 2019-08-07 2019-12-03 北京科技大学 The method and apparatus of components image recognition
CN113160156A (en) * 2021-04-12 2021-07-23 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, household appliance and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145908A (en) * 2017-05-08 2017-09-08 江南大学 A kind of small target detecting method based on R FCN
CN108875511A (en) * 2017-12-01 2018-11-23 北京迈格威科技有限公司 Method, apparatus, system and the computer storage medium that image generates
CN109028232A (en) * 2018-09-29 2018-12-18 佛山市云米电器科技有限公司 A kind of band moves the kitchen ventilator and oil smoke concentration detection method of vision detection system
CN109359559A (en) * 2018-09-27 2019-02-19 天津师范大学 A kind of recognition methods again of the pedestrian based on dynamic barriers sample

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145908A (en) * 2017-05-08 2017-09-08 江南大学 A kind of small target detecting method based on R FCN
CN108875511A (en) * 2017-12-01 2018-11-23 北京迈格威科技有限公司 Method, apparatus, system and the computer storage medium that image generates
CN109359559A (en) * 2018-09-27 2019-02-19 天津师范大学 A kind of recognition methods again of the pedestrian based on dynamic barriers sample
CN109028232A (en) * 2018-09-29 2018-12-18 佛山市云米电器科技有限公司 A kind of band moves the kitchen ventilator and oil smoke concentration detection method of vision detection system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LAVI_QQ_2910138025: "Understanding 3D CNNs and 3D convolution", HTTPS://BLOG.CSDN.NET/LIUWEIYUXIANG/ARTICLE/DETAILS/84202352 *
RUI QIAN et al.: "Attentive Generative Adversarial Network for Raindrop Removal from A Single Image", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
SHUIWANG JI et al.: "3D Convolutional Neural Networks for Human Action Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence *
时光杂货铺: "A brief overview of generative adversarial networks (GAN)", HTTPS://BLOG.CSDN.NET/XG123321123/ARTICLE/DETAILS/78034859 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532897A (en) * 2019-08-07 2019-12-03 北京科技大学 The method and apparatus of components image recognition
CN113160156A (en) * 2021-04-12 2021-07-23 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, household appliance and storage medium

Similar Documents

Publication Publication Date Title
CN105574827B (en) A kind of method, apparatus of image defogging
CN109285129A (en) Image real noise based on convolutional neural networks removes system
CN110991281A (en) Dynamic face recognition method
CN103544681B (en) The restoration methods of non-homogeneous motion blur image
CN110046549A (en) Occlusion method is removed in a kind of identification of kitchen ventilator smog
WO2023040462A1 (en) Image dehazing method, apparatus and device
CN111724372A (en) Method, terminal and storage medium for detecting cloth defects based on antagonistic neural network
KR20110071213A (en) Apparatus and method for 3d face avatar reconstruction using stereo vision and face detection unit
CN112767279B (en) Underwater image enhancement method for generating countermeasure network based on discrete wavelet integration
CN106127696A (en) A kind of image based on BP neutral net matching sports ground removes method for reflection
CN104035557A (en) Kinect action identification method based on joint activeness
CN101697056A (en) Intelligent shooting projection system with frame self-adaptive function and projection method thereof
CN108765330A (en) Image de-noising method and device based on the joint constraint of global and local priori
CN107133590A (en) A kind of identification system based on facial image
CN108846837A (en) Body surface defect inspection method and device
CN110826402A (en) Multi-task-based face quality estimation method
CN106228515A (en) A kind of image de-noising method and device
CN108922617B (en) Autism auxiliary diagnosis method based on neural network
CN105701496B (en) A kind of go disk recognition methods based on artificial intelligence technology
CN105763814B (en) The method and device of night scene shooting
CN107833193A (en) A kind of simple lens global image restored method based on refinement network deep learning models
CN113856186A (en) Pull-up action judging and counting method, system and device
Yang et al. Image dehazing using bilinear composition loss function
CN109636746A (en) Picture noise removes system, method and apparatus
CN113376172A (en) Welding seam defect detection system based on vision and eddy current and detection method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190723
