CN112732962B - Online real-time garbage picture category prediction method based on deep learning and Flink - Google Patents

Online real-time garbage picture category prediction method based on deep learning and Flink Download PDF

Info

Publication number
CN112732962B
Authority
CN
China
Prior art keywords
picture
garbage
identified
classification model
flink
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110035236.9A
Other languages
Chinese (zh)
Other versions
CN112732962A (en)
Inventor
柏文阳
骆振源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202110035236.9A priority Critical patent/CN112732962B/en
Publication of CN112732962A publication Critical patent/CN112732962A/en
Application granted granted Critical
Publication of CN112732962B publication Critical patent/CN112732962B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method for predicting garbage picture categories online in real time based on deep learning and Flink, which comprises the following steps: step 1, transmitting the garbage picture to be identified to the object cloud storage OSS of a server; step 2, outputting the garbage picture to be identified from the object cloud storage OSS to the message queue service Kafka for queue caching; step 3, outputting the garbage picture to be identified cached in Kafka to Flink; step 4, preprocessing the garbage picture to be identified with the big data engine Flink; step 5, loading the trained picture classification model file and the file mapping subscripts to picture categories with the big data engine Flink, and identifying the picture category of the garbage picture to be identified; and step 6, after the big data engine Flink identifies the picture category, outputting the corresponding picture category. The method provided by the application has strong robustness, offers massive processing capability, is not limited by picture specifications, and can accurately predict the category of a garbage picture for the user.

Description

Online real-time garbage picture category prediction method based on deep learning and Flink
Technical Field
The application relates to the technical field of deep learning and Flink, and in particular to a method for predicting garbage picture categories online in real time based on deep learning and Flink.
Background
Deep learning (DL) learns the inherent rules and representation hierarchies of sample data, and the information obtained during learning greatly helps the interpretation of data such as text, images, and sound. Its ultimate goal is to give machines analytical learning ability like that of a person, able to recognize text, image, and sound data. Deep learning is a complex machine learning technique whose results in speech and image recognition far exceed those of the prior art.
Apache Flink is an open-source stream processing framework developed by the Apache Software Foundation, whose core is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner, and its pipelined runtime system can execute both batch and stream processing programs. Furthermore, the Flink runtime itself supports the execution of iterative algorithms. In-memory processing and pipelining, applied to online real-time scenarios, give the service real-time processing capability.
Under the current implementation of garbage sorting, ordinary people do not have the spare time to learn the classification of every kind of garbage, so garbage to be disposed of is often discarded at random: sorted disposal is not well realized, the workload of sanitation workers increases, and the environment is polluted. Conventional garbage picture category recognition systems have the following disadvantages: 1. poor adaptability: once the target image is polluted by strong noise or has large defects, an ideal result is often not obtained; 2. limited processing capacity: massive picture category recognition tasks cannot be processed in a short time; 3. restricted picture specifications: a limit is placed on picture size, so users are generally required to process the picture data in advance; 4. low accuracy and a limited set of predicted categories. There is no online real-time recognition system on the market for garbage pictures of various types.
Disclosure of Invention
The application aims to: address the defects of the prior art by providing a method for predicting garbage picture categories online in real time based on deep learning and Flink, which uses a deep learning picture classification model together with the massive real-time processing capability of the big data engine Flink to handle users' garbage category identification requests in real time.
To solve the above technical problems, the application discloses a method for predicting garbage picture categories online in real time based on deep learning and Flink, comprising the following specific steps:
step 1, transmitting the garbage picture information to be identified to the object cloud storage OSS (Object Storage Service) of a server, wherein the garbage picture information to be identified comprises the id of the garbage picture to be identified and the garbage picture itself;
step 2, outputting the garbage picture information to be identified from the object cloud storage OSS to the message queue service Kafka for queue caching;
step 3, outputting the garbage picture information to be identified cached in the message queue service Kafka to the big data engine Flink, wherein Kafka acts as a data buffer, preventing massive data from flooding into the Flink cluster at once and overwhelming the Flink cluster servers;
step 4, preprocessing the garbage picture information to be identified with the big data engine Flink, the preprocessing comprising picture format conversion, picture resizing, and conversion into CHW channel data;
step 5, loading the trained picture classification model and the file mapping subscripts to picture categories with the big data engine Flink and applying them to picture category identification: the preprocessed picture channel data from step 4 is input into the picture classification model, which outputs the corresponding picture category index, and the picture category is output using the correspondence between index and picture category;
and step 6, after the big data engine Flink identifies the picture category, outputting the id of the garbage picture to be identified and the corresponding picture category.
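The six steps above can be sketched as a minimal dataflow. This is a hypothetical stand-in, not the patented implementation: an in-process queue plays the buffering role the method assigns to Kafka, the pixel data and category names are invented, and `classify` fakes a softmax output instead of loading the trained model file.

```python
import queue

# Plays the buffering role the method assigns to Kafka (steps 2-3):
# it decouples picture ingestion from the downstream computation.
buffer = queue.Queue()
buffer.put(("img-001", [10, 200, 30]))  # (picture id, stand-in pixel data)

def preprocess(record):
    # Step 4 stand-in for format conversion / resizing / channel reordering
    pic_id, pixels = record
    return pic_id, [p / 255.0 for p in pixels]

def classify(features, index_to_class):
    # Step 5 stand-in: a real system feeds the HWC array to the loaded model
    # and takes the argmax of its softmax output; faked here with a constant.
    probs = [0.1, 0.7, 0.15, 0.05]
    return index_to_class[probs.index(max(probs))]

index_to_class = {0: "recyclable", 1: "hazardous", 2: "kitchen", 3: "other"}
while not buffer.empty():
    pic_id, feats = preprocess(buffer.get())
    print(pic_id, classify(feats, index_to_class))  # step 6: emit id + category
```

In the real pipeline the queue is a Kafka topic, `preprocess` and `classify` run as Flink operators, and the index-to-category mapping comes from the correspondence file described in step 5.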
In one implementation, step 4 includes:
step 4-1, converting the garbage picture information to be identified into a bytecode array and generating a BufferedImage object with ImageIO's read function for loading into memory;
step 4-2, loading the bytecode array of the garbage picture to be identified into memory and handing the BufferedImage object to a ColorProcessor for processing;
step 4-3, converting the picture object into the picture's RGB (Red-Green-Blue) format with the ColorProcessor's convertToRGB method;
step 4-4, resizing the picture object converted into RGB format to 331×331 with the ColorProcessor's size function; the size 331×331 matches the data input format of the picture classification model;
step 4-5, obtaining the CHW (Channel-Height-Width) channel arrays of the resized picture object: the number of channels is obtained through the ColorProcessor's getNChannels function, the array of each channel is then obtained, and the three CHW channel arrays are combined;
step 4-6, converting the CHW channel array into an HWC channel array by adjusting the encoding order of the CHW channel array; the conversion into an HWC channel array matches the data input format of the picture classification model;
and step 4-7, outputting the id of the garbage picture to be identified and the HWC channel array.
Step 4 preprocesses the garbage picture information to be identified without restriction on picture specifications: no matter how large the picture is, it can be preprocessed for the user at the millisecond level without manual preprocessing, and the big data engine Flink preprocesses a picture within 500 ms.
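The channel handling in steps 4-5 and 4-6 amounts to an axis reorder. A minimal NumPy sketch under stated assumptions (the 331×331 size and three RGB channels follow the text; the pixel values are dummies):

```python
import numpy as np

# Three 331x331 channel planes, as assembled in step 4-5 (CHW order)
chw = np.zeros((3, 331, 331), dtype=np.float32)
chw[0].fill(1.0)  # R plane, dummy values

# Step 4-6: reorder CHW -> HWC to match the model's input format
hwc = np.transpose(chw, (1, 2, 0))

assert hwc.shape == (331, 331, 3)
assert hwc[0, 0, 0] == 1.0  # the R value now sits in the last axis
```

The pipeline in the text does this with plain Java arrays; the transpose expresses the same reordering in one call.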
In one implementation, step 5 includes:
step 5-1, loading a pre-trained picture classification model file and a corresponding relation file of a subscript and a picture class by a big data engine Flink;
step 5-2, inputting the id and HWC channel array of the garbage picture to be identified in the step 4-7 into a picture classification model, carrying out category prediction on the garbage picture to be identified, and outputting a subscript;
step 5-3, matching the picture category corresponding to the subscript from the correspondence file between subscripts and picture categories.
in one implementation manner, the trained picture classification model file in step 5-1 is a picture classification model file stored with the picture classification model after the training of the picture classification model is completed.
In one implementation, before training the picture classification model, a pre-training picture dataset needs to be data pre-processed, including:
step 5-1-1, adjusting the picture sizes in the pre-training picture data set by randomly scaling each picture so that its shorter side falls in the range [360, 480], obtaining a scaled picture;
step 5-1-2, randomly flipping the scaled picture horizontally and vertically and randomly cropping it to 331×331; flipping and cropping the same picture in the pre-training picture data set yields 32 preliminary preprocessed pictures for pre-training;
step 5-1-3, normalizing the obtained preliminary preprocessed pictures to obtain preprocessed pictures, which are input into the picture classification model for training.
The data preprocessing of the pre-training picture data set eliminates irrelevant information in the images, improves the accuracy of picture classification, and simplifies the data as much as possible.
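A sketch of the flip-and-crop augmentation of steps 5-1-1 and 5-1-2, assuming the input has already been scaled so its shorter side lies in [360, 480] (the random rescaling itself is omitted); the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img_hwc, crop=331, n_copies=32):
    """Step 5-1-2 sketch: random flips plus random 331x331 crops, 32 copies."""
    h, w, _ = img_hwc.shape
    out = []
    for _ in range(n_copies):
        a = img_hwc[::-1] if rng.random() < 0.5 else img_hwc   # vertical flip
        a = a[:, ::-1] if rng.random() < 0.5 else a            # horizontal flip
        y = rng.integers(0, h - crop + 1)                      # random crop origin
        x = rng.integers(0, w - crop + 1)
        out.append(a[y:y + crop, x:x + crop])
    return out

# A dummy already-scaled picture whose shorter side (360) is in [360, 480]
crops = augment(np.zeros((400, 360, 3), dtype=np.float32))
assert len(crops) == 32 and all(c.shape == (331, 331, 3) for c in crops)
```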
In one implementation, the normalization of the preliminary preprocessed pictures in step 5-1-3 obtains the data of each channel of the preprocessed picture's HWC array and subtracts the mean of each channel. This normalization scales the features of the picture data to the same value range and distributes the values around 0, reducing picture noise, making it easier for the picture classification model to converge correctly to an optimal solution, and improving picture classification accuracy.
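The per-channel mean subtraction described here can be written in one NumPy line; the sample array is a dummy:

```python
import numpy as np

hwc = np.arange(2 * 2 * 3, dtype=np.float32).reshape(2, 2, 3)  # dummy HWC picture

# Subtract each channel's own mean, centering the data around 0
centered = hwc - hwc.mean(axis=(0, 1), keepdims=True)

# Every channel of the result now has (numerically) zero mean
assert np.allclose(centered.mean(axis=(0, 1)), 0.0)
```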
In one implementation, the NASNet model is selected as the overall structure of the picture classification model: the original output layer of the NASNet model is removed, and a global average pooling layer, a relu activation, a dropout layer, a softmax activation, and a new picture classification model output layer are added, with the number of outputs of the new output layer equal to the number of garbage picture categories so as to match garbage picture classification;
the ImageNet weights are selected, and every layer of the neural network is set as trainable;
the Adam optimizer and the categorical cross-entropy loss function are selected, the picture classification model is trained with a learning rate of 1e-5, and accuracy is used as the evaluation index.
The picture classification model modifies the structure of the existing NASNet picture classification model so that its output matches the garbage picture categories, and it reuses the existing ImageNet weights, which greatly reduces training time during pre-training and lets the model converge quickly. Once the picture classification model is trained, even if a garbage picture to be identified is polluted by strong noise or has large defects, the model can infer the category of the current picture from its features.
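The described head replacement maps naturally onto a Keras sketch. This is an assumption-laden reconstruction, not the patented implementation: the Dense width (1024), dropout rate (0.5), and class count (4) are invented, and `weights=None` stands in for the ImageNet weights the text actually uses, to avoid a download here.

```python
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical number of garbage categories

# NASNet backbone without its original output layer (include_top=False);
# the text uses ImageNet weights, replaced by None in this sketch.
base = tf.keras.applications.NASNetLarge(
    input_shape=(331, 331, 3), include_top=False, weights=None)
base.trainable = True  # every layer set trainable, as described

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)   # added pooling layer
x = tf.keras.layers.Dense(1024, activation="relu")(x)       # hypothetical width
x = tf.keras.layers.Dropout(0.5)(x)                         # hypothetical rate
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Note that NASNetLarge's native input size is 331×331, which is consistent with the resize target used throughout the preprocessing steps.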
In one implementation manner, the garbage picture to be identified in the step 1 is obtained by shooting with a shooting device, and the shooting device is not limited to a mobile phone and a camera; the resolution of the garbage picture to be identified is not limited.
The beneficial effects are that:
1. The method provided by the application has strong robustness: even if the target image is polluted by strong noise or has large defects, the neural network for image recognition can infer the category of the current picture from the image features. It has massive processing capability: with in-memory processing and a cluster, Flink can sustain processing tens of millions of pictures at the second level. It is not limited by picture specifications: no matter the size of the picture, it can be preprocessed for the user at the millisecond level without manual preprocessing. Its accuracy is high: with deep learning, the accuracy of picture category identification reaches 93%, so picture categories can be accurately predicted for the user.
2. Using picture category identification from the field of deep learning together with Flink's massive data processing capability, the application can accurately predict the category of every garbage picture uploaded by a user. Providing garbage picture classification online in real time makes it convenient for the public to dispose of garbage and for sanitation workers to sort it, reducing the pollution of harmful garbage to the environment.
Drawings
The foregoing and other advantages of the application will become more apparent from the following detailed description of the application when taken in conjunction with the accompanying drawings and detailed description.
FIG. 1 is a flow chart of an overall system for identifying on-line real-time garbage pictures;
FIG. 2 is a flow chart of real-time processing and predicting picture categories on a Flink line;
FIG. 3 is a flow chart of a picture classification model picture preprocessing;
FIG. 4 is a picture classification model training flow diagram;
fig. 5 is a flow chart of a method provided by the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 5, a flowchart of a method provided by the present application includes:
step 1, transmitting the garbage picture information to be identified to the object cloud storage OSS of a server, wherein the garbage picture information to be identified comprises the id of the garbage picture to be identified and the garbage picture itself; in this embodiment, the garbage picture to be identified is obtained by shooting with a shooting device, which is not limited to a mobile phone or camera, and the resolution of the garbage picture to be identified is not limited.
Step 2, outputting the garbage picture information to be identified from the object cloud storage OSS to the message queue service Kafka for queue caching;
step 3, outputting the garbage picture information to be identified cached in the message queue service Kafka to Flink;
step 4, preprocessing the garbage picture information to be identified with the big data engine Flink, the preprocessing comprising picture format conversion, picture resizing, and conversion into CHW channel data;
step 5, loading the trained picture classification model and the file mapping subscripts to picture categories with the big data engine Flink and applying them to picture category identification: the preprocessed picture channel data from step 4 is input into the picture classification model, which outputs the corresponding picture category index, and the picture category is output using the correspondence between index and picture category;
and step 6, after the big data engine Flink identifies the picture category, outputting the id of the garbage picture to be identified and the corresponding picture category.
In this embodiment, please refer to fig. 1, a flowchart of the overall online real-time garbage picture recognition system: a user shoots the garbage to be disposed of with a camera device such as a mobile phone and uploads the picture to the object cloud storage OSS of the server, which outputs it to the message queue service Kafka; the Flink big data cluster performs picture preprocessing, loads the picture classification model and the subscript-to-category correspondence, predicts and outputs the picture category, and returns the category of the garbage picture, helping the user complete garbage sorting.
In this embodiment, please refer to fig. 2 for the flow of the garbage picture to be identified entering the Flink big data cluster. Step 4 comprises:
step 4-1, loading the garbage picture to be identified into a picture byte array in the Flink Source component and outputting the byte array to a FlatMap function for picture preprocessing; during preprocessing, the garbage picture information to be identified is converted into a bytecode array, and a BufferedImage object is generated with ImageIO's read function for loading into memory;
step 4-2, loading the bytecode array of the garbage picture to be identified into memory and handing the BufferedImage object to a ColorProcessor for processing;
step 4-3, converting the picture into RGB format: the picture object is converted into the picture's RGB format with the ColorProcessor's convertToRGB method;
step 4-4, resizing the picture object converted into RGB format to 331×331 with the ColorProcessor's size method;
step 4-5, obtaining the CHW channel arrays of the resized picture object: the number of channels is obtained through the ColorProcessor's getNChannels method, the array of each channel is then obtained, and the three CHW channel arrays are combined;
step 4-6, converting the CHW channel array into an HWC channel array by adjusting the encoding order of the CHW channel array;
and step 4-7, outputting the id of the garbage picture to be identified and the HWC channel array to the model FlatMap.
In this embodiment, step 5 includes:
step 5-1, loading a pre-trained picture classification model file and a corresponding relation file of the subscript and the picture class in model prediction;
step 5-2, inputting the image id and HWC channel array in the step 4-7 into a picture classification model, carrying out class prediction on the garbage picture to be identified, and outputting a subscript;
and 5-3, matching the picture category corresponding to the subscript from the corresponding relation file of the subscript and the picture category.
In this embodiment, the trained picture classification model file in step 5-1 is the file in which the picture classification model is stored after its training is completed.
In this embodiment, please refer to fig. 3, which is a picture preprocessing flow of a picture classification model, including:
step 5-1-1, adjusting the picture sizes in the pre-training picture data set by randomly scaling each picture so that its shorter side falls in the range [360, 480], obtaining a scaled picture;
step 5-1-2, randomly flipping the scaled picture horizontally and vertically and randomly cropping it to 331×331; flipping and cropping the same picture in the pre-training picture data set yields 32 preliminary preprocessed pictures for pre-training;
step 5-1-3, normalizing the obtained preliminary preprocessed pictures to obtain preprocessed pictures, which are input into the picture classification model for training.
In this embodiment, the normalization of the preliminary preprocessed pictures in step 5-1-3 obtains the data of each channel of the preprocessed picture's HWC array and subtracts the mean of each channel.
In this embodiment, please refer to fig. 4 for the training flow of the picture classification model. After the pictures are processed as shown in fig. 3, the NASNet model is selected as the overall structure of the picture classification model: the original output layer of the NASNet model is removed, and a global average pooling layer, a relu activation, a dropout layer, a softmax activation, and a new output layer are added, with the number of outputs of the new output layer equal to the number of garbage picture categories; the ImageNet weights are selected, and every layer of the neural network is set as trainable; the Adam optimizer and the categorical cross-entropy loss function are selected, the picture classification model is trained with a learning rate of 1e-5, and accuracy is used as the evaluation index. After training is completed, the model is stored in a model file, which is provided to Flink for online real-time prediction.
The application provides a method for predicting garbage picture categories online in real time based on deep learning and Flink. There are many methods and ways to implement this technical scheme, and the above description is only a preferred embodiment of the application. It should be noted that a person skilled in the art can make a number of improvements and modifications without departing from the principle of the application, and these improvements and modifications should also be regarded as within the protection scope of the application. Components not explicitly described in this embodiment can be implemented with the prior art.

Claims (5)

1. The method for predicting the garbage picture category in real time on line based on deep learning and Flink is characterized by comprising the following steps:
step 1, transmitting the garbage picture information to be identified to an object cloud storage OSS of a server, wherein the garbage picture information to be identified comprises the id of the garbage picture to be identified and the garbage picture itself;
step 2, outputting the garbage picture information to be identified in the object cloud storage OSS to a message queue service Kafka for queue caching;
step 3, outputting the garbage picture information to be identified cached in the message queue service Kafka to a big data engine Flink;
step 4, preprocessing the information of the garbage picture to be identified by the big data engine Flink, and outputting the id and HWC channel array of the garbage picture to be identified;
step 5, loading a trained picture classification model file and a corresponding relation file of a subscript and a picture class by a big data engine Flink; identifying the picture type of the garbage picture to be identified;
step 6, after the big data engine Flink recognizes the picture category, outputting the id of the garbage picture to be recognized and the corresponding picture category;
step 5 comprises the following steps:
step 5-1, loading a pre-trained picture classification model file and a corresponding relation file of a subscript and a picture class by a big data engine Flink;
step 5-2, inputting the id and HWC channel array of the garbage picture to be identified in the step 4 into a picture classification model, carrying out category prediction on the garbage picture to be identified, and outputting a subscript;
step 5-3, matching the picture category corresponding to the subscript from the corresponding relation file of the subscript and the picture category;
after training of the picture classification model in step 5-1 is completed, the picture classification model is stored in the picture classification model file;
the NASNet model is selected as the overall structure of the picture classification model, the original output layer of the NASNet model is removed, and on this basis a global average pooling layer is added, a relu function is added, a dropout function is added, a softmax function is added, and a picture classification model output layer is added, the number of outputs of the picture classification model output layer being consistent with the number of garbage picture categories;
selecting weight information of an ImageNet model, and setting each layer of neural network as trainable;
an Adam optimizer is selected, a categorical cross-entropy loss function is selected, the picture classification model is trained with a learning rate of 1e-5, and accuracy is used as the evaluation index.
2. The method for online real-time garbage picture category prediction based on deep learning and Flink according to claim 1, wherein step 4 comprises the steps of:
step 4-1, converting the junk picture information to be identified into a byte code array;
step 4-2, loading the garbage picture byte code array to be identified into a memory;
step 4-3, converting the garbage picture object to be identified into an RGB format;
step 4-4, resetting the size of the picture object converted into the RGB format to 331 x 331;
step 4-5, acquiring a CHW channel array of the picture object with the reset size;
step 4-6, converting the CHW channel array into a HWC channel array;
and 4-7, outputting the id and HWC channel array of the garbage picture to be identified.
3. The method for online real-time garbage picture category prediction based on deep learning and Flink according to claim 2, wherein the pre-training picture data set needs to undergo data preprocessing before the picture classification model is trained, comprising:
step 5-1-1, adjusting the picture sizes in the pre-training picture data set by randomly scaling each picture so that its shorter side falls in the range [360, 480], obtaining a scaled picture;
step 5-1-2, randomly flipping the scaled picture horizontally and vertically and randomly cropping the flipped picture to 331×331; flipping and cropping the same picture in the pre-training picture data set yields 32 preliminary preprocessed pictures;
step 5-1-3, carrying out normalization processing on the obtained preliminary pretreatment picture to obtain a pretreatment picture; and inputting the preprocessed pictures into the picture classification model for training.
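The augmentation of steps 5-1-1 and 5-1-2 can be sketched as follows; nearest-neighbour resizing and the 50% flip probabilities are illustrative assumptions (the claim does not fix the interpolation method or flip probability):

```python
import numpy as np

rng = np.random.default_rng(42)

def resize_nearest(img, new_h, new_w):
    # nearest-neighbour resize; enough for a sketch (real code would interpolate)
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def augment(img, n_crops=32, crop=331, short_range=(360, 480)):
    """Steps 5-1-1 and 5-1-2: randomly scale the shorter side into [360, 480],
    flip horizontally/vertically, and random-crop to 331x331, repeated
    n_crops times for the same source picture."""
    out = []
    for _ in range(n_crops):
        h, w = img.shape[:2]
        short = rng.integers(short_range[0], short_range[1] + 1)
        scale = short / min(h, w)
        scaled = resize_nearest(img, round(h * scale), round(w * scale))
        if rng.random() < 0.5:
            scaled = scaled[:, ::-1]          # horizontal flip
        if rng.random() < 0.5:
            scaled = scaled[::-1, :]          # vertical flip
        sh, sw = scaled.shape[:2]
        top = rng.integers(0, sh - crop + 1)  # crop always fits: shorter side >= 360 > 331
        left = rng.integers(0, sw - crop + 1)
        out.append(scaled[top:top + crop, left:left + crop])
    return out

crops = augment(np.zeros((400, 600, 3), dtype=np.uint8))
```

Because the shorter side is always scaled to at least 360 pixels, a 331 x 331 crop fits in every augmented picture.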
4. The method for online real-time garbage picture category prediction based on deep learning and Flink according to claim 3, wherein normalizing the obtained preliminary preprocessed pictures in step 5-1-3 comprises: for the data of each channel of the HWC picture, subtracting the mean value of that channel.
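The per-channel mean subtraction of step 5-1-3 can be sketched as below; it is assumed here that the mean is computed per picture (a dataset-wide per-channel mean would be subtracted the same way):

```python
import numpy as np

def normalize(hwc):
    """Step 5-1-3: subtract from each channel of the HWC picture
    the mean value of that channel."""
    img = hwc.astype(np.float64)
    means = img.mean(axis=(0, 1))     # one mean per channel (R, G, B)
    return img - means                # broadcasts over H and W

pre = normalize(np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3))
```

After normalization each channel of the preprocessed picture has zero mean, which centres the training inputs.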
5. The method for online real-time garbage picture category prediction based on deep learning and Flink according to claim 1, wherein the garbage picture to be identified in step 1 is obtained by shooting with a shooting device, the shooting device comprising a mobile phone and a camera.
CN202110035236.9A 2021-01-12 2021-01-12 Online real-time garbage picture category prediction method based on deep learning and Flink Active CN112732962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110035236.9A CN112732962B (en) 2021-01-12 2021-01-12 Online real-time garbage picture category prediction method based on deep learning and Flink

Publications (2)

Publication Number Publication Date
CN112732962A CN112732962A (en) 2021-04-30
CN112732962B true CN112732962B (en) 2023-10-13

Family

ID=75590365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110035236.9A Active CN112732962B (en) 2021-01-12 2021-01-12 Online real-time garbage picture category prediction method based on deep learning and Flink

Country Status (1)

Country Link
CN (1) CN112732962B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106115104A (en) * 2016-06-27 2016-11-16 湖南现代环境科技股份有限公司 Categorized consumer waste collecting and transferring system based on Internet of Things, devices and methods therefor
CN109493604A (en) * 2018-11-30 2019-03-19 平安科技(深圳)有限公司 Utilize the traffic control method, apparatus and computer equipment of big data
CN109994183A (en) * 2019-02-15 2019-07-09 忻州师范学院 A kind of intellectual analysis personal health condition and dietotherapy medical system
CN110163233A (en) * 2018-02-11 2019-08-23 陕西爱尚物联科技有限公司 A method of so that machine is competent at more complex works
CN110162556A (en) * 2018-02-11 2019-08-23 陕西爱尚物联科技有限公司 A kind of effective method for playing data value
CN110232336A (en) * 2019-05-28 2019-09-13 成都谷辘信息技术有限公司 A kind of deviation safety on line early warning system
WO2020040110A1 (en) * 2018-08-23 2020-02-27 荏原環境プラント株式会社 Information processing device, information processing program, and information processing method
CN110929760A (en) * 2019-10-30 2020-03-27 中国科学院自动化研究所南京人工智能芯片创新研究院 Garbage classification software based on computer vision
CN111126138A (en) * 2019-11-18 2020-05-08 施博凯 AI image recognition method for garbage classification
CN111259977A (en) * 2020-01-22 2020-06-09 浙江工业大学 Garbage classification device based on deep learning
CN111275599A (en) * 2020-02-03 2020-06-12 重庆特斯联智慧科技股份有限公司 Big data integration algorithm-based group rental house early warning method and device, storage medium and terminal
CN111352800A (en) * 2020-02-25 2020-06-30 京东数字科技控股有限公司 Big data cluster monitoring method and related equipment
CN111597173A (en) * 2020-04-02 2020-08-28 上海瀚之友信息技术服务有限公司 Data warehouse system
CN111738357A (en) * 2020-07-24 2020-10-02 完美世界(北京)软件科技发展有限公司 Junk picture identification method, device and equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Detecting Anomaly in Big Data System Logs Using Convolutional Neural Network; Siyang Lu et al.; https://ieeexplore.ieee.org/abstract/document/8511880; pp. 151-158 *
Garbage image classification algorithm based on convolutional neural network; Dong Ziyuan et al.; Computer Systems & Applications; pp. 199-204 *
XML update control method based on secure update views; Guo Jinkai et al.; Journal of Computer Applications; pp. 3409-3412 *

Similar Documents

Publication Publication Date Title
CN109891897B (en) Method for analyzing media content
CN110428820B (en) Chinese and English mixed speech recognition method and device
US20220121906A1 (en) Task-aware neural network architecture search
KR102027141B1 (en) A program coding system based on artificial intelligence through voice recognition and a method thereof
CN110149266B (en) Junk mail identification method and device
CN111680147A (en) Data processing method, device, equipment and readable storage medium
CN106778910B (en) Deep learning system and method based on local training
CN110956037B (en) Multimedia content repeated judgment method and device
CN111401374A (en) Model training method based on multiple tasks, character recognition method and device
CN113778871A (en) Mock testing method, device, equipment and storage medium
US20220124387A1 (en) Method for training bit rate decision model, and electronic device
CN110689359A (en) Method and device for dynamically updating model
CN111008329A (en) Page content recommendation method and device based on content classification
CN112732962B (en) Online real-time garbage picture category prediction method based on deep learning and Flink
CN112995690A (en) Live content item identification method and device, electronic equipment and readable storage medium
CN114241253A (en) Model training method, system, server and storage medium for illegal content identification
CN115587217A (en) Multi-terminal video detection model online retraining method
CN109871487B (en) News recall method and system
CN110659561A (en) Optimization method and device of internet riot and terrorist video identification model
CN117575894B (en) Image generation method, device, electronic equipment and computer readable storage medium
KR102326745B1 (en) Control method, device and system for login of platform service
CN111914068B (en) Method for extracting test question knowledge points
CN116778534B (en) Image processing method, device, equipment and medium
CN116959489B (en) Quantization method and device for voice model, server and storage medium
CN113762382B (en) Model training and scene recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant