CN117842923A - Control system and method of intelligent full-automatic oiling robot - Google Patents


Info

Publication number
CN117842923A
Authority
CN
China
Prior art keywords
image
gun
training
oiling
gradient histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410170055.0A
Other languages
Chinese (zh)
Inventor
施恒之
Current Assignee
Zhejiang Yikm Intelligent Technology Co ltd
Original Assignee
Zhejiang Yikm Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Yikm Intelligent Technology Co ltd
Priority to CN202410170055.0A
Publication of CN117842923A

Links

Landscapes

  • Image Analysis (AREA)

Abstract

A control system and method for an intelligent full-automatic oiling robot are disclosed. The system comprises a filling station positioning subsystem for determining the position of a filling station by means of a sensor group, a filling gun identification subsystem for identifying the type of a filling gun, a tank detection subsystem for detecting and analyzing the information of a tank, a filling action control subsystem for controlling a filling action, a communication management subsystem for communicating with the filling station, a data acquisition subsystem for acquiring filling data, and a data processing subsystem for processing the filling data and generating a filling report. In this way, intelligent fueling can be achieved.

Description

Control system and method of intelligent full-automatic oiling robot
Technical Field
The application relates to the field of automatic oiling, and more particularly, to a control system and method of an intelligent full-automatic oiling robot.
Background
With the development of the automobile industry, the number and scale of gas stations have continuously increased, which brings challenges to their management and operation. The traditional fueling mode requires manual operation, which is inefficient and poses potential safety hazards.
In order to improve the service quality and the safety of the gas station and reduce the labor cost, it is very necessary to develop a control system of an intelligent full-automatic oiling robot.
Disclosure of Invention
In view of this, the present application provides a control system and method of an intelligent full-automatic oiling robot, which can realize intelligent oiling.
According to an aspect of the present application, there is provided a control system of an intelligent full-automatic oiling robot, which includes:
a gas station positioning subsystem for determining the position of the gas station by means of a sensor group;
the fuel gun identification subsystem is used for identifying the type of the fuel gun;
the oil tank detection subsystem is used for detecting and analyzing information of the oil tank;
a fueling action control subsystem for controlling fueling actions;
a communication management subsystem for communicating with the gas station;
the data acquisition subsystem is used for acquiring fueling data; and
the data processing subsystem is used for processing the oiling data and generating an oiling report.
In the above-mentioned control system of the intelligent full-automatic oiling robot, the fueling gun identification subsystem includes:
the digital image acquisition module is used for acquiring the digital image of the oiling gun acquired by the camera;
the multidimensional image characterization module is used for extracting multidimensional image characterization information of the digital image of the oiling gun to obtain a multichannel characterization image of the oiling gun;
the oil gun image feature extraction module is used for extracting image features of the oil gun multichannel characterization image to obtain an oil gun image characterization feature map; and
the fuel gun type determining module is used for determining the type of the fuel gun based on the fuel gun image characterization feature map.
In the above-mentioned control system of the intelligent full-automatic oiling robot, the multidimensional image characterization module includes:
a direction gradient histogram calculation unit for calculating a direction gradient histogram of the digital image of the fueling gun;
a color gradient histogram calculation unit for calculating a color gradient histogram of the digital image of the fueling gun;
a position gradient histogram calculation unit for calculating a position gradient histogram of the digital image of the fueling gun; and
an aggregation unit for aggregating the fueling gun digital image, the direction gradient histogram of the fueling gun digital image, the color gradient histogram of the fueling gun digital image, and the position gradient histogram of the fueling gun digital image along the channel dimension to obtain the fueling gun multichannel characterization image.
In the above control system of the intelligent full-automatic oiling robot, the direction gradient histogram calculation unit includes:
the uniform dividing subunit is used for uniformly dividing the digital image of the oiling gun to obtain a plurality of cell spaces;
the gradient distribution generation subunit is used for calculating gradients of pixel points in each cell space in the plurality of cell spaces and generating a plurality of cell direction gradient histograms according to gradient distribution; and
a directional gradient histogram generation subunit for generating the directional gradient histogram based on the plurality of cell directional gradient histograms.
In the above control system of the intelligent full-automatic oiling robot, the color gradient histogram calculation unit includes:
an image conversion subunit, configured to convert the digital image of the fuel dispenser into a Lab color space map;
a color cell dividing subunit, configured to perform cell division on the Lab color space map to obtain a plurality of cells;
a color gradient distribution generation subunit, configured to calculate color gradients of the plurality of cells, and generate a plurality of cell color gradient histograms based on the color gradient distribution; and
a color gradient histogram generation subunit configured to generate the color gradient histogram based on the plurality of cell color gradient histograms.
In the above control system of the intelligent full-automatic oiling robot, the position gradient histogram calculation unit includes:
the position cell division subunit is used for carrying out cell division on the digital image of the oil gun so as to obtain a plurality of cells;
a center point calculating subunit, configured to calculate a center point of each cell to obtain a plurality of cell center points; and
the position gradient histogram generation subunit is used for calculating the relative position from the central point of each cell to the center of the digital image of the oiling gun so as to obtain the position gradient histogram.
In the above-mentioned control system of the intelligent full-automatic oiling robot, the fueling gun image feature extraction module is used for:
passing the fuel gun multichannel characterization image through a fuel gun image feature extractor based on a convolutional neural network model to obtain the fuel gun image characterization feature map.
In the above-mentioned control system of the intelligent full-automatic oiling robot, the fueling gun type determining module includes:
the self-adaptive attention strengthening unit is used for enabling the oil gun image characterization feature map to pass through the self-adaptive attention module to obtain the self-adaptive attention-strengthened oil gun image characterization feature map; and
the tag classification unit is used for enabling the self-adaptive attention-strengthening oiling gun image characterization feature map to pass through a classifier to obtain a classification result, wherein the classification result is used for representing the type tag of the oiling gun;
wherein the self-adaptive attention strengthening unit is used for:
processing the fuel dispenser image characterization feature map with the following adaptive attention formula to obtain the adaptive attention-enhanced fuel dispenser image characterization feature map; wherein the adaptive attention formula is:

V = pool(F)
A = act(W · V + b)
A′ = correct(A) = [a′_1, …, a′_C]
F′ = A′ ⊗ F

wherein F is the fuel dispenser image characterization feature map, pool(·) denotes the pooling processing, V is the pooled vector, W is a weight matrix, b is a bias vector, act(·) denotes the activation processing, A is the initial meta-weight feature vector, a_i is the feature value at the i-th position of the initial meta-weight feature vector, A′ is the corrected meta-weight feature vector, F′ is the adaptive attention-enhanced fuel dispenser image characterization feature map, and ⊗ denotes multiplying each feature matrix of the fuel dispenser image characterization feature map along the channel dimension by the corresponding feature value in the corrected meta-weight feature vector.
The control system of the intelligent full-automatic oiling robot further comprises a training module for training the oil gun image feature extractor, the self-adaptive attention module and the classifier based on the convolutional neural network model;
wherein the training module includes:
a training data acquisition unit for acquiring training data, wherein the training data comprises training oiling gun digital images acquired by a camera and true values of the type tag of the oiling gun;
the training direction gradient histogram calculation unit is used for calculating a direction gradient histogram of the digital image of the training oiling gun;
the training color gradient histogram calculation unit is used for calculating a color gradient histogram of the digital image of the training oiling gun;
the training position gradient histogram calculation unit is used for calculating a position gradient histogram of the digital image of the training oiling gun;
the training aggregation unit is used for aggregating the training oil gun digital image, the direction gradient histogram of the training oil gun digital image, the color gradient histogram of the training oil gun digital image and the position gradient histogram of the training oil gun digital image along the channel dimension to obtain a training oil gun multichannel characterization image;
the training oil gun image feature extraction unit is used for enabling the training oil gun multichannel characterization image to pass through the oil gun image feature extractor based on the convolutional neural network model so as to obtain a training oil gun image characterization feature map;
the training self-adaptive attention strengthening unit is used for enabling the training oil gun image representation feature map to pass through the self-adaptive attention module so as to obtain the training self-adaptive attention strengthening oil gun image representation feature map;
the feature distribution optimizing unit is used for carrying out feature distribution optimization on the training self-adaptive attention-strengthening oiling gun image characterization feature map so as to obtain an optimized self-adaptive attention-strengthening oiling gun image characterization feature map;
the classification loss function value calculation unit is used for enabling the optimized self-adaptive attention-strengthening oiling gun image characterization feature map to pass through the classifier so as to obtain a classification loss function value; and
the loss training unit is used for training the oil gun image feature extractor based on the convolutional neural network model, the self-adaptive attention module and the classifier by using the classification loss function value.
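For illustration only, the loss-training step described above may be sketched as follows; the softmax cross-entropy loss and the plain gradient-descent update on the classifier weights are assumptions of this sketch, since the application does not fix a particular loss function or optimizer, and all names are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())            # numerically stabilised softmax
    return e / e.sum()

def train_step(W, b, feature_vec, label, lr=0.1):
    """One classification-loss training step on the classifier weights.

    Softmax cross-entropy with a plain SGD update; illustrative only,
    not the application's actual training procedure.
    """
    probs = softmax(W @ feature_vec + b)
    loss = -np.log(probs[label] + 1e-12)      # classification loss function value
    grad_logits = probs.copy()
    grad_logits[label] -= 1.0                 # d(loss)/d(logits) for softmax + CE
    W_new = W - lr * np.outer(grad_logits, feature_vec)
    b_new = b - lr * grad_logits
    return W_new, b_new, loss
```

Repeating such steps over the training set drives the classification loss function value down, which is what the loss training unit uses to fit the feature extractor, attention module, and classifier jointly.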
According to another aspect of the present application, there is provided a control method of an intelligent full-automatic oiling robot, including:
Determining the position of the gas station through a sensor group;
identifying the type of the oil gun;
detecting and analyzing information of the oil tank;
controlling the oiling action;
communicating with the gas station;
collecting oiling data; and
processing the oiling data to generate an oiling report.
In this application, the system includes a fueling station positioning subsystem for determining the position of a fueling station by means of a sensor group, a fueling gun identification subsystem for identifying the type of fueling gun, a fuel tank detection subsystem for detecting and analyzing the information of the fuel tank, a fueling motion control subsystem for controlling fueling motion, a communication management subsystem for communicating with the fueling station, a data acquisition subsystem for acquiring fueling data, and a data processing subsystem for processing the fueling data and generating fueling reports. In this way, intelligent fueling can be achieved.
Other features and aspects of the present application will become apparent from the following detailed description of the application with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
Fig. 1 shows a block diagram of a control system of an intelligent fully automatic fueling robot according to an embodiment of the present application.
Fig. 2 shows a flowchart of a control method of the intelligent fully automatic fueling robot according to an embodiment of the present application.
Fig. 3 shows a schematic architecture diagram of substep S120 of the control method of the intelligent fully-automatic fueling robot according to the embodiment of the present application.
Fig. 4 shows an application scenario diagram of a control system of an intelligent fully-automatic fueling robot according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are also within the scope of the present application.
As used in this application and in the claims, the terms "a," "an," and/or "the" are not specific to the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified, and they do not constitute an exclusive list, as a method or apparatus may also include other steps or elements.
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
The application provides a control system of an intelligent full-automatic oiling robot, and fig. 1 shows a block diagram schematic diagram of the control system of the intelligent full-automatic oiling robot according to an embodiment of the application. As shown in fig. 1, a control system 100 of an intelligent fully-automatic oiling robot according to an embodiment of the present application includes: a fueling station positioning subsystem 110 for determining the position of the fueling station by means of the sensor group; a fuel nozzle identification subsystem 120 for identifying the type of fuel nozzle; a tank detection subsystem 130 for detecting and analyzing tank information; a fueling action control subsystem 140 for controlling fueling actions; a communication management subsystem 150 for communicating with the gas station; a data acquisition subsystem 160 for acquiring fueling data; and a data processing subsystem 170 for processing the fueling data to generate a fueling report.
In actual application scenarios, the fueling guns required by different automobile brands and models differ, and identifying the type of the fueling gun ensures that the robot is correctly inserted into the vehicle's tank port, avoiding fueling errors or vehicle damage. To effectively achieve the goal of "identifying the type of fueling gun", the technical idea of the present application is to identify the fueling gun by processing and analyzing a digital image of the fueling gun acquired by a camera using a deep learning-based image processing technique, and mining the category features of the fueling gun therefrom. This ensures that the robot selects a proper fueling gun, improving the accuracy and safety of fueling.
Based thereon, the fuel dispenser identification subsystem 120 includes: the digital image acquisition module is used for acquiring the digital image of the oiling gun acquired by the camera; the multidimensional image characterization module is used for extracting multidimensional image characterization information of the digital image of the oiling gun to obtain a multichannel characterization image of the oiling gun; the oil gun image feature extraction module is used for extracting image features of the oil gun multichannel characterization image to obtain an oil gun image characterization feature map; and the fuel gun type determining module is used for determining the type of the fuel gun based on the fuel gun image characterization feature map.
Specifically, in the technical scheme of the application, firstly, a digital image of the oiling gun acquired by the camera is acquired. Here, the digital image of the fuel dispenser refers to an image including the fuel dispenser to be identified, which is acquired by a camera. In this way, visual information about the fuel dispenser to be identified, such as appearance and color information, etc., can be obtained.
Then, calculating a direction gradient histogram of the digital image of the oiling gun; calculating a color gradient histogram of the digital image of the oiling gun; and simultaneously, calculating a position gradient histogram of the digital image of the oiling gun. Here, calculating the histogram of the directional gradient of the fuel dispenser digital image may extract edge information and texture information from the fuel dispenser digital image. Specifically, in image processing, gradients represent the variation in pixel intensity in an image. The variation in pixel intensity in turn describes and characterizes the edge information in the image. By calculating the histogram of the directional gradients of the digital image of the fuel dispenser, information about the characteristics of the fuel dispenser, such as texture, shape, etc., can be obtained. These features may help distinguish between different types of fuel guns. For example, certain types of fuel guns may have particular texture or shape characteristics, and their directional gradient histograms may exhibit particular distribution patterns. Additionally, calculating a color gradient histogram of a digital image of a fuel dispenser may extract information related to the color characteristics of the fuel dispenser from the digital image of the fuel dispenser. Specifically, in image processing, a color gradient represents a change in color in an image. By calculating the color gradient histogram of the digital image of the fuel dispenser, the occurrence frequency of different colors in the image can be counted, so that the color change of the fuel dispenser can be described and characterized. It should be appreciated that in a practical application scenario, the color characteristics of the fuel dispenser are important to identify different types of fuel dispensers. 
Different types of fuel guns may have different color characteristics, for example, some types of fuel guns may be red, while other types may be blue or yellow. And calculating the position gradient histogram of the digital image of the oiling gun can acquire the position information of the oiling gun from the digital image of the oiling gun, so as to assist in identifying the type of the oiling gun. Specifically, by calculating a histogram of the position gradients of the digital image of the fuel dispenser, information about the position distribution of the fuel dispenser in the image can be obtained. The position gradient histogram counts the number of gradients in different position ranges, thereby reflecting the position characteristics of the fuel dispenser in the image. In the identification of the type of fuel dispenser, in addition to the texture and shape characteristics of the fuel dispenser, the location information of the fuel dispenser is also an important identification basis. Different types of fuel guns are typically placed in different locations, for example, some types of fuel guns may be placed to the left of the fuel filler port while other types of fuel guns are placed to the right.
Next, aggregating the fuel dispenser digital image, the directional gradient histogram of the fuel dispenser digital image, the color gradient histogram of the fuel dispenser digital image, and the location gradient histogram of the fuel dispenser digital image along a channel dimension to obtain a fuel dispenser multi-channel characterization image. The source domain information, the edge characteristics, the color characteristics and the position characteristics of the digital image of the fuel dispenser are fused in a mode of aggregation along the channel dimension, so that the multi-channel characterization image of the fuel dispenser has richer characteristic representation, and the distinguishing capability of the fuel dispenser type is enhanced.
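As a hedged illustration, the channel-dimension aggregation described above may be sketched as below; the function name and the assumption that all four maps have been computed or resized onto the same spatial grid are illustrative, not fixed by the application:

```python
import numpy as np

def aggregate_channels(image, dir_hist, color_hist, pos_hist):
    """Concatenate the source image and the three histogram maps along the
    channel dimension; all inputs are assumed to share the same H x W grid
    (per-cell maps would first be computed or resized onto that grid)."""
    maps = [np.atleast_3d(m) for m in (image, dir_hist, color_hist, pos_hist)]
    assert all(m.shape[:2] == maps[0].shape[:2] for m in maps), "spatial grids differ"
    return np.concatenate(maps, axis=-1)   # the multichannel characterization image
```

For example, an RGB image stacked with 9 orientation channels, 8 color-gradient channels, and 2 position channels yields a 22-channel characterization image.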
Accordingly, the multi-dimensional image characterization module includes: a direction gradient histogram calculation unit for calculating a direction gradient histogram of the digital image of the fueling gun; a color gradient histogram calculation unit for calculating a color gradient histogram of the digital image of the fueling gun; a position gradient histogram calculation unit for calculating a position gradient histogram of the digital image of the fueling gun; and an aggregation unit for aggregating the fueling gun digital image, the direction gradient histogram of the fueling gun digital image, the color gradient histogram of the fueling gun digital image, and the position gradient histogram of the fueling gun digital image along the channel dimension to obtain the fueling gun multichannel characterization image.
Wherein, in one example, the direction gradient histogram calculation unit includes: the uniform dividing subunit is used for uniformly dividing the digital image of the oiling gun to obtain a plurality of cell spaces; the gradient distribution generation subunit is used for calculating gradients of pixel points in each cell space in the plurality of cell spaces and generating a plurality of cell direction gradient histograms according to gradient distribution; and a directional gradient histogram generation subunit configured to generate the directional gradient histogram based on the plurality of cell directional gradient histograms.
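A minimal NumPy sketch of the cell-wise direction gradient histogram described above follows; the 8x8 cell size, 9 orientation bins, unsigned orientations, and magnitude weighting are conventional HOG-style assumptions, not values fixed by the application:

```python
import numpy as np

def directional_gradient_histogram(img, cell=8, bins=9):
    """Per-cell histogram of gradient orientations (HOG-style sketch).

    img: 2-D grayscale array whose sides divide evenly by `cell`;
    cell size and bin count are illustrative choices.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()     # magnitude-weighted vote
    return hist
```

Each cell's histogram thus summarises local edge orientation, which is the texture/shape cue the passage describes.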
Wherein, in one example, the color gradient histogram calculation unit includes: an image conversion subunit, configured to convert the digital image of the fuel dispenser into a Lab color space map; a color cell dividing subunit, configured to perform cell division on the Lab color space map to obtain a plurality of cells; a color gradient distribution generation subunit, configured to calculate color gradients of the plurality of cells, and generate a plurality of cell color gradient histograms based on the color gradient distribution; and a color gradient histogram generation subunit configured to generate the color gradient histogram based on the plurality of cell color gradient histograms.
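The Lab conversion and cell-wise color gradient histogram may be sketched as follows; the sRGB-to-CIELAB formulas (D65 white point) are standard, while the cell size, bin count, and the Euclidean color-gradient magnitude are illustrative assumptions of this sketch:

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB in [0, 1] -> CIELAB, D65 white point (standard formulas)."""
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = rgb @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])          # normalise by white point
    f = np.where(xyz > (6/29)**3, np.cbrt(xyz), xyz / (3*(6/29)**2) + 4/29)
    L = 116*f[..., 1] - 16
    a = 500*(f[..., 0] - f[..., 1])
    b = 200*(f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def color_gradient_histogram(rgb, cell=8, bins=8):
    """Per-cell histogram of Lab color-gradient magnitude (sketch)."""
    lab = rgb_to_lab(rgb)
    g = np.zeros(lab.shape[:2])
    for c in range(3):                                # combine channel gradients
        gy, gx = np.gradient(lab[..., c])
        g += gx**2 + gy**2
    g = np.sqrt(g)
    h, w = g.shape
    ch, cw = h // cell, w // cell
    hists = np.zeros((ch, cw, bins))
    edges = np.linspace(0, g.max() + 1e-9, bins + 1)  # shared bin edges
    for i in range(ch):
        for j in range(cw):
            block = g[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hists[i, j], _ = np.histogram(block, bins=edges)
    return hists
```

Working in Lab rather than RGB makes the gradient magnitude better track perceived color change, which suits the color-cue role described in the passage.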
Wherein, in one example, the location gradient histogram calculation unit includes: the position cell division subunit is used for carrying out cell division on the digital image of the oil gun so as to obtain a plurality of cells; a center point calculating subunit, configured to calculate a center point of each cell to obtain a plurality of cell center points; and a position gradient histogram generation subunit, configured to calculate a relative position from a center point of each cell to a center of the fuel dispenser digital image to obtain the position gradient histogram.
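A sketch of the position computation is given below; encoding each cell centre as a normalised offset from the image centre is an assumption of this sketch, since the application only names the steps (cell division, centre points, relative positions):

```python
import numpy as np

def position_gradient_histogram(img, cell=8):
    """Relative position of each cell centre with respect to the image
    centre, returned as (dy, dx) offsets normalised to [-1, 1].
    This encoding is illustrative, not fixed by the application."""
    h, w = img.shape[:2]
    ch, cw = h // cell, w // cell
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0            # image centre
    pos = np.zeros((ch, cw, 2))
    for i in range(ch):
        for j in range(cw):
            ccy = i * cell + (cell - 1) / 2.0        # cell centre row
            ccx = j * cell + (cell - 1) / 2.0        # cell centre column
            pos[i, j] = ((ccy - cy) / cy, (ccx - cx) / cx)
    return pos
```

The resulting two-channel map gives the network an explicit spatial prior, matching the passage's point that location is an identification cue alongside texture and shape.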
The fuel dispenser multi-channel characterization image is then passed through a fuel dispenser image feature extractor based on a convolutional neural network model to obtain a fuel dispenser image characterization feature map. The convolutional neural network (Convolutional Neural Network, CNN) has strong characterization learning capability in a computer vision task, and can automatically learn the characteristics in the image and convert the characteristics into meaningful representations. Here, by using the fuel gun image feature extractor based on the convolutional neural network model, more abstract and more discriminant features can be extracted from the fuel gun multichannel characterization image. More specifically, the convolutional neural network model gradually extracts local neighborhood features of an image through a multi-layer convolution and pooling operation. This hierarchical feature extraction process allows the network to gradually understand the semantics and structure of the image, thereby better distinguishing between different types of fuel guns.
Correspondingly, the oil gun image feature extraction module is used for passing the fuel gun multichannel characterization image through a fuel gun image feature extractor based on a convolutional neural network model to obtain the fuel gun image characterization feature map.
It is worth mentioning that convolutional neural network (Convolutional Neural Network, CNN for short) is a deep learning model for processing data having a grid structure, such as images and videos. The core idea of the convolutional neural network is to extract the features of the image and perform classification or regression tasks through a convolutional layer, a pooling layer and a full-connection layer. The convolution layer performs feature extraction on the input image by using convolution operation, and extracts local features on the image in a sliding window mode. The pooling layer is used for reducing the space size of the feature map and reducing the number of parameters, and common pooling modes are maximum pooling and average pooling. The fully connected layer is used to map the extracted features to final output categories or values. In the fuel dispenser image feature extraction module, the fuel dispenser image feature extractor based on the convolutional neural network model refers to the feature extraction of the fuel dispenser image by using the convolutional neural network. The feature extractor automatically extracts characterization features from the fuel dispenser image by training a convolutional neural network model that learns a large number of fuel dispenser image samples. These characterization features may be used for subsequent gun image classification, detection, or other related tasks.
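As a self-contained illustration of the convolution, activation, and pooling pipeline just described, here is a naive NumPy sketch; layer shapes and kernel banks are arbitrary stand-ins, not the application's actual extractor:

```python
import numpy as np

def conv2d(x, kernels):
    """Naive 'valid' 2-D convolution: x is (H, W, Cin), kernels (k, k, Cin, Cout)."""
    k = kernels.shape[0]
    out = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # contract the (k, k, Cin) sliding-window patch against every kernel
            out[i, j] = np.tensordot(x[i:i+k, j:j+k, :], kernels, axes=3)
    return out

def maxpool2(x):
    """2x2 max pooling (odd trailing rows/columns are dropped)."""
    H, W, C = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

def feature_extractor(x, layers):
    """Stacked conv -> ReLU -> pool blocks, one per kernel bank in `layers`."""
    for kernels in layers:
        x = maxpool2(np.maximum(conv2d(x, kernels), 0.0))
    return x
```

Each block shrinks the spatial grid while deepening the channels, mirroring the hierarchical local-feature extraction the passage attributes to the CNN.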
And then the oil gun image characterization feature map is passed through an adaptive attention module to obtain an adaptive attention-strengthening oil gun image characterization feature map. The self-adaptive attention module can automatically learn and adjust the weights of the features according to the input characteristic images of the fuel dispenser, so that the network can pay more attention to important feature areas, and the characteristic and identification capability of the fuel dispenser is enhanced. More specifically, the adaptive attention module highlights important information of a specific area and a specific channel by adjusting the channel attention to the feature map to a certain extent. In a digital image of a fuel dispenser, there may be some critical image areas, such as the dispenser head, handle, etc., which are more important to distinguish between different types of fuel dispensers. Through the self-adaptive attention module, the network can pay more attention to the key areas, and the characterization capability of the oiling gun is improved.
Further, the adaptive attention-enhancing fuel dispenser image characterization feature map is passed through a classifier to obtain a classification result that is used to represent a fuel dispenser type label. The classifier can be trained according to the learned feature representation and the known gun type label, so that the input self-adaptive attention-strengthening gun image characterization feature map is distributed or classified into corresponding categories. Here, the type tags of the fuel dispenser may be defined according to actual requirements and application scenarios, and the following are some examples of possible fuel dispenser type tags: 1. gasoline and diesel guns: the fuel guns are classified into two types, gasoline and diesel. 2. Conventional fuel guns and self-service fuel guns: the automatic oiling gun used by the traditional manual operation oiling gun and the self-service oiling robot is distinguished. 3. Different nozzle brands: the fuel guns are categorized into different brands according to the brands of the different fuel stations or suppliers. 4. Size of the fuel gun: according to the size and structural characteristics of the oil gun, the oil gun is divided into large, medium and small categories with different sizes. 5. The model of the oil gun: the fuel guns are classified according to their model numbers and specifications. It is noted that these classification tags may be defined according to specific requirements to meet the requirements for identifying and classifying fuel dispenser types in different application scenarios.
Correspondingly, the fuel gun type determining module comprises: the self-adaptive attention strengthening unit is used for enabling the oil gun image characterization feature map to pass through the self-adaptive attention module to obtain the self-adaptive attention strengthening oil gun image characterization feature map; and the tag classification unit is used for enabling the self-adaptive attention-strengthening oiling gun image characterization feature map to pass through a classifier to obtain a classification result, wherein the classification result is used for representing the type tag of the oiling gun.
Wherein, in one example, the adaptive attention enhancement unit is configured to: process the fueling gun image characterization feature map with the following adaptive attention formula to obtain the adaptive attention-enhanced fueling gun image characterization feature map; wherein the adaptive attention formula is:
v = P(F), w = σ(W·v + b), F' = w' ⊗ F
wherein F is the fueling gun image characterization feature map, P(·) denotes the pooling operation, v is the pooling vector, W is a weight matrix, b is a bias vector, σ(·) denotes the activation, w is the initial meta-weight feature vector, w_i is the feature value at the i-th position of the initial meta-weight feature vector, w' is the corrected meta-weight feature vector derived from w, F' is the adaptive attention-enhanced fueling gun image characterization feature map, and ⊗ denotes multiplying each feature matrix of the fueling gun image characterization feature map along the channel dimension by the corresponding feature value in the corrected meta-weight feature vector.
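The channel-attention computation described here (pool the feature map into a vector, derive per-channel weights, multiply each channel by its weight) can be sketched numerically as follows. The choice of global average pooling, sigmoid activation, and sum-normalization as the "correction" step are all assumptions for illustration; the patent does not fix these concrete choices.

```python
import numpy as np


def adaptive_channel_attention(F, W, b):
    """Channel-attention sketch over a feature map F of shape (C, H, W_sp).

    W : (C, C) weight matrix and b : (C,) bias vector stand in for learned
    parameters. Returns the enhanced map: each channel's feature matrix is
    multiplied by its corrected weight.
    """
    v = F.mean(axis=(1, 2))                    # pooling: global average -> pooling vector v
    w = 1.0 / (1.0 + np.exp(-(W @ v + b)))     # activation sigma -> initial meta-weight vector
    # Hypothetical correction step (sum-normalization); the patent does not
    # spell out how the corrected vector w' is derived from w.
    w_corr = w / (w.sum() + 1e-8)
    return w_corr[:, None, None] * F           # per-channel multiplication along channel dim
```

For example, with an identity weight matrix and zero bias, every channel of a constant input receives the same corrected weight.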
Wherein, in one example, the label classification unit is configured to: expanding the self-adaptive attention-strengthening oiling gun image characterization feature map into classification feature vectors according to row vectors or column vectors; performing full-connection coding on the classification feature vectors by using a full-connection layer of the classifier to obtain coded classification feature vectors; and inputting the coding classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
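The label classification unit just described (flatten the feature map, apply full-connection coding, then Softmax) can be sketched as follows. The weight and bias shapes are assumptions standing in for trained classifier parameters.

```python
import numpy as np


def classify_feature_map(feat_map, fc_weight, fc_bias):
    """Label classification sketch: flatten -> fully connected -> Softmax.

    feat_map  : (C, H, W) attention-enhanced characterization feature map
    fc_weight : (num_classes, C*H*W) and fc_bias : (num_classes,), assumed trained
    Returns (predicted class index, probability vector).
    """
    x = feat_map.reshape(-1)              # expand the feature map into a classification vector
    logits = fc_weight @ x + fc_bias      # full-connection coding
    e = np.exp(logits - logits.max())     # numerically stable Softmax
    probs = e / e.sum()
    return int(np.argmax(probs)), probs
```

The Softmax output sums to one, so the result can be read directly as a distribution over the fueling gun type labels.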
It should be appreciated that the role of the classifier is to learn classification rules from training data with known class labels and then classify (or predict) unknown data. Logistic regression and support vector machines (SVMs) are commonly used for binary classification; they can also be applied to multi-class classification problems by composing multiple binary classifiers, but this is error-prone and inefficient, so the commonly used multi-class method is the Softmax classification function.
Further, in the technical scheme of the application, the control system of the intelligent full-automatic oiling robot further comprises a training module for training the oiling gun image feature extractor, the self-adaptive attention module and the classifier based on the convolutional neural network model.
Wherein, in one example, the training module comprises: the system comprises a training data acquisition unit, a control unit and a control unit, wherein the training data acquisition unit is used for acquiring training data, and the training data comprises training oiling gun digital images acquired by a camera and a true value of a type tag of the oiling gun; the training direction gradient histogram calculation unit is used for calculating a direction gradient histogram of the digital image of the training oiling gun; the training color gradient histogram calculation unit is used for calculating a color gradient histogram of the digital image of the training oiling gun; the training position gradient histogram calculation unit is used for calculating a position gradient histogram of the digital image of the training oiling gun; the training aggregation unit is used for aggregating the training oil gun digital image, the direction gradient histogram of the training oil gun digital image, the color gradient histogram of the training oil gun digital image and the position gradient histogram of the training oil gun digital image along the channel dimension to obtain a training oil gun multichannel characterization image; the training oil gun image feature extraction unit is used for enabling the training oil gun multichannel characterization image to pass through the oil gun image feature extractor based on the convolutional neural network model so as to obtain a training oil gun image characterization feature map; the training self-adaptive attention strengthening unit is used for enabling the training oil gun image representation feature map to pass through the self-adaptive attention module so as to obtain the training self-adaptive attention strengthening oil gun image representation feature map; the feature distribution optimizing unit is used for carrying out feature distribution optimization on the training self-adaptive attention-strengthening oiling gun 
image characterization feature map so as to obtain an optimized self-adaptive attention-strengthening oiling gun image characterization feature map; the classification loss function value calculation unit is used for enabling the optimized self-adaptive attention-strengthening oiling gun image characterization feature map to pass through the classifier so as to obtain a classification loss function value; and a loss training unit for training the filler gun image feature extractor, the adaptive attention module and the classifier based on the convolutional neural network model with the classification loss function value.
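The classification loss function value and the loss-driven parameter update named above can be sketched as follows. Cross-entropy with plain gradient descent is an assumed concrete choice: the patent only speaks of "a classification loss function value" without fixing the loss or the optimizer, and a real system would backpropagate jointly through the feature extractor, the adaptive attention module and the classifier.

```python
import numpy as np


def classification_loss(probs, true_label):
    """Cross-entropy classification loss value for one training sample (sketch).

    probs      : Softmax probability vector produced by the classifier
    true_label : integer index of the ground-truth fueling gun type tag
    """
    return float(-np.log(probs[true_label] + 1e-12))


def sgd_step(param, grad, lr=1e-3):
    """One gradient-descent update, standing in for the loss training unit."""
    return param - lr * grad
```

For a uniform 4-class prediction the loss equals ln(4), i.e. the model is maximally uncertain; training drives this value down toward zero on correctly classified samples.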
In the technical scheme of the application, when the training fueling gun multichannel characterization image passes through the fueling gun image feature extractor based on the convolutional neural network model, each feature matrix of the resulting training fueling gun image characterization feature map respectively represents the image semantics of the training fueling gun digital image and of its direction gradient histogram, color gradient histogram and position gradient histogram. Because the source image semantics differ, the feature map exhibits sparsity in the inter-channel distribution of image semantic features. Therefore, after the training fueling gun image characterization feature map passes through the adaptive attention module, the resulting training adaptive attention-enhanced feature map also exhibits local spatial sparsity of image semantic features, owing to the locally adaptive attention enhancement of their spatial distribution. This causes a sparse probability density representation in the class probability density domain when the feature map is classified by the classifier, which harms the regression convergence effect of the classification.
Based on this, the applicant of the present application optimizes the training adaptive attention-enhanced fueling gun image characterization feature map. Accordingly, the feature distribution optimizing unit is configured to: perform feature distribution optimization on the training adaptive attention-enhanced fueling gun image characterization feature map with the following optimization formula to obtain the optimized adaptive attention-enhanced fueling gun image characterization feature map; wherein, in the optimization formula, F̂ denotes the position-wise square of the training adaptive attention-enhanced fueling gun image characterization feature map F; M is a parameter-trainable intermediate weight map, initialized, for example, according to the sparsity of the channel and spatial distributions of the feature map as the global mean of the feature values of each of its feature matrices; 1 denotes an all-ones map of the same shape; ⊕ denotes position-wise addition of feature maps; ⊗ denotes position-wise multiplication; W is the display weight feature map; and F' is the optimized adaptive attention-enhanced fueling gun image characterization feature map.
Here, the optimization improves the uniformity and consistency of the sparse probability density of the training adaptive attention-enhanced fueling gun image characterization feature map over the whole probability space, and applies a spatial-angle-inclination-based distance-distribution optimization to its distance-type spatial distribution in the high-dimensional feature space, so as to realize spatial resonance of the weakly correlated distances among its respective local feature distributions. This enhances the uniformity and consistency of the overall probability density distribution of the feature map with respect to regression probability convergence, thereby improving the classification convergence effect, i.e., the convergence speed and the accuracy of the classification result.
In summary, the control system 100 of the intelligent fully-automatic fueling robot according to the embodiments of the present application has been illustrated; it ensures that the robot selects a suitable fueling gun and improves the accuracy and safety of fueling.
As described above, the control system 100 of the intelligent full-automatic oiling robot according to the embodiment of the present application may be implemented in various terminal devices, for example, a server or the like having a control algorithm of the intelligent full-automatic oiling robot. In one example, the control system 100 of the intelligent fully automated fueling robot may be integrated into the terminal device as a software module and/or hardware module. For example, the control system 100 of the intelligent fully automatic fueling robot may be a software module in the operating system of the terminal device or may be an application developed for the terminal device; of course, the control system 100 of the intelligent fully automatic oiling robot can also be one of the numerous hardware modules of the terminal device.
Alternatively, in another example, the control system 100 of the intelligent fully automatic fueling robot and the terminal device may be separate devices, and the control system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Fig. 2 shows a flowchart of a control method of the intelligent fully automatic fueling robot according to an embodiment of the present application. As shown in fig. 2, a control method of an intelligent fully-automatic oiling robot according to an embodiment of the application includes: s110, determining the position of a gas station through a sensor group; s120, identifying the type of the oiling gun; s130, detecting and analyzing information of the oil tank; s140, controlling the oiling action; s150, communicating with the gas station; s160, collecting fueling data; and S170, processing the oiling data to generate an oiling report.
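The ordered steps S110 through S170 can be sketched as a simple orchestration function. All seven callables here are hypothetical stand-ins, since the patent does not prescribe concrete subsystem interfaces.

```python
def run_fueling_cycle(locate_station, identify_gun, inspect_tank,
                      control_fueling, communicate, collect_data, make_report):
    """Run steps S110-S170 in order; each argument is a hypothetical
    stand-in for one of the subsystems described above."""
    position = locate_station()                       # S110: sensor-group positioning
    gun_type = identify_gun()                         # S120: fueling gun type
    tank_info = inspect_tank()                        # S130: tank detection and analysis
    control_fueling(position, gun_type, tank_info)    # S140: control the fueling action
    communicate(position)                             # S150: communicate with the gas station
    data = collect_data()                             # S160: collect fueling data
    return make_report(data)                          # S170: process data into a report
```

Injecting the subsystems as callables keeps the step ordering explicit while leaving each subsystem's implementation open, which mirrors how the claims separate the subsystems.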
In one possible implementation, as shown in FIG. 3, identifying the type of fueling gun includes: acquiring a digital image of the fueling gun captured by a camera; extracting multi-dimensional image characterization information of the digital image to obtain a fueling gun multichannel characterization image; extracting image features of the multichannel characterization image to obtain a fueling gun image characterization feature map; and determining the type of the fueling gun based on the fueling gun image characterization feature map.
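A simplified sketch of building the multichannel characterization image: per-pixel orientation, gradient-magnitude and center-distance maps stand in for the direction, color and position gradient histograms, stacked with the raw image along the channel dimension. The real system aggregates per-cell histograms (as in claims 4 through 6); reducing them to per-pixel maps is an illustrative assumption.

```python
import numpy as np


def multichannel_characterization(gray_img):
    """Stack the raw image with simplified direction, magnitude and position
    channels along the channel dimension (sketch of the aggregation step).
    """
    img = gray_img.astype(float)
    gy, gx = np.gradient(img)                 # image gradients along rows and columns
    orientation = np.arctan2(gy, gx)          # direction-gradient channel
    magnitude = np.hypot(gx, gy)              # intensity-gradient channel ("color" stand-in)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    position = np.hypot(yy - cy, xx - cx)     # position channel: distance to image center
    return np.stack([img, orientation, magnitude, position], axis=0)
```

The output has shape (4, H, W), matching the idea that each characterization becomes one channel of the multichannel image fed to the feature extractor.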
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described control method of the intelligent fully automatic oiling robot have been described in detail in the above description of the control system of the intelligent fully automatic oiling robot with reference to fig. 1, and thus, repetitive descriptions thereof will be omitted.
Fig. 4 shows an application scenario diagram of a control system of an intelligent fully-automatic fueling robot according to an embodiment of the present application. As shown in fig. 4, in this application scenario, first, a digital image of a fueling gun acquired by a camera (e.g., D illustrated in fig. 4) is obtained; the digital image is then input to a server (e.g., S illustrated in fig. 4) in which the control algorithm of the intelligent fully-automatic fueling robot is deployed, and the server processes the digital image using the control algorithm to obtain a classification result representing the type label of the fueling gun.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory including computer program instructions executable by a processing component of an apparatus to perform the above-described method.
The present application may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present application have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. Control system of full-automatic oiling robot of intelligence, its characterized in that includes:
a gas station positioning subsystem for determining the position of the gas station by means of a sensor group;
the fuel gun identification subsystem is used for identifying the type of the fuel gun;
the oil tank detection subsystem is used for detecting and analyzing information of the oil tank;
a fueling action control subsystem for controlling fueling actions;
a communication management subsystem for communicating with the gas station;
the data acquisition subsystem is used for acquiring fueling data; and
and the data processing subsystem is used for processing the oiling data and generating an oiling report.
2. The intelligent, fully automated fueling robot control system of claim 1 wherein the fueling gun identification subsystem comprises:
the digital image acquisition module is used for acquiring the digital image of the oiling gun acquired by the camera;
the multidimensional image characterization module is used for extracting multidimensional image characterization information of the digital image of the oiling gun to obtain a multichannel characterization image of the oiling gun;
the oil gun image feature extraction module is used for extracting image features of the oil gun multichannel characterization image to obtain an oil gun image characterization feature map; and
And the fuel gun type determining module is used for determining the type of the fuel gun based on the fuel gun image characterization feature map.
3. The control system of an intelligent fully automated fueling robot of claim 2 wherein the multi-dimensional image characterization module comprises:
a direction gradient histogram calculation unit for calculating a direction gradient histogram of the digital image of the oiling gun;
a color gradient histogram calculation unit for calculating a color gradient histogram of the digital image of the oiling gun;
a position gradient histogram calculation unit for calculating a position gradient histogram of the digital image of the oiling gun; and
and the aggregation unit is used for aggregating the fuel gun digital image, the direction gradient histogram of the fuel gun digital image, the color gradient histogram of the fuel gun digital image and the position gradient histogram of the fuel gun digital image along the channel dimension to obtain the fuel gun multichannel characterization image.
4. The control system of an intelligent fully automatic oiling robot according to claim 3, wherein the direction gradient histogram calculation unit comprises:
the uniform dividing subunit is used for uniformly dividing the digital image of the oiling gun to obtain a plurality of cell spaces;
The gradient distribution generation subunit is used for calculating gradients of pixel points in each cell space in the plurality of cell spaces and generating a plurality of cell direction gradient histograms according to gradient distribution; and
a directional gradient histogram generation subunit for generating the directional gradient histogram based on the plurality of cell directional gradient histograms.
5. The control system of an intelligent fully automatic fueling robot of claim 4 wherein the color gradient histogram calculation unit comprises:
an image conversion subunit, configured to convert the digital image of the oiling gun into a Lab color space map;
a color cell dividing subunit, configured to perform cell division on the Lab color space map to obtain a plurality of cells;
a color gradient distribution generation subunit, configured to calculate color gradients of the plurality of cells, and generate a plurality of cell color gradient histograms based on the color gradient distribution; and
and a color gradient histogram generation subunit configured to generate the color gradient histogram based on the color gradient histograms of the plurality of cells.
6. The control system of an intelligent fully automatic fueling robot of claim 5 wherein the position gradient histogram calculation unit comprises:
The position cell division subunit is used for carrying out cell division on the digital image of the oil gun so as to obtain a plurality of cells;
a center point calculating subunit, configured to calculate a center point of each cell to obtain a plurality of cell center points; and
and the position gradient histogram generation subunit is used for calculating the relative positions from the central point of each cell to the center of the digital image of the oiling gun so as to obtain the position gradient histogram.
7. The control system of the intelligent fully-automatic fueling robot of claim 6 wherein said fueling gun image feature extraction module is configured to:
and the fuel gun multichannel characterization image is passed through a fuel gun image feature extractor based on a convolutional neural network model to obtain the fuel gun image characterization feature map.
8. The control system of an intelligent fully automated fueling robot of claim 7 wherein the fueling gun type determining module comprises:
the self-adaptive attention strengthening unit is used for enabling the oil gun image characterization feature map to pass through the self-adaptive attention module to obtain the self-adaptive attention strengthening oil gun image characterization feature map; and
The tag classification unit is used for enabling the self-adaptive attention-strengthening oiling gun image characterization feature map to pass through a classifier to obtain a classification result, wherein the classification result is used for representing the type tag of the oiling gun;
wherein the self-adaptive attention strengthening unit is used for:
processing the fuel dispenser image characterization feature map with the following adaptive attention formula to obtain the adaptive attention enhanced fuel dispenser image characterization feature map; wherein, the self-adaptive attention formula is:
wherein v = P(F), w = σ(W·v + b) and F' = w' ⊗ F, where F is the oiling gun image characterization feature map, P(·) denotes the pooling operation, v is the pooling vector, W is a weight matrix, b is a bias vector, σ(·) denotes the activation, w is the initial meta-weight feature vector, w_i is the feature value at the i-th position of the initial meta-weight feature vector, w' is the corrected meta-weight feature vector derived from w, F' is the adaptive attention-enhanced oiling gun image characterization feature map, and ⊗ denotes multiplying each feature matrix of the oiling gun image characterization feature map along the channel dimension by the corresponding feature value in the corrected meta-weight feature vector.
9. The control system of an intelligent fully automatic fueling robot of claim 8 further comprising a training module for training the convolutional neural network model-based fueling gun image feature extractor, the adaptive attention module, and the classifier;
Wherein, training module includes:
the system comprises a training data acquisition unit, a control unit and a control unit, wherein the training data acquisition unit is used for acquiring training data, and the training data comprises training oiling gun digital images acquired by a camera and a true value of a type tag of the oiling gun;
the training direction gradient histogram calculation unit is used for calculating a direction gradient histogram of the digital image of the training oiling gun;
the training color gradient histogram calculation unit is used for calculating a color gradient histogram of the digital image of the training oiling gun;
the training position gradient histogram calculation unit is used for calculating a position gradient histogram of the digital image of the training oiling gun;
the training aggregation unit is used for aggregating the training oil gun digital image, the direction gradient histogram of the training oil gun digital image, the color gradient histogram of the training oil gun digital image and the position gradient histogram of the training oil gun digital image along the channel dimension to obtain a training oil gun multichannel characterization image;
the training oil gun image feature extraction unit is used for enabling the training oil gun multichannel characterization image to pass through the oil gun image feature extractor based on the convolutional neural network model so as to obtain a training oil gun image characterization feature map;
The training self-adaptive attention strengthening unit is used for enabling the training oil gun image representation feature map to pass through the self-adaptive attention module so as to obtain the training self-adaptive attention strengthening oil gun image representation feature map;
the feature distribution optimizing unit is used for carrying out feature distribution optimization on the training self-adaptive attention-strengthening oiling gun image characterization feature map so as to obtain an optimized self-adaptive attention-strengthening oiling gun image characterization feature map;
the classification loss function value calculation unit is used for enabling the optimized self-adaptive attention-strengthening oiling gun image characterization feature map to pass through the classifier so as to obtain a classification loss function value; and
and the loss training unit is used for training the oil gun image feature extractor, the self-adaptive attention module and the classifier based on the convolutional neural network model by using the classification loss function value.
10. The control method of the intelligent full-automatic oiling robot is characterized by comprising the following steps of:
determining the position of the gas station through a sensor group;
identifying the type of the oil gun;
detecting and analyzing information of the oil tank;
controlling the oiling action;
communicating with the gas station;
Collecting oiling data; and
and processing the oiling data to generate an oiling report.
CN202410170055.0A, filed 2024-02-06 (priority date 2024-02-06): Control system and method of intelligent full-automatic oiling robot. Published as CN117842923A on 2024-04-09; legal status pending, with entry into force of the request for substantive examination.