CN211956492U - Image processing system for visual navigation robot - Google Patents
- Publication number
- CN211956492U
- Authority
- CN
- China
- Prior art keywords
- module
- robot
- model
- picture
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The utility model discloses an image processing system for a visual navigation robot, comprising a robot and a computer terminal. The robot comprises a convolutional neural network module, a classification module, a storage module and a first wireless communication module; the computer terminal comprises a wireless camera module, a picture processing module, a classification model module and a second wireless communication module. The utility model takes only the topmost row of pixels of a picture as the training sample, which suits a fully-connected neural network simply and quickly and greatly reduces the training time of the model. After the picture is cropped to its top row, the feature values of pictures in different classes are distinct and differ widely, so model training and model judgment achieve higher accuracy and interference resistance. Meanwhile, during visual navigation driving, pictures are delivered to the classification network more quickly, improving the real-time performance of the robot.
Description
Technical Field
The utility model belongs to the technical field of robots, and in particular relates to an image processing system for a visual navigation robot.
Background
In intelligent robot research, navigation is a very important problem: it is a core technology of the intelligent robot and the key to achieving true intelligence and fully autonomous movement. Robots navigate in many ways, such as inertial navigation, visual navigation, GPS positioning navigation, and sensor-based navigation. A vision sensor provides the robot with rich external information and can recognize the environment and targets without sensor motion or physical contact with objects, which is difficult for other sensors to achieve.
In existing schemes, the whole picture is used as the training sample for the neural network. Because a picture carries a large amount of information, training is extremely time-consuming when the sample set is huge and demands a high computer configuration. Because a single picture has many features, the trained model has poor interference resistance, which does not help the robustness of the system. In addition, during visual navigation driving, sending the pictures acquired in real time to the neural network for judgment also consumes time and reduces the real-time performance of the robot.
SUMMARY OF THE UTILITY MODEL
An object of the utility model is to provide an image processing system for a visual navigation robot, so as to solve the problem raised in the background art that neural network training on large pictures is abnormally time-consuming.
In order to achieve the above purpose, the utility model adopts the following technical scheme:
An image processing system for a visual navigation robot comprises a robot and a computer terminal. The robot comprises a convolutional neural network module, a classification module, a storage module and a first wireless communication module; the computer terminal comprises a wireless camera module, a picture processing module, a classification model module and a second wireless communication module. The robot and the computer terminal are in signal connection through the first and second wireless communication modules.
Preferably, the wireless camera module adopts a multi-wide-angle CMOS camera.
Preferably, the picture processing module comprises an image capturing module and an image identification module; the image capturing module captures images for the picture processing module, and the image identification module identifies the captured images and outputs the identification results to the computer terminal.
Preferably, the classification module includes a feature identification module, configured to identify features of the extracted picture, and compare the identified features with the set basic features of the picture.
Technical effects and advantages of the utility model: compared with the prior art, the image processing system for a visual navigation robot has the following advantages:
the utility model relates to a be used for vision navigation robot image processing system, including robot and computer terminal, the robot includes convolution neural network module, categorised module, storage module and first wireless communication module, and computer terminal includes wireless camera module, picture processing module, categorised model module and second wireless communication module, and robot and computer terminal are through first wireless communication module and second wireless communication module signal connection. The training time of the model is greatly reduced by taking only the uppermost pixel point of the picture as a training sample and matching the pixel point with the fully-connected neural network more simply and quickly; after the picture is cut into the top layer, the characteristic values of the pictures of different classifications are clear and have large differences, and the model training and the model judgment have higher accuracy and anti-interference performance; meanwhile, when the robot is driven by visual navigation, pictures are sent to the network judgment classification model more quickly, and the real-time performance of the robot is improved.
Drawings
Fig. 1 is a schematic structural diagram of an image processing system for a visual navigation robot according to the present invention;
FIG. 2 is a schematic view of the operation of the present invention;
fig. 3 is a schematic view of a compressed and cut picture in embodiment 1 of the present invention;
fig. 4 is a schematic diagram of a filtered and binarized picture in embodiment 1 of the present invention;
fig. 5 is a schematic diagram of a training sample in embodiment 1 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. The specific embodiments described herein are merely illustrative of the invention and are not intended to be limiting of the invention. Based on the embodiments in the present invention, all other embodiments obtained by a person skilled in the art without creative work belong to the protection scope of the present invention.
As shown in figs. 1-5, the utility model provides an image processing system for a visual navigation robot, comprising a robot and a computer terminal. The robot comprises a convolutional neural network module, a classification module, a storage module and a first wireless communication module; the computer terminal comprises a wireless camera module, a picture processing module, a classification model module and a second wireless communication module; the robot and the computer terminal are in signal connection through the first and second wireless communication modules. The wireless camera module adopts a multi-wide-angle CMOS camera. The picture processing module comprises an image capturing module and an image identification module; the image capturing module captures images for the picture processing module, and the image identification module identifies the captured images and outputs the identification results to the computer terminal. The classification module comprises a feature identification module for identifying features of the extracted picture and comparing the identified features with the set basic features of the picture.
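The division of labour described above, in which the terminal captures and processes pictures while the robot classifies the result and acts on it, can be sketched as a simple pipeline. This is an illustrative sketch only: the class names, the Hamming-distance comparison, and the stored feature vectors are invented for illustration and are not taken from the patent.

```python
class PictureProcessingModule:
    """Terminal side: reduce a processed (binarized) picture to its topmost row."""

    def process(self, picture):
        # keep only the topmost row of pixels as the sample to transmit
        return picture[0]


class ClassificationModule:
    """Robot side: compare an extracted feature row with stored basic features."""

    def __init__(self, basic_features):
        # basic_features: class name -> stored feature vector (illustrative)
        self.basic_features = basic_features

    def classify(self, sample):
        # pick the stored class whose feature vector is nearest by Hamming distance
        def dist(a, b):
            return sum(x != y for x, y in zip(a, b))

        return min(self.basic_features,
                   key=lambda name: dist(self.basic_features[name], sample))


# Terminal processes a tiny stand-in picture and "sends" the sample to the robot
terminal = PictureProcessingModule()
robot = ClassificationModule({'straight': [1, 1, 0, 0], 'curve': [0, 1, 1, 0]})
sample = terminal.process([[1, 1, 0, 1], [0, 0, 1, 1]])
print(robot.classify(sample))  # straight
```

In the real system the sample would travel over the first and second wireless communication modules; here the call stands in for that link.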
Example 1
The utility model also provides an image processing method for a visual navigation robot, comprising the following steps:
s1, shooting a picture at the track stage by using the wireless camera module and transmitting the picture to the computer terminal, wherein the picture processing module in the computer terminal compresses, filters and cuts the picture, cuts the topmost end of the picture and stores the picture;
s2, transmitting the processed pictures to a robot through a second wireless communication module by using a computer terminal, classifying and judging the pictures by a convolutional neural network module in the robot to obtain a classification result, wherein the picture processing is zooming processing, the acquired picture pixel value is 640 x 480, and the pixel value is 320 x 240 after zooming, so that the robot is guided to make corresponding actions to complete driving of roads and is stored by a storage module, so that the neural network can more easily classify various road conditions more accurately, and meanwhile, in order to reduce various problems caused by light and environmental problems as much as possible, the cut images are subjected to filtering and binarization processing;
s3, in the picture processing module, only one row of pixel points on the uppermost layer of the picture are taken as training samples to train a fully-connected neural network to obtain a classification model, and the classification model module is used for classifying the cut pictures and comparing the classified pictures with the path of the whole track;
s4, processing the picture acquired in real time during the robot visual navigation, and then delivering the pixel points in the top row to the network for judgment, so that the robot makes a corresponding response;
s5, after the picture processing module is used to remove some fuzzy or direction-indistinguishable pictures by human, firstly, the pictures are compressed and cut appropriately, the information content of the pictures is reduced, and the compressed and cut pictures are processed.
The convolutional neural network module code is:
import tensorflow as tf

tf.reset_default_graph()  # clear the default graph stack and reset the global default graph
# input layer
tf_X = tf.placeholder(tf.float32, [None, 480], name='input')  # one top row of pixels: n x 480
tf_Y = tf.placeholder(tf.float32, [None, 4])  # n x 4 one-hot labels
# fully connected layer
fc_w1 = tf.Variable(tf.random_normal([480, 100]))  # 100 neurons
fc_b1 = tf.Variable(tf.random_normal([100]))
fc_out1 = tf.nn.relu(tf.matmul(tf_X, fc_w1) + fc_b1)  # relu activation function
# dropout layer placeholder
dropout_keep_prob = tf.placeholder(tf.float32, name='keep')
fc1_drop = tf.nn.dropout(fc_out1, dropout_keep_prob)
# output layer: 100 neurons in, 4 output classes
out_w1 = tf.Variable(tf.random_normal([100, 4]))
out_b1 = tf.Variable(tf.random_normal([4]))
pred = tf.nn.softmax(tf.matmul(fc1_drop, out_w1) + out_b1, name='pred')
# softmax normalizes each n x 4 row to values in 0-1; this activation is generally used for classification
# define loss function and training step
loss = -tf.reduce_mean(tf_Y * tf.log(tf.clip_by_value(pred, 1e-11, 1.0)))
train_step = tf.train.AdamOptimizer(1e-3, name='train_step').minimize(loss)
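The softmax output of the network maps each top-row sample to one of four classes, which the robot then turns into a driving action (step S4). The decision step can be sketched in pure Python as follows; the logits, class names and action mapping are invented for illustration and are not specified in the patent.

```python
import math

def softmax(logits):
    """Normalize a logit vector to values in (0, 1) that sum to 1."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical 4-class action mapping (illustrative, not from the patent)
ACTIONS = ['forward', 'left', 'right', 'stop']

def decide(logits):
    """Pick the action whose class has the highest softmax probability."""
    probs = softmax(logits)
    return ACTIONS[probs.index(max(probs))]

print(decide([2.0, 0.5, 0.1, -1.0]))  # forward
```

In the running system the logits would come from evaluating `pred` on the real-time top-row sample; here a fixed vector stands in for that.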
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications and variations can be made in the embodiments or in part of the technical features of the embodiments without departing from the spirit and the scope of the invention.
Claims (4)
1. An image processing system for a visual navigation robot, comprising a robot and a computer terminal, characterized in that: the robot comprises a convolutional neural network module, a classification module, a storage module and a first wireless communication module, the computer terminal comprises a wireless camera module, a picture processing module, a classification model module and a second wireless communication module, and the robot and the computer terminal are in signal connection through the first wireless communication module and the second wireless communication module.
2. An image processing system for a visual navigation robot according to claim 1, characterized in that: the wireless camera module adopts a multi-wide-angle CMOS camera.
3. An image processing system for a visual navigation robot according to claim 1, characterized in that: the image processing module comprises an image capturing module and an image identification module, the image capturing module is used for capturing images in the image processing module, and the image identification module is used for identifying the images captured by the image capturing module and outputting identification results to the computer terminal.
4. An image processing system for a visual navigation robot according to claim 1, characterized in that: the classification module comprises a feature identification module for identifying the features of the extracted picture and comparing the identified features with the set basic features of the picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202020375445.9U CN211956492U (en) | 2020-03-23 | 2020-03-23 | Image processing system for visual navigation robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202020375445.9U CN211956492U (en) | 2020-03-23 | 2020-03-23 | Image processing system for visual navigation robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN211956492U (en) | 2020-11-17 |
Family
ID=73185022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202020375445.9U Expired - Fee Related CN211956492U (en) | 2020-03-23 | 2020-03-23 | Image processing system for visual navigation robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN211956492U (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111339999A (en) * | 2020-03-23 | 2020-06-26 | 东莞理工学院 | Image processing system and method for visual navigation robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111080693A (en) | Robot autonomous classification grabbing method based on YOLOv3 | |
CN101198987B (en) | Object detecting device and its learning device | |
CN110751185A (en) | Training method and device of target detection model | |
CN108830254B (en) | Fine-grained vehicle type detection and identification method based on data balance strategy and intensive attention network | |
CN108133235B (en) | Pedestrian detection method based on neural network multi-scale feature map | |
CN113065495B (en) | Image similarity calculation method, target object re-recognition method and system | |
CN211956492U (en) | Image processing system for visual navigation robot | |
CN114721403B (en) | Automatic driving control method and device based on OpenCV and storage medium | |
CN115409992A (en) | Remote driving patrol car system | |
CN113052071B (en) | Method and system for rapidly detecting distraction behavior of driver of hazardous chemical substance transport vehicle | |
CN114067268A (en) | Method and device for detecting safety helmet and identifying identity of electric power operation site | |
CN113408630A (en) | Transformer substation indicator lamp state identification method | |
CN113723176A (en) | Target object determination method and device, storage medium and electronic device | |
Prakash et al. | Automatic feature extraction and traffic management using machine learning and open CV model | |
CN111339999A (en) | Image processing system and method for visual navigation robot | |
CN114693556B (en) | High-altitude parabolic frame difference method moving object detection and smear removal method | |
CN113454649B (en) | Target detection method, apparatus, electronic device, and computer-readable storage medium | |
CN115908886A (en) | Image classification method, image processing apparatus, and storage device | |
Pagire et al. | Underwater fish detection and classification using deep learning | |
CN112907553A (en) | High-definition image target detection method based on Yolov3 | |
CN112418055A (en) | Scheduling method based on video analysis and personnel trajectory tracking method | |
CN110443197A (en) | A kind of visual scene intelligent Understanding method and system | |
Połap et al. | Lightweight CNN based on Spatial Features for a Vehicular Damage Detection System | |
CN114613007A (en) | Examinee abnormal behavior detection method based on deep learning | |
CN109598213B (en) | Face orientation aggregation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20201117 |