CN111230856A - Robot based on FPGA target recognition - Google Patents

Robot based on FPGA target recognition

Info

Publication number
CN111230856A
CN111230856A
Authority
CN
China
Prior art keywords
target
robot
fpga
design
circuit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811439161.5A
Other languages
Chinese (zh)
Inventor
张亮
龚程程
徐宝林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University filed Critical Tianjin Polytechnic University
Priority to CN201811439161.5A priority Critical patent/CN111230856A/en
Publication of CN111230856A publication Critical patent/CN111230856A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot based on FPGA target recognition. The system is divided into two parts: a hardware design and a software design. When red, green, or blue targets enter the camera's field of view, all three colors can be recognized simultaneously, the results are displayed on the liquid crystal screen, and a target of any one of these colors can be tracked. The system can still recognize the target when the illumination changes or the target is partially occluded, and at a motor speed of 45 r/min the tracked robot can follow the target. The invention is widely applicable to the recognition and tracking of colored targets by home service robots. The product is easy to operate, has low power consumption, and is consistent with the goals of energy conservation and emission reduction.

Description

Robot based on FPGA target recognition
Technical Field
The invention relates to a robot based on FPGA target recognition, mainly applied to the recognition and tracking of colored targets. When red, green, or blue targets enter the camera's field of view, all three colors can be recognized simultaneously, the results are displayed on the liquid crystal screen, and a target of any one of these colors can be tracked. The system can still recognize the target when the illumination changes or the target is partially occluded, and at a motor speed of 45 r/min the tracked robot can follow the target. The invention is widely applicable to the recognition and tracking of colored targets by home service robots.
Background
At present, with the rapid development of science and technology, target recognition and tracking systems are widely applied in many fields. Target recognition and tracking play an important role in computer vision: a computer vision system preprocesses the digital images acquired by a camera, analyzes the preprocessed results, and selects a suitable algorithm so that a machine can perceive its external environment and interact with people; machine vision is, for a machine, what eyes are for a human being. The system first recognizes the tracked object and then tracks it according to the recognition result. A target object can be recognized by features such as its size, color, and shape, and tracking is finally achieved from the computed shape features and coordinates of the target. For example, in the medical field there are detection of the shape and color of medicines, technologies that assist doctors in medical image analysis, and oral medical imaging; in the industrial field, automatic identification in industrial production; in the military field, recognition of air, ground, and sea targets followed by precision-guided strikes; in intelligent security monitoring, recognition of license plates, vehicles, and passersby; and in sports and entertainment, recognition and tracking of targets in film production, where markers attached to actors allow the actors to be tracked, as well as recognition and tracking of stage lighting and of athletes. In today's society, robots that integrate advanced technologies such as automation, mechanics, artificial intelligence, and computing have become an important force driving the "Industry 4.0" process. The robot based on FPGA target recognition fits the background of the "Industry 4.0" era.
Disclosure of Invention
In order to overcome the shortcomings of traditional target recognition approaches, the invention provides a robot based on FPGA target recognition, which recognizes and tracks colored targets. The FPGA is mainly responsible for image acquisition, image storage, image recognition, and image display, and for sending the target centroid coordinates computed on the FPGA to the main control board of the tracked robot through a USART serial port; the Atmega64 single-chip microcomputer is mainly responsible for receiving the target centroid coordinates sent by the FPGA over the USART serial port and for generating the corresponding PWM (pulse-width modulation) signals with a PID (proportional-integral-derivative) control algorithm to drive the motors so that the tracked robot follows the target.
The technical solution adopted by the invention to solve its technical problem is as follows: a robot based on FPGA target recognition, where the system is divided into two parts, a hardware design and a software design. The hardware design is divided into the hardware design of the robot vision system and the hardware design of the robot mobile system. The hardware design of the robot vision system uses the EP4CE6F17C8 as its core processor and includes a power supply circuit, an OV7670 camera interface circuit, an SDRAM memory circuit, a 7-inch TFT LCD interface circuit, a USART serial interface circuit, and so on; the hardware design of the robot mobile system uses the Atmega64 single-chip microcomputer as its core controller and includes a power supply circuit, a motor driver circuit, a USART serial interface circuit, and so on. The software design is likewise divided into the robot vision system and the robot mobile system. The robot vision system collects image data with the OV7670 camera, writes the collected image data into SDRAM, reads the data back from SDRAM, recognizes the target with a color threshold algorithm in RGB space, calculates the centroid coordinates of the target, and finally sends the centroid coordinates to the main control board of the robot mobile system over the USART serial port; the robot mobile system uses the Atmega64 single-chip microcomputer as its core processor, receives the target centroid row coordinate over the USART serial port, takes the difference between it and the row coordinate of the liquid crystal screen center, and outputs the corresponding PWM through a PID control algorithm so that the tracked robot accurately follows the target.
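The patent does not include source code; the following C sketch is a hypothetical behavioral model of the color-threshold recognition and centroid computation described above, operating on a buffered RGB565 frame. The thresholds, frame size, and function names are illustrative assumptions and are not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

#define IMG_W 640
#define IMG_H 480

/* Hypothetical RGB565 thresholds for a "red" target; real values would
 * be tuned per camera and lighting condition (assumption). */
#define R_MIN 20u   /* 5-bit red channel   */
#define G_MAX 20u   /* 6-bit green channel */
#define B_MAX 10u   /* 5-bit blue channel  */

/* Classify one RGB565 pixel with a per-channel threshold test,
 * mirroring the color threshold algorithm in RGB space. */
static int is_red(uint16_t px)
{
    uint16_t r = (px >> 11) & 0x1F;
    uint16_t g = (px >> 5)  & 0x3F;
    uint16_t b =  px        & 0x1F;
    return r >= R_MIN && g <= G_MAX && b <= B_MAX;
}

/* Scan a frame, accumulate the indices of matching pixels, and return
 * the centroid (row, col) of the detected region; returns 0 if no
 * pixel matched. */
int find_centroid(const uint16_t *frame, int *row, int *col)
{
    uint32_t count = 0, sum_r = 0, sum_c = 0;
    for (int r = 0; r < IMG_H; r++) {
        for (int c = 0; c < IMG_W; c++) {
            if (is_red(frame[(size_t)r * IMG_W + c])) {
                count++;
                sum_r += (uint32_t)r;
                sum_c += (uint32_t)c;
            }
        }
    }
    if (count == 0)
        return 0;
    *row = (int)(sum_r / count);
    *col = (int)(sum_c / count);
    return 1;
}
```

On the actual robot this logic runs in FPGA fabric on the pixel stream rather than on a buffered frame, but the arithmetic is the same: accumulate the row and column indices of the pixels that pass the threshold and divide by the pixel count.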
The beneficial effect of the invention is that, by combining the FPGA with the Atmega64 single-chip microcomputer, it is mainly applied to the recognition and tracking of colored targets. When red, green, or blue targets enter the camera's field of view, all three colors can be recognized simultaneously, the results are displayed on the liquid crystal screen, and a target of any one of these colors can be tracked, so the invention has broad application value and market prospects.
Drawings
FIG. 1 is a general system block diagram
FIG. 2 is an overall scheme diagram of the robot vision system
Detailed Description
The following description of specific embodiments, with reference to the drawings, further illustrates how the invention may be implemented.
In FIG. 1, the robot based on FPGA target recognition is divided into two parts: a hardware design and a software design. The hardware design is divided into the hardware design of the robot vision system and the hardware design of the robot mobile system. The hardware design of the robot vision system uses the EP4CE6F17C8 as its core processor and includes a power supply circuit, an OV7670 camera interface circuit, an SDRAM memory circuit, a 7-inch TFT LCD interface circuit, a USART serial interface circuit, and so on; the hardware design of the robot mobile system uses the Atmega64 single-chip microcomputer as its core controller and includes a power supply circuit, a motor driver circuit, a USART serial interface circuit, and so on. The software design is likewise divided into the robot vision system and the robot mobile system. The robot vision system collects image data with the OV7670 camera, writes the collected image data into SDRAM, reads the data back from SDRAM, recognizes the target with a color threshold algorithm in RGB space, calculates the centroid coordinates of the target, and finally sends the centroid coordinates to the main control board of the robot mobile system over the USART serial port; the robot mobile system uses the Atmega64 single-chip microcomputer as its core processor, receives the target centroid row coordinate over the USART serial port, takes the difference between it and the row coordinate of the liquid crystal screen center, and outputs the corresponding PWM through a PID control algorithm so that the tracked robot accurately follows the target.
In FIG. 1, the hardware design of the tracked robot vision system mainly includes: the power module circuit, the camera interface circuit, the liquid crystal screen interface circuit, the SDRAM memory circuit, the USART serial interface circuit, and the JTAG interface circuit used for program download. The power circuit supplies power to the tracked robot vision system. The whole vision system uses five supply voltages: 5 V, 3.3 V, 2.8 V, 2.5 V, and 1.2 V. The 5 V supply is provided by the main control board of the tracked robot mobile system; an LM1117-3.3 regulator steps it down and outputs a stable 3.3 V that supplies the CMOS-SCLK and CMOS-SDAT lines of the OV7670 camera, the SDRAM memory circuit, the 7-inch TFT liquid crystal screen circuit, the USART serial circuit, the ADP323 regulator circuit, the power indicator circuit, and so on. Camera selection is determined mainly by the target recognition and tracking system: according to the system functions, a camera of small size, low price, low power consumption, and low resolution is chosen, namely the OV7670 camera produced by OmniVision. SDRAM is synchronous dynamic random access memory: "synchronous" means that the memory clock has the same frequency as the processor bus clock; "dynamic" means that the memory contents must be refreshed continuously, recharging the storage capacitors so that data are not lost; "random" means that the memory contents can be accessed in any order. Internally the SDRAM is divided into four banks (L-banks); addressing first selects the bank, then the row address, and finally the column address that determines the addressed cell. The FPGA image processing system requires a fast, high-speed memory, so one SDRAM chip is used, model H57V2562GTR-75C, packaged in a 54-pin TSOP and operating at 3.3 V. The liquid crystal screen is a 7-inch TFT LCD, model A070VW05V2, powered at 5 V, with backlight control. R, G, and B each occupy 8 bits, but the system only uses the high 5 bits of R, the high 6 bits of G, and the high 5 bits of B; the screen is connected to the robot vision system board through an FPC ribbon cable. The USART serial interface circuit sends the target position coordinates computed on the FPGA to the main control board of the tracked robot mobile system over the USART serial port, and the tracked robot tracks the target according to the received target position information.
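As a concrete illustration of the "high 5 / high 6 / high 5" bit usage mentioned above, the following C sketch packs an 8-bit-per-channel pixel into the 16-bit RGB565 word driven to the LCD. This is a minimal sketch written for this description; the function name and calling convention are assumptions, not taken from the patent.

```c
#include <stdint.h>

/* Pack 8-bit R, G, B samples into RGB565: keep only the high 5 bits
 * of R, the high 6 bits of G, and the high 5 bits of B, matching the
 * LCD interface described above. */
static inline uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8u) << 8) |  /* R[7:3] -> bits 15..11 */
                      ((g & 0xFCu) << 3) |  /* G[7:2] -> bits 10..5  */
                      ((b & 0xF8u) >> 3));  /* B[7:3] -> bits 4..0   */
}
```

Dropping the low bits of each channel loses some color depth but halves the per-pixel storage, which matters when whole frames are buffered in SDRAM.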
In FIG. 1, the hardware design of the tracked robot mobile system includes: the power supply circuit, the main control system circuit, the motor interface circuit, the USART interface circuit, the indicator circuit, the motor driver circuit, and so on. The power module mainly supplies the main control board of the tracked robot mobile system. The power supply is simple: a 7.2 V rechargeable lithium battery is used directly, and an LM1117 regulator outputs 5 V for the single-chip microcomputer, the motor interface circuit, the USART serial circuit, the indicator lamp circuit, and the buzzer; to keep the LM1117 output at a stable 5 V, a 100 µF tantalum capacitor is added at each of its two output terminals. The single-chip microcomputer used in the main control system circuit is the AVR-series ATmega64, a low-power, high-performance 8-bit CMOS microcontroller. The motor interface circuit connects the ATmega64 to the motor driver circuit, allowing the microcontroller to control the speed and direction of the motors. The mobile system main control board provides two motor driver interfaces, each with five pins: a PWM pin, two pins that change the motor state, a 5 V supply pin, and a ground pin; the PWM pin is used to change the speed and direction of rotation of the motor. The motor driver circuit is an H-bridge consisting of two half-bridge driver chips and four IRF3205 MOSFETs. The motor driver board is powered by a 7.2 V rechargeable lithium battery. The mobile system is therefore a system in which the ATmega64 microcontroller controls the rotation of the motors. Because the required moving speed is not high and the robot is light, brushed DC motors are selected; at 7.2 V their speed is about 90 r/min. The robot body is supported by two steel plates; each plate rests on three wheels per side, the three wheels on the left and right sides being identical, each side consisting of one drive wheel and two load-bearing wheels. The power supplies for the main control board and the motors sit on the bottom plate; the robot's main control board is mounted on the top plate with copper standoffs, and the motor driver board is connected to the main control board with copper standoffs. The robot moves by controlling the rotation of the motors.
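The patent names a PID control algorithm that converts the centroid error into motor PWM but gives no code; the following C sketch shows one possible, hypothetical implementation of that loop on an AVR-class microcontroller. The gains, the screen-center constant, and the helper `set_motor_pwm()` are illustrative assumptions and do not come from the patent.

```c
#include <stdint.h>

#define SCREEN_CENTER 400   /* assumed center coordinate of the 7-inch LCD */
#define PWM_MAX       255   /* 8-bit PWM resolution (assumption) */

/* Illustrative PID gains; real values would be tuned on the robot. */
static const float KP = 0.8f, KI = 0.01f, KD = 0.2f;

static float integral = 0.0f;
static float prev_err = 0.0f;

/* Stub for this sketch; on the actual board this would load the timer
 * compare registers that generate the left/right motor PWM (assumption). */
static void set_motor_pwm(int16_t left, int16_t right)
{
    (void)left;
    (void)right;
}

/* Called once per centroid coordinate received from the FPGA. */
void track_step(int16_t centroid)
{
    float err = (float)(centroid - SCREEN_CENTER);  /* error vs. screen center */
    integral += err;
    float deriv = err - prev_err;
    prev_err = err;

    float u = KP * err + KI * integral + KD * deriv;

    /* Clamp the correction to the PWM range. */
    if (u >  PWM_MAX) u =  PWM_MAX;
    if (u < -PWM_MAX) u = -PWM_MAX;

    /* Differential drive: steer toward the target by speeding up one
     * track and slowing the other. */
    int16_t base = PWM_MAX / 2;
    set_motor_pwm((int16_t)(base + u), (int16_t)(base - u));
}
```

The essential idea matches the description: the difference between the received centroid coordinate and the screen-center coordinate is the error fed to the PID controller, and its output sets the PWM that steers the tracked robot.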
In FIG. 2, image acquisition and display in the software design of the tracked robot vision system consists of five modules: a clock and reset module, an image acquisition module, an image storage module, a target identification and marking module, and an image display module. At present, most image acquisition, processing, and display systems use a low-cost, low-power CMOS camera to capture images, which are then shown on a liquid crystal screen; because acquisition and display run in different clock domains, an SDRAM is placed between them as a buffer so that large amounts of data can be transferred across clock domains. System clock and reset module: two D flip-flops synchronize the asynchronous reset signal, and when the reset signal jitters the cascaded flip-flops also provide some filtering. The development board provides a 50 MHz clock, while the camera, liquid crystal screen, and SDRAM need driving clocks of 26 MHz, 26 MHz, and 100 MHz respectively, so the required driving clocks are generated by configuring a phase-locked loop. Register configuration module: the camera's internal registers are initialized over the SCCB bus so that the camera outputs data in the configured mode. Read/write asynchronous FIFO module: image data collected by the camera are written into SDRAM for storage and then read back from SDRAM for display on the liquid crystal screen; these transfers cross clock domains, and the FIFOs solve the cross-clock-domain transfer problem and effectively avoid metastability during transmission (a behavioral sketch of such a buffer follows this paragraph). Target identification and marking module: an algorithm that classifies pixels by thresholds in RGB space is selected; after the target is identified, its centroid coordinates are computed and the target is marked with a square of the corresponding color. Liquid crystal screen display module: it is mainly used to see the recognition result directly; while debugging the target recognition program and while the tracked robot follows the target, the liquid crystal screen shows whether the robot has identified the target and is tracking it accurately, so a portable liquid crystal screen is selected and mounted on the tracked robot.
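The following C code is only a single-threaded behavioral model of the FIFO buffering idea, written to make the producer/consumer decoupling concrete. The real design is a dual-clock FIFO built from FPGA block RAM with Gray-coded read/write pointers, which plain C cannot express; the depth and element width here are assumptions.

```c
#include <stdint.h>

#define FIFO_DEPTH 256u           /* power of two, so index masking works */

typedef struct {
    uint16_t buf[FIFO_DEPTH];     /* one RGB565 pixel per entry */
    uint32_t wr;                  /* total pixels written (camera side) */
    uint32_t rd;                  /* total pixels read (SDRAM side) */
} pixel_fifo;

/* Producer side: the camera pushes pixels as they arrive. */
int fifo_push(pixel_fifo *f, uint16_t px)
{
    if (f->wr - f->rd == FIFO_DEPTH)      /* full: camera data would be dropped */
        return 0;
    f->buf[f->wr & (FIFO_DEPTH - 1u)] = px;
    f->wr++;
    return 1;
}

/* Consumer side: the SDRAM controller pops pixels in bursts. */
int fifo_pop(pixel_fifo *f, uint16_t *px)
{
    if (f->wr == f->rd)                   /* empty: nothing to burst into SDRAM */
        return 0;
    *px = f->buf[f->rd & (FIFO_DEPTH - 1u)];
    f->rd++;
    return 1;
}
```

In the actual FPGA, one such FIFO sits between the camera clock domain and the SDRAM write port and another between the SDRAM read port and the LCD clock domain, with the pointers compared only after being synchronized into the other clock domain.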
In FIG. 2, the image acquisition module is the first stage of the FPGA-based target recognition robot system and determines the quality of the subsequent target recognition. First, the camera is initialized over the SCCB bus; once configuration succeeds, the camera acquires images in the configured mode and outputs 16-bit pixel data for use by the downstream modules. The OV7670 camera is initialized over the SCCB bus: the FPGA configures 170 internal registers of the camera through the SCCB bus, setting, among other things, the output data format, whether the driving clock is internal or external, and the pixel data output, and the camera then acquires video images according to this configuration. After the camera has been initialized, image data acquisition follows; this module mainly counts the acquired image frames and converts the 8-bit data output by the camera into 16-bit RGB565 data. The OV7670 data manual clearly states that, after initial configuration, the first 10 frames acquired by the camera are not stable, so for the reliability of subsequent image processing the first 10 frames are discarded and storage and processing begin with the 11th frame. The image buffer module uses SDRAM as a buffer to exchange data across clock domains.
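The byte-pairing and frame-discard behavior described above can be summarized with a small C sketch; it is a hypothetical behavioral model (the actual logic is FPGA hardware), and the names, the byte order, and the callback `store_pixel()` are assumptions.

```c
#include <stdint.h>

#define DISCARD_FRAMES 10u   /* first 10 frames are unstable per the OV7670 manual */

static uint32_t frame_count = 0;
static uint8_t  byte_phase  = 0;    /* 0 = waiting for high byte, 1 = for low byte */
static uint8_t  high_byte   = 0;

/* Hypothetical sink for assembled pixels (e.g. the write FIFO). */
static void store_pixel(uint16_t rgb565) { (void)rgb565; }

/* Call at every camera frame boundary (VSYNC). */
void on_frame_start(void)
{
    frame_count++;
    byte_phase = 0;
}

/* Call for every 8-bit byte the camera outputs; two consecutive bytes
 * form one 16-bit RGB565 pixel (high byte first, an assumption that
 * depends on the configured output order). */
void on_camera_byte(uint8_t d)
{
    if (frame_count <= DISCARD_FRAMES)   /* drop the unstable startup frames */
        return;
    if (byte_phase == 0) {
        high_byte = d;
        byte_phase = 1;
    } else {
        store_pixel((uint16_t)(((uint16_t)high_byte << 8) | d));
        byte_phase = 0;
    }
}
```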
With the structure described above, the invention provides a robot design based on FPGA target recognition. The design is simple and is mainly applied to the recognition and tracking of colored targets. When red, green, or blue targets enter the camera's field of view, all three colors can be recognized simultaneously, the results are displayed on the liquid crystal screen, and a target of any one of these colors can be tracked.

Claims (2)

1. A robot based on FPGA target recognition, the system adopting a modular design and being divided into two parts, a hardware design and a software design; characterized in that the hardware design is divided into the hardware design of a robot vision system and the hardware design of a robot mobile system; the hardware design of the robot vision system uses the EP4CE6F17C8 as its core processor and includes a power supply circuit, an OV7670 camera interface circuit, an SDRAM (synchronous dynamic random access memory) circuit, a 7-inch TFT LCD (thin-film-transistor liquid crystal display) interface circuit, a USART serial interface circuit, and the like; the hardware design of the robot mobile system uses an Atmega64 single-chip microcomputer as its core controller and includes a power supply circuit, a motor driver circuit, a USART serial interface circuit, and the like; the software design is divided into the robot vision system and the robot mobile system, wherein the robot vision system acquires image data with the OV7670 camera, writes the acquired image data into the SDRAM, then reads the data back from the SDRAM, recognizes a target with a color threshold algorithm in RGB space, calculates the centroid coordinates of the target, and finally sends the centroid coordinates to a main control board of the robot mobile system over a USART serial port; and the robot mobile system uses the Atmega64 single-chip microcomputer as its core processor, receives the target centroid row coordinate over the USART serial port, takes the difference between it and the row coordinate of the liquid crystal screen center, and outputs the corresponding PWM through a PID control algorithm so that the tracked robot accurately tracks the target.
2. The robot based on FPGA target recognition of claim 1, characterized in that the FPGA is combined with the Atmega64 single-chip microcomputer; the FPGA is mainly responsible for image acquisition, image storage, image recognition, and image display, and for sending the target centroid coordinates computed on the FPGA to the main control board of the tracked robot over a USART serial port; and the single-chip microcomputer is mainly responsible for receiving the target centroid coordinates sent by the FPGA over the USART serial port and for generating corresponding PWM (pulse-width modulation) signals with a PID (proportional-integral-derivative) control algorithm to drive the motors so that the tracked robot tracks the target.
CN201811439161.5A 2018-11-28 2018-11-28 Robot based on FPGA target recognition Pending CN111230856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811439161.5A CN111230856A (en) 2018-11-28 2018-11-28 Robot based on FPGA target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811439161.5A CN111230856A (en) 2018-11-28 2018-11-28 Robot based on FPGA target recognition

Publications (1)

Publication Number Publication Date
CN111230856A (en) 2020-06-05

Family

ID=70862068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811439161.5A Pending CN111230856A (en) 2018-11-28 2018-11-28 Robot based on FPGA target recognition

Country Status (1)

Country Link
CN (1) CN111230856A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103256A (en) * 2021-04-22 2021-07-13 达斯琪(重庆)数字科技有限公司 Service robot vision system


Similar Documents

Publication Publication Date Title
CN108197547B (en) Face pose estimation method, device, terminal and storage medium
WO2021147964A1 (en) Rotary display device and control method therefor, and rotary display system
CN107749286A (en) Display screen parameter read-in method and device
CN111230856A (en) Robot based on FPGA target recognition
CN211956466U (en) Storage mainboard based on processor soars
CN204795410U (en) Handy panorama shooting device
CN204069175U (en) A kind of video frequency following system based on machine vision technique
CN205563453U (en) Sight is tracked and people's eye zone location system interested
CN209070376U (en) A kind of holder tracing system based on image recognition
CN202309974U (en) FPGA (Field Programmable Gate Array)-based intelligent vehicle-mounted 360-degree panoramic imaging system
CN102945199A (en) System and method for intelligently detecting display card working condition
CN203839036U (en) Control circuit of wireless intelligent exhibition hall
CN206517531U (en) Overall view monitoring system
CN114115054A (en) Online detection robot control system based on neural network
CN114612523A (en) FPGA-based dynamic target tracking system and detection method thereof
CN207503284U (en) Image Edge-Detection system
Viwatpinyo et al. The Automatic Car to Implementation of Lane Detective using Raspberry Pi 3 Model B on OpenCV
Wang et al. Study on a real-time image object tracking system
Kao et al. Head pose recognition in advanced Driver Assistance System
CN100549620C (en) Miniature size measurer based on embedded system
CN205283736U (en) Embedded portable multi -functional image acquisition system based on ARM
CN220171649U (en) Device for detecting road obstacle in real time based on FPGA
CN112689121A (en) Motion tracking system based on FPGA
CN211827319U (en) Sample data generation system for machine learning
Zhang [Retracted] Research on Recognition Method of Basketball Goals Based on Image Analysis of Computer Vision

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200605