CN113103256A - Service robot vision system - Google Patents

Service robot vision system

Info

Publication number
CN113103256A
CN113103256A (application CN202110438347.4A)
Authority
CN
China
Prior art keywords
image
service robot
target
visual
vision system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110438347.4A
Other languages
Chinese (zh)
Inventor
林海
林灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daski Chongqing Digital Technology Co ltd
Original Assignee
Daski Chongqing Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daski Chongqing Digital Technology Co ltd filed Critical Daski Chongqing Digital Technology Co ltd
Priority to CN202110438347.4A priority Critical patent/CN113103256A/en
Publication of CN113103256A publication Critical patent/CN113103256A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a service robot vision system composed of a hardware part and a software part. The hardware part comprises an image acquisition system, an image signal processing part, a liquid crystal display, a power supply system and a visual tracking part; the image acquisition system captures external images and transmits the acquired image signals to the DSP image-processing chip. The beneficial effects of the invention are: the system can identify a target object according to a human instruction and track the target object as it moves, realizing the vision-system function of a service robot and providing an important building block of a complete robot system for future realization; and when features are extracted from the target image, a bilinear interpolation algorithm scales the image, a convolution kernel performs convolution on it, and the size of each feature map of the processed image is determined, which improves the accuracy of the robot's target identification.

Description

Service robot vision system
Technical Field
The invention belongs to the technical field of service robots, and particularly relates to a service robot vision system.
Background
With the development of computer science and control technology, more and more intelligent robots of different types are appearing in factories and in daily life. As an important subsystem of an intelligent robot, the robot vision system is receiving increasing attention; it involves image processing, pattern recognition, visual tracking and related fields. Because different kinds of robots have different work emphases, their vision systems differ somewhat in software or hardware.
A vision system is a very complex system: it must acquire and process images accurately, respond to external changes in real time, and track externally moving objects in real time. Vision systems therefore place high demands on both the hardware and the software. In the currently popular soccer-robot technology, the vision system is a typical fast-recognition-and-reaction type and achieves satisfactory results in that application; its hardware design adopts the typical camera, CPLD or FPGA, and DSP structure for image acquisition and processing. Players and targets are identified by a color-mark calibration method, a common seed-filling method is used for image processing, and an image Jacobian matrix derived from the image optical-flow equation realizes adaptive tracking in the vision system.
The goal is therefore to identify a target object according to a human instruction, track the target object as it moves, realize the vision-system function of a service robot, and provide an important building block of a complete robot system for future realization.
Disclosure of Invention
The invention aims to provide a service robot vision system that can identify a target object according to a human instruction, track the target object as it moves, realize the vision-system function of a service robot, and provide an important building block of a complete robot system for future realization.
In order to achieve the purpose, the invention provides the following technical scheme: a service robot vision system is composed of a hardware part and a software part, wherein,
the hardware part comprises an image acquisition system, an image signal processing system, a liquid crystal display, a power supply system and a visual tracking system;
the image acquisition system captures external images and transmits the acquired image signals to the DSP image-processing chip;
digital signal processing (DSP) uses a computer or dedicated digital equipment to analyze, synthesize, transform, estimate and recognize signals, extracting useful information and transmitting and applying it effectively;
the power supply system supplies power to the whole system;
the liquid crystal display shows the image acquired by the image sensor, so that the target-tracking state of the vision system can be observed intuitively in real time;
the visual tracking part performs adaptive tracking of the target object by the vision system when the target object undergoes relative translation;
the software part comprises timing control software, image processing and recognition software, and tracking control software;
the timing control software handles the timing of image data transfer and the control of the liquid crystal display;
the image processing and recognition software enables the robot to identify targets rapidly and in real time by combining color and shape.
As a preferred technical scheme of the invention, the image acquisition system comprises an image sensor circuit, a FIFO memory circuit and a CPLD timing-control circuit.
As a preferred technical scheme of the invention, the image signal processing part comprises the DSP chip circuit, a flash memory circuit for boot loading, and an SDRAM circuit for storing data during processing.
As a preferred technical scheme of the invention, the image acquisition system is connected to the image signal processing part and to the liquid crystal display, respectively.
As a preferred technical scheme of the invention, the power supply system is connected to the image acquisition system, the image signal processing part and the liquid crystal display, respectively.
As a preferred embodiment of the present invention, the visual system performs recognition by using a recognition algorithm based on global feature vectors.
As a preferred technical solution of the present invention, a service robot vision recognition determination method is as follows:
Step one: collect a target image captured by the service robot;
Step two: compare the acquired target image with a preset reference object to obtain a judgment result;
Step three: display the judgment result.
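To make the three-step flow above concrete, the following is a minimal, non-authoritative Python sketch of how such a capture-compare-display loop could be organized. The function names (capture_frame, display), the assumption that the target and reference images share the same size, and the normalized-correlation comparison criterion are illustrative choices, not part of the disclosure.

```python
import numpy as np


def compare_with_reference(target_img: np.ndarray, reference_img: np.ndarray,
                           threshold: float = 0.8) -> bool:
    """Compare an acquired target image with a preset reference image.

    A normalized cross-correlation score is used here as a stand-in for
    whatever comparison criterion the vision system actually applies;
    both images are assumed to have the same shape.
    """
    t = (target_img - target_img.mean()) / (target_img.std() + 1e-9)
    r = (reference_img - reference_img.mean()) / (reference_img.std() + 1e-9)
    score = float(np.mean(t * r))
    return score >= threshold


def recognition_determination(capture_frame, reference_img, display):
    # Step 1: collect the target image shot by the service robot.
    target_img = capture_frame()
    # Step 2: compare it with the preset reference object.
    matched = compare_with_reference(target_img, reference_img)
    # Step 3: display the judgment result.
    display("target recognized" if matched else "target not recognized")
    return matched
```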
As a preferred embodiment of the present invention, in the feature extraction of the target image, the image is scaled using a bilinear interpolation algorithm, convolution is performed on the image with a convolution kernel, and the size of each feature map of the processed image is determined.
Compared with the prior art, the invention has the beneficial effects that:
(1) the system can identify a target object according to a human instruction and track the target object as it moves, realizing the vision-system function of a service robot and providing an important building block of a complete robot system for future realization;
(2) when features are extracted from the target image, a bilinear interpolation algorithm scales the image, a convolution kernel performs convolution on it, and the size of each feature map of the processed image is determined, which improves the accuracy of the robot's target identification.
Drawings
FIG. 1 is a system hardware block diagram of the present invention;
FIG. 2 is a flow chart of the system design of the present invention;
FIG. 3 is a block diagram of an image acquisition circuit of the present invention;
FIG. 4 is a block diagram of an image processing circuit of the present invention;
FIG. 5 is a diagram of a visual servo system according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 2, fig. 3, fig. 4 and fig. 5, the present invention provides a technical solution: a service robot vision system is composed of a hardware part and a software part, wherein,
the hardware part comprises an image acquisition system, an image signal processing system, a liquid crystal display, a power supply system and a visual tracking system;
the image acquisition system is the front end of the whole acquisition chain: it captures the external image and transmits the acquired image signal to the DSP image-processing chip. An important criterion the vision system uses to identify a target object is the target's color, so a color image sensor is chosen; the image sensor acquires the external image and then outputs a digital image signal. The OV7635 has a master mode and a slave mode. In master mode the image sensor needs its own crystal oscillator, and the outside only initializes the sensor through its IIC port at power-up; once running, the sensor always works according to its own fixed operating mode. In slave mode the signal output is controlled throughout by external control signals and the sensor's own crystal oscillator is not needed. The digital signal output by the image sensor is first sent to a FIFO memory for buffering: the OV7635 can reach 30 frames per second in VGA mode, a rate the DSP image processor cannot always match, so a data buffer is required between the image digital signal and the image-processing chip. The capacity of the memory must also be considered when it is selected; to reduce the multiplexing of address lines, the AL422B, a FIFO memory that needs no address lines, is chosen. For the required timing control the system adopts the CPLD chip EPM7064, designed with the VHDL language together with graphical (schematic) hardware entry.
A CPLD (complex programmable logic device) is a type of integrated circuit developed in the 1980s from which the user can construct digital circuits: the logic-gate and flip-flop resources inside the chip can be configured and interconnected by the user to realize a specific user logic function. Compared with traditional standard logic devices (such as 74-series TTL devices), which can only realize fixed functions, a PLD can be repeatedly modified and reused, and is therefore more flexible and competitive in meeting special, individualized design requirements. The VHDL language supports top-down and library-based design methods as well as the design of synchronous circuits, asynchronous circuits, FPGAs and other random logic circuits, a design scope that other HDL languages cannot match, and at present almost all EDA tools support VHDL to varying degrees. VHDL can describe the hardware function of a system at multiple levels, from a mathematical model of the system down to gate-level circuits; in addition, high-level behavioral description can be mixed with register-transfer-level (RTL) description and structural description. VHDL also supports system-level hardware description, which is one of its most outstanding advantages, and allows user-defined data types, so design results are easy to reuse and exchange, which in turn further promotes the popularization and refinement of the VHDL language.
The image signal processing part comprises the DSP chip circuit, a flash memory circuit for boot loading, and an SDRAM circuit for storing data during processing. Digital signal processing (DSP) uses a computer or dedicated digital equipment to analyze, synthesize, transform, estimate and recognize signals, extracting useful information and transmitting and applying it effectively. Because of the huge amount of image data, the required processing speed and the special algorithms involved, the DSP chip selected for this system is the TMS320VC5509A. The main characteristics of the TMS320VC5509A are:
the TMS320C55X family is developed on the basis of the C54X and is compatible with it; the C55X enhances the computational capability of the DSP by adding functional units, giving higher performance and lower power consumption than the C54X.
Comparing the C55X hardware with the C54X:
Multiply-accumulate units (MAC): the C54X has 1, the C55X has 2;
Accumulators (ACC): the C54X has 2, the C55X has 4;
Read buses: the C54X has 2, the C55X has 3;
Write buses: the C54X has 1, the C55X has 2;
Address buses: the C54X has 4, the C55X has 6;
Instruction word length: the C54X uses 16 bits, the C55X uses 8/16/24/32/40/48 bits;
Data word length: the C54X uses 16 bits, the C55X uses 16 bits;
Arithmetic logic units (ALU): the C54X has 1 (40-bit), the C55X has 2 (one 40-bit and one 16-bit);
Auxiliary registers: the C54X has 8, the C55X has 8;
Storage space: the C54X has separate program/data spaces, the C55X has a unified program/data space; in addition, the C54X has no temporary registers while the C55X has 4. For these reasons the system selects TI's TMS320VC5509A DSP as the image-processing and main control chip. The DSP chip used in the system has no on-chip FLASH and cannot be programmed in-circuit, so once the system is powered down all data are lost. To address this, a FLASH memory SA25F005 with an SPI bus interface is adopted; after the DSP is powered on and reset, the loading mode of the DSP is determined by sampling GPIO0-GPIO3 of the DSP, and data exchange with the FLASH memory is performed through the multifunctional port of the DSP. The boot-load design of the DSP target system has two distinct advantages:
the application program code can be stored in relatively slow external program memory (such as FLASH or EPROM) outside the chip, whose contents are retained after power-off; after power-on, the boot loader moves the program code into program memory inside or outside the DSP chip for execution, and mask programming of the ROM inside the DSP chip can be omitted;
the boot-load function therefore brings great convenience to DSP users designing stand-alone systems. There are several boot-load modes to suit different applications, mainly the parallel I/O-port boot mode, the serial-port boot mode, the HPI boot mode and the external parallel boot mode; this system adopts the SPI serial-port boot-load mode.
Because the internal storage resources of the TMS320VC5509A are 128K x 16-bit of on-chip RAM (32K x 16-bit DARAM and 96K x 16-bit SARAM), and a large amount of intermediate data must be stored during image processing while on-chip resources are relatively limited, the system uses a piece of SDRAM as off-chip data storage to relieve the pressure on the DSP's internal resources. Cost-performance was the main factor when selecting the memory, and an SDRAM device was adopted; the typical characteristic of this chip is its high cost-performance ratio.
A digital signal processing system is a product of electronic technology, signal processing technology and computer technology. Its design is generally divided into a signal processing part and a non-signal processing part: the signal processing part covers the input and output of the system, the arrangement and processing of data, the realization of the various algorithms, and data display and transmission; the non-signal processing part covers the power supply, structure, cost, volume and reliability. The design process of an application system can roughly be divided into: description of system requirements; signal analysis; design of the signal-processing algorithms; resource analysis; analysis and design of the hardware structure; software design and debugging; and system integration and debugging.
The power supply system supplies power to the whole system. The power supply design is a very important part of the overall system design, and its quality directly determines whether the other circuits succeed. For this vision system, because the DSP chip used for image processing is expensive and fragile, the following points must be considered in the power supply design:
Handling of the DSP core voltage and the general 3.3 V voltage: the DSP has two working voltages, the core voltage and the general pin voltage. If these two voltages are mishandled even slightly, the chip can be damaged. In this system the core voltage is 1.35 V and the general pin voltage is 3.3 V. The power-up sequence of the TMS320VC5509A requires the core voltage to come up first and then the 3.3 V general pin voltage; at power-down the core voltage must likewise be removed first and then the general pin voltage, and the interval between powering the core voltage and the general pin voltage must not exceed 1 second, otherwise the DSP may be permanently damaged or its service life shortened. The system power supply therefore uses the TPS76801QD and TPS75733KTT regulators: the TPS76801QD generates 1.35 V and the TPS75733KTT generates 3.3 V; after the core voltage is established, the PG (power-good) output enables the TPS75733KTT, which then outputs the 3.3 V rail. The inrush current at power-up must also be considered: for the DSP chip the current at the instant of power-up can reach the ampere level, dropping to the mA level only after stable operation is reached. Moreover, because the system uses a TFT liquid crystal display supplied from the same power system, the 3.3 V regulator must tolerate ampere-level output current; the maximum output current of the selected TPS75733KTT reaches 3 A, which meets the requirement. The system design must also provide manual and power-on reset of the DSP chip and adopt a power-supply watchdog (supervisor) function, since whether the power-on reset succeeds determines whether the whole system can work normally.
The liquid crystal display shows the image acquired by the image sensor, so that the target-tracking state of the vision system can be observed intuitively in real time. Using a liquid crystal display brings great convenience to the design, since the working condition of the whole system can be seen directly on the display. Because the system uses a color-identification method similar to that of a soccer robot, i.e. the target object is distinguished by identifying its color, a color TFT liquid crystal panel is used for the display. The panel adopted by the system is the PT035TN01 from INNOLUX; it requires four supply voltages, of which the 3.3 V and 5 V rails can be shared with the image processing system, while the -10 V and -15 V supplies are designed separately.
The visual tracking part performs adaptive tracking of the target object by the vision system when the target object undergoes relative translation;
the software part comprises timing control software, image processing and recognition software, and tracking control software;
the timing control software handles the timing of image data transfer and the control of the liquid crystal display; using a CPLD for timing control is a common approach in image-acquisition circuit design, and the CPLD's JTAG in-system programming brings great convenience and flexibility to the circuit design;
the image processing and recognition software enables the robot to identify targets rapidly and in real time by combining color and shape; to achieve real-time, rapid recognition, the software mainly adopts the color-recognition method now commonly used in soccer robots. The principle of color recognition is as follows:
When a target is identified by color-image segmentation, a suitable color space must first be chosen; the commonly used color spaces include RGB, YUV, HSV and CMY, and the choice of color space directly affects the image segmentation and target recognition results. RGB is the most commonly used color space, in which the luminance is determined by the sum of the R, G and B components. RGB is a non-uniform color space: the perceptual difference between two colors is not linearly proportional to the Euclidean distance between the two points in the space, and the correlation between the R, G and B values is high. For the same color attribute, the RGB values are widely dispersed under different conditions (light source type, intensity and object reflection characteristics), so for identifying a given color it is difficult to determine its threshold and its distribution range in the color space. It is therefore common to choose a color space in which the luminance component can be separated, the most common being YUV and HSV. HSV is a color-perception model close to that of the human eye: H is hue, S is saturation and V is brightness (value). The hue H reflects the color type accurately and is not very sensitive to changes in external illumination, but H and S are both nonlinear transformations of R, G and B with singular points, and near a singular point even a small change in the R, G, B values causes a large jump in the transformed values. YUV is a luminance-chrominance space obtained by a linear transformation of the RGB color space; it was proposed to solve the compatibility problem between color and black-and-white television. Y represents luminance and U, V represent the chrominance (color difference); the importance of the YUV representation is that the luminance signal (Y) and the chrominance signals (U, V) are independent. The color difference is the difference between each primary-color signal (R, G, B) and the luminance signal. The system therefore adopts the YUV color space in its design.
the YUV format has the following relationship with RGB:
Y=0.59G+0.31R+0.11B
V=R-Y
U=B-Y
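Applying these relations per pixel, and thresholding only the U and V components as the following paragraphs describe, could look like the sketch below. The RGB channel ordering and the color_mask helper are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np


def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 image (assumed R, G, B channel order) to YUV
    using Y = 0.59G + 0.31R + 0.11B, U = B - Y, V = R - Y."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.59 * g + 0.31 * r + 0.11 * b
    u = b - y
    v = r - y
    return np.stack([y, u, v], axis=-1)


def color_mask(yuv: np.ndarray, u_range, v_range) -> np.ndarray:
    """Mark pixels whose (U, V) values fall inside the trained threshold box.

    The Y (brightness) component is ignored because it varies strongly
    with illumination, as the description explains.
    """
    u, v = yuv[..., 1], yuv[..., 2]
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
```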
When determining the thresholds, samples are first collected for training, so that the thresholds of the components of several preset colors in the YUV space are obtained;
when the position of a pixel to be judged falls inside this rectangular parallelepiped in the color space, the pixel is considered to belong to the color being sought. The Y value represents brightness and varies greatly, so only the U and V values are considered; when judging a color, threshold vectors for U and V are first established. Because the digital signal of the image sensor in this system is 8 bits, i.e. 1 byte with values from 0 to 255, the system can distinguish at most 8 colors. After color identification the image is segmented with a seed-filling algorithm; the seed filling is carried out together with the color judgment of each pixel. Not all pixels are processed at the outset; instead the image is handled in blocks, and the block size adopted by the system is 32 x 24 pixels, which greatly reduces the total amount of computation. When the center point of a block has the color to be identified, that point is taken as a seed and grown outward, judging the color of the surrounding pixels until the whole block is filled; during this process, shape recognition of the target is performed at the same time (see the sketch below).
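A compact sketch of the block-wise seed filling just described, assuming a 4-connected fill confined to each 32 x 24 block and a precomputed boolean color mask; both are reasonable but not mandated details of the patent.

```python
from collections import deque

import numpy as np


def seed_fill_blocks(mask: np.ndarray, block_w: int = 32, block_h: int = 24):
    """Block-wise seed filling over a boolean color mask.

    For each 32 x 24 block whose center pixel carries the color of interest,
    the center is used as a seed and grown to its 4-connected neighbours
    inside the block, so only candidate blocks are processed in detail.
    """
    h, w = mask.shape
    labels = np.zeros_like(mask, dtype=np.int32)
    regions = 0
    for by in range(0, h - block_h + 1, block_h):
        for bx in range(0, w - block_w + 1, block_w):
            cy, cx = by + block_h // 2, bx + block_w // 2
            if not mask[cy, cx] or labels[cy, cx]:
                continue
            regions += 1
            queue = deque([(cy, cx)])
            labels[cy, cx] = regions
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (by <= ny < by + block_h and bx <= nx < bx + block_w
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = regions
                        queue.append((ny, nx))
    return labels, regions
```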
The system performs recognition with a recognition algorithm based on global feature vectors and obtains the moment features required for constructing the image Jacobian matrix.
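The moment features in question are the standard raw image moments m_pq = sum over x, y of x^p * y^q * I(x, y). The brief sketch below computes the (m10, m01, m00) feature set that appears in the simulation further on; treating exactly this set as the feature vector for the Jacobian is an assumption made here for illustration.

```python
import numpy as np


def raw_moment(binary_img: np.ndarray, p: int, q: int) -> float:
    """Raw image moment m_pq = sum_x sum_y x**p * y**q * I(x, y)
    of a binarized image (x = column index, y = row index)."""
    h, w = binary_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return float(np.sum((xs ** p) * (ys ** q) * binary_img))


def moment_feature_set(binary_img: np.ndarray) -> np.ndarray:
    """Global moment feature vector (m10, m01, m00) used as the visual
    feature set for the tracking controller in this sketch."""
    return np.array([raw_moment(binary_img, 1, 0),
                     raw_moment(binary_img, 0, 1),
                     raw_moment(binary_img, 0, 0)])
```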
After a target object has been recognized, the vision system aligns the lens with the target; once the object moves, the vision system tracks it. The robot visual tracking system is uncalibrated: uncalibrated visual tracking does not require the camera to be calibrated in advance. Instead, the image Jacobian matrix is adjusted online in real time according to the principle of adaptive control, with feedback from two-dimensional image feature information, so the method is insensitive to camera model errors, robot model errors, image errors and image noise.
The object under control is the control system of the robot head. First a target is placed in front of the robot's field of view to acquire a desired image; a desired feature set is extracted from this image and used as the desired input of the visual tracking control system, which defines the visual feature set required by the task. In the real-time control loop, the robot's image sensor acquires a real-time sampled image and a real-time feature set is extracted from it, forming the visual feedback that guides the robot to complete the tracking task. Unlike simple geometric image features, the visual feature set selected by the system is the set of image moments describing the global image.
According to the relation matrix between the change of the moment features and the change of the relative pose, i.e. the image Jacobian matrix, a visual tracking controller is designed using the derived image Jacobian matrix, completing the system's translational tracking of the 3D target object.
To obtain the desired image feature moments of a target object, a teach pendant is usually used to guide the robot into the desired configuration relative to the target, and the desired image under that configuration is acquired. The visual feature set required to complete the task is defined and the desired feature set is extracted from the desired image as the desired input of the visual tracking control system. During real-time control, the vision sensor acquires a real-time sampled image and the real-time visual feature set is extracted from it, thus forming the visual feedback.
Software simulation results of the visual tracking are as follows:
the focal length of the lens is 16 mm, the desired height of the object in the image frame is 160 mm, and the servo control period is 100 ms. The target moves in the camera frame with a three-dimensional translational velocity of Txo = 7 cm/s, Tyo = 8 cm/s and Tzo = 9 cm/s, and the desired moment feature values of the binarized image are (m10, m01, m00) = (-311, -2569, 3816). The control parameters are taken as:
q1 = q2 = 1.0, q3 = 3000.0, K'1 = K'2 = 4.0, K'3 = 600.0,
P' = diag(p'1, p'2, p'3), p'1 = p'2 = 5.0 × 10^-10, p'3 = 5.0 × 10^-12.
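As a non-authoritative sketch of the uncalibrated tracking loop described above: the image Jacobian is refined online with a Broyden-type rank-one update (one common way to realize an adaptive, calibration-free adjustment; the patent does not spell out its exact update law), and a proportional law drives the moment-feature error toward zero. All names, gains and the pseudo-inverse control law are illustrative assumptions rather than the patented controller.

```python
import numpy as np


def broyden_update(J: np.ndarray, df: np.ndarray, dq: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Rank-one online correction of the estimated image Jacobian J so that
    J @ dq better explains the observed feature change df."""
    denom = float(dq @ dq)
    if denom < 1e-12:
        return J
    return J + alpha * np.outer(df - J @ dq, dq) / denom


def servo_step(J: np.ndarray, f: np.ndarray, f_desired: np.ndarray,
               gain: float = 0.5) -> np.ndarray:
    """Proportional visual-servo law: command a head/camera velocity that
    reduces the moment-feature error, using the pseudo-inverse of J."""
    error = f_desired - f
    return gain * np.linalg.pinv(J) @ error


# Usage (illustrative): within each 100 ms control period, measure the new
# feature vector f_new after applying the previous command dq, then:
#   J = broyden_update(J, f_new - f, dq)
#   dq = servo_step(J, f_new, f_desired)
```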
in this embodiment, preferably, the image capturing system includes an image sensor circuit, a FIFO memory circuit, and a CPLD timing control circuit.
In this embodiment, preferably, the image acquisition system is connected to the image signal processing and the liquid crystal display, respectively.
In this embodiment, preferably, the power supply system is connected to the image acquisition system, the image signal processing system, and the liquid crystal display system, respectively.
In this embodiment, the vision system preferably performs recognition by using a global feature vector-based recognition algorithm.
In this embodiment, preferably, the service robot visual recognition determination method is as follows:
Step one: collect a target image captured by the service robot;
Step two: compare the acquired target image with a preset reference object to obtain a judgment result;
Step three: display the judgment result.
In this embodiment, preferably, when extracting the features of the target image, the bilinear interpolation algorithm is used to scale the image, the convolution kernel is used to perform convolution processing on the image, and the size of each feature map of the processed image is determined.
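A minimal sketch of this preprocessing step, assuming a grayscale image and a stride-1 "valid" convolution (parameters the patent leaves open): bilinear interpolation rescales the image, a convolution kernel is applied, and the resulting feature-map size follows the usual (H - k + 1) x (W - k + 1) relation for a k x k kernel.

```python
import numpy as np


def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Scale a grayscale image with bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom


def convolve_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution; the output feature map has size
    (H - k + 1) x (W - k + 1) for a k x k kernel."""
    h, w = img.shape
    k = kernel.shape[0]
    flipped = kernel[::-1, ::-1]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * flipped)
    return out
```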
The vision system mainly completes the following work:
a color CMOS image sensor is used to obtain the external image; the front end of a typical image acquisition system is built from the image sensor, a FIFO memory and CPLD sequential-logic control, and after debugging it outputs the image signal;
the image processing and central control system is built around the DSP chip TMS320VC5509A as the image-processing and central control chip, together with an external SDRAM and a FLASH chip for loading at power-up;
a TFT liquid crystal panel displays the external image and the visual tracking effect intuitively, and its power driving circuit is designed;
the circuit schematic of the whole hardware system is designed, and a circuit board is made and debugged;
the upper and lower thresholds for color identification are determined from the differences of the U and V components of different colors in the YUV mode, and image segmentation is performed with a seed-filling method;
an uncalibrated visual servo system is adopted: the real-time image Jacobian matrix provides adaptive control of the servo system, a visual servo controller is designed from the derived image Jacobian matrix using two-dimensional image feature feedback, adaptive translational tracking of the moving target object is completed, the control principle of the whole controller is derived, and it is simulated in software.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A service robot vision system, characterized by: consists of a hardware part and a software part, wherein,
the hardware part comprises an image acquisition system, an image signal processing system, a liquid crystal display, a power supply system and a visual tracking system;
the image acquisition system is used for completing external image acquisition and transmitting acquired image signals to the image processing chip DSP;
digital signal processing (DSP) uses a computer or dedicated digital equipment to analyze, synthesize, transform, estimate and recognize signals, extracting useful information and transmitting and applying it effectively;
the power supply system is used for supplying power to the system;
the liquid crystal display is used for visually displaying the effect of the image acquired by the image sensor and visually seeing the target tracking state of the visual system in real time;
the visual tracking is used for performing adaptive tracking of the target object by the vision system when the target object undergoes relative translation;
the software part comprises timing control software, image processing and recognition software, and tracking control software;
the timing control software completes the timing control of image data transfer and the control of the liquid crystal display;
the image processing and recognition software enables the robot to identify targets rapidly and in real time by combining color and shape.
2. A service robot vision system as claimed in claim 1, characterized by: the image acquisition system comprises an image sensor partial circuit, an FIFO memory circuit and a CPLD time sequence control partial circuit.
3. A service robot vision system as claimed in claim 1, characterized by: the image signal processing comprises a DSP chip circuit, a flash memory circuit for boot loading, and an SDRAM circuit for storing data during processing.
4. A service robot vision system as claimed in claim 1, characterized by: and the image acquisition system is respectively connected with the image signal processing and the liquid crystal display.
5. A service robot vision system as claimed in claim 1, characterized by: and the power supply system is respectively connected with the image acquisition system, the image signal processing system and the liquid crystal display system.
6. A service robot vision system as claimed in claim 1, characterized by: the vision system employs a global feature vector based recognition algorithm for recognition.
7. A service robot vision system as claimed in claim 1, characterized by: the service robot visual identification determination method comprises the following steps:
Step one: collect a target image captured by the service robot;
Step two: compare the acquired target image with a preset reference object to obtain a judgment result;
Step three: display the judgment result.
8. A service robot vision system as claimed in claim 7, characterized by: when the features of the target image are extracted, the image is scaled using a bilinear interpolation algorithm, convolution is performed on the image with a convolution kernel, and the size of each feature map of the processed image is determined.
CN202110438347.4A 2021-04-22 2021-04-22 Service robot vision system Pending CN113103256A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110438347.4A CN113103256A (en) 2021-04-22 2021-04-22 Service robot vision system


Publications (1)

Publication Number Publication Date
CN113103256A true CN113103256A (en) 2021-07-13

Family

ID=76719600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110438347.4A Pending CN113103256A (en) 2021-04-22 2021-04-22 Service robot vision system

Country Status (1)

Country Link
CN (1) CN113103256A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1721144A (en) * 2004-07-13 2006-01-18 中国科学院自动化研究所 A kind of fast tracking method and device based on color of object surface
CN105643664A (en) * 2016-04-12 2016-06-08 上海应用技术学院 Vision recognition determination method of service robot and vision system
CN111230856A (en) * 2018-11-28 2020-06-05 天津工业大学 Robot based on FPGA target recognition
CN110826629A (en) * 2019-11-08 2020-02-21 华南理工大学 Otoscope image auxiliary diagnosis method based on fine-grained classification
CN111145164A (en) * 2019-12-30 2020-05-12 上海感图网络科技有限公司 IC chip defect detection method based on artificial intelligence
CN111967527A (en) * 2020-08-21 2020-11-20 菏泽学院 Peony variety identification method and system based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱木建 (Zhu Mujian): "Design Based on a Service Robot Vision System", China Excellent Master's Theses Full-text Database (Information Science and Technology) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination