CN112351181A - Intelligent camera based on CMOS chip and ZYNQ system - Google Patents

Intelligent camera based on CMOS chip and ZYNQ system

Info

Publication number
CN112351181A
Authority
CN
China
Prior art keywords
image
video
zynq
hardware unit
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011251121.5A
Other languages
Chinese (zh)
Inventor
孟明辉
周传德
赵珍祥
高晓飞
朱志强
张曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Science and Technology filed Critical Chongqing University of Science and Technology
Priority to CN202011251121.5A
Publication of CN112351181A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent camera based on a CMOS chip and a ZYNQ system, comprising a ZYNQ platform and a CMOS sensor, wherein the ZYNQ platform is connected with the CMOS sensor through a ZYNQ interface. The ZYNQ platform comprises an FPGA hardware unit and an ARM hardware unit, and the FPGA hardware unit comprises a video acquisition module and a video preprocessing module. The video acquisition module acquires video signal information through the CMOS sensor and sends it to the video preprocessing module, which sends the processed video signal to the ARM hardware unit. The ARM hardware unit runs a Linux operating system in which an application-layer video image processing module is installed. An image processing application designed at the application layer controls the video data flow between FPGA hardware processing and ARM co-processing, carries out the large number of image processing algorithms involved, and achieves real-time acquisition and processing of high-definition video data streams.

Description

Intelligent camera based on CMOS chip and ZYNQ system
Technical Field
The invention relates to the field of machine vision, in particular to an intelligent camera based on a CMOS chip and a ZYNQ system.
Background
Machine vision detection technology plays a very important role in industrial applications, particularly in the field of industrial robots: as an important input channel for sensing information about the external environment, it helps the industrial robot understand the surrounding scene and assists it in completing specific tasks. At present, the applications of visual recognition technology in the robot field mainly include environment understanding, self-learning object recognition, intelligent interaction, navigation, and obstacle avoidance.
In the field of machine vision, and particularly in mechanical measurement based on industrial robots, the existing machine vision detection technology has several shortcomings: the measurement system is relatively independent, complex to build, and bulky, and is inconvenient to move and set up; and when the vision measurement system is fixed outside the industrial robot (eye-to-hand) and does not move with the arm, it suffers from a low degree of intelligence and poor applicability, such as large system errors.
Disclosure of Invention
The invention aims to solve the technical problems that, in the field of machine vision and especially in mechanical measurement based on industrial robots, the existing machine vision detection technology is relatively independent, complex to build, bulky, and inconvenient to move and set up, and that a vision measurement system fixed outside the industrial robot (eye-to-hand), which does not move with the arm, has a low degree of intelligence and poor applicability, such as large system errors.
The invention provides an intelligent camera based on a CMOS chip and a ZYNQ system, which comprises,
ZYNQ platform, CMOS sensor;
the ZYNQ platform comprises an FPGA hardware unit and an ARM hardware unit;
the FPGA hardware unit comprises a video acquisition module and a video preprocessing module;
the video acquisition module acquires video signal information through the CMOS sensor and sends the video signal information to the video preprocessing module, and the video preprocessing module sends the processed video signal to the ARM hardware unit;
the ARM hardware unit runs a Linux operating system, and an application layer video image processing module is installed in the Linux operating system.
Further, the application layer video image processing module executes a target searching algorithm based on template matching;
the target searching algorithm based on template matching comprises the following steps:
an image preprocessing step:
f(x, y) = u⁻¹( Σ_{(i,j)∈θ} a(i, j)·u(g(i, j)) )
wherein f(x, y) is the preprocessed image, θ is a local neighborhood of the current pixel (m, n), the function u is invertible with inverse u⁻¹, a(i, j) is a weighting coefficient, and g(i, j) is the input image;
template matching:
D(x, y) = Σ_{m=1}^{M} Σ_{n=1}^{N} [f(m, n) − s(m, n)]²
where s(m, n) is the template image, M and N are the dimensions of the template image, f(m, n) is the sub-image of f(x, y) with the same size as the template image, and D(x, y) is the measure of matching error;
a geometric transformation step:
mapping the template image to the position of the processed image through geometric transformation,
(x', y') = T(x, y)
where T is a vector transformation function, (x, y) are the pixel coordinates in the template image, and (x', y') are the new coordinates of the transformed pixel in the processed image.
Further, the application layer video image processing module executes an edge extraction and segmentation algorithm,
the edge extraction and segmentation algorithm comprises the following steps:
an edge extraction step:
the image is first convolved with a Gaussian function G of scale σ,
secondly, for each pixel in the image, the normal n of the local edge is estimated:
n = ∇(G * F) / |∇(G * F)|
then the edge position is located along this normal:
∂²(G * F) / ∂n² = 0
then the edge strength is computed:
|G_n * F| = |∇(G * F)|
finally, hysteresis thresholding is applied to the edge image to eliminate false responses, and a feature synthesis method is used to collect the final edge information from multiple scales,
wherein
G_n = ∂G/∂n = n·∇G
F represents an image, and x and y represent coordinates of image pixels;
a mean shift discontinuity-preserving filtering step:
first, for each image pixel x_i, the step number is initialized to j = 1 and y_{i,1} = x_i; secondly, y_{i,j+1} is computed by the mean shift iteration until it converges at y_{i,con}; finally, the filtered pixel value is defined as
z_i = (x_i^s, y_{i,con}^r)
that is, the filtered pixel value at the spatial location x_i^s is assigned the range (grey level or color) component y_{i,con}^r of the convergence point,
where i denotes the index of the pixel and z_i is the filtered value of each image pixel;
mean shift image segmentation step:
first, mean shift discontinuity-preserving filtering is applied, and for each pixel the d-dimensional convergence point
z_i = (x_i^s, y_{i,con}^r)
and all of its information are stored; secondly, all z_i are clustered with a kernel Hs in the spatial domain and a kernel Hr in the range domain to obtain the clusters {C_p}, p = 1, ..., m, and each pixel is labelled L_i = {p | z_i ∈ C_p}, i = 1, ..., n; finally, regions smaller than p pixels are eliminated, where m is a natural number, C_p is the attraction field of the convergence point, and p and d represent the spatial dimensions of the image.
The intelligent camera hardware system provided by the invention takes ZYNQ as its core and consists of a core board and an image acquisition board; the image acquisition module consists of an optical component system, an image sensor, an AD conversion module, and other components. The intelligent camera uses an embedded Linux operating system as its software platform, and the camera configuration software is designed and developed with the OpenCV open-source vision library. An image processing application designed at the application layer controls the video data flow between FPGA hardware processing and ARM co-processing, carries out the large number of image processing algorithms involved, and achieves real-time acquisition and processing of high-definition video data streams.
Drawings
FIG. 1 is a system framework diagram of the present invention.
Detailed Description
At the front end of the intelligent camera, the invention provides a compact system based on a 5-megapixel CMOS chip and a ZYNQ system, which takes ZYNQ as its core and runs an embedded Linux operating system; it can process the required image detection and measurement information directly inside the camera and transmit the information to external equipment through an RS232 serial port, Ethernet communication, and input/output GPIO.
The intelligent camera provided by the invention has a compact overall structure and integrates a library of more than 40 image processing algorithms, such as horizontal mirroring, image scaling, image rotation, binarization, and sub-pixel positioning. It can acquire and process images entirely internally, and either output results directly to control external equipment or transmit them to other equipment through a serial port or network port, so that the visual detection and measurement results of the intelligent camera can be used by other devices.
In the implementation of the invention, the intelligent camera hardware adopts the low-power main chip xc7z020clg484 and the memory chip MT41J256M16, and integrates an Ethernet interface, USB, a memory card, an RS232 serial port, and input/output GPIO. Through the combination of a C/CS lens and a CMOS module, the camera implements the image acquisition, image preprocessing, and image display modules on the FPGA; a Linux operating system is built on the ARM, and control of the whole video acquisition and processing flow is realized at the application layer. Application-layer image processing adopts a software/hardware co-design approach, accelerating image preprocessing in hardware and improving the overall image processing speed of the system. The intelligent camera integrates 40 image processing algorithm libraries at the application layer, such as template-matching-based target searching and edge extraction and segmentation, and provides flexible image processing system software. With the support of the image processing hardware, the camera can complete image processing functions directly, transmit output results to other equipment through the serial port or Ethernet communication, and directly control output equipment through GPIO according to the processing results.
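As an illustration of the application-layer flow control described above, the following is a minimal C++ sketch. It assumes, and this is not stated in the patent, that the FPGA-preprocessed video stream is exposed to Linux as a V4L2 capture device at the hypothetical path /dev/video0 and that OpenCV is used as the vision library:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Assumption: the FPGA preprocessing pipeline delivers frames to Linux
    // as a standard V4L2 capture device; the device path is hypothetical.
    cv::VideoCapture cap("/dev/video0", cv::CAP_V4L2);
    if (!cap.isOpened()) {
        std::cerr << "failed to open video device\n";
        return 1;
    }

    cv::Mat frame, gray, edges;
    while (cap.read(frame)) {
        // Application-layer processing on the ARM side; heavier preprocessing
        // is assumed to have been done in FPGA hardware before this point.
        if (frame.channels() == 3)
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        else
            gray = frame;
        cv::Canny(gray, edges, 50, 150);

        // The detection/measurement result would be sent to external equipment
        // over RS232, Ethernet, or GPIO; printing stands in for that step here.
        std::cout << "edge pixels: " << cv::countNonZero(edges) << '\n';
    }
    return 0;
}
```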
The image processing algorithms integrated in the intelligent camera provided by the invention are partly explained below:
1. Target searching algorithm based on template matching
The algorithm is implemented as follows, with the input image denoted g(i, j).
(1) Image pre-processing
f(x, y) = u⁻¹( Σ_{(i,j)∈θ} a(i, j)·u(g(i, j)) )
where f(x, y) is the preprocessed image, θ is a local neighborhood of the current pixel (m, n), the function u is invertible with inverse u⁻¹, and a(i, j) is a weighting coefficient.
(2) Template matching
D(x, y) = Σ_{m=1}^{M} Σ_{n=1}^{N} [f(m, n) − s(m, n)]²
where s(m, n) is the template image, M and N are the dimensions of the template image, f(m, n) is the sub-image of f(x, y) with the same size as the template image, and D(x, y) is the measure of matching error.
(3) Geometric transformation
The template image is mapped to its position in the processed image through a geometric transformation:
(x', y') = T(x, y)
where T is a vector transformation function, (x, y) are the pixel coordinates in the template image, and (x', y') are the new coordinates of the transformed pixel in the processed image.
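A minimal OpenCV sketch of this step is given below. cv::matchTemplate with the TM_SQDIFF criterion computes a squared-difference error consistent with the matching-error measure D(x, y) described above; the input file names and the use of OpenCV are illustrative assumptions rather than the patent's own implementation:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Hypothetical still-image inputs; the patent works on live CMOS frames.
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("template.png", cv::IMREAD_GRAYSCALE);
    if (image.empty() || templ.empty()) return 1;

    // D(x, y): squared-difference error between the template s(m, n) and the
    // sub-image of f(x, y) at every candidate position.
    cv::Mat D;
    cv::matchTemplate(image, templ, D, cv::TM_SQDIFF);

    // The best match minimizes the matching-error measure.
    double minVal;
    cv::Point minLoc;
    cv::minMaxLoc(D, &minVal, nullptr, &minLoc, nullptr);

    std::cout << "best match at (" << minLoc.x << ", " << minLoc.y
              << "), error = " << minVal << '\n';
    return 0;
}
```

In the simplest case the geometric transformation T of step (3) is then a pure translation of the template coordinates by the matched offset, (x', y') = (x + minLoc.x, y + minLoc.y); an affine or perspective transform could be substituted where the target may rotate or change scale.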
2. Edge extraction and segmentation
(1) Edge extraction
The image F is first convolved with a Gaussian function G of scale σ.
Next, for each pixel in the image, the normal n to the local edge is estimated:
n = ∇(G * F) / |∇(G * F)|
Then the edge position is located along this normal:
∂²(G * F) / ∂n² = 0
Then the edge strength is computed:
|G_n * F| = |∇(G * F)|
Finally, hysteresis thresholding is applied to the edge image to eliminate false responses, and a feature synthesis method is used to collect the final edge information from multiple scales,
wherein
G_n = ∂G/∂n = n·∇G
and F denotes the image.
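The steps above correspond to the classical Canny edge detection pipeline: Gaussian smoothing, estimation of the gradient normal, localization of edge points along the normal, and hysteresis thresholding. A minimal sketch using OpenCV's Canny implementation is shown below; the scale σ, the hysteresis thresholds, and the input file name are illustrative assumptions, and the multi-scale feature synthesis step is omitted:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat F = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
    if (F.empty()) return 1;

    // Convolve the image with a Gaussian of scale sigma (kernel size derived from sigma).
    cv::Mat smoothed;
    double sigma = 1.4;
    cv::GaussianBlur(F, smoothed, cv::Size(0, 0), sigma);

    // cv::Canny performs the gradient/normal estimation, the localization of
    // edge points along the normal, and hysteresis thresholding; the two
    // thresholds below are the low/high hysteresis levels.
    cv::Mat edges;
    cv::Canny(smoothed, edges, 50, 150);

    cv::imwrite("edges.png", edges);
    return 0;
}
```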
(2) Mean shift discontinuity-preserving filtering
First, for each image pixel x_i, the step number is initialized to j = 1 and y_{i,1} = x_i; then y_{i,j+1} is computed by the mean shift iteration until it converges at y_{i,con}; finally, the filtered pixel value is defined as
z_i = (x_i^s, y_{i,con}^r)
that is, the filtered pixel value at the spatial location x_i^s is assigned the range (grey level or color) component y_{i,con}^r of the convergence point.
(3) Mean shift image segmentation
First, mean shift discontinuity-preserving filtering is applied, and for each pixel the d-dimensional convergence point
z_i = (x_i^s, y_{i,con}^r)
and all of its information are stored; secondly, all z_i are clustered with a kernel Hs in the spatial domain and a kernel Hr in the range domain to obtain the clusters {C_p}, p = 1, ..., m, and each pixel is labelled L_i = {p | z_i ∈ C_p}, i = 1, ..., n; finally, regions smaller than p pixels are eliminated.
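The filtering and segmentation steps above follow the standard mean shift procedure. The sketch below is only a rough illustration using OpenCV: cv::pyrMeanShiftFiltering plays the role of the discontinuity-preserving filtering, with the spatial radius sp and color radius sr standing in for the kernels Hs and Hr, and the small-region elimination is approximated with a connected-component pass over a thresholded mask; the parameter values and the thresholding are assumptions, not the patent's method:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("frame.png");  // hypothetical color input
    if (src.empty()) return 1;

    // Mean shift discontinuity-preserving filtering: each pixel is moved to its
    // mean shift convergence point. sp is the spatial kernel radius (Hs) and
    // sr the range/color kernel radius (Hr); the values are assumed.
    cv::Mat filtered;
    cv::pyrMeanShiftFiltering(src, filtered, /*sp=*/10, /*sr=*/20);

    // Grouping of convergence points into regions {C_p}: as a simple stand-in,
    // threshold the filtered image and label connected components, discarding
    // regions smaller than P pixels.
    cv::Mat gray, mask, labels, stats, centroids;
    cv::cvtColor(filtered, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    const int P = 50;  // assumed minimum region size in pixels
    for (int i = 1; i < n; ++i) {
        if (stats.at<int>(i, cv::CC_STAT_AREA) < P) {
            mask.setTo(0, labels == i);  // eliminate regions smaller than P pixels
        }
    }
    cv::imwrite("segmented.png", mask);
    return 0;
}
```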
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A smart camera based on a CMOS chip and a ZYNQ system, characterized by comprising:
ZYNQ platform, CMOS sensor;
the ZYNQ platform comprises an FPGA hardware unit and an ARM hardware unit;
the FPGA hardware unit comprises a video acquisition module and a video preprocessing module;
the video acquisition module acquires video signal information through the CMOS sensor and sends the video signal information to the video preprocessing module, and the video preprocessing module sends the processed video signal to the ARM hardware unit;
the ARM hardware unit runs a Linux operating system, and an application layer video image processing module is installed in the Linux operating system.
2. The CMOS chip and ZYNQ system based smart camera of claim 1, wherein said application layer video image processing module executes a target searching algorithm based on template matching,
the target searching algorithm based on template matching comprises the following steps:
an image preprocessing step:
f(x, y) = u⁻¹( Σ_{(i,j)∈θ} a(i, j)·u(g(i, j)) )
wherein f(x, y) is the preprocessed image, θ is a local neighborhood of the current pixel (m, n), the function u is invertible with inverse u⁻¹, a(i, j) is a weighting coefficient, and g(i, j) is the input image;
template matching:
D(x, y) = Σ_{m=1}^{M} Σ_{n=1}^{N} [f(m, n) − s(m, n)]²
where s(m, n) is the template image, M and N are the dimensions of the template image, f(m, n) is the sub-image of f(x, y) with the same size as the template image, and D(x, y) is the measure of matching error;
a geometric transformation step:
mapping the template image to the position of the processed image through geometric transformation,
(x', y') = T(x, y)
where T is a vector transformation function, (x, y) are the pixel coordinates in the template image, and (x', y') are the new coordinates of the transformed pixel in the processed image.
3. The CMOS chip and ZYNQ system based smart camera of claim 1, wherein the application layer video image processing module executes an edge extraction and segmentation algorithm,
the edge extraction and segmentation algorithm comprises the following steps:
an edge extraction step:
the image is first convolved with a Gaussian function G of scale σ,
secondly, for each pixel in the image, the normal n of the local edge is estimated:
n = ∇(G * F) / |∇(G * F)|
then the edge position is located along this normal:
∂²(G * F) / ∂n² = 0
then the edge strength is computed:
|G_n * F| = |∇(G * F)|
finally, hysteresis thresholding is applied to the edge image to eliminate false responses, and a feature synthesis method is used to collect the final edge information from multiple scales,
wherein
G_n = ∂G/∂n = n·∇G
F denotes an image, x and y denote coordinates of pixels;
a mean shift discontinuity-preserving filtering step:
first, for each image pixel x_i, the step number is initialized to j = 1 and y_{i,1} = x_i; secondly, y_{i,j+1} is computed by the mean shift iteration until it converges at y_{i,con}; finally, the filtered pixel value is defined as
z_i = (x_i^s, y_{i,con}^r)
that is, the filtered pixel value at the spatial location x_i^s is assigned the range (grey level or color) component y_{i,con}^r of the convergence point,
where i denotes the index of the pixel and z_i is the filtered value of each image pixel;
mean shift image segmentation step:
first, mean shift discontinuity-preserving filtering is applied, and for each pixel the d-dimensional convergence point
z_i = (x_i^s, y_{i,con}^r)
and all of its information are stored; secondly, all z_i are clustered with a kernel Hs in the spatial domain and a kernel Hr in the range domain to obtain the clusters {C_p}, p = 1, ..., m, and each pixel is labelled L_i = {p | z_i ∈ C_p}, i = 1, ..., n; finally, regions smaller than p pixels are eliminated, where m is a natural number, C_p is the attraction field of the convergence point, and p and d represent the spatial dimensions of the image.
CN202011251121.5A 2020-11-11 2020-11-11 Intelligent camera based on CMOS chip and ZYNQ system Pending CN112351181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011251121.5A CN112351181A (en) 2020-11-11 2020-11-11 Intelligent camera based on CMOS chip and ZYNQ system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011251121.5A CN112351181A (en) 2020-11-11 2020-11-11 Intelligent camera based on CMOS chip and ZYNQ system

Publications (1)

Publication Number Publication Date
CN112351181A (en) 2021-02-09

Family

ID=74363236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011251121.5A Pending CN112351181A (en) 2020-11-11 2020-11-11 Intelligent camera based on CMOS chip and ZYNQ system

Country Status (1)

Country Link
CN (1) CN112351181A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820418A (en) * 2015-04-22 2015-08-05 遨博(北京)智能科技有限公司 Embedded vision system for mechanical arm and method of use
CN105847766A (en) * 2016-05-30 2016-08-10 福州大学 Zynq-7000 based moving object detecting and tracking system
CN206154352U (en) * 2016-09-18 2017-05-10 常州机电职业技术学院 Robot vision system and robot with motion object detection and tracking function
CN209748689U (en) * 2019-03-06 2019-12-06 上海艾迪森国际数字医疗装备有限公司 Image transmission device of double-network-port CMOS detector based on ZYNQ series FPGA + ARM
CN110012201A (en) * 2019-04-10 2019-07-12 山东尤雷克斯智能电子有限公司 A kind of USB3.0 ultrahigh speed camera and its working method based on complete programmable SOC

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张超: "Research on Tree Image Feature Extraction and Stereo Matching Technology" (树木影像特征提取与立体匹配技术研究), China Doctoral Dissertations Full-text Database, Agricultural Science and Technology *
胥泽飞: "Hardware Design of a Zynq-based Smart Camera" (基于Zynq的智能相机硬件设计), China Master's Theses Full-text Database, Engineering Science and Technology II *
金永涛 et al.: "Research on Multi-feature Segmentation Algorithms for High-resolution Remote Sensing Images" (基于多特征的高分遥感图像分割算法研究), Chinese Space Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114760414A (en) * 2022-04-12 2022-07-15 上海航天电子通讯设备研究所 Image acquisition and processing system for CMV4000 camera
CN114760414B (en) * 2022-04-12 2024-04-16 上海航天电子通讯设备研究所 Image acquisition and processing system for CMV4000 camera

Similar Documents

Publication Publication Date Title
US10198823B1 (en) Segmentation of object image data from background image data
US11379699B2 (en) Object detection method and apparatus for object detection
CN111665842B (en) Indoor SLAM mapping method and system based on semantic information fusion
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
WO2020134818A1 (en) Image processing method and related product
EP3376433B1 (en) Image processing apparatus, image processing method, and image processing program
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
CN111444764A (en) Gesture recognition method based on depth residual error network
CN112233221B (en) Three-dimensional map reconstruction system and method based on instant positioning and map construction
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113743385A (en) Unmanned ship water surface target detection method and device and unmanned ship
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN112351181A (en) Intelligent camera based on CMOS chip and ZYNQ system
Esfahani et al. DeepDSAIR: Deep 6-DOF camera relocalization using deblurred semantic-aware image representation for large-scale outdoor environments
Akman et al. Multi-cue hand detection and tracking for a head-mounted augmented reality system
CN112884803A (en) Real-time intelligent monitoring target detection method and device based on DSP
CN111967287A (en) Pedestrian detection method based on deep learning
CN108058170A (en) A kind of vision robot's data acquisition processing system
CN114200934A (en) Robot target following control method and device, electronic equipment and storage medium
Shiratori et al. Detection of pointing position by omnidirectional camera
CN112001247A (en) Multi-target detection method, equipment and storage device
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN114419451B (en) Method and device for identifying inside and outside of elevator, electronic equipment and storage medium
CN112634360B (en) Visual information determining method, device, equipment and storage medium
CN110660134B (en) Three-dimensional map construction method, three-dimensional map construction device and terminal equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210209)