CN114584674A - Visual integration system for processing same image - Google Patents

Visual integration system for processing same image Download PDF

Info

Publication number
CN114584674A
CN114584674A CN202210483316.5A CN202210483316A CN114584674A CN 114584674 A CN114584674 A CN 114584674A CN 202210483316 A CN202210483316 A CN 202210483316A CN 114584674 A CN114584674 A CN 114584674A
Authority
CN
China
Prior art keywords
image
module
image processing
convolution
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210483316.5A
Other languages
Chinese (zh)
Inventor
李海龙
王艳强
钟石明
焦国年
潘庆玉
蔡步远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Julifang Vision Technology Co ltd
Original Assignee
Shenzhen Julifang Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Julifang Vision Technology Co ltd filed Critical Shenzhen Julifang Vision Technology Co ltd
Priority to CN202210483316.5A priority Critical patent/CN114584674A/en
Publication of CN114584674A publication Critical patent/CN114584674A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a visual integration system for processing the same image, which comprises: the system comprises an image acquisition unit, a controller, an image storage module, a communication module, an image processing module and a visual display module; the controller is respectively connected with the image storage module, the image acquisition unit, the communication module and the image processing module, the image processing module is connected with the communication module through a communication interface, the image acquisition unit is connected with a multi-channel data interface, and the image acquisition unit acquires data information of the camera through the multi-channel data interface; the image processing module is also connected with the visual display module through a communication interface. The invention can synchronously process the same image data information, improves the image processing efficiency through the acceleration module, realizes image processing through different modes, can realize different analysis effects on the same image processing, and further improves the image data information processing and analyzing capability.

Description

Visual integration system for processing same image
Technical Field
The present invention relates to the field of image data processing technology, and more particularly to a vision integration system for processing the same image.
Background
The image processing technology is a technology for realizing image information processing by using a computer technology means, and mainly comprises different modes such as image digitization, image enhancement and restoration, image data coding, image segmentation, image identification and the like. The vision integrated system (also called as an intelligent camera) integrates image acquisition, processing and communication functions, and provides a machine vision solution which has multiple functions, modularization, high reliability and easy realization.
In the image processing process, most of the prior art adopts the modes of denoising and the like to realize data information processing so as to improve the definition of image data information. When the binary gray scale processing is required, the gray scale value calculation and the image data information processing need to be performed on the image data information again, which easily causes the image data information processing efficiency to be low. In the conventional image processing technology, the image processing speed is relatively slow, the size is small, the networking degree is low, and the processing hardware is not easy to carry, so that the image processing capability is delayed.
Disclosure of Invention
Aiming at the defects of the technology, the invention discloses a visual integration system for processing the same image, which can synchronously process the data information of the same image, improve the image processing efficiency through an acceleration module, realize image processing through different modes and further improve the image information analysis and application capability.
In order to realize the technical scheme, the application provides the following technical scheme:
a vision integration system for processing identical images, comprising:
an image acquisition unit; the image acquisition unit comprises an image sensor, a dual-port RAM interface, an SH4 chip processor and an acceleration module, and realizes external data information interaction through an I/O interface;
a controller; the controller is used for controlling other modules to be in a working state and represents a multi-axis integrated motion controller combined by a DSP chip and an FPGA chip; the intelligent control system comprises a DSP + FPGA chip, and an electromagnetic valve, a relay, a servo driver, a servo motor, a monitoring module, a communication module, a human-computer interaction module and a display which are connected with the DSP + FPGA chip;
an image storage module; the storage of image data information is realized;
a communication module; the data information interaction module is used for transmitting the received data information to other modules so as to realize data information interaction;
an image processing module; the image processing device is used for outputting different image information according to different processing rules for the same image data information, and realizing the output of processing results of different image data information; the image processing module comprises a compatible data interface, a first image processing module, a second image processing module and a visual output interface; the output end of the data interface is connected with the input ends of the first image processing module and the second image processing module, and the output ends of the first image processing module and the second image processing module are connected with the input end of the visual output interface; the first image processing module represents an image processing module based on local texture feature extraction of a hyperchaotic image, the second image processing module represents an image identification module based on a direction gradient histogram, and different types of output of the same image are realized through a visual output interface;
the visual display module is used for realizing data output after the same image data information is processed according to different processing rules;
the controller is respectively connected with the image storage module, the image acquisition unit, the communication module and the image processing module, the image processing module is connected with the communication module through a communication interface, the image acquisition unit is connected with a multi-channel data interface, and the image acquisition unit acquires data information of the camera through the multi-channel data interface; the image processing module is also connected with the visual display module through a communication interface.
As a further technical solution of the present invention, the multi-channel interface at least includes an RS232 communication channel interface, an RS485 communication channel interface, a carrier communication channel interface, a TCP/IP communication channel interface, an RS422 communication channel interface, an ethernet communication channel interface, a CAN communication channel interface, a USB communication channel interface, a WIFI communication channel interface, a ZigBee communication channel interface, a bluetooth communication channel interface, or an optical fiber communication channel interface; the multi-channel interface further comprises a cloud data interaction port.
As a further technical scheme of the invention, the acceleration module comprises an ARM processor, an on-chip memory, an off-chip memory, a storage module and 2 mutually cascaded convolution accelerators, wherein the on-chip memory, the off-chip memory and the storage module are connected with the ARM controller.
As a further technical scheme of the invention, the method for realizing convolution acceleration by the convolution accelerator is represented as follows:
(1) the data parameters of the convolution accelerator are set up,
Figure 200131DEST_PATH_IMAGE001
and
Figure 914009DEST_PATH_IMAGE002
respectively represent
Figure 294744DEST_PATH_IMAGE003
The length and width of the layer convolution kernel,
Figure 486691DEST_PATH_IMAGE004
wherein
Figure 769905DEST_PATH_IMAGE005
And
Figure 936575DEST_PATH_IMAGE006
respectively representing the step size of a convolution kernel, and 2 convolution accelerators which are cascaded mutually output data information results to represent:
Figure 672450DEST_PATH_IMAGE007
(1)
in the formula (1), the reaction mixture is,
Figure 730404DEST_PATH_IMAGE008
Figure 946753DEST_PATH_IMAGE009
and
Figure 408959DEST_PATH_IMAGE010
the activation function and the input mapping respectively representing the current layer are subjected to convolution operation and the second
Figure 756763DEST_PATH_IMAGE003
Layer one
Figure 962617DEST_PATH_IMAGE011
Bias of individual feature maps; obtaining neurons of a convolutional neural network
Figure 564630DEST_PATH_IMAGE012
And then performs an upsampling operation with respect to the downsampled layer,downsampled layer after upsampling
Figure 587950DEST_PATH_IMAGE012
The function represents:
Figure 298417DEST_PATH_IMAGE013
(2)
Figure 118081DEST_PATH_IMAGE014
(3)
in the formulae (2) to (3),
Figure 620607DEST_PATH_IMAGE015
Figure 503243DEST_PATH_IMAGE016
and
Figure 701006DEST_PATH_IMAGE017
respectively representing the sampling factor of the convolution accelerator
Figure 373296DEST_PATH_IMAGE018
First of a layer
Figure 605694DEST_PATH_IMAGE019
Up-sampling information of each neuron and the down-sampling layer;
down-sampling layer learning function representation:
Figure 721549DEST_PATH_IMAGE020
(4)
in the formula (4), the reaction mixture is,
Figure 141029DEST_PATH_IMAGE021
downsampling information representing an upsampled layer;
(2) the deep convolutional neural network is in a convergence state, and the last full-connection layer of the deep convolutional neural network is used as a new graphImage features, representation
Figure 617010DEST_PATH_IMAGE022
(3) According to
Figure 516964DEST_PATH_IMAGE023
Obtaining new cluster labels
Figure 928353DEST_PATH_IMAGE024
And the output of the deep convolutional neural network in the next iteration process is represented;
(4) and setting the iteration times to represent 100 times, and when the iteration times are equal to 100, selecting a noise reduction automatic encoder to further obtain the compressed image code, thereby improving the data acceleration capability.
As a further technical solution of the present invention, the human-computer interaction module represents an FX3U-64MR-ES model controller. The first image processing module comprises an image information extraction module, an image texture convolution module, a gray value calculation module and a brightening module, wherein the output end of the image information extraction module is connected with the input end of the image texture convolution module, the output end of the image texture convolution module is connected with the input end of the gray value calculation module, the output end of the gray value calculation module is connected with the input end of the brightening module, and the image information extraction module represents a filter.
As a further technical solution of the present invention, a processing method of the first image processing module includes the following steps:
step 1, extracting data information of the acquired image through an image information extraction module, and extracting texture of a content representation image;
and (3) extracting image problem features by adopting a filter, and expressing an output image filtering function:
Figure 959763DEST_PATH_IMAGE025
(5)
in the formula (5), the reaction mixture is,
Figure 849222DEST_PATH_IMAGE026
which represents the center frequency of the filter and,
Figure 934508DEST_PATH_IMAGE008
a constant that controls the bandwidth of the radial filter,
Figure 516799DEST_PATH_IMAGE027
a representation directional bandwidth determination parameter;
step 2, realizing convolution of image data information through an image texture convolution module, wherein the extracted image data information has the same bandwidth, and the texture features of the image are defined as follows:
Figure 35505DEST_PATH_IMAGE028
(6)
in the formula (6), the reaction mixture is,
Figure 728655DEST_PATH_IMAGE029
a feature map representing the texture of the image,
Figure 337622DEST_PATH_IMAGE030
is shown in
Figure 215448DEST_PATH_IMAGE031
A map of the textural features of the image at scale,
Figure 96816DEST_PATH_IMAGE032
is shown in
Figure 469023DEST_PATH_IMAGE031
And (5) performing image convolution corresponding to the texture feature map of the image under the scale.
Step 3, realizing gray map calculation through a gray value calculation module;
the function formula expresses:
Figure 119447DEST_PATH_IMAGE033
(7)
in the formula (7), the reaction mixture is,
Figure 902595DEST_PATH_IMAGE034
representing the average value of the grey scale of the pixels of the image,
Figure 271260DEST_PATH_IMAGE035
the visual constant is represented by a visual constant,
Figure 181578DEST_PATH_IMAGE036
which represents the size of the image window and,
Figure 952088DEST_PATH_IMAGE037
to represent
Figure 906138DEST_PATH_IMAGE038
The gray value of the image pixel under the position;
step 4, brightness updating of the extracted data information is achieved through the brightening module; so that the input image information has clear brightness, the local brightness contrast is introduced into the local texture feature extraction, and through brightness calculation, the calculation function is as follows:
Figure 762098DEST_PATH_IMAGE039
(8)
in the formula (8), the reaction mixture is,
Figure 738757DEST_PATH_IMAGE040
represents the local brightness in the hyperchaotic image,
Figure 98194DEST_PATH_IMAGE041
which represents the local background brightness of the image,
Figure 691986DEST_PATH_IMAGE042
representing the high frequency components of the image,
Figure 910609DEST_PATH_IMAGE008
representing an image edge variation parameter.
As a further technical solution of the present invention, the second image processing module includes a gray-scale image normalization module, a pixel gradient calculation module, a pixel normalization calculation module, and a feature vector generation module, wherein an output end of the gray-scale image normalization module is connected to an input end of the pixel gradient calculation module, an output end of the pixel gradient calculation module is connected to an input end of the pixel normalization calculation module, and an output end of the pixel normalization calculation module is connected to an input end of the feature vector generation module.
As a further technical solution of the present invention, a method of processing an image by a second image processing module includes the steps of:
step 1: converting the image data collected by the second image processing module into a gray scale map, and performing normalization processing through the following functions:
Figure 552943DEST_PATH_IMAGE043
(9)
in the formula (9), the reaction mixture is,Ta,b) A gray value representing image data information collected by the second image processing module;
step 2: gradient calculation, calculating the image in pixels (a,b) Gradient values of the points, the gradient function formula is:
Figure 157100DEST_PATH_IMAGE044
(10)
in the formula (10), the compound represented by the formula (10),L a a,b) Represents the horizontal gradient values of the image,L b a,b) Represents the vertical gradient values of the image,Ha,b) Pixel values representing an image at pixel pointsa,b) The gradient vector of (a) is:
Figure 672526DEST_PATH_IMAGE045
(11)
Figure 237500DEST_PATH_IMAGE046
(12)
in the formulas (11) to (12),La,b) A gradient value representing the image is determined,
Figure 542579DEST_PATH_IMAGE047
representing the gradient direction of the image;
and step 3: constructing a directional gradient histogram, dividing the image into a plurality of modules, and performing normalization processing;
and 4, step 4: generating characteristic vectors, wherein each normalization module has partially overlapped directional gradient histogram characteristics, extracting the characteristics to generate the characteristic vectors, and after the directional gradient histogram characteristics are obtained, machine learning is carried out
The linear classifier isfx) Eliminating redundant information in the image, the linear classifier isfx) The expression function of (a) is:
Figure 876608DEST_PATH_IMAGE048
(13)
in the formula (13), the reaction mixture is,Rrepresents one parameter of the linear classifier and is,xrepresents a variation of a sample of the HTGS image,za hidden variable is represented by a number of hidden variables,Zx) The value space of the hidden variable is represented,Px,z) Representing the HTGS image sample information.
The invention has the following positive beneficial effects:
the invention can synchronously process the same image data information, improves the image processing efficiency through the accelerating module, realizes image processing through different modes and further improves the image information analysis and application capability. The vision integrated system has the characteristics of easy learning, easy use, easy maintenance, convenient installation and the like, and can construct a reliable and effective machine vision system in a short time.
The invention can quickly realize the functions of positioning, geometric measurement, presence/absence detection, counting, character recognition, bar code recognition, color analysis and the like, and is suitable for most machine vision applications. The visual integration system realizes high integration of the image acquisition unit, the image processing module and the network communication device. The invention has small volume, compact structure and small size, is easy to be installed on industrial production lines and various devices, and is convenient to be assembled, disassembled and moved. The vision integration system of the invention generally provides better network functions, and can be applied to each industrial monitoring point in real time by virtue of network advantages. The invention has low cost, the vision integrated system integrates the collection and the processing, and the PC system and the image collection card are not needed to be configured, thereby greatly reducing the cost of the vision system.
The invention can realize different analysis effects on the same image processing, thereby improving the image data information processing and analyzing capability.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive exercise, wherein:
FIG. 1 is a schematic diagram of the overall architecture of the present invention;
FIG. 2 is a schematic diagram of an image capture module according to the present invention;
FIG. 3 is a schematic diagram of an acceleration module architecture according to the present invention;
FIG. 4 is a schematic diagram of a control module architecture according to the present invention;
FIG. 5 is a schematic diagram of a human-computer interaction module architecture according to the present invention;
FIG. 6 is a block diagram of an image processing module according to the present invention;
FIG. 7 is a block diagram of a first image processing module according to the present invention;
FIG. 8 is a block diagram of a second image processing module according to the present invention;
FIG. 9 is a schematic diagram of an acceleration method of the accelerator according to the present invention;
FIG. 10 is a diagram illustrating a first image processing module processing an image according to the present invention;
FIG. 11 is a diagram illustrating an image processing module according to a second embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, and it should be understood that the embodiments described herein are merely for purposes of illustration and explanation, and are not intended to limit the present invention.
A vision integration system that processes identical images, comprising:
an image acquisition unit; the image acquisition unit comprises an image sensor, a dual-port RAM interface, an SH4 chip processor and an acceleration module, and realizes external data information interaction through an I/O interface;
a controller; the controller is used for controlling other modules to be in a working state and represents a multi-axis integrated motion controller combined by a DSP chip and an FPGA chip; the intelligent monitoring system comprises a DSP + FPGA chip, and an electromagnetic valve, a relay, a servo driver, a servo motor, a monitoring module, a communication module, a human-computer interaction module and a display which are connected with the DSP + FPGA chip;
an image storage module; the storage of image data information is realized;
a communication module; the data information interaction module is used for transmitting the received data information to other modules so as to realize data information interaction;
an image processing module; the image processing device is used for outputting different image information according to different processing rules for the same image data information, and realizing the output of processing results of different image data information; the image processing module comprises a compatible data interface, a first image processing module, a second image processing module and a visual output interface; the output end of the data interface is connected with the input ends of the first image processing module and the second image processing module, and the output ends of the first image processing module and the second image processing module are connected with the input end of the visual output interface; the first image processing module represents an image processing module based on local texture feature extraction of a hyperchaotic image, the second image processing module represents an image identification module based on a direction gradient histogram, and different types of output of the same image are realized through a visual output interface;
the visual display module is used for realizing data output after the same image data information is processed according to different processing rules;
the controller is respectively connected with the image storage module, the image acquisition unit, the communication module and the image processing module, the image processing module is connected with the communication module through a communication interface, the image acquisition unit is connected with a multi-channel data interface, and the image acquisition unit acquires data information of the camera through the multi-channel data interface; the image processing module is also connected with the visual display module through a communication interface;
in the above embodiment, the multi-channel interface at least includes an RS232 communication channel interface, an RS485 communication channel interface, a carrier communication channel interface, a TCP/IP communication channel interface, an RS422 communication channel interface, an ethernet communication channel interface, a CAN communication channel interface, a USB communication channel interface, a WIFI communication channel interface, a ZigBee communication channel interface, a bluetooth communication channel interface, or an optical fiber communication channel interface; the multi-channel interface further comprises a cloud data interaction port.
In the above embodiments, it is shown that the data information acquisition capability is improved, and a high-speed and high-resolution camera, a high-speed processor, and the like need to be configured. For the vision integration system, due to the integrated design, the system is limited by the processor, the memory and the like, and complex operations are not performed on large data volumes. The vision integration system can not be compared with PC-BASED at present in occasions requiring high speed and high precision. The PC-Based system is expected to adapt to complex applications and can be configured and controlled more flexibly. Algorithmically, complex operations can be implemented in a variety of high-level languages. When the precision is to be improved, the method can be realized by improving the system configuration and increasing the number of cameras. The invention is provided with the acceleration module, and can improve the data information acquisition capability.
In the specific application process, the image acquisition unit comprises 5 main chips: the system comprises an image acquisition chip OV7620, a high-speed microprocessor SH4, a large-scale programmable array FPGA and a serial port communication control chip MAX 232. Two dual-port RAMs are programmed in the FPGA to generate dot frequency, line field synchronization and other signals required by the image sensor and control the storage time sequence of the dual-port RAMs. The SH4 is responsible for configuring the OV7620 through the I2C, reading and processing image data of the dual-port RAM, and uploading image data or controlling other devices such as a stepping motor through a serial port. The system module is represented by a CMOS image sensor OV7620, and further comprises a condenser lens and other auxiliary components such as a 27MHz crystal oscillator, a resistor, a capacitor and the like.
The cmos image sensor is a new image sensor which is developed rapidly in recent years, and due to the adoption of the same cmos technology, a pixel array and a peripheral supporting circuit can be integrated on the same Chip, so that the cmos image sensor is a complete image system (Camera on Chip). The system uses a CMOS color image sensor OV7620, marketed by ommvision, with a resolution of 640x 480. It can work in progressive scanning mode and also in interlaced scanning mode. It can not only output color images, but also be used as a black and white image sensor. There are many image output formats supported by this chip: 1) YCrCb4:2: 216 bit/8 bit format; 2) ZV port output format; 3) RGB original data 16 bit/8 bit; 4) CCIR601/CCIR656 format. Its functions include contrast, brightness, saturation, white balance and automatic exposure, synchronous signal position and polarity output, frame rate and output format, etc. all can be controlled by I2C bus to program and configure on-chip register.
The FPGA adopts xc2s100, and 10000 logic gates are integrated in the chip. The interface program is written in VHDL (very High Speed Integrated Circuit Hardware Description language). Indicating an increase in the data transfer rate, 2 dual-port RAM buffers, the size of which indicates 127KB, are allocated inside the xc2s100, each of which stores 1 line of image data. The two sets of dual-port RAMs switch odd and even row counters. When a line is stored, an application signal for an interrupt to read the line is immediately transmitted to SH 4. The read-write operation of the dual-port RAM in the FPGA shares the same data bus and address bus, and when the read-write operation is carried out simultaneously, a time sequence problem is generated to cause data errors in writing or reading. In the two processes, the data and address bus conflict is prevented, and a central bus arbiter is designed inside the FPGA. According to the sequence of the public data transmission, the central arbiter receives the bus request of the image sensor firstly, and the central arbiter responds to the read signal request of the single chip microcomputer system after the image is stored in the RAM. The system uses an SH4 chip as a representation processor: the SH4 singlechip is a full 32-bit singlechip with low power consumption, high performance and RISC (reduced instruction set computer) structure, which is released by Hitachi corporation. The processing speed can reach 60M IPS-100 MIPS, the chip can work under the voltage of 2.25v, and a 32-bit multiplier, a 4-path 5KB CACHE, an access management unit MMU, other general interfaces, clock circuits and the like are integrated in a 400MW chip. Hitachi company shows that SH4 series single-chip microcomputers provide c and c + + language integrated compiling tool HIM (Hitachi integrated Manag). By which a source program compiled link in Hitachi C, C + + format can be represented as an assembler or object machine code. The image sensor chip OV7620 has flexible programmable functions that can be programmed through the I2C bus to set the function registers. Because the singlechip does not have an internal hardware I2C bus interface, the I2C bus interface function is realized only by adopting a software simulation method. The two I/O pins representing SH4 are taken to represent the SCL and SDA bus device interfaces of the I2C bus.
The acceleration module comprises an ARM processor, an on-chip memory, an off-chip memory, a storage module and 2 mutually cascaded convolution accelerators, wherein the on-chip memory, the off-chip memory and the storage module are connected with the ARM controller.
In the above embodiment, the method for realizing convolution acceleration by the convolution accelerator represents that:
(1) setting data parameters of a convolution accelerator, wherein in a specific embodiment, a deep convolution neural network learning process comprises two links, namely convolution layer learning and downsampling layer learning. The convolutional layer performs convolution processing such as local link and weight sharing on the data acquired by the previous layer based on the principle and the network structure that video images with the same source belong to one class, so that the number of links and parameters is reduced.
Figure 359673DEST_PATH_IMAGE049
And
Figure 411943DEST_PATH_IMAGE050
respectively represent
Figure 255134DEST_PATH_IMAGE003
The length and width of the layer convolution kernel,
Figure 709249DEST_PATH_IMAGE051
wherein
Figure 643443DEST_PATH_IMAGE005
And
Figure 183009DEST_PATH_IMAGE006
respectively representing the step size of a convolution kernel, and 2 convolution accelerators which are cascaded mutually output data information results to represent:
Figure 829891DEST_PATH_IMAGE052
(1)
in the formula (1), the reaction mixture is,
Figure 138513DEST_PATH_IMAGE053
Figure 963380DEST_PATH_IMAGE054
and
Figure 724663DEST_PATH_IMAGE055
the activation function and the input mapping respectively representing the current layer are subjected to convolution operation and the second
Figure 175236DEST_PATH_IMAGE003
Layer one
Figure 338364DEST_PATH_IMAGE056
Bias of individual feature maps;
Figure 334133DEST_PATH_IMAGE057
neurons of layer +1 are calculations
Figure 176187DEST_PATH_IMAGE003
Key of layer neuron, use
Figure 40238DEST_PATH_IMAGE058
To represent
Figure 933238DEST_PATH_IMAGE003
And (4) respectively calculating the product of the activation function of the layer neuron and the weight function and the gradient value. I.e. representing neurons from which a convolutional neural network is acquired
Figure 490122DEST_PATH_IMAGE059
Then, an up-sampling operation is performed on the down-sampling layer, and the down-sampling layer after up-sampling is performed
Figure 350630DEST_PATH_IMAGE060
The function represents:
Figure 18372DEST_PATH_IMAGE061
(2)
Figure 28529DEST_PATH_IMAGE014
(3)
in the formulae (2) to (3),
Figure 84210DEST_PATH_IMAGE015
Figure 307380DEST_PATH_IMAGE062
and
Figure 857442DEST_PATH_IMAGE017
respectively representing the sampling factor of the convolution accelerator
Figure 974302DEST_PATH_IMAGE003
First of a layer
Figure 872988DEST_PATH_IMAGE019
Up-sampling information of each neuron and the down-sampling layer;
the problem of excessive learning-induced fitting of the convolutional layer is solved, and the sampling layer is calculated under the condition of maximum pooling
Figure 193242DEST_PATH_IMAGE063
And the upper limit value of the regional characteristics is used for simplifying the calculation process, improving the stability of the network model and avoiding the problem of overfitting. Down-sampled layer learning function representation:
Figure 202786DEST_PATH_IMAGE064
(4)
in the formula (4), the reaction mixture is,
Figure 439733DEST_PATH_IMAGE065
downsampling information representing an upsampled layer;
(2) the deep convolutional neural network is in a convergence state, and the last full-connection layer of the deep convolutional neural network is used as a new image characteristic to represent
Figure 509320DEST_PATH_IMAGE066
(3) According to
Figure 582449DEST_PATH_IMAGE067
Obtaining new cluster labels
Figure 130105DEST_PATH_IMAGE068
And the output of the deep convolutional neural network in the next iteration process is represented;
(4) and setting the iteration times to represent 100 times, and when the iteration times are equal to 100, selecting a noise reduction automatic encoder to further obtain the compressed image code, thereby improving the data acceleration capability.
In the above embodiment, the human interaction module represents an FX3U-64MR-ES model controller.
In a specific embodiment, the block diagram of the human-computer interaction structure has a power supply, a Central Processing Unit (CPU), a memory, an input module, an output module, a communication interface, a power supply module, an expansion interface module, and the like. The power supply part plays an important role in the whole human-computer interaction structure diagram, can provide system operation voltage and ensures the stability of images obtained by the images; the CPU controls and commands the whole human-computer interaction structure; the memory stores the required hardware and software safely, the system code of the original factory is usually solidified in the system memory, the user can not rewrite the system code in the read-only memory, and the quality of the software code also determines the performance of the PLC; the input/output module is a channel for conveniently receiving signals and feedback signals; the expansion interface module of the human-computer interaction C is mainly responsible for the connection between the PLC and the peripheral module, and ensures effective data communication between the controller and the outside.
In a specific embodiment, the multi-axis integrated motion controller combined by the DSP and the FPGA chip is an 'embedded PC and motion control card integrated' controller, can control up to 8 axes simultaneously, has higher reliability and interference immunity, can realize more complex and accurate camera motion control, does not need a professional touch screen (HMI) for a control system module, is integrated in the controller, and only needs to be connected with a common display. The motion control module does not need to install a hardware PLC, the function of PLC control is realized by writing a software program, the program written on the PLC can be modified according to the actual situation on site, and the motion control module has strong universality and good transportability. The servo module adopts a Sanyo (SANYO) alternating current servo motor, the servo motor has the advantages of small volume and good rigidity, the servo module mainly receives motion instructions of motion control, including the rotation angle, the rotation speed and the torque of the servo motor, has the advantages of small inertia, quick response, stable rotation and the like, and the servo module ensures the stable operation of the camera under the high-speed condition, adopts a driver with high rigidity to transmit and considers other overload capacity.
In the above embodiment, the first image processing module includes an image information extraction module, an image texture convolution module, a gray value calculation module, and a brightening module, where an output end of the image information extraction module is connected to an input end of the image texture convolution module, an output end of the image texture convolution module is connected to an input end of the gray value calculation module, an output end of the gray value calculation module is connected to an input end of the brightening module, and the image information extraction module represents a filter.
In the above embodiment, the processing method of the first image processing module includes the following steps:
step 1, extracting data information of the acquired image through an image information extraction module, and extracting texture of a content representation image;
in an embodiment, the texture is an important index in the image information, which includes the correlation information between the structural organization of the object surface and the environment, and can be an important visual feature for representing and analyzing the hyperchaotic image information. The distribution of energy in the magnitude spectrum over the various frequency bands, which shows a close relationship with textural features, represents a reduction of filtering redundancy,
and (3) extracting image problem features by adopting a filter, and expressing an output image filtering function:
Figure 221558DEST_PATH_IMAGE069
(5)
in the formula (5), the reaction mixture is,
Figure 196467DEST_PATH_IMAGE070
which represents the center frequency of the filter and,
Figure 759822DEST_PATH_IMAGE008
a constant that controls the bandwidth of the radial filter,
Figure 111169DEST_PATH_IMAGE071
a representation directional bandwidth determination parameter;
step 2, realizing convolution of image data information through an image texture convolution module, wherein the extracted image data information has the same bandwidth, and the texture features of the image are defined as follows:
Figure 260391DEST_PATH_IMAGE072
(6)
in the formula (6), the reaction mixture is,
Figure 547147DEST_PATH_IMAGE073
a feature map representing the texture of the image,
Figure 453923DEST_PATH_IMAGE074
is shown in
Figure 468015DEST_PATH_IMAGE075
A map of the textural features of the image at scale,
Figure 675006DEST_PATH_IMAGE076
is shown in
Figure 867084DEST_PATH_IMAGE075
And (5) performing image convolution corresponding to the texture feature map of the image under the scale.
Step 3, realizing gray map calculation through a gray value calculation module;
the method is characterized in that the partial visibility of an image is measured, an image visibility concept is introduced, and a function formula is represented as follows:
Figure 261156DEST_PATH_IMAGE077
(7)
in the formula (7), the reaction mixture is,
Figure 78939DEST_PATH_IMAGE034
representing the average of the image pixel gray levels,
Figure 874857DEST_PATH_IMAGE078
the visual constant is represented by a visual constant,
Figure 503416DEST_PATH_IMAGE036
which represents the size of the image window and,
Figure 384784DEST_PATH_IMAGE037
to represent
Figure 943941DEST_PATH_IMAGE038
The gray value of the image pixel under the position;
step 4, brightness updating of the extracted data information is realized through a brightening module; so that the input image information has clear brightness, the local brightness contrast is introduced into the local texture feature extraction, and through brightness calculation, the calculation function is as follows:
Figure 466802DEST_PATH_IMAGE079
(8)
in the formula (8), the reaction mixture is,
Figure 390896DEST_PATH_IMAGE040
represents the local brightness in the hyperchaotic image,
Figure 884194DEST_PATH_IMAGE041
which represents the local background brightness of the image,
Figure 184725DEST_PATH_IMAGE042
representing the high frequency components of the image,
Figure 565022DEST_PATH_IMAGE008
representing an image edge variation parameter.
By the method, the slight change of the local contrast of the hyperchaotic image can be clearly expressed.
In the above embodiment, the second image processing module includes a gray-scale image normalization module, a pixel gradient calculation module, a pixel normalization calculation module, and a feature vector generation module, wherein an output end of the gray-scale image normalization module is connected to an input end of the pixel gradient calculation module, an output end of the pixel gradient calculation module is connected to an input end of the pixel normalization calculation module, and an output end of the pixel normalization calculation module is connected to an input end of the feature vector generation module.
In the above embodiment, the method of processing an image by the second image processing module includes the steps of:
step 1: converting the image data collected by the second image processing module into a gray scale map, and performing normalization processing through the following functions:
Figure 987913DEST_PATH_IMAGE080
(9)
in the formula (9), the reaction mixture is,Ta,b) A gray scale value representing image data information collected by the second image processing module;
step 2: gradient calculation, calculating the image in pixels (a,b) Gradient values of the points, the gradient function formula is:
Figure 843874DEST_PATH_IMAGE081
(10)
in the formula (10), the reaction mixture is,L a a,b) Represents the horizontal gradient values of the image,L b a,b) Represents the vertical gradient values of the image,Ha,b) Pixel values representing an image at pixel pointsa,b) The gradient vector of (a) is:
Figure 292304DEST_PATH_IMAGE082
(11)
Figure 917320DEST_PATH_IMAGE083
(12)
in the formulas (11) to (12),La,b) A gradient value representing the image is determined,
Figure 307850DEST_PATH_IMAGE047
representing the gradient direction of the image;
and step 3: constructing a directional gradient histogram, dividing the image into a plurality of modules, and performing normalization processing;
each second image processing module outputs 8 x 8 pixels of the module, dividing the gradient direction of the module into 9 blocks. And performing weighted projection on each pixel in the module in the gradient direction histogram to calculate the gradient direction histogram of the module. And combining and normalizing adjacent modules.
And 4, step 4: generating feature vectors, wherein each normalization module has partially overlapped directional gradient histogram features, extracting the features to generate the feature vectors, and separating two kinds of image features to the maximum extent through a machine learning algorithm after the directional gradient histogram features are obtained. The classifier can simplify the classification problem and is divided intofx) Eliminating redundant information in the image, the linear classifier isfx) The expression function of (a) is:
Figure 385528DEST_PATH_IMAGE048
(13)
in the formula (13), the reaction mixture is,Rrepresents one parameter of the linear classifier and is,xrepresents a variation of a sample of the HTGS image,za hidden variable is represented by a number of hidden variables,Zx) The value space of the hidden variable is represented,Px,z) Representing HTGS image sample information;
in particular embodiments, a linear classifier is trained to obtain optimal parameters primarily by minimizing an objective functionRSpecifically defined as:
Figure 903228DEST_PATH_IMAGE084
(14)
in the formula (14), M (R) represents an objective function,
Figure 382751DEST_PATH_IMAGE085
is shown as
Figure 944182DEST_PATH_IMAGE085
And (5) training tasks. For solving the objective function minimization, it can be optimized by fixing R to select the optimum hidden variable value for each positive sample. Square blockThe modularization concept is adopted for the gradient histogram feature, and the areas of the current unit and the four surrounding units are directly normalized during normalization processing, so that the image can be rapidly acquired, and an operator can perform corresponding manual control.
After the calculation, the visual display of the image data information is realized through the visual output interface, and the visual visualization degree of the image is greatly improved. Different data can be output for the same image to be displayed, and the analysis and application capability of the image information is greatly improved.
Although specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are merely illustrative and that various omissions, substitutions and changes in the form of the detail of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the steps of the above-described methods to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is to be limited only by the following claims.

Claims (8)

1. A vision integration system for processing identical images, characterized by: the method comprises the following steps:
an image acquisition unit; the image acquisition unit comprises an image sensor, a dual-port RAM interface, an SH4 chip processor and an acceleration module, and realizes external data information interaction through an I/O interface;
a controller; the controller is used for controlling other modules to be in a working state and represents a multi-axis integrated motion controller combined by a DSP chip and an FPGA chip; the intelligent monitoring system comprises a DSP + FPGA chip, and an electromagnetic valve, a relay, a servo driver, a servo motor, a monitoring module, a communication module, a human-computer interaction module and a display which are connected with the DSP + FPGA chip;
an image storage module; the storage of image data information is realized;
a communication module; the data information interaction module is used for transmitting the received data information to other modules so as to realize data information interaction;
an image processing module; the image processing device is used for outputting different image information according to different processing rules for the same image data information, and realizing the output of the processing results of different image data information; the image processing module comprises a compatible data interface, a first image processing module, a second image processing module and a visual output interface; the output end of the data interface is connected with the input ends of the first image processing module and the second image processing module, and the output ends of the first image processing module and the second image processing module are connected with the input end of the visual output interface; the first image processing module represents an image processing module based on local texture feature extraction of a hyperchaotic image, the second image processing module represents an image identification module based on a direction gradient histogram, and different types of output of the same image are realized through a visual output interface;
the visual display module is used for realizing data output after the same image data information is processed according to different processing rules;
the camera comprises a controller, an image storage module, an image acquisition unit, a communication module and an image processing module, wherein the controller is respectively connected with the image storage module, the image acquisition unit, the communication module and the image processing module; the image processing module is also connected with the visual display module through a communication interface.
2. A vision integration system for processing identical images according to claim 1, characterized in that: the multichannel data interface at least comprises an RS232 communication channel interface, an RS485 communication channel interface, a carrier communication channel interface, a TCP/IP communication channel interface, an RS422 communication channel interface, an Ethernet communication channel interface, a CAN communication channel interface, a USB communication channel interface, a WIFI communication channel interface, a ZigBee communication channel interface, a Bluetooth communication channel interface or an optical fiber communication channel interface; the multi-channel interface further comprises a cloud data interaction port.
3. A vision integration system for processing identical images according to claim 1, characterized in that: the acceleration module comprises an ARM processor, an on-chip memory, an off-chip memory, a storage module and 2 mutually cascaded convolution accelerators, wherein the on-chip memory, the off-chip memory and the storage module are connected with the ARM controller.
4. A vision integration system for processing identical images according to claim 3, wherein: the convolution accelerator realizes convolution acceleration by the following method:
(1) the data parameters of the convolution accelerator are set up,
Figure 173479DEST_PATH_IMAGE001
and
Figure 219932DEST_PATH_IMAGE002
respectively represent
Figure 736364DEST_PATH_IMAGE003
The length and width of the layer convolution kernel,
Figure 577281DEST_PATH_IMAGE004
wherein
Figure 179164DEST_PATH_IMAGE005
And
Figure 545421DEST_PATH_IMAGE006
respectively representing the step size of a convolution kernel, and 2 convolution accelerators which are cascaded mutually output data information results to represent:
Figure 916360DEST_PATH_IMAGE007
(1)
in the formula (1), the reaction mixture is,
Figure 193757DEST_PATH_IMAGE008
Figure 17357DEST_PATH_IMAGE009
and
Figure 405613DEST_PATH_IMAGE010
the activation function and the input mapping respectively representing the current layer are subjected to convolution operation and the second
Figure 631058DEST_PATH_IMAGE003
Layer one
Figure 813778DEST_PATH_IMAGE011
Biasing of the individual feature maps;
obtaining neurons of a convolutional neural network
Figure 157296DEST_PATH_IMAGE012
Then, an up-sampling operation is performed on the down-sampling layer, and the down-sampling layer after up-sampling is performed
Figure 83664DEST_PATH_IMAGE013
The function represents:
Figure 429195DEST_PATH_IMAGE014
(2)
Figure 48395DEST_PATH_IMAGE015
(3)
in the formulae (2) to (3),
Figure 846587DEST_PATH_IMAGE016
Figure 842225DEST_PATH_IMAGE017
and
Figure 776683DEST_PATH_IMAGE018
respectively representing the sampling factor of the convolution accelerator
Figure 65319DEST_PATH_IMAGE003
First of a layer
Figure 350807DEST_PATH_IMAGE011
Up-sampling information of each neuron and the down-sampling layer;
down-sampling layer learning function representation:
Figure 884557DEST_PATH_IMAGE019
(4)
in the formula (4), the reaction mixture is,
Figure 939100DEST_PATH_IMAGE020
downsampling information representing an upsampled layer;
(2) the deep convolutional neural network is in a convergence state, and the last full-connection layer of the deep convolutional neural network is used as a new image characteristic to represent
Figure 165682DEST_PATH_IMAGE021
(3) According to
Figure 672887DEST_PATH_IMAGE022
Obtaining new cluster labels
Figure 10327DEST_PATH_IMAGE023
And the output of the deep convolutional neural network in the next iteration process is represented;
(4) and setting the iteration times to represent 100 times, and when the iteration times are equal to 100, selecting a noise reduction automatic encoder to further obtain the compressed image code, thereby improving the data acceleration capability.
5. A vision integration system for processing identical images according to claim 1, characterized in that: the human-computer interaction module represents an FX3U-64MR-ES model controller; the first image processing module comprises an image information extraction module, an image texture convolution module, a gray value calculation module and a brightening module, wherein the output end of the image information extraction module is connected with the input end of the image texture convolution module, the output end of the image texture convolution module is connected with the input end of the gray value calculation module, the output end of the gray value calculation module is connected with the input end of the brightening module, and the image information extraction module represents a filter.
6. A visual integration system for processing identical images according to claim 1 or 5, characterized in that: the processing method of the first image processing module comprises the following steps:
step 1, extracting data information of the acquired image through an image information extraction module, and extracting texture of the content representation image;
and (3) extracting image problem features by adopting a filter, and expressing an output image filtering function:
Figure 686422DEST_PATH_IMAGE024
(5)
in the formula (5), the reaction mixture is,
Figure 552746DEST_PATH_IMAGE025
which represents the center frequency of the filter and,
Figure 812826DEST_PATH_IMAGE026
a constant that controls the bandwidth of the radial filter,
Figure 422799DEST_PATH_IMAGE027
a representation directional bandwidth determination parameter;
step 2, realizing convolution of image data information through an image texture convolution module, wherein the extracted image data information has the same bandwidth, and the texture features of the image are defined as follows:
Figure 717514DEST_PATH_IMAGE028
(6)
in the formula (6), the reaction mixture is,
Figure 754741DEST_PATH_IMAGE029
a feature map representing the texture of the image,
Figure 502117DEST_PATH_IMAGE030
is shown in
Figure 420175DEST_PATH_IMAGE031
A map of the textural features of the image at scale,
Figure 569397DEST_PATH_IMAGE032
is shown in
Figure 43103DEST_PATH_IMAGE031
The image convolution result corresponding to the texture feature map of the image under the scale;
step 3, realizing gray map calculation through a gray value calculation module;
the function formula expresses:
Figure 12196DEST_PATH_IMAGE033
(7)
in the formula (7), the reaction mixture is,
Figure 229551DEST_PATH_IMAGE034
representing the average value of the grey scale of the pixels of the image,
Figure 233279DEST_PATH_IMAGE035
the visual constant is represented by a visual constant,
Figure 612308DEST_PATH_IMAGE036
which represents the size of the image window and,
Figure 570162DEST_PATH_IMAGE037
to represent
Figure 325628DEST_PATH_IMAGE038
The gray value of the image pixel under the position;
step 4, brightness updating of the extracted data information is achieved through the brightening module; so that the input image information has clear brightness, the local brightness contrast is introduced into the local texture feature extraction, and through brightness calculation, the calculation function is as follows:
Figure 449442DEST_PATH_IMAGE039
(8)
in the formula (8), the reaction mixture is,
Figure 264951DEST_PATH_IMAGE040
represents the local brightness in the hyperchaotic image,
Figure 208637DEST_PATH_IMAGE041
which represents the local background brightness of the image,
Figure 767794DEST_PATH_IMAGE042
representing the high frequency components of the image,
Figure 480535DEST_PATH_IMAGE043
representing an image edge variation parameter.
7. A vision integration system for processing identical images according to claim 5, wherein: the second image processing module comprises a grey scale icon normalization module, a pixel gradient calculation module, a pixel normalization calculation module and a feature vector generation module, wherein the output end of the grey scale icon normalization module is connected with the input end of the pixel gradient calculation module, the output end of the pixel gradient calculation module is connected with the input end of the pixel normalization calculation module, and the output end of the pixel normalization calculation module is connected with the input end of the feature vector generation module.
8. A vision integration system for processing the same image according to claim 1 or 7, characterized in that: the method of processing an image by a second image processing module includes the steps of:
step 1: converting the image data collected by the second image processing module into a gray scale map, and performing normalization processing through the following functions:
Figure 965481DEST_PATH_IMAGE044
(9)
in the formula (9), the reaction mixture is,Ta,b) A gray value representing image data information collected by the second image processing module;
and 2, step: gradient calculation, calculating the image in pixels (a,b) Gradient values of the points, the gradient function formula is:
Figure 662041DEST_PATH_IMAGE045
(10)
in the formula (10), the compound represented by the formula (10),L a a,b) Represents the horizontal gradient values of the image,L b a,b) Represents the vertical gradient values of the image,Ha,b) Pixel values representing an image at pixel pointsa,b) The gradient vector of (a) is:
Figure 759310DEST_PATH_IMAGE046
(11)
Figure 326558DEST_PATH_IMAGE047
(12)
in the formulas (11) to (12),La,b) A gradient value representing the image is determined,
Figure 483870DEST_PATH_IMAGE048
representing the gradient direction of the image;
and step 3: constructing a directional gradient histogram, dividing the image into a plurality of modules, and performing normalization processing;
and 4, step 4: generating characteristic vectors, wherein each normalization module has partially overlapped directional gradient histogram characteristics, extracting the characteristics to generate the characteristic vectors, and after the directional gradient histogram characteristics are obtained, machine learning is carried out
The linear classifier isfx) Eliminating redundant information in the image, the linear classifier isfx) The expression function of (a) is:
Figure 667726DEST_PATH_IMAGE049
(13)
in the formula (13), the reaction mixture is,Rrepresents one parameter of the linear classifier and is,xrepresents one of the HTGS image sample variables,
Figure 303107DEST_PATH_IMAGE050
a hidden variable is represented by a number of hidden variables,Zx) The value space of the hidden variable is represented,Px,z) Representing the HTGS image sample information.
CN202210483316.5A 2022-05-06 2022-05-06 Visual integration system for processing same image Pending CN114584674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210483316.5A CN114584674A (en) 2022-05-06 2022-05-06 Visual integration system for processing same image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210483316.5A CN114584674A (en) 2022-05-06 2022-05-06 Visual integration system for processing same image

Publications (1)

Publication Number Publication Date
CN114584674A true CN114584674A (en) 2022-06-03

Family

ID=81778366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210483316.5A Pending CN114584674A (en) 2022-05-06 2022-05-06 Visual integration system for processing same image

Country Status (1)

Country Link
CN (1) CN114584674A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115504187A (en) * 2022-09-22 2022-12-23 中信重工开诚智能装备有限公司 Intelligent speed regulation and protection system for mining belt and control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044063A (en) * 2010-12-23 2011-05-04 中国科学院自动化研究所 FPGA (Field Programmable Gate Array) and DSP (Digital Signal Processor) based machine vision system
US20190042923A1 (en) * 2017-08-07 2019-02-07 Intel Corporation System and method for an optimized winograd convolution accelerator
CN210476955U (en) * 2019-05-10 2020-05-08 深圳市领略数控设备有限公司 Multi-axis manipulator controller based on ZYNQ
WO2021249233A1 (en) * 2020-06-10 2021-12-16 中铁四局集团有限公司 Image processing method, target recognition model training method, and target recognition method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044063A (en) * 2010-12-23 2011-05-04 中国科学院自动化研究所 FPGA (Field Programmable Gate Array) and DSP (Digital Signal Processor) based machine vision system
US20190042923A1 (en) * 2017-08-07 2019-02-07 Intel Corporation System and method for an optimized winograd convolution accelerator
CN210476955U (en) * 2019-05-10 2020-05-08 深圳市领略数控设备有限公司 Multi-axis manipulator controller based on ZYNQ
WO2021249233A1 (en) * 2020-06-10 2021-12-16 中铁四局集团有限公司 Image processing method, target recognition model training method, and target recognition method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115504187A (en) * 2022-09-22 2022-12-23 中信重工开诚智能装备有限公司 Intelligent speed regulation and protection system for mining belt and control method
CN115504187B (en) * 2022-09-22 2023-06-13 中信重工开诚智能装备有限公司 Intelligent speed regulation and protection system and control method for mining belt

Similar Documents

Publication Publication Date Title
WO2018112833A1 (en) Efficient transferring of human experiences to robots and other autonomous machines
CN107545263B (en) Object detection method and device
US20190114546A1 (en) Refining labeling of time-associated data
CN103424409A (en) Vision detecting system based on DSP
AU2018319215A1 (en) Electronic apparatus and control method thereof
CN114584674A (en) Visual integration system for processing same image
CN100465994C (en) Method and apparatus for downscaling a digital matrix image
Chalimbaud et al. Embedded active vision system based on an FPGA architecture
CN113409355A (en) Moving target identification system and method based on FPGA
CN111695408A (en) Intelligent gesture information recognition system and method and information data processing terminal
CN214751405U (en) Multi-scene universal edge vision motion control system
CN112613425B (en) Target identification system for small sample underwater image
Sun et al. Data-driven technology in event-based vision
CN116449947B (en) Automobile cabin domain gesture recognition system and method based on TOF camera
CN111860361A (en) Green channel cargo scanning image entrainment automatic identifier and identification method
WO2023043001A1 (en) Attention map transferring method and device for enhancement of face recognition performance of low-resolution image
CN112926277B (en) Design method of miniaturized focal plane array test data acquisition and display system
CN113920165A (en) Robot pose estimation method and system based on multi-sensor feature fusion
CN112689121A (en) Motion tracking system based on FPGA
CN116348905A (en) Electronic apparatus and control method thereof
CN112947304A (en) Intelligent camera multi-core heterogeneous on-chip integration system and visual control method
CN113962842B (en) Dynamic non-polar despinning system and method based on high-level synthesis of large-scale integrated circuit
CN114760414B (en) Image acquisition and processing system for CMV4000 camera
CN113132554A (en) Real-time image acquisition and processing system based on FPGA
CN115167666A (en) But interactive AR intelligence helmet device of gesture

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220603

RJ01 Rejection of invention patent application after publication