CN110414401B - PYNQ-based intelligent monitoring system and monitoring method - Google Patents


Info

Publication number
CN110414401B
CN110414401B (application CN201910661356.2A)
Authority
CN
China
Prior art keywords
pynq
module
calculation
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910661356.2A
Other languages
Chinese (zh)
Other versions
CN110414401A (en)
Inventor
李一涛 (Li Yitao)
胡有能 (Hu Youneng)
岳克强 (Yue Keqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910661356.2A
Publication of CN110414401A
Application granted
Publication of CN110414401B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a PYNQ-based intelligent monitoring system and monitoring method that achieve multi-target detection and classification through hardware/software co-design. The system mainly comprises a multi-target detection module, a general-purpose convolutional neural network accelerator IP core, and a Python-based API; the multi-target detection module handles porting and optimization of the algorithm. PYNQ integrates an ARM processor system with FPGA programmable logic. On the software side, the Caffe framework is ported, making the platform suitable for mainstream artificial intelligence algorithms, and an improved Faster R-CNN algorithm is ported to the PYNQ platform to implement the target detection function. The FPGA side uses the convolutional neural network accelerator IP core to perform the forward-inference part of the algorithm. The Python-based API provides a convenient calling interface. The invention offers fast image processing, low hardware resource requirements, and convenient porting and development.

Description

PYNQ-based intelligent monitoring system and monitoring method
Technical Field
The invention relates to an embedded platform-based target detection technology, in particular to an intelligent monitoring system and a monitoring method based on PYNQ.
Background
Video monitoring is a sub-sector of the security industry. The Chinese video monitoring market grew from 24.2 billion yuan in 2010 to 112.4 billion yuan in 2017, a compound annual growth rate of 24.53%. With the build-out of road traffic infrastructure in China and the accelerated construction of "safe cities", the Chinese video monitoring market is expected to reach 155.8 billion yuan by 2020 and to exceed 190 billion yuan by 2023. Intelligence will be the long-term development direction of video monitoring, so artificial intelligence will play an increasingly important role in monitoring systems. Target detection is a hot direction in computer vision and digital image processing and the core of an intelligent monitoring system; by using computer vision it reduces labor costs and therefore has important practical significance. Thanks to the wide application of deep learning, target detection algorithms have developed rapidly.
The PYNQ development board adds Python support on top of the original Zynq architecture, so embedded programmers can fully exploit the Xilinx Zynq All Programmable SoC (APSoC) without designing programmable logic circuits by hand. Unlike conventional approaches, PYNQ integrates an ARM processor with an FPGA programmable logic device; users can program the APSoC in Python, and code can be developed and tested directly on the PYNQ board. With PYNQ, programmable logic circuits are imported as hardware libraries and programmed through their APIs, in much the same way software libraries are imported and used. Python, an elegant and concise scripting language widely used across many fields, makes control systems developed on it highly portable.
Traditional monitoring systems usually require manual intervention; incidents such as traffic accidents and theft are often noticed too late for monitoring staff to respond, so these systems suffer from large delays and high labor costs. If the required information could be detected automatically and feedback given in time, replacing manpower with machines would reduce costs.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides an intelligent monitoring system and a monitoring method based on PYNQ.
A PYNQ-based intelligent monitoring system comprises a camera and a PYNQ processing system connected via USB. The PYNQ processing system comprises an ARM processor and an FPGA (field-programmable gate array), and is characterized in that it includes a multi-target detection module, a general-purpose convolutional neural network accelerator IP core, and a Python API. The multi-target detection module ports and optimizes the Faster R-CNN multi-target detection algorithm, using an optimized AlexNet network structure as the feedforward network and applying k-means clustering to the detection results. The Faster R-CNN multi-target detection algorithm comprises a proposal-box extraction module, an SVM classification module, a linear regression correction module, a convolution module, a pooling module, and a fully connected layer module.
Furthermore, the convolution module, pooling module, and fully connected layer module are computed with the general-purpose convolutional neural network accelerator IP core in the FPGA.
Further, the Python API interface covers parameter configuration, data transfer, computation launch, and status readout for the general-purpose convolutional neural network accelerator IP core.
Further, the proposal-box extraction module, SVM classification module, linear regression correction module, and k-means clustering module are computed on the ARM processor.
Further, the feature map preprocessed by the ARM processor is stored in DDR memory and input row by row; AXI-Lite bus control is used, and image data is transferred via DMA.
Further, the general-purpose convolutional neural network accelerator IP core contains a compute unit that employs row/column reuse and a six-stage pipeline; the IP core can selectively implement convolution, pooling, and activation functions, and the kernel size, stride, and zero padding are configurable.
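As a rough illustration of what such a configurable compute unit does, the following is a minimal software reference model (pure NumPy; the function name, interface, and single-channel simplification are illustrative assumptions, not the patent's actual RTL):

```python
import numpy as np

def accel_unit(fmap, mode, kernel, stride=1, pad=0, weights=None, relu=False):
    """Software reference model of a configurable conv/pool compute unit.

    fmap    : 2-D input feature map (one channel, for clarity)
    mode    : 'conv' or 'pool' (max pooling)
    kernel  : window size k (k x k)
    stride  : step between windows
    pad     : zero padding added on each border
    weights : k x k kernel, required when mode == 'conv'
    relu    : apply the ReLU activation to the output if True
    """
    x = np.pad(fmap, pad, mode="constant")      # optional zero padding
    h = (x.shape[0] - kernel) // stride + 1
    w = (x.shape[1] - kernel) // stride + 1
    out = np.empty((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            win = x[i*stride:i*stride+kernel, j*stride:j*stride+kernel]
            out[i, j] = (win * weights).sum() if mode == "conv" else win.max()
    return np.maximum(out, 0) if relu else out
```

The hardware unit would evaluate many such windows in parallel with row/column reuse and pipelining; this model only fixes the functional behavior that the configuration registers select.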
A monitoring method based on the PYNQ-based intelligent monitoring system comprises the following steps:
(1) import the trained network weights onto the PYNQ SD card, and read the weights from the SD card into DDR;
(2) image acquisition: control the USB camera to capture images and transmit one frame to the PYNQ board over the USB interface; the ARM processor preprocesses the image into a feature map in the AlexNet input format and writes it into DDR;
(3) call the API to configure the network parameters, controlling the corresponding registers over AXI-Lite; for each layer's network structure, configure the kernel size, stride, whether to zero-pad, whether the layer is a convolution or pooling layer, and whether to apply the activation function;
(4) start the general-purpose convolutional neural network accelerator IP core; the FPGA automatically fetches feature map data row by row from DDR via DMA, writes results back to DDR after computation, and executes each layer in a loop, completing the forward inference of the convolution and pooling layers;
(5) the ARM processor then intervenes, checking a flag bit to determine whether computation has finished;
(6) select proposal boxes according to the anchor boxes, crop the partial feature maps that may contain targets for ROI pooling, and call the accelerator IP core to compute the fully connected layers, which are implemented by converting them into convolutions of width and height 1;
(7) classify the fully-connected-layer outputs with a support vector machine, correct them with a regression model to obtain the target bounding-box coordinates, and filter repeatedly identified targets with k-means clustering.
Furthermore, nine types of proposal boxes can be configured: three areas by three aspect ratios.
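The nine proposal boxes can be sketched as below. The concrete areas (128², 256², 512²) and aspect ratios (0.5, 1, 2) are the common Faster R-CNN defaults and are assumptions here, since the patent does not list them:

```python
import numpy as np

def make_anchors(center, areas=(128**2, 256**2, 512**2), ratios=(0.5, 1.0, 2.0)):
    """Generate the 9 anchor boxes (3 areas x 3 aspect ratios) around a point.

    center : (cx, cy) pixel coordinates on the feature map
    Returns an array of boxes as (x1, y1, x2, y2).
    """
    cx, cy = center
    boxes = []
    for area in areas:
        for r in ratios:              # r = height / width
            w = np.sqrt(area / r)     # width chosen so that w * h == area
            h = w * r
            boxes.append((cx - w/2, cy - h/2, cx + w/2, cy + h/2))
    return np.array(boxes)
```

Each area is preserved exactly across the three aspect ratios, which is why the three-areas-by-three-ratios grid yields nine distinct box shapes.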
The method suits mainstream artificial intelligence algorithms and offers fast image processing, low hardware resource requirements, and convenient porting and development.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawing, as shown in FIG. 1; the specific technical scheme is as follows:
the invention provides an intelligent monitoring system based on a PYNQ development platform, which comprises a camera and a PYNQ processing system, wherein the camera is connected with the PYNQ processing system through a USB. The PYNQ processing system comprises an arm processor and an FPGA. The camera collects image information, the image information is transmitted to the arm processor for preprocessing through the USB, a processing result is input into the FPGA for forward derivation, multiple targets are detected rapidly, and corresponding control is performed. And the detection real-time performance is improved through a software and hardware cooperation module.
PYNQ adds Python support on top of the original Zynq architecture, integrating an ARM processor with an FPGA programmable logic device; programmable logic circuits are imported as hardware libraries and programmed through their APIs. This makes hardware/software co-design convenient: the computation-heavy, highly repetitive forward-inference part is processed in the FPGA, while the parts with high computational complexity are handled in software.
The PYNQ processing system mainly comprises a multi-target detection module, a general-purpose convolutional neural network accelerator IP core, and a Python-based API. The multi-target detection module performs multi-target detection through porting and optimization of the algorithm: first, the Faster R-CNN multi-target detection algorithm is selected and, given the platform's hardware resources, an improved AlexNet structure is used as the feedforward network; then k-means clustering is applied to the detection results to improve accuracy. Caffe is ported to PYNQ to ease the implementation of the Faster R-CNN algorithm.
The Faster R-CNN multi-target detection algorithm comprises a proposal-box extraction module, an SVM classification module, a linear regression correction module, a convolution module, a pooling module, and a fully connected layer module. Under the hardware/software partition, the proposal-box extraction, SVM classification, linear regression correction, and k-means clustering modules are computed on the ARM processor, while the highly parallel, highly repetitive forward-inference part is computed with the general-purpose convolutional neural network accelerator IP core in the FPGA. The accelerator IP core is controlled over the AXI-Lite bus and transfers image data via DMA; it can selectively implement convolution, pooling, and activation functions, allows the kernel size, stride, and zero padding to be configured, and therefore suits a variety of convolutional neural networks.
The accelerator IP core contains a compute unit that employs row/column reuse and a six-stage pipeline, giving a good acceleration effect. The convolution, pooling, and fully connected layer modules in Faster R-CNN are realized by configuring and calling the accelerator IP core in a loop. The Python-based API wraps the IP core's parameter configuration, data transfer, computation launch, and status readout into Python interfaces, which are easy to embed, call, and port during algorithm implementation.
A PYNQ-based intelligent monitoring method comprises the following steps:
Import the trained network weights onto the PYNQ SD card, and read the weights from the SD card into DDR.
Image acquisition: control the USB camera to capture images and transmit one frame to the PYNQ board over the USB interface. The ARM processor preprocesses the image into the AlexNet input format, 8-bit, three-channel, 224 x 224 pixels, and writes it into DDR. Because the feature map data volume is large and the FPGA's on-chip memory is too small to hold it, the data is kept in the large-capacity DDR and written into the FPGA as the general-purpose convolutional neural network accelerator IP core needs it.
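A minimal sketch of this preprocessing step might look as follows. A real deployment would more likely use OpenCV's resize; the nearest-neighbour sampling here is an assumption that keeps the sketch dependency-free:

```python
import numpy as np

def preprocess(frame):
    """Resize a camera frame to the AlexNet input: 3-channel, 224x224, 8-bit.

    frame : (H, W, 3) uint8 array from the USB camera.
    Nearest-neighbour sampling stands in for a proper resize; the
    contiguous-copy at the end matters because DMA engines expect a
    contiguous buffer in DDR.
    """
    h, w, _ = frame.shape
    rows = np.arange(224) * h // 224      # nearest source row per output row
    cols = np.arange(224) * w // 224      # nearest source column per output col
    out = frame[rows][:, cols]
    return np.ascontiguousarray(out, dtype=np.uint8)
```

On the board, the returned array would be copied into a DMA-capable buffer (e.g. one obtained from `pynq.allocate`) before the accelerator is started.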
Call the API to configure the network parameters, controlling the corresponding registers over AXI-Lite; for each layer's network structure, configure the kernel size, stride, whether to zero-pad, whether the layer is a convolution or pooling layer, and whether to apply the activation function, so that a variety of network structures can be represented. Start the accelerator IP core: the FPGA side automatically fetches feature map data row by row from DDR via DMA and writes the results back to DDR after computation; during computation the ARM processor need not intervene, only checking a flag bit to determine whether computation has finished. Each layer is executed in a loop, completing the forward inference of the convolution and pooling layers.
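The per-layer register configuration could be modeled as packing the parameters into a configuration word that is then written over AXI-Lite. The bit layout below is purely hypothetical, since the patent does not disclose the IP core's register map:

```python
def pack_layer_config(kernel, stride, pad, is_conv, relu):
    """Pack one layer's parameters into a 32-bit configuration word.

    Illustrative bit layout (NOT the real register map):
      bits 0-7  : kernel size
      bits 8-15 : stride
      bit  16   : zero-padding enable
      bit  17   : 1 = convolution layer, 0 = pooling layer
      bit  18   : activation (ReLU) enable
    """
    word = (kernel & 0xFF) | ((stride & 0xFF) << 8)
    word |= (int(pad) << 16) | (int(is_conv) << 17) | (int(relu) << 18)
    return word

# On the PYNQ board the word would be written through an AXI-Lite MMIO
# handle, e.g.:  mmio.write(CONFIG_REG_OFFSET, pack_layer_config(3, 1, True, True, True))
# where CONFIG_REG_OFFSET is a hypothetical register offset.
```

Looping over the layers and rewriting this word before each accelerator launch is what lets one fixed IP core serve every convolution and pooling layer of the network.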
Select proposal boxes according to the anchor boxes; nine types of proposal boxes with three areas and three aspect ratios can be configured. Crop the partial feature maps that may contain targets according to the proposal boxes for ROI pooling, then call the accelerator IP core again to compute the fully connected layers, which are converted into convolutions of width and height 1.
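The equivalence this step relies on, that a fully connected layer is just a convolution over a 1 x 1 spatial map, can be demonstrated directly (names and shapes are illustrative):

```python
import numpy as np

def fc_as_conv(x, weight):
    """Compute a fully connected layer as a 1x1 'convolution'.

    x      : flattened input feature vector, shape (n,)
    weight : fully-connected weight matrix, shape (m, n)

    Treating each output neuron's weights as an n-channel 1x1 kernel applied
    to a 1x1 spatial map of n channels yields exactly the matrix-vector
    product, which is why the accelerator can reuse its convolution
    datapath for the fully connected layers.
    """
    n = x.size
    fmap = x.reshape(n, 1, 1)                 # n channels, 1x1 spatial extent
    kernels = weight.reshape(-1, n, 1, 1)     # m kernels of shape (n, 1, 1)
    return np.array([(k * fmap).sum() for k in kernels])
```

Because the result equals `weight @ x`, no separate matrix-multiply engine is needed on the FPGA.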
Classify the fully-connected-layer outputs with a support vector machine, correct them with a regression model to obtain the target bounding-box coordinates, and filter repeatedly identified targets with k-means clustering to improve accuracy.
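The k-means de-duplication of repeated detections can be sketched as below. How the patent picks k and the representative box per cluster is not specified, so the deterministic initialization and the "closest to the cluster centre" choice are illustrative assumptions:

```python
import numpy as np

def kmeans_dedup(boxes, k, iters=20):
    """Merge duplicate detections by k-means clustering of box centres.

    boxes : (n, 4) array of (x1, y1, x2, y2) detections
    k     : assumed number of distinct objects
    Returns one representative box per cluster: the member whose centre
    lies closest to the cluster centre.
    """
    centres = np.c_[(boxes[:, 0] + boxes[:, 2]) / 2,
                    (boxes[:, 1] + boxes[:, 3]) / 2]
    # deterministic init: k centres spread evenly over the detection list
    mu = centres[np.linspace(0, len(centres) - 1, k).astype(int)]
    for _ in range(iters):                    # standard Lloyd iterations
        d = ((centres[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        lbl = d.argmin(1)
        mu = np.array([centres[lbl == j].mean(0) if (lbl == j).any() else mu[j]
                       for j in range(k)])
    d = ((centres[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    lbl = d.argmin(1)
    keep = [np.where(lbl == j)[0][d[lbl == j, j].argmin()]
            for j in range(k) if (lbl == j).any()]
    return boxes[keep]
```

In practice k might be estimated from the score distribution or the expected scene content; the patent text leaves this open.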
Finally, perform the corresponding control according to the recognition result.
An unmanned amusement-park ticket-checking system based on the PYNQ intelligent monitoring system distinguishes adult tickets, child tickets, and family tickets.
Step 1: when parents bring children to the ticket check, the ticket information is presented at the entrance and the camera is triggered to start capturing images. One frame is transmitted to the PYNQ-based intelligent monitoring system over the USB interface. The ARM processor preprocesses the collected image into an 8-bit, three-channel, 224 x 224-pixel feature map and writes it into DDR.
Step 2: call the API to configure each layer's parameters and build the network. Then start the general-purpose convolutional neural network accelerator IP core to compute the feedforward network; the FPGA automatically fetches feature map data row by row via DMA and writes the computed results back to DDR. When computation finishes, the flag bit changes to notify the ARM processor.
Step 3: select proposal boxes according to the anchor boxes, crop from the results the partial feature maps that may contain targets for ROI pooling, and call the accelerator IP core again to compute the fully connected layers. Classify the fully-connected-layer outputs with a support vector machine, correct them with a regression model to obtain the target bounding-box coordinates, and filter repeatedly identified targets with k-means clustering.
Step 4: finally, obtain the multi-target categories and bounding boxes of the detection result; upon judging that two adults and one child in the entrance area match the family-ticket information, the motor is controlled to open the gate and let them through.

Claims (7)

1. A PYNQ-based intelligent monitoring method, characterized in that: the PYNQ system comprises a multi-target detection module, a general-purpose convolutional neural network accelerator IP core, and a Python API; the multi-target detection module ports and optimizes the Faster R-CNN multi-target detection algorithm, using an optimized AlexNet network structure as the feedforward network and applying k-means clustering to the detection results; the Faster R-CNN multi-target detection algorithm comprises a proposal-box extraction module, an SVM classification module, a linear regression correction module, a convolution module, a pooling module, and a fully connected layer module; the monitoring method comprises the following steps:
(1) import the trained network weights onto the PYNQ SD card, and read the weights from the SD card into DDR;
(2) image acquisition: control the USB camera to capture images and transmit one frame to the PYNQ board over the USB interface; the ARM processor preprocesses the image into a feature map in the AlexNet input format and writes it into DDR;
(3) call the API to configure the network parameters, controlling the corresponding registers over AXI-Lite; for each layer's network structure, configure the kernel size, stride, whether to zero-pad, whether the layer is a convolution or pooling layer, and whether to apply the activation function;
(4) start the general-purpose convolutional neural network accelerator IP core; the FPGA automatically fetches feature map data row by row from DDR via DMA, writes results back to DDR after computation, and executes each layer in a loop, completing the forward inference of the convolution and pooling layers;
(5) the ARM processor then intervenes, checking a flag bit to determine whether computation has finished;
(6) select proposal boxes according to the anchor boxes, crop the partial feature maps that may contain targets for ROI pooling, and call the accelerator IP core to compute the fully connected layers, which are implemented by converting them into convolutions of width and height 1;
(7) classify the fully-connected-layer outputs with a support vector machine, correct them with a regression model to obtain the target bounding-box coordinates, and filter repeatedly identified targets with k-means clustering.
2. The PYNQ-based intelligent monitoring method of claim 1, characterized in that the convolution module, pooling module, and fully connected layer module are computed with the general-purpose convolutional neural network accelerator IP core in the FPGA.
3. The intelligent monitoring method of claim 1, wherein the Python API interface covers parameter configuration, data transfer, computation launch, and status readout for the general-purpose convolutional neural network accelerator IP core.
4. The PYNQ-based intelligent monitoring method of claim 1, wherein the proposal-box extraction module, SVM classification module, linear regression correction module, and k-means clustering module are computed on the ARM processor.
5. The PYNQ-based intelligent monitoring method of claim 2, wherein the feature map preprocessed by the ARM processor is stored in DDR memory and input row by row, with AXI-Lite bus control and image data transferred via DMA.
6. The PYNQ-based intelligent monitoring method of claim 1, characterized in that the general-purpose convolutional neural network accelerator IP core comprises a compute unit that internally employs row/column reuse and a six-stage pipeline; the IP core can selectively implement convolution, pooling, and activation functions, and the kernel size, stride, and zero padding are configurable.
7. The PYNQ-based intelligent monitoring method of claim 1, characterized in that nine types of proposal boxes with three areas and three aspect ratios can be configured.
CN201910661356.2A 2019-07-22 2019-07-22 PYNQ-based intelligent monitoring system and monitoring method Active CN110414401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910661356.2A CN110414401B (en) 2019-07-22 2019-07-22 PYNQ-based intelligent monitoring system and monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910661356.2A CN110414401B (en) 2019-07-22 2019-07-22 PYNQ-based intelligent monitoring system and monitoring method

Publications (2)

Publication Number Publication Date
CN110414401A CN110414401A (en) 2019-11-05
CN110414401B true CN110414401B (en) 2022-02-15

Family

ID=68362350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910661356.2A Active CN110414401B (en) 2019-07-22 2019-07-22 PYNQ-based intelligent monitoring system and monitoring method

Country Status (1)

Country Link
CN (1) CN110414401B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991382B (en) * 2019-12-02 2024-04-09 中国科学院国家空间科学中心 Heterogeneous visual target tracking system and method based on PYNQ framework
WO2021237608A1 (en) * 2020-05-28 2021-12-02 京东方科技集团股份有限公司 Target detection method based on heterogeneous platform, and terminal device and storage medium
CN112616043A (en) * 2020-12-22 2021-04-06 杭州电子科技大学 PYNQ-based neural network identification video monitoring alarm system and method
CN112580751A (en) * 2020-12-31 2021-03-30 杭州电子科技大学 Snore identification device based on ZYNQ and deep learning
CN113126767A (en) * 2021-04-25 2021-07-16 合肥工业大学 PYNQ and multi-mode brain-computer interface-based aircraft control system and method
CN114819120A (en) * 2022-02-25 2022-07-29 西安电子科技大学 PYNQ platform-based neural network universal acceleration processing method
CN116630709B (en) * 2023-05-25 2024-01-09 中国科学院空天信息创新研究院 Hyperspectral image classification device and method capable of configuring mixed convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103158620B (en) * 2013-03-25 2015-09-16 中国电子科技集团公司第三十八研究所 A kind of traffic detecting and tracking forewarn system
CN105975931B (en) * 2016-05-04 2019-06-14 浙江大学 A kind of convolutional neural networks face identification method based on multiple dimensioned pond
CN108256636A (en) * 2018-03-16 2018-07-06 成都理工大学 A kind of convolutional neural networks algorithm design implementation method based on Heterogeneous Computing
CN109167966A (en) * 2018-09-29 2019-01-08 南京邮电大学南通研究院有限公司 Image dynamic detection system and method based on FPGA+ARM

Also Published As

Publication number Publication date
CN110414401A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414401B (en) PYNQ-based intelligent monitoring system and monitoring method
CN106599773B (en) Deep learning image identification method and system for intelligent driving and terminal equipment
Hu et al. Small object detection with multiscale features
CN107851195B (en) Target detection using neural networks
WO2021129181A1 (en) Portrait segmentation method, model training method and electronic device
CN113762252A (en) Unmanned aerial vehicle intelligent following target determination method, unmanned aerial vehicle and remote controller
CN105956626A (en) Deep learning based vehicle license plate position insensitive vehicle license plate recognition method
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
CN108647665A (en) Vehicle real-time detection method of taking photo by plane based on deep learning
CN109801265B (en) Real-time transmission equipment foreign matter detection system based on convolutional neural network
US20230137337A1 (en) Enhanced machine learning model for joint detection and multi person pose estimation
CN110599463A (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network
CN112668492A (en) Behavior identification method for self-supervised learning and skeletal information
CN111860259A (en) Training and using method, device, equipment and medium of driving detection model
CN112949578A (en) Vehicle lamp state identification method, device, equipment and storage medium
CN110533688A (en) Follow-on method for tracking target, device and computer readable storage medium
CN114092746A (en) Multi-attribute identification method and device, storage medium and electronic equipment
CN115565146A (en) Perception model training method and system for acquiring aerial view characteristics based on self-encoder
CN114359892A (en) Three-dimensional target detection method and device and computer readable storage medium
CN114529719A (en) Method, system, medium and device for semantic segmentation of ground map elements
CN113792807A (en) Skin disease classification model training method, system, medium and electronic device
Ye et al. LLOD: a object detection method under low-light condition by feature enhancement and fusion
CN113642353A (en) Training method of face detection model, storage medium and terminal equipment
CN113628206B (en) License plate detection method, device and medium
KR102678174B1 (en) Method of human activity recognition and classification using convolutional LSTM, apparatus and computer program for performing the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant