CN110717575A - Frame buffer free convolutional neural network system and method - Google Patents

Frame buffer free convolutional neural network system and method

Info

Publication number
CN110717575A
CN110717575A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
region
interest
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810767312.3A
Other languages
Chinese (zh)
Other versions
CN110717575B (en)
Inventor
杨得炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Himax Technologies Ltd
Original Assignee
Himax Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Himax Technologies Ltd filed Critical Himax Technologies Ltd
Priority to CN201810767312.3A priority Critical patent/CN110717575B/en
Publication of CN110717575A publication Critical patent/CN110717575A/en
Application granted granted Critical
Publication of CN110717575B publication Critical patent/CN110717575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention relates to a convolutional neural network system and a convolutional neural network method without a frame buffer. The frame buffer-free convolutional neural network system comprises: a region-of-interest unit for extracting features to generate a region of interest of the input image frame; a convolutional neural network unit for processing the region of interest of the input image frame to detect an object; and a tracking unit for comparing the features extracted at different times so that the convolutional neural network unit selectively processes the input image frame accordingly.

Description

Frame buffer free convolutional neural network system and method
Technical Field
The present invention relates to a Convolutional Neural Network (CNN), and more particularly, to a frame buffer-less convolutional neural network system.
Background
A Convolutional Neural Network (CNN) is a type of artificial neural network and can be used for machine learning. Convolutional neural networks are applicable to signal processing, such as image processing and computer vision.
Fig. 1 shows a block diagram of a conventional convolutional neural network 900, disclosed in "A Reconfigurable Streaming Deep Convolutional Neural Network Accelerator for Internet of Things" by Li Du et al., IEEE Transactions on Circuits and Systems I: Regular Papers, August 2017, the contents of which are considered part of this specification. The convolutional neural network 900 includes a buffer bank 91 comprising a single-port static random access memory (SRAM) for storing intermediate data and exchanging data with a frame buffer 92; the frame buffer 92 comprises a dynamic random access memory (DRAM), such as a double data rate synchronous dynamic random access memory (DDR SDRAM), for storing an entire image frame for the operation of the convolutional neural network. The buffer bank 91 is divided into two parts: an input layer and an output layer. The convolutional neural network 900 includes a column buffer 93 for remapping the output of the buffer bank 91 to a convolution unit (CU) engine array 94. The convolution unit engine array 94 includes a plurality of convolution units to perform highly parallel convolution operations, and a prefetch controller 941 for periodically fetching parameters from a direct memory access (DMA) controller (not shown) and updating the weights and bias values of the convolution unit engine array 94. The convolutional neural network 900 also includes an accumulation buffer 95, with scratch-pad memory, for storing the partial convolution results of the convolution unit engine array 94; the accumulation buffer 95 contains a max-pooling unit 951 to pool the output layer data. The convolutional neural network 900 further includes an instruction decoder 96 for decoding commands pre-stored in the frame buffer 92.
As shown in FIG. 1, in a conventional convolutional neural network system, the frame buffer comprises a dynamic random access memory (DRAM), such as a double data rate synchronous dynamic random access memory (DDR SDRAM), for storing an entire image frame for the convolutional neural network operation. For example, a frame with a resolution of 320x240 requires a frame buffer with a space of 320x240x8 bits. However, DDR SDRAM is not suitable for low-power applications, such as wearable or Internet of Things (IoT) devices. Therefore, it is desirable to provide a novel convolutional neural network system suitable for low-power applications.
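The storage figure above can be checked with a line of arithmetic; a sketch, assuming the 8-bit-per-pixel format implied by the 320x240x8-bit figure:

```python
# Frame-buffer cost of the conventional system described above:
# one full 320x240 frame at 8 bits per pixel.
width, height, bits_per_pixel = 320, 240, 8

frame_bits = width * height * bits_per_pixel
frame_bytes = frame_bits // 8

print(frame_bits)   # 614400 bits
print(frame_bytes)  # 76800 bytes (75 KiB) of DRAM per buffered frame
```

Even this modest figure forces an off-chip DRAM in the conventional design, which is the power cost the present embodiments avoid.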
Disclosure of Invention
In view of the foregoing, it is an object of the embodiments of the present invention to provide a convolutional neural network system without a frame buffer. The present embodiments can use a simple system architecture to perform convolutional neural network operations on high-resolution image frames.
According to an embodiment of the present invention, a frame buffer-less convolutional neural network system includes a region of interest unit, a convolutional neural network unit, and a tracking unit. The region of interest unit extracts features to generate a region of interest of the input image frame. The convolutional neural network unit processes a region of interest of the input image frame to detect an object. The tracking unit compares the extracted features at different times so that the convolutional neural network unit can selectively process the input image frame accordingly.
Brief Description of Drawings
Fig. 1 shows a block diagram of a conventional convolutional neural network.
FIG. 2A is a block diagram of a convolutional neural network system without a frame buffer according to an embodiment of the present invention.
FIG. 2B shows a flow diagram of a method for a frame buffer free convolutional neural network, in accordance with an embodiment of the present invention.
Fig. 3 is a block diagram showing a detailed structure of the region of interest unit of FIG. 2A.
Fig. 4A illustrates a decision map, which includes 4x6 blocks.
FIG. 4B illustrates another decision map, which is updated after FIG. 4A.
FIG. 5 is a block diagram illustrating a detailed structure of the buffer of FIG. 2A.
FIG. 6 is a block diagram of the convolutional neural network unit of FIG. 2A.
Detailed Description
Fig. 2A shows a block diagram of a frame buffer-less Convolutional Neural Network (CNN) system 100 according to an embodiment of the present invention, and fig. 2B shows a flowchart of a frame buffer-less Convolutional Neural Network (CNN) method 200 according to an embodiment of the present invention.
In the present embodiment, the frame-buffer-less convolutional neural network system (hereinafter referred to as the system) 100 may include a region of interest (ROI) unit 11 for generating a region of interest in an input image frame (step 21). Since the system 100 of the present embodiment does not include a frame buffer, the region of interest unit 11 may employ a scan-line-based technique and a block-based mechanism to find the region of interest in the input image frame. The input image frame is divided into a plurality of image blocks arranged in a matrix form, such as 4x6 image blocks.
In the present embodiment, the region of interest unit 11 generates block-based features to determine whether to perform a Convolutional Neural Network (CNN) operation on each image block. Fig. 3 shows a block diagram of the region of interest unit 11 of FIG. 2A. In the present embodiment, the region of interest unit 11 may include a feature extractor 111, for example, for extracting shallow features from the input image frame. In one example, the feature extractor 111 generates the (shallow) features of a block based on a histogram of the block. In another example, the feature extractor 111 generates the (shallow) features of a block based on frequency analysis.
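A minimal sketch of the histogram-based extractor follows. The 40x40 block size matches the sliding window described later, while the 16-bin histogram is purely an assumption (the patent does not fix a bin count):

```python
import numpy as np

def block_histogram_feature(block, bins=16):
    # Shallow feature: normalized intensity histogram of one block.
    # (The 16-bin count is an assumption; the patent only says "histogram".)
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    return hist / block.size  # normalize so blocks of any size compare

def frame_to_block_features(frame, bh=40, bw=40, bins=16):
    # Scan-line friendly: each block is reduced to a tiny feature
    # vector on the fly, so no full-frame buffer is ever needed.
    rows, cols = frame.shape[0] // bh, frame.shape[1] // bw
    feats = np.empty((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            block = frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            feats[r, c] = block_histogram_feature(block, bins)
    return feats

frame = np.random.default_rng(0).integers(0, 256, (240, 320), dtype=np.uint8)
feats = frame_to_block_features(frame)
print(feats.shape)  # (6, 8, 16)
```

Each 40x40 block (1600 pixels) collapses to a 16-value vector, which is what makes the feature maps of the buffer 13 small enough for on-chip SRAM.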
The region of interest unit 11 may also include a classifier 112, such as a support vector machine (SVM), for determining whether each block of the input image frame performs the convolutional neural network operation. Thereby, a decision map 12 can be generated, which includes a plurality of blocks (which can be arranged in a matrix form) representing the input image frame. FIG. 4A illustrates the decision map 12, which includes 4x6 blocks, where X indicates that the relevant block does not need to perform the convolutional neural network operation, C indicates that the relevant block needs to perform the convolutional neural network operation, and D indicates that the relevant block has detected an object (e.g., a dog). Accordingly, the region of interest can be determined and the convolutional neural network operation can be executed.
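How a per-block classifier turns block features into a decision map can be sketched as follows. The patent names a support vector machine; the toy threshold rule below merely stands in for it so the example is self-contained, and the C/X symbols follow FIG. 4A:

```python
import numpy as np

def decision_map(features, classify):
    # 'classify' stands in for the SVM of classifier 112: it maps one
    # block feature vector to True (run CNN, 'C') or False (skip, 'X').
    rows, cols, _ = features.shape
    return np.array([[('C' if classify(features[r, c]) else 'X')
                      for c in range(cols)] for r in range(rows)])

def toy_classifier(hist, thresh=0.5):
    # Hypothetical stand-in rule: flag blocks whose histogram deviates
    # from flat, i.e. blocks with enough structure to be worth a CNN pass.
    flat = np.full_like(hist, 1.0 / hist.size)
    return np.abs(hist - flat).sum() > thresh

rng = np.random.default_rng(1)
feats = rng.dirichlet(np.ones(16), size=(4, 6))  # 4x6 grid, as in FIG. 4A
dmap = decision_map(feats, toy_classifier)
print(dmap.shape)  # (4, 6)
```

A trained SVM would replace `toy_classifier`; the surrounding flow (features in, C/X map out) is the part the patent describes.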
Referring to FIG. 2A and FIG. 2B, the system 100 may comprise a buffer 13, such as a static random access memory (SRAM), for storing the (shallow) features generated by the feature extractor 111 (of the region of interest unit 11) (step 22). FIG. 5 is a block diagram illustrating a detailed structure of the buffer 13 of FIG. 2A. In the present embodiment, the buffer 13 may comprise two feature maps: a first feature map 131A for storing the features of the previous image frame (at the previous time t-1), and a second feature map 131B for storing the features of the current image frame (at the current time t). The buffer 13 may also include a sliding window 132, which may be 40x40x8 bits in size, for storing blocks of the input image frame.
Referring to FIG. 2A, the system 100 of the present embodiment may include a convolutional neural network (CNN) unit 14, which receives and processes the region of interest of the input image frame (generated by the region of interest unit 11) to detect an object (step 23). The convolutional neural network operation of the present embodiment is performed only on the region of interest, rather than on the entire input image frame as in a conventional system with a frame buffer.
Fig. 6 shows a block diagram of the convolutional neural network unit 14 of FIG. 2A. The convolutional neural network unit 14 may include a convolution unit 141 including a plurality of convolution engines for performing convolution operations. The convolutional neural network unit 14 may include an excitation (activation) unit 142 that performs an excitation function when a predetermined feature is detected. The convolutional neural network unit 14 may also include a pooling unit 143 to perform down-sampling (pooling) on the input image frame.
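The three stages of the convolutional neural network unit 14 can be sketched in a few lines; the 2x2 kernel, the choice of ReLU as the excitation function, and the 2x2 pooling stride are all assumptions for illustration, as the patent fixes none of them:

```python
import numpy as np

def conv2d(x, k):
    # Valid-mode 2-D convolution (correlation form): what one
    # convolution engine of unit 141 computes for a single kernel.
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def relu(x):
    # One common excitation (activation) function; the patent does
    # not name a specific one, so ReLU is an assumption here.
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    # 2x2 max pooling: the down-sampling performed by pooling unit 143.
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

# Toy 6x6 region of interest containing a vertical edge.
roi = np.zeros((6, 6))
roi[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # responds to that edge

y = max_pool(relu(conv2d(roi, kernel)))
print(y.shape)  # (2, 2)
print(y.max())  # 2.0 -- the edge response survives pooling
```

The hardware runs many such engines in parallel over the sliding window; the data flow per window is what this sketch shows.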
The system 100 of the present embodiment may include a tracking unit 15 for comparing the first feature map 131A (of the previous image frame) with the second feature map 131B (of the current image frame), and then updating the decision map 12 (step 24). The tracking unit 15 analyzes the content change between the first feature map 131A and the second feature map 131B. FIG. 4B illustrates another decision map 12, updated after FIG. 4A. In this example, at the previous time, the blocks in rows 5-6 and column 3 had detected an object (D, as indicated in FIG. 4A), but at the current time the object has disappeared (X, as indicated in FIG. 4B). Accordingly, the convolutional neural network unit 14 need not perform convolutional neural network operations on blocks without feature changes. In other words, the convolutional neural network unit 14 selectively performs convolutional neural network operations on blocks having feature changes. Thus, the system 100 may substantially speed up operation.
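A sketch of the tracking unit's comparison follows; the L1 distance and the tolerance value are assumptions, since the patent only states that the two feature maps are compared:

```python
import numpy as np

def updated_decision_map(prev_feats, curr_feats, tol=0.05):
    # Tracking-unit sketch: a block whose feature vector barely moved
    # between time t-1 and time t is marked 'X' (skip the CNN); a block
    # with a real change is marked 'C' (process). The L1 metric and
    # tolerance are assumptions, not taken from the patent.
    changed = np.abs(curr_feats - prev_feats).sum(axis=-1) > tol
    return np.where(changed, 'C', 'X')

rng = np.random.default_rng(2)
prev = rng.dirichlet(np.ones(16), size=(4, 6))   # feature map at t-1
curr = prev.copy()                               # feature map at t
curr[1, 2] += 0.1                                # exactly one block changed

dmap = updated_decision_map(prev, curr)
print((dmap == 'C').sum())  # 1 -- only the changed block is reprocessed
```

This is the mechanism behind the speed-up: static blocks are never re-sent through the convolutional neural network unit 14.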
Compared to conventional convolutional neural network systems, the convolutional neural network operations of the above embodiments can be substantially reduced (and thus accelerated). Furthermore, since the embodiments of the present invention do not require a frame buffer, they may be better suited for low-power applications, such as wearable or Internet of Things (IoT) devices. For image frames with a resolution of 320x240 and a (non-overlapping) sliding window size of 40x40, a conventional system with a frame buffer requires 8x6 (i.e., 48) sliding windows to perform the convolutional neural network operation. In contrast, the system 100 of the present embodiment requires only a few (less than 10) sliding windows to perform the convolutional neural network operation.
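The window counts quoted above follow directly from the stated geometry (non-overlapping 40x40 windows over a 320x240 frame):

```python
# Sliding-window count for a full-frame pass, as in the conventional
# system: non-overlapping 40x40 windows tiling a 320x240 frame.
frame_w, frame_h, win = 320, 240, 40

cols, rows = frame_w // win, frame_h // win
windows_full = cols * rows
print(cols, rows)    # 8 6
print(windows_full)  # 48 windows for the conventional full-frame pass

# The frame-buffer-less system only visits windows inside the region
# of interest -- fewer than 10 in the example above -- so the CNN
# workload drops by roughly a factor of five or more.
```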
The above description concerns only preferred embodiments of the present invention and is not intended to limit its scope; all equivalent changes and modifications that do not depart from the spirit of the invention are intended to be included within the scope of the following claims.
[ description of reference ]
100 convolutional neural network system without frame buffer
11 region of interest unit
111 feature extractor
112 classifier
12 decision map
13 buffer
131A first feature map
131B second feature map
132 sliding window
14 convolutional neural network unit
141 convolution unit
142 excitation unit
143 pooling unit
15 tracking unit
200 method for frame buffer free convolutional neural network
21 generating a region of interest in an input image frame
22 storing the features in feature maps
23 processing the region of interest to detect the object
24 comparing features and performing convolutional neural network operations on blocks with feature variations
900 convolutional neural network
91 buffer bank
92 frame buffer
93 column buffer
94 convolution unit engine array
941 prefetch controller
95 accumulation buffer
951 max-pooling unit
96 instruction decoder

Claims (20)

1. A frame buffer free convolutional neural network system, comprising:
a region-of-interest unit for extracting features to generate a region of interest of the input image frame;
a convolutional neural network unit for processing the region of interest of the input image frame to detect an object; and
a tracking unit for comparing the features extracted at different times so that the convolutional neural network unit selectively processes the input image frame accordingly.
2. The frame buffer free convolutional neural network system of claim 1, wherein the region of interest unit employs a scan line based technique and a block based mechanism for finding the region of interest in the input image frame, wherein the input image frame is divided into a plurality of image blocks.
3. The frame buffer free convolutional neural network system of claim 2, wherein the region-of-interest unit generates block-based features to determine whether to perform the convolutional neural network operation for each image block.
4. The frame buffer free convolutional neural network system of claim 2, wherein the region-of-interest unit comprises:
a feature extractor for extracting the features from the input image frame; and
a classifier for determining whether each image block performs the convolutional neural network operation, thereby generating a decision map to determine the region of interest.
5. The frame buffer free convolutional neural network system of claim 4, wherein the feature extractor generates shallow features of the image blocks according to a block-based histogram or frequency analysis.
6. The frame buffer free convolutional neural network system of claim 4, further comprising a buffer for storing the features.
7. The frame buffer free convolutional neural network system of claim 6, wherein the buffer comprises: a first feature map for storing the features of a previous image frame; and a second feature map for storing the features of a current image frame.
8. The frame buffer free convolutional neural network system of claim 6, wherein the buffer comprises a sliding window for storing blocks of the input image frame.
9. The frame buffer free convolutional neural network system of claim 7, wherein the tracking unit compares the first feature map with the second feature map to update the decision map.
10. The frame buffer free convolutional neural network system of claim 1, wherein the convolutional neural network unit comprises:
a convolution unit including a plurality of convolution engines for performing convolution operations on the region of interest;
an excitation unit that performs an excitation function when a predetermined feature is detected; and
a pooling unit for performing down-sampling on the input image frame.
11. A method for a frame buffer free convolutional neural network, comprising:
extracting features to generate a region of interest of the input image frame;
executing a convolutional neural network operation on the region of interest of the input image frame to detect an object; and
comparing the features extracted at different times to selectively process the input image frame.
12. The method of claim 11, wherein the region of interest is generated using a scan line based technique and a block based mechanism, wherein the input image frame is divided into a plurality of image blocks.
13. The method for frame buffer free convolutional neural network of claim 12, wherein the step of generating the region of interest comprises:
generating block-based features to determine whether each image block performs the convolutional neural network operation.
14. The method for frame buffer free convolutional neural network of claim 12, wherein the step of generating the region of interest comprises:
extracting the features from the input image frame; and
using a classification method to determine whether each image block performs the convolutional neural network operation, thereby generating a decision map from which the region of interest is determined.
15. The method for frame buffer free convolutional neural network of claim 14, wherein the step of extracting the features comprises:
generating shallow features of the image block based on a block-based histogram or frequency analysis.
16. The method for frame buffer free convolutional neural network of claim 14, further comprising the step of temporarily storing the features.
17. The method for frame buffer free convolutional neural network of claim 16, wherein the step of temporarily storing the features comprises:
generating a first feature map for storing the features of a previous image frame; and generating a second feature map for storing the features of a current image frame.
18. The method for frame buffer free convolutional neural network of claim 16, wherein the step of temporarily storing the features comprises:
generating a sliding window for storing the blocks of the input image frame.
19. The method for frame buffer free convolutional neural network of claim 17, wherein the step of comparing the features comprises:
comparing the first feature map with the second feature map to update the decision map.
20. The method for frame buffer free convolutional neural network of claim 11, wherein the step of performing the convolutional neural network operation comprises:
using a plurality of convolution engines to perform convolution operations on the region of interest;
executing an excitation function when a predetermined feature is detected; and
performing down-sampling on the input image frame.
CN201810767312.3A 2018-07-13 2018-07-13 Frame buffer free convolutional neural network system and method Active CN110717575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810767312.3A CN110717575B (en) 2018-07-13 2018-07-13 Frame buffer free convolutional neural network system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810767312.3A CN110717575B (en) 2018-07-13 2018-07-13 Frame buffer free convolutional neural network system and method

Publications (2)

Publication Number Publication Date
CN110717575A true CN110717575A (en) 2020-01-21
CN110717575B CN110717575B (en) 2022-07-26

Family

ID=69208451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810767312.3A Active CN110717575B (en) 2018-07-13 2018-07-13 Frame buffer free convolutional neural network system and method

Country Status (1)

Country Link
CN (1) CN110717575B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271514A (en) * 2007-03-21 2008-09-24 株式会社理光 Image detection method and device for fast object detection and objective output
US20090222388A1 (en) * 2007-11-16 2009-09-03 Wei Hua Method of and system for hierarchical human/crowd behavior detection
CN103914702A (en) * 2013-01-02 2014-07-09 国际商业机器公司 System and method for boosting object detection performance in videos
CN104268900A (en) * 2014-09-26 2015-01-07 中安消技术有限公司 Motion object detection method and device
CN104298976A (en) * 2014-10-16 2015-01-21 电子科技大学 License plate detection method based on convolutional neural network
CN104504362A (en) * 2014-11-19 2015-04-08 南京艾柯勒斯网络科技有限公司 Face detection method based on convolutional neural network
WO2015095733A1 (en) * 2013-12-19 2015-06-25 Objectvideo, Inc. System and method for identifying faces in unconstrained media
CN105512640A (en) * 2015-12-30 2016-04-20 重庆邮电大学 Method for acquiring people flow on the basis of video sequence
CN105718868A (en) * 2016-01-18 2016-06-29 中国科学院计算技术研究所 Face detection system and method for multi-pose faces
CN106096561A (en) * 2016-06-16 2016-11-09 重庆邮电大学 Infrared pedestrian detection method based on image block degree of depth learning characteristic
US20160371546A1 (en) * 2015-06-16 2016-12-22 Adobe Systems Incorporated Generating a shoppable video
US20170011281A1 (en) * 2015-07-09 2017-01-12 Qualcomm Incorporated Context-based priors for object detection in images
CN106878674A (en) * 2017-01-10 2017-06-20 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
CN107016409A (en) * 2017-03-20 2017-08-04 华中科技大学 A kind of image classification method and system based on salient region of image
CN107492115A (en) * 2017-08-30 2017-12-19 北京小米移动软件有限公司 The detection method and device of destination object
CN107704797A (en) * 2017-08-08 2018-02-16 深圳市安软慧视科技有限公司 Real-time detection method and system and equipment based on pedestrian in security protection video and vehicle
CN107832683A (en) * 2017-10-24 2018-03-23 亮风台(上海)信息科技有限公司 A kind of method for tracking target and system
CN108229319A (en) * 2017-11-29 2018-06-29 南京大学 The ship video detecting method merged based on frame difference with convolutional neural networks
CN108229523A (en) * 2017-04-13 2018-06-29 深圳市商汤科技有限公司 Image detection, neural network training method, device and electronic equipment


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIAOWEN WU et al.: "The Detection of Typical Targets under the Background of Land War", 2017 29th Chinese Control and Decision Conference (CCDC) *
ZHANG Juli et al.: "Research on Object Recognition Technology for Extravehicular Activities Inspired by Visual Perception", Manned Spaceflight *
ZHANG Yajun et al.: "Pedestrian Flow Statistics Based on Convolutional Neural Network", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *
WANG Siyu et al.: "Aircraft Target Detection in High-Resolution SAR Images Based on Convolutional Neural Network", Journal of Radars *

Also Published As

Publication number Publication date
CN110717575B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
US10769485B2 (en) Framebuffer-less system and method of convolutional neural network
US11074445B2 (en) Remote sensing image recognition method and apparatus, storage medium and electronic device
US9971959B2 (en) Performing object detection operations via a graphics processing unit
US20060222243A1 (en) Extraction and scaled display of objects in an image
US10810721B2 (en) Digital image defect identification and correction
WO2018176186A1 (en) Semantic image segmentation using gated dense pyramid blocks
CN108229673B (en) Convolutional neural network processing method and device and electronic equipment
CN109784372B (en) Target classification method based on convolutional neural network
US9025889B2 (en) Method, apparatus and computer program product for providing pattern detection with unknown noise levels
CN109671042B (en) Gray level image processing system and method based on FPGA morphological operator
US10892012B2 (en) Apparatus, video processing unit and method for clustering events in a content addressable memory
CN111461145A (en) Method for detecting target based on convolutional neural network
WO2023116632A1 (en) Video instance segmentation method and apparatus based on spatio-temporal memory information
US10475187B2 (en) Apparatus and method for dividing image into regions
CN111340025A (en) Character recognition method, character recognition device, computer equipment and computer-readable storage medium
JP7014005B2 (en) Image processing equipment and methods, electronic devices
US20150242988A1 (en) Methods of eliminating redundant rendering of frames
CN110717575B (en) Frame buffer free convolutional neural network system and method
US7479996B2 (en) Noise eliminating device and method therefor
TWI696127B (en) Framebuffer-less system and method of convolutional neural network
CN111179212A (en) Method for realizing micro target detection chip integrating distillation strategy and deconvolution
CN114943729A (en) Cell counting method and system for high-resolution cell image
JP2003178310A (en) Object image tracking method and program
CN114821048A (en) Object segmentation method and related device
CN114120423A (en) Face image detection method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant