CN111191730B - Method and system for detecting oversized image target oriented to embedded deep learning - Google Patents


Info

Publication number
CN111191730B
CN111191730B
Authority
CN
China
Prior art keywords
image
target
sub
related information
target detection
Prior art date
Legal status
Active
Application number
CN202010003131.0A
Other languages
Chinese (zh)
Other versions
CN111191730A (en)
Inventor
程陶然
白林亭
文鹏程
高泽
邹昌昊
李欣瑶
Current Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010003131.0A
Publication of CN111191730A
Application granted
Publication of CN111191730B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T7/11 - Region-based segmentation
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides an oversized image target detection method and system for embedded deep learning, comprising an image preprocessing unit, a target detection unit, and an image post-processing unit. Addressing the growing demand for oversized-image processing in embedded deep learning and the limitations of running deep neural networks on embedded multi-core processors, the method divides a single image into multiple sub-images based on a blocking (tiling) idea, enabling parallel target detection on a single image; the image post-processing unit then analyzes and merges the individual detection results. This effectively alleviates the low processing efficiency of embedded computing platforms.

Description

Method and system for detecting oversized image target oriented to embedded deep learning
Technical Field
The invention belongs to the field of intelligent computing and relates to an oversized image target detection method for embedded deep learning.
Background
With the continuous development of deep learning, image processing algorithms based on deep learning, such as target detection and semantic segmentation, have achieved great success. However, because deep neural networks have complex structures and numerous parameters, deep learning algorithms perform poorly on resource-limited embedded computing platforms and are often only able to process small images. For large remote sensing images or detail-rich high-definition images, the processing efficiency of embedded deep learning algorithms is very low.
To improve this efficiency, hardware manufacturers have successively released AI chips that speed up processing through optimized compute structures and increase data throughput through multi-core designs. However, the strong interconnection within a deep neural network makes it difficult to parallelize a model that processes a single image, so the parallel computing capacity of a multi-core processor is hard to exploit fully when facing the real-time processing requirements of streaming media data.
Disclosure of Invention
The invention provides an embedded-deep-learning-oriented oversized image target detection method, aiming to process oversized image data efficiently in real time and to detect targets comprehensively and accurately.
The solution proposed by the invention is as follows:
The method for detecting oversized image targets for embedded deep learning comprises the following steps:
1) Receiving an input image and dividing it by pixel position into a plurality of sub-images, such that any sub-image overlaps each of its adjacent sub-images in a region near their shared boundary; this region is denoted the segmentation redundant region;
2) Performing target detection on each sub-image separately to obtain target-related information (the target detection result);
3) Referring to the target-related information obtained for each sub-image, re-detecting and re-localizing targets in the segmentation redundant regions; marking the updated target detection results on the original image and outputting a visualized result.
Based on this scheme, the invention is further refined as follows.
Optionally, step 1) performs image segmentation according to a preset width W, a preset height H and a redundancy threshold T, where T is the number of pixels by which adjacent sub-images overlap near their boundary; step 2) then performs target detection on each W × H sub-image with a convolutional neural network algorithm and outputs target-related information that includes at least the target position.
Optionally, the target-related information further includes the category to which the target belongs and a confidence level.
Optionally, the width W, the height H and the redundancy threshold T are determined from the original image size, the target size and the processor's computing power;
the width ranges of the sub-images are [0, W-1], [W-T, 2W-T-1], [2W-2T, 3W-2T-1], …;
the height ranges of the sub-images are [0, H-1], [H-T, 2H-T-1], [2H-2T, 3H-2T-1], …;
any width range combined with any height range defines a sub-image region.
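As an illustration, the width and height ranges above can be generated programmatically. The sketch below is not part of the patent; the function name and the closed-range tuple format are our own:

```python
def tile_ranges(length, tile, overlap):
    """Return closed index ranges (start, end) of tiles of size `tile`,
    stepping by (tile - overlap) so adjacent tiles share `overlap` pixels.
    The last tile may extend past `length` (to be zero-padded)."""
    step = tile - overlap
    ranges = []
    start = 0
    while True:
        ranges.append((start, start + tile - 1))
        if start + tile >= length:
            break
        start += step
    return ranges

# Matches [0, W-1], [W-T, 2W-T-1], [2W-2T, 3W-2T-1], ... with W=40, T=10:
print(tile_ranges(100, 40, 10))  # [(0, 39), (30, 69), (60, 99)]
```

Each pair of one width range and one height range then defines one sub-image, as the text states.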
Optionally, step 3) specifically includes:
analyzing, for each sub-image, whether the segmentation redundant region contains a target regression box whose distance to the sub-image boundary is smaller than aT, where a is a preset coefficient, preferably in the range 0 < a < 0.5;
if such a regression box exists, re-determining the corresponding width or height range centered on the segmentation redundant region and sampling a new sub-image of size W × H;
re-running target detection on the new sub-image and adopting only the results that fall in the segmentation redundant region, thereby updating the target-related information there; finally, marking all target-related information (the updated redundant-region results together with the earlier non-redundant-region results) on the original image according to the required rules to form a visualized output image.
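The boundary test in step 3) can be sketched as follows. This is our own illustration, not the patent's code: the function name and coordinate conventions are assumptions, and the check covers only the distance-to-boundary condition, not whether the box lies in a redundant region:

```python
def needs_redetection(box, tile, overlap, a=0.3):
    """True if a detected box (x1, y1, x2, y2) lies closer than a*T to a
    boundary of its tile (tx1, ty1, tx2, ty2), where T = `overlap` is the
    redundancy threshold and 0 < a < 0.5 as preferred in the text."""
    x1, y1, x2, y2 = box
    tx1, ty1, tx2, ty2 = tile
    return min(x1 - tx1, tx2 - x2, y1 - ty1, ty2 - y2) < a * overlap

# A box 3 px from the left edge of tile [582, 1193] x [0, 455], with T = 30:
print(needs_redetection((585, 215, 645, 255), (582, 0, 1193, 455), 30))  # True
```

When the check fires, a new W × H sub-image centered on the redundant region would be cut out and re-detected, with only the redundant-region results kept.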
Correspondingly, the invention also provides an oversized image target detection system for embedded deep learning, comprising:
an image preprocessing unit, which receives an input image and divides it by pixel position into a plurality of sub-images, such that any sub-image overlaps each of its adjacent sub-images in a region near their shared boundary (the segmentation redundant region);
a target detection unit, which performs target detection on each sub-image separately to obtain target-related information;
an image post-processing unit, which, referring to the target-related information obtained for each sub-image, re-detects and re-localizes targets in the segmentation redundant regions, marks the updated target detection results on the original image, and outputs a visualized result.
Optionally, the image post-processing unit re-detects and localizes targets in a segmentation redundant region by generating a new sub-image centered on that region; the new sub-image re-enters the target detection unit, and only the detection results inside the segmentation redundant region are adopted.
Correspondingly, the invention also provides an embedded device comprising a processor and a program memory; the program stored in the program memory, when loaded by the processor, executes the above method for detecting oversized image targets oriented to embedded deep learning.
Compared with the prior art, the invention has the following beneficial effects.
The proposed method realizes parallel target detection on a single oversized image with a deep neural network, makes full use of the computing resources of a multi-core processor, improves deep learning processing efficiency in an embedded computing environment, and effectively addresses the real-time processing of data captured by (ultra) high-definition cameras.
When dividing sub-images, the method fully accounts for targets that may straddle the dividing lines: pixels near each boundary receive redundant processing (equivalent to offsetting the standard grid cells horizontally and vertically) and corresponding image post-processing, which avoids missed detections and achieves complete, accurate target detection.
Drawings
Fig. 1 is a schematic diagram of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples.
Addressing the growing demand for oversized-image processing in embedded deep learning and the limitations of running deep neural networks on embedded multi-core processors, the invention provides an oversized image target detection method that divides a single image into multiple sub-images based on a blocking (tiling) idea, thereby enabling parallel target detection on a single image; an image post-processing unit then analyzes and merges the individual detection results, effectively alleviating the low processing efficiency of embedded computing platforms.
As shown in fig. 1, the method for detecting oversized image targets for embedded deep learning can be realized by the following software modules:
an image preprocessing unit, which receives an input image and segments it according to a preset width W, a preset height H and a redundancy threshold T. Specifically, the image width is divided into the ranges [0, W-1], [W-T, 2W-T-1], [2W-2T, 3W-2T-1], …, and the image height into [0, H-1], [H-T, 2H-T-1], [2H-2T, 3H-2T-1], …;
a target detection unit, which performs target detection on each W × H sub-image with a convolutional neural network algorithm and marks the target position, category, confidence and similar information;
an image post-processing unit, which merges the target detection results and re-detects and re-localizes targets in the segmentation redundant regions to form a visualized detection result. Specifically, it analyzes whether a target regression box lies in the segmentation redundant region between adjacent sub-images; if the distance from the regression box to the sub-image boundary is smaller than aT (0 < a < 0.5), a new W × H sub-image centered on that redundant region is sampled and target detection is performed again, and the targets in the redundant region are taken from this second detection.
The image post-processing unit then updates the target-related information of the segmentation redundant regions from the second detection round, marks it on the original image together with the first-round information from the non-redundant regions according to the required rules, and forms a visualized output image.
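The division of labor among the units above lends itself to parallel dispatch of sub-images. The following is a minimal sketch of that fan-out, entirely our own illustration: `detect` is a stub standing in for the CNN detector (Faster R-CNN, YOLO, etc.), and the function names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def detect(sub_image):
    """Stub for the CNN-based target detection unit; a real detector
    would return (box, category, confidence) tuples for `sub_image`."""
    return []

def detect_all(sub_images, workers=4):
    """Run the detector over all sub-images in parallel, pairing each
    result with its sub-image index so post-processing can map boxes
    back to original-image coordinates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(enumerate(pool.map(detect, sub_images)))

print(detect_all(["tile0", "tile1", "tile2"]))  # [(0, []), (1, []), (2, [])]
```

On an embedded multi-core AI chip the batch dispatch would be handled by the vendor runtime rather than Python threads; the sketch only shows the fan-out/fan-in shape.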
Taking the Faster R-CNN target detection algorithm as an example, the specific workflow (shown in FIG. 1) is as follows.
first, an input image is divided by an image preprocessing unit based on a preset rule, and a set of sub-images is obtained. Taking into account factors such as original image size (4096×2160), target size (assuming that the minimum target size is 32×32 and the maximum target size is 200×200), processor computing power, etc., the width and height of the sub-images are set to 612 and 426, respectively, and the redundancy threshold is 30, that is, the original image width is divided into [0,611] [582,1193] [1164,1775] [1746,2357] [2328,2939] [2910,3521] [3492,4103], [4096,4103] portions are padded with zero extension, and the original image height is divided into [0,455] [426,881] [852,1307] [1278,1733] [1704,2159], and 7×5=35 sub-images.
Then, the sub-images enter the target detection unit in batches, and target detection is performed on them in parallel based on the Faster R-CNN algorithm.
Finally, the image post-processing unit analyzes the detection results and checks whether any target regression box lies in a segmentation redundant region within 9 pixels (0.3T) of a sub-image edge; if such a target exists, a new sub-image centered on that redundant region is sampled and target detection is performed again. For example, suppose sub-image No. 2 contains a regression box with diagonal corners (585, 215) and (645, 255); the box is then 3 pixels from the sub-image's left edge (x = 582), so the region with abscissa range [241,852] and ordinate range [0,455] is resampled into a new sub-image, which enters the target detection unit to re-localize the targets of the intermediate region. The image post-processing unit then marks the target position, category, confidence and other information on the original image, completing the visualization, and outputs the image.
The target detection applied to each sub-image may use different conventional algorithms (usually convolutional-neural-network-based) according to actual needs: besides the Faster R-CNN algorithm (which favors localization accuracy), the YOLO algorithm (which favors speed) may also be employed.

Claims (6)

1. An oversized image target detection method for embedded deep learning is characterized by comprising the following steps:
1) Receiving an input image, dividing the image into a plurality of sub-images according to pixel positions, wherein any one sub-image is overlapped with all adjacent sub-images in a region close to a boundary, and the region is marked as a division redundant region; specifically, image segmentation is carried out according to a preset width W, a preset height H and a redundancy threshold T; the redundancy threshold T characterizes the number of pixels overlapped with each other in the area close to the boundary;
2) Respectively carrying out target detection on each sub-image to obtain target related information; the method comprises the steps of respectively carrying out target detection on sub-images with the size of W multiplied by H based on a convolutional neural network algorithm, and outputting target related information, wherein the target related information at least comprises a target position;
3) Re-detecting and positioning the targets of the partitioned redundant areas by referring to the obtained target related information of each sub-image; marking the original image according to the updated target detection result, and outputting a visual result; the method specifically comprises the following steps: analyzing whether a target regression frame exists in the segmentation redundant area of each sub-image, wherein the distance from the target regression frame to the sub-image boundary is smaller than aT, and a is a set coefficient; if the target regression frame meeting the conditions exists, the corresponding width or height range is redetermined by taking the segmentation redundant area as the center, and a new sub-image with the size of W multiplied by H is sampled; and re-performing target detection on the new sub-image, only adopting a target detection result of the partitioned redundant area, updating target related information of the partitioned redundant area, and marking all target related information on the original image according to a required rule according to the target related information of the non-partitioned redundant area before so as to form a visual output image.
2. The embedded deep learning-oriented oversized image target detection method as claimed in claim 1, characterized in that: the target-related information further includes the category to which the target belongs and a confidence level.
3. The embedded deep learning-oriented oversized image target detection method as claimed in claim 1, characterized by: the width W, the height H and the redundancy threshold T are determined according to the original image size, the target size and the processor computing power;
the range of the width of the segmentation image is [0, W-1], [ W-T,2W-T-1], [2W-2T,3W-2T-1]. The third party.
The height ranges of the segmented images are [0, H-1], [ H-T,2H-T-1], [2H-2T,3H-2T-1 ];
any width range and height range together form a sub-image area.
4. The embedded deep learning-oriented oversized image target detection method as claimed in claim 1, characterized by: a is more than 0 and less than 0.5.
5. An embedded deep learning-oriented oversized image target detection system, comprising:
the image preprocessing unit is used for receiving an input image, dividing the image into a plurality of sub-images according to pixel point positions, wherein any one sub-image is overlapped with all adjacent sub-images in a region close to a boundary, and the region is marked as a division redundant region; the method is particularly used for image segmentation according to a preset width W, a preset height H and a redundancy threshold T; the redundancy threshold T characterizes the number of pixels overlapped with each other in the area close to the boundary;
the target detection unit is used for respectively carrying out target detection on each sub-image to obtain target related information; the method is particularly used for respectively carrying out target detection on sub-images with the size of W multiplied by H based on a convolutional neural network algorithm and outputting target related information, wherein the target related information at least comprises a target position;
the image post-processing unit is used for referring to the obtained target related information of each sub-image and detecting and positioning the targets of the partitioned redundant areas again; marking the original image according to the updated target detection result, and outputting a visual result; the method comprises the steps of analyzing whether a target regression frame exists in a segmentation redundant area of each sub-image, wherein the distance from the target regression frame to a sub-image boundary is smaller than aT, and a is a set coefficient; if the target regression frame meeting the conditions exists, the corresponding width or height range is redetermined by taking the segmentation redundant area as the center, and a new sub-image with the size of W multiplied by H is sampled; and then re-carrying out target detection on the new sub-image, only adopting the target detection result of the partitioned redundant area, updating the target related information of the partitioned redundant area, and marking all the target related information on the original image according to the target related information of the non-partitioned redundant area so as to form a visual output image.
6. An embedded device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, performs the embedded deep learning oriented oversized image object detection method of claim 1.
CN202010003131.0A (filed 2020-01-02): Method and system for detecting oversized image target oriented to embedded deep learning; granted as CN111191730B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010003131.0A CN111191730B (en) 2020-01-02 2020-01-02 Method and system for detecting oversized image target oriented to embedded deep learning


Publications (2)

Publication Number Publication Date
CN111191730A CN111191730A (en) 2020-05-22
CN111191730B (en) 2023-05-12

Family

ID=70709746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010003131.0A Active CN111191730B (en) 2020-01-02 2020-01-02 Method and system for detecting oversized image target oriented to embedded deep learning

Country Status (1)

Country Link
CN (1) CN111191730B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132164B (en) * 2020-11-20 2021-03-09 北京易真学思教育科技有限公司 Target detection method, system, computer device and storage medium
CN113762220B (en) * 2021-11-03 2022-03-15 通号通信信息集团有限公司 Object recognition method, electronic device, and computer-readable storage medium
CN114445794A (en) * 2021-12-21 2022-05-06 北京罗克维尔斯科技有限公司 Parking space detection model training method, parking space detection method and device
CN114332456A (en) * 2022-03-16 2022-04-12 山东力聚机器人科技股份有限公司 Target detection and identification method and device for large-resolution image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751363B1 (en) * 1999-08-10 2004-06-15 Lucent Technologies Inc. Methods of imaging based on wavelet retrieval of scenes
WO2018058573A1 (en) * 2016-09-30 2018-04-05 富士通株式会社 Object detection method, object detection apparatus and electronic device
WO2019000653A1 (en) * 2017-06-30 2019-01-03 清华大学深圳研究生院 Image target identification method and apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014186392A (en) * 2013-03-21 2014-10-02 Fuji Xerox Co Ltd Image processing device and program
CN104346801B (en) * 2013-08-02 2018-07-20 佳能株式会社 Image composition apparatus for evaluating, information processing unit and its method
JP2015106360A (en) * 2013-12-02 2015-06-08 三星電子株式会社Samsung Electronics Co.,Ltd. Object detection method and object detection device
CN104408482B (en) * 2014-12-08 2019-02-12 电子科技大学 A kind of High Resolution SAR Images object detection method
US9542751B2 (en) * 2015-05-08 2017-01-10 Qualcomm Incorporated Systems and methods for reducing a plurality of bounding regions
KR101795270B1 (en) * 2016-06-09 2017-11-07 현대자동차주식회사 Method and Apparatus for Detecting Side of Object using Information for Ground Boundary of Obstacle
KR20180107988A (en) * 2017-03-23 2018-10-04 한국전자통신연구원 Apparatus and methdo for detecting object of image
CN108154521B (en) * 2017-12-07 2021-05-04 中国航空工业集团公司洛阳电光设备研究所 Moving target detection method based on target block fusion
KR101896357B1 (en) * 2018-02-08 2018-09-07 주식회사 라디코 Method, device and program for detecting an object
CN110781839A (en) * 2019-10-29 2020-02-11 北京环境特性研究所 Sliding window-based small and medium target identification method in large-size image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Qingzhong; Li Yibing; Niu Jiong. Real-time detection of underwater fish targets based on improved YOLO and transfer learning. Pattern Recognition and Artificial Intelligence, 2019, No. 03 (full text). *

Also Published As

Publication number Publication date
CN111191730A (en) 2020-05-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant