CN112052907A - Target detection method and device based on image edge information and storage medium - Google Patents

Target detection method and device based on image edge information and storage medium

Info

Publication number
CN112052907A
Authority
CN
China
Prior art keywords
image
target detection
edge
edge information
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010968922.7A
Other languages
Chinese (zh)
Inventor
廖丹萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Smart Video Security Innovation Center Co Ltd
Original Assignee
Zhejiang Smart Video Security Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Smart Video Security Innovation Center Co Ltd filed Critical Zhejiang Smart Video Security Innovation Center Co Ltd
Priority to CN202010968922.7A priority Critical patent/CN112052907A/en
Publication of CN112052907A publication Critical patent/CN112052907A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention discloses a target detection method based on image edge information, which comprises the steps of: obtaining an image to be detected; performing edge extraction on the image to obtain a binary image containing only edge information; and training a target detection model and inputting the binary image into a preset target detection model to obtain a target detection result. Because the method uses only the edge information of the image, redundant information such as color and texture is removed from the target detection task, so the algorithm can construct a detection model that is strongly robust to changes in color, texture, and illumination, improving the efficiency and accuracy of target detection.

Description

Target detection method and device based on image edge information and storage medium
Technical Field
The present invention relates to the field of image analysis technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting a target based on image edge information.
Background
With the development of computer image recognition technology, recognizing real world images captured by a camera has become an important branch of technology.
In the prior art, methods based on deep learning detect information such as the color and texture of an acquired image to realize target identification. In different scenes, however, the color and texture information of a target is influenced by conditions such as data distribution and illumination and can differ greatly; if this information is used for target detection, a model that performs well on the source domain may perform poorly on the target domain because of those differences. For example, in a car detection algorithm, if the source training set contains only white cars, the algorithm is likely to fail to detect cars of other colors in the test set. A common solution is to label more data so that the training set contains as many as possible of the object types that will appear in the test set. However, labeling data is time-consuming and expensive, and in practical applications objects are so variable that the training set essentially cannot cover them completely, so the target detection methods in the prior art are not accurate.
Disclosure of Invention
The embodiment of the disclosure provides a target detection method, a target detection device, target detection equipment and a storage medium based on image edge information. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present disclosure provides a target detection method based on image edge information, including:
acquiring an image to be detected;
performing edge extraction on the image to obtain a binary image only containing edge information;
and inputting the binary image into a preset target detection model to obtain a target detection result.
Further, before inputting the binary image into a preset target detection model, the method further includes:
and training a target detection model.
Further, training a target detection model, comprising:
performing edge extraction on the training image set to obtain a binary image set only containing edge information;
acquiring a label set corresponding to the binary image set;
and training a target detection model according to the binary image set and the label set.
Further, an object detection model, comprising:
an object detection model based on a pattern recognition algorithm, or,
and (3) a target detection model based on a deep learning algorithm.
Further, performing edge extraction on the image, including:
edge extraction is performed on the image through a pattern recognition algorithm, or,
and carrying out edge extraction on the image through a deep learning algorithm.
Further, the image is subjected to edge extraction through the Canny edge detection algorithm among pattern recognition algorithms, or through the HED (holistically-nested edge detection) algorithm among deep learning algorithms.
In a second aspect, an embodiment of the present disclosure provides an object detection apparatus based on image edge information, including:
the acquisition module is used for acquiring an image to be detected;
the extraction module is used for carrying out edge extraction on the image to obtain a binary image only containing edge information;
and the detection module is used for inputting the binary image into a preset target detection model to obtain a target detection result.
Further, still include:
and the training module is used for training the target detection model.
In a third aspect, the present disclosure provides an image edge information-based object detection device, including a processor and a memory storing program instructions, where the processor is configured to execute the image edge information-based object detection method provided in the foregoing embodiments when executing the program instructions.
In a fourth aspect, the present disclosure provides a computer-readable medium, on which computer-readable instructions are stored, where the computer-readable instructions are executable by a processor to implement an image edge information-based target detection method provided in the foregoing embodiments.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the target detection method based on the image edge information, the target detection model is constructed according to the edge information of the image, only the edge information of the image is utilized, redundant information of color, texture and the like in a target detection task is removed, so that the detection model with strong robustness on the change of the color, the texture and illumination can be constructed by an algorithm, and the efficiency and the accuracy of target detection are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic flow diagram illustrating a method for object detection based on image edge information in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a method of object detection based on image edge information in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a structure of an object detection apparatus based on image edge information according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a structure of an object detection apparatus based on image edge information according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a computer storage medium in accordance with an exemplary embodiment.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
In many cases, the human eye can distinguish the position and type of a target object relying only on the edge information of an image. This shows that, in a target detection task, the color and texture information of the image is to some extent redundant, and this redundant information can affect the generalization performance of an algorithm.
The following describes in detail a method, an apparatus, a device, and a storage medium for detecting an object based on image edge information according to an embodiment of the present application with reference to fig. 1 to 5.
Referring to fig. 1, the method specifically includes the following steps:
s101, acquiring an image to be detected;
specifically, before extracting the edge information from the image, the image to be detected is acquired first, and may be acquired from the public image data set, or may be acquired by the image acquisition device itself, for example, by a digital camera, a digital video camera, or other handheld electronic devices with an image acquisition function, such as a smart phone, a tablet computer, and the like, without being limited to the foregoing.
S102, performing edge extraction on the image to obtain a binary image only containing edge information;
after an image to be detected is acquired, edge extraction is performed on the image to be detected, the image can be subjected to edge extraction through a Canny edge detection algorithm in a pattern recognition algorithm, or the image can be subjected to edge extraction through an HED (integrally-nested edge detection) algorithm in a deep learning algorithm.
When the Canny edge detection algorithm is used for edge extraction of an image, the method is roughly divided into 5 steps:
(1) smoothing the image by applying Gaussian filtering to remove noise in the acquired image;
(2) finding an intensity gradient of the image;
(3) eliminating boundary false detection by using a non-maximum suppression technology;
(4) applying a dual threshold approach to determine the boundaries that may exist;
(5) the boundary is tracked using a hysteresis technique.
The edge information of the acquired image can be extracted by a Canny edge detection algorithm.
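As a rough illustration of steps (1) and (2), the following is a much-simplified sketch in Python (Gaussian smoothing plus Sobel gradient magnitude with a single threshold). It is not a full Canny implementation, which would also require the non-maximum suppression, double thresholding, and hysteresis tracking of steps (3)-(5); the kernel size, sigma, and threshold values here are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Normalized 2-D Gaussian kernel for step (1), image smoothing."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2-D convolution with edge padding (clarity over speed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def simple_edges(gray, threshold=100.0):
    """Simplified edge map: smooth (1), gradient magnitude (2), one threshold."""
    smoothed = convolve2d(gray.astype(float), gaussian_kernel())
    gx = convolve2d(smoothed, SOBEL_X)
    gy = convolve2d(smoothed, SOBEL_Y)
    magnitude = np.hypot(gx, gy)
    # A single threshold stands in for steps (3)-(5) of the real Canny algorithm.
    return (magnitude > threshold).astype(np.uint8) * 255
```

Running `simple_edges` on a synthetic image of a white square on a black background yields a binary map with 255 along the square's boundary and 0 elsewhere, i.e. the kind of edge-only binary image the method feeds to the detector.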
In another possible implementation, edge extraction is performed on the image by the deep learning HED algorithm. HED is an end-to-end edge detection algorithm based on a convolutional neural network: its input is an RGB image and its output is a binary image of the corresponding image edges. The HED network model adopts VGG-16 as the backbone network and extracts features from different feature layers of the network; the feature scales corresponding to the different features differ, and the features are fused, so that a multi-scale edge detection network is trained.
In the training stage, the HED model needs an RGB image data set and a corresponding edge image data set (usually labeled manually). The network extracts multi-scale features from the input RGB images and fuses them to generate a fused feature map. The fused feature map and the feature maps of all scales serve as the outputs of the network; each is compared with the edge image to compute a difference, and this difference is used to update the network weights so that the difference becomes smaller and smaller and each feature map comes closer to the edge image. After training on a large amount of data, the feature maps become very close to the real edge image of the image.
After the model is trained, the RGB image X_rgb whose features are to be extracted can be input to the HED network, yielding the feature maps extracted at every scale together with the fused feature map:

(Y_fuse, Y_1, ..., Y_M) = CNN(X_rgb, (W, w, h)*)

Here CNN(.) denotes the trained edge extraction model, and (W, w, h)* denotes the optimal parameters obtained by training the network, where W are the parameters of the backbone network model, w are the branch parameters of each scale, and h are the weights with which the scales are fused into one feature map; Y_fuse denotes the fused feature map, and Y_M the feature map at the M-th scale.

Averaging the feature maps of the different scales gives the edge image Y_edge corresponding to the RGB image:

Y_edge = Average(Y_fuse, Y_1, ..., Y_M)
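The averaging step above can be sketched as follows. The arrays here are stand-ins with hypothetical values for the side outputs Y_1...Y_M and fused map Y_fuse of a trained HED network; in practice they would come from the CNN.

```python
import numpy as np

def average_edge_maps(y_fuse, side_outputs):
    """Y_edge = Average(Y_fuse, Y_1, ..., Y_M): pixel-wise mean of all maps."""
    stacked = np.stack([y_fuse] + list(side_outputs), axis=0)
    return stacked.mean(axis=0)

# Stand-in maps (hypothetical values, same spatial size as the input image).
y_fuse = np.full((4, 4), 0.8)
sides = [np.full((4, 4), 0.6), np.full((4, 4), 1.0)]
y_edge = average_edge_maps(y_fuse, sides)  # each pixel = (0.8 + 0.6 + 1.0) / 3
```

A final binarization (e.g. thresholding y_edge) would then produce the binary edge image used by the detector.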
Through this step, a binary image containing only edge information is obtained, and redundant information such as color and texture is removed from the image.
S103, inputting the binary image into a preset target detection model to obtain a target detection result.
Before target detection is performed on the edge image, the method further includes training a target detection model. Specifically, a training image set is obtained; it may come from a public image data set or be collected by an image acquisition device. Edge extraction is then performed on the training image set through the Canny edge detection algorithm or the deep learning HED algorithm to obtain a binary image set containing only edge information. A label set corresponding to the binary image set is acquired; the labels are the frame position information and category information of the targets in the images and are generally labeled manually. The target detection model is then trained according to the binary image set and the label set.
In a possible implementation manner, the binary image set and the tag set are input into a pattern recognition algorithm to obtain a target detection model based on pattern recognition.
In another possible implementation manner, the binary image set and the label set are input into a neural network model to obtain a target detection model based on a deep learning algorithm.
Specifically, in the training stage of a target detection model based on a deep learning algorithm, a binary image is input into the neural network model and a certain number of detection frames are output. The detection frames are compared with the labeled real detection frames, a difference value is calculated, and the network weights are updated using this difference so that the position and category differences between the output detection frames and the real frames become smaller and smaller. Through training on a large amount of data, a target detection model that outputs correct detection frame position and category information is finally obtained.
Further, the binary image to be detected is input into the trained target detection model, and the model outputs a frame set B in which each frame contains position information and category information:

B = CNN_detect(Y_edge, W*)

Here CNN_detect is the trained target detection model and W* the optimal parameters obtained by training. Since B may contain many overlapping frames and frames with low confidence, the set must be filtered according to the confidence and overlap of the frames to obtain the final detection result.
First, regions whose frame confidence is smaller than a certain threshold are regarded as negative samples and removed from the result; the threshold may be, for example, 0.3, and its size is not specifically limited in the embodiment of the present disclosure and can be set by a person skilled in the art. Among the remaining samples, frames with a high degree of overlap are removed using a Non-Maximum Suppression (NMS) method to obtain the final detection frame set D, that is:
D=NMS(B)
The algorithm flow of non-maximum suppression comprises the following steps:
(1) for all detected frames of a certain class of objects (the set B in the embodiment of the disclosure), find the frame p with the highest confidence in B;
(2) traverse the other frames; if a frame's intersection-over-union (IoU) with frame p is greater than 0.5, remove it from the set B, otherwise keep it;
(3) move frame p from the set B to the set D;
(4) repeat the above steps until the set B is empty.
The NMS step is performed for each class of object to obtain the final detection frame set D, and for each frame the class with the highest confidence is selected as the frame's class label.
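The confidence filtering and NMS flow above can be sketched in plain Python as follows. The (x1, y1, x2, y2) box format and function names are illustrative assumptions; the thresholds 0.3 and 0.5 follow the example values in the text.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, score_thresh=0.3, iou_thresh=0.5):
    """Filter low-confidence boxes, then apply steps (1)-(4) of the NMS flow."""
    # Confidence filtering: drop boxes below the score threshold.
    B = [(b, s) for b, s in zip(boxes, scores) if s >= score_thresh]
    D = []
    while B:                                        # step (4): until B is empty
        best = max(B, key=lambda bs: bs[1])         # step (1): highest-confidence p
        B.remove(best)
        # Step (2): remove remaining boxes whose IoU with p exceeds the threshold.
        B = [bs for bs in B if iou(best[0], bs[0]) <= iou_thresh]
        D.append(best)                              # step (3): move p into D
    return D
```

For example, two heavily overlapping car boxes with scores 0.9 and 0.8 would collapse to the single 0.9 box, while a distant box with score 0.7 survives and a 0.2 box is dropped by the confidence filter.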
Through this step, target detection can be performed on the binary image to obtain a detection result.
To facilitate understanding of the target detection method provided in the embodiment of the present application, it is described below with reference to fig. 2. As shown in fig. 2, the method comprises two stages. In the first, training stage, training image data is obtained and edge extraction is performed on it, removing redundant information such as color and texture and yielding binary images containing only edge information; a target detection model is then trained on the extracted edge images to obtain a trained target detection model. In the second, testing stage, test image data is obtained, edge extraction is performed to obtain a binary image containing only edge information, and the extracted edge image is input into the trained target detection model to obtain the target detection result.
In an optional implementation manner, video image data is acquired, edge information in a video image is extracted, a binary image only containing the edge information is obtained, a target detection model is trained according to the extracted edge image, a trained target detection model is obtained, the extracted edge image is input into the target detection model, and frame position information and category information of a target in the video image are obtained.
By using only the edge information of an image, the target detection method according to the embodiment of the disclosure removes redundant information such as color and texture from the target detection task, so that the algorithm can construct a detection model strongly robust to changes in color, texture, and illumination, improving the efficiency and accuracy of target detection.
In a second aspect, an embodiment of the present disclosure provides an object detection apparatus based on image edge information, as shown in fig. 3, the apparatus including:
an obtaining module 301, configured to obtain an image to be detected;
an extraction module 302, configured to perform edge extraction on an image to obtain a binary image only including edge information;
the detection module 303 is configured to input the binary image into a preset target detection model to obtain a target detection result.
Further, still include:
and the training module is used for training the target detection model.
Further, a training module comprising:
the extraction unit is used for carrying out edge extraction on the training image set to obtain a binary image set only containing edge information;
the acquiring unit is used for acquiring a label set corresponding to the binary image set;
and the training unit is used for training the target detection model according to the binary image set and the label set.
Further, an object detection model, comprising: a target detection model based on a pattern recognition algorithm, or a target detection model based on a deep learning algorithm.
Further, the extraction module 302 is specifically configured to perform edge extraction on the image through a pattern recognition algorithm, or perform edge extraction on the image through a deep learning algorithm.
Further, the image is subjected to edge extraction through the Canny edge detection algorithm among pattern recognition algorithms, or through the HED (holistically-nested edge detection) algorithm among deep learning algorithms.
The target detection apparatus provided by the embodiment of the disclosure uses only the edge information of the image, removing redundant information such as color and texture from the target detection task, so that the algorithm can construct a detection model strongly robust to changes in color, texture, and illumination, improving the efficiency and accuracy of target detection.
It should be noted that, when the object detection apparatus based on image edge information provided in the foregoing embodiment executes the object detection method based on image edge information, the division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the image edge information-based target detection apparatus provided in the above embodiment and the image edge information-based target detection method embodiment belong to the same concept, and details of implementation processes thereof are referred to in the method embodiment, and are not described herein again.
In a third aspect, the embodiment of the present disclosure further provides an electronic device corresponding to the method for detecting an object based on image edge information provided in the foregoing embodiment, so as to execute the method for detecting an object based on image edge information.
Referring to fig. 4, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 4, the electronic apparatus includes: a processor 400, a memory 401, a bus 402 and a communication interface 403, wherein the processor 400, the communication interface 403 and the memory 401 are connected through the bus 402; the memory 401 stores a computer program that can be executed on the processor 400, and the processor 400 executes the computer program to execute the object detection method based on the image edge information provided by any of the foregoing embodiments of the present application.
The Memory 401 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 403 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
Bus 402 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 401 is used for storing a program, and the processor 400 executes the program after receiving an execution instruction, and the method for detecting an object based on image edge information disclosed in any of the foregoing embodiments of the present application may be applied to the processor 400, or implemented by the processor 400.
Processor 400 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 400. The Processor 400 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 401, and the processor 400 reads the information in the memory 401 and completes the steps of the method in combination with the hardware.
The electronic device provided by the embodiment of the application and the target detection method based on the image edge information provided by the embodiment of the application have the same inventive concept and the same beneficial effects as the method adopted, operated or realized by the electronic device.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium corresponding to the method for detecting an object based on image edge information provided in the foregoing embodiment, please refer to fig. 5, which illustrates a computer-readable storage medium, which is an optical disc 500 and stores a computer program (i.e., a program product), where the computer program, when executed by a processor, executes the method for detecting an object based on image edge information provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the target detection method based on image edge information provided by the embodiment of the present application have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A target detection method based on image edge information is characterized by comprising the following steps:
acquiring an image to be detected;
performing edge extraction on the image to obtain a binary image only containing edge information;
and inputting the binary image into a preset target detection model to obtain a target detection result.
2. The method according to claim 1, wherein before inputting the binary image into a preset target detection model, the method further comprises:
and training the target detection model.
3. The method of claim 2, wherein the training the target detection model comprises:
performing edge extraction on the training image set to obtain a binary image set only containing edge information;
acquiring a label set corresponding to the binary image set;
and training the target detection model according to the binary image set and the label set.
4. The method of claim 3, wherein the target detection model comprises:
a target detection model based on a pattern recognition algorithm; or
a target detection model based on a deep learning algorithm.
5. The method of claim 1, wherein performing edge extraction on the image comprises:
performing edge extraction on the image through a pattern recognition algorithm; or
performing edge extraction on the image through a deep learning algorithm.
6. The method of claim 5, wherein the edge extraction is performed on the image through the Canny edge detection algorithm among pattern recognition algorithms, or through the HED (Holistically-Nested Edge Detection) algorithm among deep learning algorithms.
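Canny's double-threshold stage, referenced in claim 6, can be illustrated with a simplified numpy sketch. This is not a full Canny implementation: non-maximum suppression is omitted, and `canny_like`, its thresholds, and the single dilation pass of hysteresis are illustrative assumptions.

```python
import numpy as np

def canny_like(image, low=20, high=60):
    """Simplified Canny-style extractor: gradient magnitude plus
    double-threshold hysteresis (non-maximum suppression omitted)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    strong = mag > high                      # definite edges
    weak = (mag > low) & ~strong             # candidate edges
    # Hysteresis: keep weak pixels that touch a strong pixel (one 8-neighbour pass).
    padded = np.pad(strong, 1)
    neighbor_strong = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neighbor_strong |= padded[1 + dy : 1 + dy + strong.shape[0],
                                      1 + dx : 1 + dx + strong.shape[1]]
    return (strong | (weak & neighbor_strong)).astype(np.uint8)

img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 180
edges = canny_like(img)
```

The double threshold is what lets Canny keep faint edge segments that connect to strong ones while discarding isolated noise, which is why the claims treat its output as a clean binary edge image.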
7. A target detection apparatus based on image edge information, characterized by comprising:
an acquisition module configured to acquire an image to be detected;
an extraction module configured to perform edge extraction on the image to obtain a binary image containing only edge information; and
a detection module configured to input the binary image into a preset target detection model to obtain a target detection result.
8. The apparatus of claim 7, further comprising:
a training module configured to train the target detection model.
9. A target detection apparatus based on image edge information, comprising a processor and a memory storing program instructions, wherein the processor is configured to perform the target detection method based on image edge information according to any one of claims 1 to 6 when executing the program instructions.
10. A computer-readable medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a processor to implement the target detection method based on image edge information according to any one of claims 1 to 6.
CN202010968922.7A 2020-09-15 2020-09-15 Target detection method and device based on image edge information and storage medium Pending CN112052907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010968922.7A CN112052907A (en) 2020-09-15 2020-09-15 Target detection method and device based on image edge information and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010968922.7A CN112052907A (en) 2020-09-15 2020-09-15 Target detection method and device based on image edge information and storage medium

Publications (1)

Publication Number Publication Date
CN112052907A (en) 2020-12-08

Family

ID=73604699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010968922.7A Pending CN112052907A (en) 2020-09-15 2020-09-15 Target detection method and device based on image edge information and storage medium

Country Status (1)

Country Link
CN (1) CN112052907A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180541A1 (en) * 2014-12-19 2016-06-23 Apical Limited Sensor noise profile
EP3171297A1 (en) * 2015-11-18 2017-05-24 CentraleSupélec Joint boundary detection image segmentation and object recognition using deep learning
CN108121991A (en) * 2018-01-06 2018-06-05 北京航空航天大学 A kind of deep learning Ship Target Detection method based on the extraction of edge candidate region
CN108376235A (en) * 2018-01-15 2018-08-07 深圳市易成自动驾驶技术有限公司 Image detecting method, device and computer readable storage medium
CN109961009A (en) * 2019-02-15 2019-07-02 平安科技(深圳)有限公司 Pedestrian detection method, system, device and storage medium based on deep learning
CN110210387A (en) * 2019-05-31 2019-09-06 华北电力大学(保定) Insulator object detection method, system, the device of knowledge based map
CN110472638A (en) * 2019-07-30 2019-11-19 精硕科技(北京)股份有限公司 A kind of object detection method, device and equipment, storage medium
CN110570440A (en) * 2019-07-19 2019-12-13 武汉珈和科技有限公司 Image automatic segmentation method and device based on deep learning edge detection
CN110598609A (en) * 2019-09-02 2019-12-20 北京航空航天大学 Weak supervision target detection method based on significance guidance
CN110706242A (en) * 2019-08-26 2020-01-17 浙江工业大学 Object-level edge detection method based on depth residual error network
CN111310615A (en) * 2020-01-23 2020-06-19 天津大学 Small target traffic sign detection method based on multi-scale information and residual error network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Saining Xie, Zhuowen Tu: "Holistically-Nested Edge Detection", 2015 IEEE International Conference on Computer Vision, 31 December 2015 (2015-12-31), pages 1395-1402 *
Liu Shuchun et al.: "Deep Practice of OCR: Text Recognition Based on Deep Learning", China Machine Press, pages 185-188 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418223A (en) * 2020-12-11 2021-02-26 互助土族自治县北山林场 Wild animal image significance target detection method based on improved optimization
CN113298837A (en) * 2021-07-27 2021-08-24 南昌工程学院 Image edge extraction method and device, storage medium and equipment
CN114745292A (en) * 2022-03-14 2022-07-12 优刻得科技股份有限公司 Edge container cloud detection method, device, equipment and storage medium
CN114745292B (en) * 2022-03-14 2023-09-05 优刻得科技股份有限公司 Edge container cloud detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
JP5775225B2 (en) Text detection using multi-layer connected components with histograms
CN112052907A (en) Target detection method and device based on image edge information and storage medium
CN111428875A (en) Image recognition method and device and corresponding model training method and device
CN110516514B (en) Modeling method and device of target detection model
CN110516517B (en) Target identification method, device and equipment based on multi-frame image
CN113298050B (en) Lane line recognition model training method and device and lane line recognition method and device
CN111257341A (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN110287936B (en) Image detection method, device, equipment and storage medium
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111046971A (en) Image recognition method, device, equipment and computer readable storage medium
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN113011435A (en) Target object image processing method and device and electronic equipment
CN112052819A (en) Pedestrian re-identification method, device, equipment and storage medium
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
CN114462469B (en) Training method of target detection model, target detection method and related device
CN111191482B (en) Brake lamp identification method and device and electronic equipment
CN109523570B (en) Motion parameter calculation method and device
CN113505682A (en) Living body detection method and device
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN113743434A (en) Training method of target detection network, image augmentation method and device
CN116580232A (en) Automatic image labeling method and system and electronic equipment
CN113903014B (en) Lane line prediction method and device, electronic device and computer-readable storage medium
CN113255766B (en) Image classification method, device, equipment and storage medium
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination