CN113139542A - Target detection method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number: CN113139542A
Application number: CN202110468946.0A
Authority: CN (China)
Prior art keywords: target, input image, class, classification result, feature
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113139542B (en)
Inventors: 叶锦, 谭啸, 孙昊
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a target detection method, device, equipment and computer readable storage medium, which relates to the technical field of artificial intelligence, in particular to computer vision and deep learning technology. The implementation scheme is as follows: classifying the feature map of the input image by using a first classifier to generate a first classification result of the input image; constructing a category feature of the input image based on the feature map and the first classification result; enhancing the category features to generate enhanced category features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.

Description

Target detection method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for detecting a target, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it encompasses technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technology, and the like.
In the field of intelligent transportation, different objects need to be detected in images of traffic scenes. Existing target detection methods require a large amount of target detection annotation information; however, target detection annotation is a comparatively costly step.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a target detection method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided an object detection method including: classifying the feature map of the input image by using a first classifier to generate a first classification result of the input image; constructing a category feature of the input image based on the feature map and the first classification result; enhancing the category features to generate enhanced category features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.
According to another aspect of the present disclosure, there is provided an object detecting apparatus including: a first classification module configured to: classifying the feature map of the input image to generate a first classification result of the input image; a category feature construction module configured to: constructing a category feature of the input image based on the feature map and the first classification result; a feature enhancement module configured to: enhancing the category features to generate enhanced category features; a second classification module configured to: classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and a target detection generation module configured to: and generating a target detection result of the input image based at least in part on the second classification result.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of object detection as described in the present disclosure.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the object detection method according to the present disclosure.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the object detection method as described in the present disclosure.
According to one or more embodiments of the present disclosure, object detection in an image can be achieved with only category label information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a target detection method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram of a process of constructing class features of an input image in the method of FIG. 2 in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a process of extracting a target feature from a feature map in the process of FIG. 3, according to an embodiment of the disclosure;
FIG. 5 illustrates a flow diagram of a process of generating target area information for a feature map in the process of FIG. 4, according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an object detection network according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure;
FIG. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the disclosure are included to assist understanding, and which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
With the development of computer vision technology, target detection is widely applied in fields such as intelligent transportation, robot navigation, intelligent video surveillance, industrial inspection, and aerospace. In target detection, every object in an image is marked with a detection box, and the category of each object is identified.
In the prior art, training a neural network for target detection requires a large amount of target annotation information, i.e., the position and category of each target in every picture. However, annotating targets in pictures is costly.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable a method of target detection to be performed, where the target detection method may enable target detection in an image without target annotation information. It will be appreciated that this is not limiting and in some embodiments client devices 101, 102, 103, 104, 105 and 106 may have sufficient storage and computing resources so that they are also capable of executing one or more services or software applications of the target detection method.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use client devices 101, 102, 103, 104, 105, and/or 106 for target detection. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various Mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of high management difficulty and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In an exemplary embodiment of the present disclosure, there is provided an object detection method including: classifying the feature map of the input image by using a first classifier to generate a first classification result of the input image; constructing a category feature of the input image based on the feature map and the first classification result; enhancing the category features to generate enhanced category features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.
FIG. 2 shows a flow diagram of a target detection method 200 according to an embodiment of the disclosure.
In step S201, a feature map of an input image is classified by a first classifier to generate a first classification result of the input image.
According to some embodiments, an input image is processed with a feature extraction network to generate a feature map of the input image, wherein the feature extraction network may be, for example, a convolutional neural network.
According to some embodiments, the first classifier comprises a plurality of class classification modules, wherein each class classification module corresponds to one class of the objects to be detected. For example, when three object categories (e.g., traffic lights, pedestrians, and trucks) need to be detected in a picture, the first classifier includes a class classification module corresponding to each of the three object categories (e.g., a class classification module for traffic lights, a class classification module for pedestrians, and a class classification module for trucks).
According to some embodiments, each class classification module includes at least one convolutional layer and a global average pooling layer, and the first classification result of the input image includes a class activation map corresponding to each target class.
According to some embodiments, generating the first classification result of the input image comprises: in each class classification module, globally average-pooling the output of the last convolutional layer of the at least one convolutional layer to generate a corresponding class activation map.
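By way of illustration, the following is a minimal PyTorch sketch of such a first classifier under one plausible reading of the above: each class classification module ends in a single-channel convolutional output that is read as the class activation map for its category, and global average pooling over that map yields the image-level class score. Channel sizes, layer counts, and input shapes are illustrative assumptions rather than values from the disclosure.

```python
import torch
import torch.nn as nn

class ClassBranch(nn.Module):
    """One class classification module of the first classifier (one per target category)."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        # At least one convolutional layer; the last one outputs a single channel
        # that is read as the class activation map for this category.
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, kernel_size=1),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling layer

    def forward(self, feature_map: torch.Tensor):
        cam = self.convs(feature_map)      # (N, 1, H, W): class activation map
        score = self.gap(cam).flatten(1)   # (N, 1): image-level class score
        return cam, score

class FirstClassifier(nn.Module):
    """First classifier: one class classification module per target category."""
    def __init__(self, num_classes: int, in_channels: int = 256):
        super().__init__()
        self.branches = nn.ModuleList(ClassBranch(in_channels) for _ in range(num_classes))

    def forward(self, feature_map: torch.Tensor):
        cams, scores = zip(*(branch(feature_map) for branch in self.branches))
        return torch.cat(cams, dim=1), torch.cat(scores, dim=1)  # (N, C, H, W), (N, C)

# Usage: three categories to detect (e.g. traffic light, pedestrian, truck).
feature_map = torch.randn(1, 256, 32, 32)            # feature map of the input image
cams, scores = FirstClassifier(num_classes=3)(feature_map)
```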
In step S203, a category feature of the input image is constructed based on the feature map and the first classification result.
According to some embodiments, for each target class in the input image, a feature corresponding to the target class in the feature map is extracted based on the feature map and a classification result corresponding to the target class in the first classification result (e.g., a class activation map for the target class). For example, when two target categories of traffic lights and pedestrians exist in the input image, all the features corresponding to the traffic lights and all the features corresponding to the pedestrians in the feature map are extracted, respectively.
According to some embodiments, for each target class in the input image, a class feature corresponding to the target class is constructed based on features in the feature map corresponding to the target class. For example, based on all features in the feature map corresponding to traffic lights, category features corresponding to traffic lights are constructed; based on all the features corresponding to the pedestrian in the feature map, a category feature corresponding to the pedestrian is constructed.
In step S205, the class features are enhanced to generate enhanced class features.
According to some embodiments, enhancing the class features to generate enhanced class features comprises: enhancing the class features using one selected from the group consisting of a graph convolutional neural network, a non-local network, and an attention mechanism to generate the enhanced class features.
According to some embodiments, the enhancement category features include enhancement category features corresponding to each target category in the input image.
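As an illustration, the sketch below enhances the class features with multi-head self-attention, one of the mechanisms listed above alongside graph convolutional and non-local networks; the feature dimension, head count, and residual-plus-normalization form are assumptions made for the example.

```python
import torch
import torch.nn as nn

class ClassFeatureEnhancer(nn.Module):
    """Lets the per-category class features exchange information with each other."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, class_features: torch.Tensor) -> torch.Tensor:
        # class_features: (1, K, dim), one vector per target category present in the image
        attended, _ = self.attn(class_features, class_features, class_features)
        return self.norm(class_features + attended)  # residual connection, then normalization

# Usage: two categories present (e.g. traffic light and pedestrian).
class_features = torch.randn(1, 2, 256)
enhanced = ClassFeatureEnhancer()(class_features)    # same shape, enhanced class features
```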
In step S207, the enhanced class features are classified by a second classifier to obtain a second classification result of the input image.
According to some embodiments, for each enhanced class feature, the second classifier determines a target class of the enhanced class feature, wherein the second classification result comprises a class determination result for each enhanced class feature. According to some embodiments, each enhanced class feature is aligned with a particular target class, and thus, the second classification result indicates the target class contained by the input image.
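A correspondingly small sketch of the second classifier follows: each enhanced class feature is scored independently, and the set of predicted classes indicates which target categories the input image contains. The linear form and dimensions are assumptions for the example.

```python
import torch
import torch.nn as nn

NUM_CATEGORIES = 3                                  # e.g. traffic light, pedestrian, truck (assumption)
second_classifier = nn.Linear(256, NUM_CATEGORIES)

def second_classification(enhanced_features: torch.Tensor) -> torch.Tensor:
    """enhanced_features: (K, 256), one row per enhanced class feature."""
    logits = second_classifier(enhanced_features)   # (K, NUM_CATEGORIES)
    return logits.argmax(dim=1)                     # predicted target category per feature

# Usage: the set of predicted indices is the target class information of the image.
present_classes = set(second_classification(torch.randn(2, 256)).tolist())
```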
In step S209, a target detection result of the input image is generated based at least in part on the second classification result.
In the target detection method provided by the exemplary embodiments of the present disclosure, target detection in an image can be achieved with only category annotation information, which reduces the cost of target detection.
According to some embodiments, constructing the class feature of the input image based on the feature map and the first classification result includes: extracting target features from the feature map based on the feature map and the first classification result; clustering the target characteristics to obtain at least one target category; and for each target class, averaging the target features of the target class to obtain the class features corresponding to the target class.
Fig. 3 shows a flowchart of a process of constructing category features of an input image (step S203) in the method 200 of fig. 2 according to an embodiment of the present disclosure.
In step S301, a target feature is extracted from the feature map based on the feature map and the first classification result.
In step S303, the target features are clustered to obtain at least one target class.
According to some embodiments, the target features are clustered such that a plurality of target features of the input image are divided into subsets corresponding to each target class. For example, when objects corresponding to traffic lights and pedestrians are included in the input image, the features of the objects corresponding to the traffic lights are divided into a first subset and the features of the objects corresponding to the pedestrians are divided into a second subset.
According to some embodiments, clustering the target features comprises: calculating the target category number of the input image based on the first classification result; and clustering the target features based on the number of target classes of the input image (e.g., by K-means clustering, gaussian mixture model-based clustering, or other algorithms that require a known number of data classes for clustering).
According to some embodiments, the first classification result includes at least one classification result respectively corresponding to at least one target class, and wherein calculating the number of target classes of the input image based on the first classification result includes: and calculating the number of target categories meeting a preset condition in at least one target category as the number of the target categories of the input image, wherein the classification result corresponding to each target category meeting the preset condition comprises a value larger than a target threshold value.
According to some embodiments, the first classification result includes a class activation map corresponding to each target category to be detected, and calculating the number of target categories of the input image includes: for each target category to be detected, determining whether the class activation map corresponding to that target category contains a value greater than a target threshold, where the presence of such a value indicates that a target of that category exists in the input image and its absence indicates that no target of that category exists in the input image; and counting the target categories whose class activation maps contain a value greater than the target threshold to obtain the number of target categories of the input image.
For example, when a traffic light, a pedestrian, and a truck need to be detected in the input image, the first classification result includes a first-type activation map corresponding to the traffic light, a second-type activation map corresponding to the pedestrian, and a third-type activation map corresponding to the truck. Wherein the first type of activation map corresponding to the traffic lights and the second type of activation map corresponding to the pedestrian contain values greater than the target threshold value, and therefore, the number of target categories of the input image is 2.
According to further embodiments, the target features may be clustered without calculating the target class number of the input image (e.g., by mean shift clustering, agglomerative clustering, or other algorithms that do not require a known data class number to cluster).
In step S305, for each object class, the object features of the object class are averaged to obtain the class feature corresponding to the object class.
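For illustration, the sketch below strings these steps together with NumPy and scikit-learn: the number of target categories is counted from the class activation maps using a target threshold, the target features are clustered with K-means (one of the clustering options mentioned above), and each cluster is averaged into one class feature. Shapes and the threshold value are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_class_features(target_features: np.ndarray,
                         class_activation_maps: np.ndarray,
                         target_threshold: float = 0.5) -> np.ndarray:
    """target_features: (M, D), one pooled feature per extracted target region.
    class_activation_maps: (C, H, W), one map per detectable category."""
    # A category is counted as present when its class activation map contains
    # a value greater than the target threshold.
    flat = class_activation_maps.reshape(len(class_activation_maps), -1)
    num_classes = int((flat.max(axis=1) > target_threshold).sum())

    # Cluster the target features into as many groups as there are present categories.
    labels = KMeans(n_clusters=num_classes, n_init=10).fit_predict(target_features)

    # Average the target features in each cluster to obtain one class feature per category.
    return np.stack([target_features[labels == k].mean(axis=0) for k in range(num_classes)])

# Usage: 5 region features of dimension 256; 3 detectable categories, of which 2 are present.
cams = np.zeros((3, 32, 32)); cams[0, 10, 10] = 0.9; cams[1, 20, 5] = 0.8
class_features = build_class_features(np.random.rand(5, 256), cams)   # shape (2, 256)
```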
According to some embodiments, extracting the target feature from the feature map based on the feature map and the first classification result includes: generating target area information of the feature map based on the first classification result; and constructing the target feature of the input image based on the feature map and the target area information.
Fig. 4 shows a flowchart of a process of extracting a target feature from the feature map (step S301) in the process of fig. 3 according to an embodiment of the present disclosure.
In step S401, target area information of the feature map is generated based on the first classification result.
According to some embodiments, the target area information comprises the position and category of the target area, wherein the position in the target area information indicates the location of the target area in the feature map.
In step S403, a target feature of the input image is constructed based on the feature map and the target area information.
According to some embodiments, features corresponding to the position of the target region in the feature map are extracted, and, based on the extracted features, target features of the input image are constructed.
According to some embodiments, the first classification result is a class activation map of the input image, and generating the target area information of the feature map based on the first classification result includes: extracting at least one connected region from the class activation graph by using a connected domain algorithm to serve as at least one target region of the feature graph; and generating target area information of at least one target area of the feature map, wherein the target area information comprises a position and a category of the at least one target area.
Fig. 5 shows a flowchart of a process of generating target area information of a feature map (step S401) in the process of fig. 4 according to an embodiment of the present disclosure.
In step S501, at least one connected region is extracted from the class activation map as at least one target region of the feature map using a connected domain algorithm.
According to some embodiments, extracting at least one connected region from the class activation map using a connected domain algorithm comprises: binarizing each class activation map of the input image to obtain a binarized class activation map corresponding to that class activation map; and extracting the at least one connected region from the binarized class activation map using a connected domain algorithm.
According to some embodiments, binarizing the class activation map includes: for each point in the class activation map, determining whether the pixel value of the point is greater than or equal to a binarization threshold, wherein if the pixel value of the point is greater than or equal to the binarization threshold, the corresponding pixel value in the binarized class activation map is set to a high pixel value (e.g., 1), and if the pixel value of the point is less than the binarization threshold, the corresponding pixel value in the binarized class activation map is set to a low pixel value (e.g., 0).
According to some embodiments, in the binarized class activation map, points with the same pixel value that are connected to each other constitute a connected region.
According to some embodiments, a Two-Pass algorithm or a Seed-Filling algorithm may be employed to extract the at least one connected region from the binarized class activation map.
In step S503, target area information of at least one target area of the feature map is generated, wherein the target area information includes a position and a category of the at least one target area.
According to some embodiments, for each class activation map, the class of the target area extracted from the class activation map is the target class corresponding to the class activation map. According to some embodiments, for each target area, the position of the target area is the coordinate values of the upper left corner and the lower right corner of the target area.
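The following sketch illustrates these steps for one class activation map: the map is binarized with a threshold, connected regions are labelled, and each region is reported with its category and the coordinates of its upper left and lower right corners. SciPy's connected-component labelling stands in here for a hand-written Two-Pass or Seed-Filling implementation, and the threshold value is an assumption for the example.

```python
import numpy as np
from scipy import ndimage

def extract_target_regions(class_activation_map: np.ndarray,
                           category: int,
                           binarization_threshold: float = 0.5):
    """Returns [(category, (x1, y1, x2, y2)), ...], one entry per connected region,
    with the upper-left and lower-right coordinates of the region."""
    # Binarize the class activation map: high pixel value (1) where the map meets
    # the threshold, low pixel value (0) elsewhere.
    binary_map = (class_activation_map >= binarization_threshold).astype(np.uint8)

    # Label connected regions of high-valued points (connected-domain algorithm).
    labeled, num_regions = ndimage.label(binary_map)

    regions = []
    for region_id in range(1, num_regions + 1):
        ys, xs = np.nonzero(labeled == region_id)
        regions.append((category, (int(xs.min()), int(ys.min()),
                                   int(xs.max()), int(ys.max()))))
    return regions

# Usage: one synthetic class activation map with a single bright blob.
cam = np.zeros((32, 32)); cam[8:12, 5:9] = 0.9
print(extract_target_regions(cam, category=0))   # [(0, (5, 8, 8, 11))]
```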
According to some embodiments, constructing the target feature of the input image based on the feature map and the target area information comprises: based on the target area information, region-of-interest pooling is performed on the feature map to extract target features of the input image.
According to some embodiments, performing region of interest pooling on the feature map comprises: extracting features corresponding to the target area in the feature map of the input image based on the target area information; and scaling the feature corresponding to each target area to a predefined size.
According to some embodiments, scaling the feature corresponding to each target region to the predefined size comprises: dividing the feature corresponding to the target region into a plurality of equally sized parts according to the predefined size; and performing a max pooling operation on each part.
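A minimal NumPy sketch of this region-of-interest pooling follows: the feature corresponding to a target region is cropped from the feature map, divided into a grid of roughly equal parts, and max-pooled per part. The 7x7 output size is an illustrative assumption.

```python
import numpy as np

def roi_max_pool(feature_map: np.ndarray, box, output_size=(7, 7)) -> np.ndarray:
    """feature_map: (C, H, W); box: (x1, y1, x2, y2) in feature-map coordinates.
    Returns a (C, output_size[0], output_size[1]) pooled target feature."""
    x1, y1, x2, y2 = box
    region = feature_map[:, y1:y2 + 1, x1:x2 + 1]          # feature of the target region
    c, h, w = region.shape
    out_h, out_w = output_size
    # Boundaries dividing the region into out_h x out_w parts of (nearly) equal size.
    ys = np.linspace(0, h, out_h + 1).astype(int)
    xs = np.linspace(0, w, out_w + 1).astype(int)
    pooled = np.zeros((c, out_h, out_w), dtype=region.dtype)
    for i in range(out_h):
        for j in range(out_w):
            part = region[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                             xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[:, i, j] = part.max(axis=(1, 2))         # max pooling per part
    return pooled

# Usage: pool the region found above into a fixed-size target feature.
target_feature = roi_max_pool(np.random.rand(256, 32, 32), (5, 8, 8, 11))   # shape (256, 7, 7)
```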
According to some embodiments, the second classification result comprises object class information of the input image, and wherein generating the object detection result for the input image based at least in part on the second classification result comprises: and generating a target detection result of the input image based on the target area information and the target category information.
According to some embodiments, the object class information indicates which classes of objects are included in the input image, and the object region information includes information of all object regions extracted from the first classification result, and thus, the object class information may be used to further adjust the object region information to improve the accuracy of object detection.
According to some embodiments, generating the target detection result of the input image based on the target area information and the target category information includes: for each object class, in response to the object class information indicating that the feature map has an object of the object class, a portion of the object area information corresponding to the object class is retained, and in response to the object class information indicating that the feature map does not have an object of the object class, a portion of the object area information corresponding to the object class is discarded.
For example, when the target area information includes target area information corresponding to a traffic light, a pedestrian, and a truck, and the target category information indicates only a target whose input image contains a traffic light and a pedestrian, the target area information corresponding to the traffic light and the pedestrian is retained as a target detection result, and the target area information corresponding to the truck is discarded.
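Putting this last step in code form, the sketch below keeps only the target regions whose category the second classifier reports as present; the region and class representations follow the earlier sketches and are assumptions for the example.

```python
def filter_detections(target_regions, present_classes):
    """target_regions: [(category, (x1, y1, x2, y2)), ...] from the first stage.
    present_classes: set of category indices given by the target class information."""
    return [(cls, box) for cls, box in target_regions if cls in present_classes]

# Usage: regions for traffic light (0), pedestrian (1) and truck (2) were extracted,
# but the target class information says only categories {0, 1} are present,
# so the truck region is discarded.
regions = [(0, (5, 8, 8, 11)), (1, (20, 3, 25, 14)), (2, (1, 1, 4, 6))]
print(filter_detections(regions, present_classes={0, 1}))
```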
FIG. 6 shows a block diagram of an object detection network 600 according to an embodiment of the disclosure. As shown in fig. 6, the target detection network 600 includes a feature extraction module 611, a first classifier 612, a target region extraction module 613, a category number statistics module 614, an ROI pooling module 615, a clustering module 616, an averaging module 617, an enhancement module 618, a second classifier 619, and a target information generation module 620.
Firstly, an input image 601 is input into a feature extraction module 611 to extract a feature map of the input image;
then, a first classifier 612 is used to classify the feature map of the input image to generate a first classification result of the input image;
next, the target region extraction module 613 receives the first classification result from the first classifier 612 to generate target region information of the feature map, and the category number statistics module 614 receives the first classification result from the first classifier 612 to count the number of target categories in the input image;
next, the ROI pooling module 615 receives the feature map of the input image from the feature extraction module 611 and the target region information from the target region extraction module 613 to construct the target features of the input image;
next, the clustering module 616 receives the target features from the ROI pooling module 615 and the number of target categories from the category number statistics module 614, and clusters the target features;
next, the averaging module 617 receives the clustered target features from the clustering module 616 and averages the target features of each category to obtain a category feature corresponding to each category;
next, the enhancement module 618 receives the category features from the averaging module 617 to generate enhanced category features;
next, the second classifier 619 receives the enhanced class features from the enhancement module 618 to generate a second classification result containing the target class information;
finally, the target information generation module 620 receives the target region information from the target region extraction module 613 and the second classification result from the second classifier 619, and generates a final target detection result based on the target region information and the target class information in the second classification result.
In an exemplary embodiment of the present disclosure, there is provided an object detection apparatus including: a first classification module configured to: classifying the feature map of the input image to generate a first classification result of the input image; a category feature construction module configured to: constructing a category feature of the input image based on the feature map and the first classification result; a feature enhancement module configured to: enhancing the category features to generate enhanced category features; a second classification module configured to: classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and a target detection generation module configured to: and generating a target detection result of the input image based at least in part on the second classification result.
Fig. 7 shows a block diagram of a target detection apparatus 700 according to an embodiment of the present disclosure.
As shown in fig. 7, the object detection apparatus 700 includes: a first classification module 701, a category feature construction module 702, a feature enhancement module 703, a second classification module 704, and a target detection generation module 705, wherein the first classification module 701 is configured to: classifying the feature map of the input image to generate a first classification result of the input image; the category feature construction module 702 is configured to: constructing a category feature of the input image based on the feature map and the first classification result; the feature enhancement module 703 is configured to: enhancing the category features to generate enhanced category features; the second classification module 704 is configured to: classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and the target detection generation module 705 is configured to: and generating a target detection result of the input image based at least in part on the second classification result.
According to some embodiments, the category feature construction module comprises: a target feature extraction module configured to: extracting target features from the feature map based on the feature map and the first classification result; a clustering module configured to: clustering the target features to obtain at least one target category; and a target feature averaging module configured to: for each target category, averaging the target features of that target category to obtain the category feature corresponding to that target category.
It should be understood that the various modules of the apparatus 700 shown in fig. 7 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to method 200 are equally applicable to apparatus 700 and the modules included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein may be divided into multiple modules and/or at least some of the functionality of multiple modules may be combined into a single module. Performing an action by a particular module discussed herein includes the particular module itself performing the action, or alternatively the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module that performs an action can include the particular module that performs the action itself and/or another module that the particular module invokes or otherwise accesses that performs the action.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various modules described above with respect to fig. 7 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the first classification module 701, the category feature construction module 702, the feature enhancement module 703, the second classification module 704, and the target detection generation module 705 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip (which includes one or more components of a Processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry), and may optionally execute received program code and/or include embedded firmware to perform functions.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
In an exemplary embodiment of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as described in the present disclosure.
In an exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as described in the present disclosure is provided.
In an exemplary embodiment of the disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, implements the method as described in the disclosure.
Referring to fig. 8, a block diagram of an electronic device 800, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 807 can be any type of device capable of presenting information and can include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 808 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth (TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important that as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (15)

1. A method of target detection, comprising:
classifying a feature map of an input image by using a first classifier to generate a first classification result of the input image;
constructing a category feature of the input image based on the feature map and the first classification result;
enhancing the class features to generate enhanced class features;
classifying the enhanced category features by using a second classifier to obtain a second classification result of the input image; and
generating a target detection result for the input image based at least in part on the second classification result.
2. The object detection method of claim 1, wherein the constructing the class feature of the input image based on the feature map and the first classification result comprises:
extracting target features from the feature map based on the feature map and the first classification result;
clustering the target features to obtain at least one target category; and
for each target category, averaging the target features of the target category to obtain the category feature corresponding to the target category.
3. The target detection method of claim 2, wherein extracting the target features from the feature map based on the feature map and the first classification result comprises:
generating target region information of the feature map based on the first classification result; and
constructing the target features of the input image based on the feature map and the target region information.
4. The target detection method of claim 3, wherein the first classification result comprises a class activation map of the input image, and wherein generating the target region information of the feature map based on the first classification result comprises:
extracting at least one connected region from the class activation map by using a connected-component algorithm as at least one target region of the feature map; and
generating the target region information of the at least one target region of the feature map,
wherein the target region information comprises a location and a class of the at least one target region.
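One way the connected-component extraction of claim 4 could look in practice is sketched below, assuming a single-class activation map normalized to [0, 1] and SciPy's connected-component labelling; the threshold value and the function name are hypothetical.

```python
import numpy as np
from scipy import ndimage


def regions_from_cam(cam: np.ndarray, class_id: int, threshold: float = 0.5):
    """Extract candidate target regions from one class activation map (illustrative).

    cam: [H, W] activation map for a single class, values assumed to lie in [0, 1].
    Returns a list of (box, class_id) pairs, where box = (x1, y1, x2, y2).
    """
    mask = cam >= threshold                       # binarise the activation map
    labeled, num_regions = ndimage.label(mask)    # connected-component labelling
    regions = []
    for idx in range(1, num_regions + 1):
        ys, xs = np.where(labeled == idx)
        box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
        regions.append((box, class_id))           # location + class of the region
    return regions
```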
5. The target detection method of claim 3, wherein constructing the target features of the input image based on the feature map and the target region information comprises:
performing region-of-interest pooling on the feature map based on the target region information to extract the target features of the input image.
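The region-of-interest pooling of claim 5 could be realized, for example, with torchvision's roi_pool, as in the sketch below; the feature stride, output size, and function name are assumptions made only for illustration.

```python
import torch
from torchvision.ops import roi_pool


def extract_target_features(feature_map: torch.Tensor, boxes, stride: int = 16) -> torch.Tensor:
    """Region-of-interest pooling over the feature map (illustrative sketch).

    feature_map: [1, C, H, W] tensor; boxes: [K, 4] boxes (x1, y1, x2, y2) in
    input-image coordinates, e.g. taken from the target region information.
    """
    boxes = torch.as_tensor(boxes, dtype=torch.float32)
    pooled = roi_pool(
        feature_map,
        [boxes],                      # one box tensor per image in the batch
        output_size=(7, 7),
        spatial_scale=1.0 / stride,   # map image coordinates onto the feature map
    )
    return pooled.flatten(1)          # [K, C * 7 * 7] per-target feature vectors
```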
6. The target detection method of any one of claims 2 to 5, wherein clustering the target features comprises:
calculating a number of target classes of the input image based on the first classification result; and
clustering the target features based on the number of target classes of the input image.
7. The target detection method of claim 6, wherein the first classification result comprises at least one classification result respectively corresponding to at least one target class, and wherein calculating the number of target classes of the input image based on the first classification result comprises:
calculating, as the number of target classes of the input image, a number of target classes among the at least one target class that satisfy a predetermined condition,
wherein the classification result corresponding to each target class satisfying the predetermined condition comprises a value greater than a target threshold.
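Claims 6 and 7 amount to counting how many classes the first classifier scores above a target threshold and using that count as the number of clusters; a minimal sketch, assuming one score per class and a hypothetical function name, follows.

```python
import torch


def estimate_num_target_classes(first_result: torch.Tensor, target_threshold: float = 0.5) -> int:
    """Count the classes whose first-stage score exceeds a threshold (illustrative).

    first_result: [num_classes] per-class scores from the first classifier,
    e.g. the spatial maximum of each class activation map.
    """
    satisfied = first_result > target_threshold   # predetermined condition of claim 7
    return int(satisfied.sum().item())            # used as the number of clusters
```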
8. The target detection method of any one of claims 1 to 5, wherein enhancing the class features to generate the enhanced class features comprises:
enhancing the class features by using one selected from the group consisting of a graph convolutional network, a non-local network, and an attention mechanism to generate the enhanced class features.
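Of the three enhancement options recited in claim 8, the attention mechanism is perhaps the simplest to sketch; the module below applies self-attention across the class features, assuming the feature dimension is divisible by the number of heads, and is illustrative rather than the claimed implementation.

```python
import torch
import torch.nn as nn


class ClassFeatureEnhancer(nn.Module):
    """Self-attention over class features, one of the options named in claim 8 (sketch)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # dim is assumed to be divisible by num_heads.
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads, batch_first=True)

    def forward(self, class_features: torch.Tensor) -> torch.Tensor:
        # class_features: [num_classes, dim]; treat the classes as a short sequence.
        x = class_features.unsqueeze(0)       # [1, num_classes, dim]
        enhanced, _ = self.attn(x, x, x)      # let the class features exchange information
        return (x + enhanced).squeeze(0)      # residual connection, [num_classes, dim]
```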
9. The target detection method of any one of claims 3 to 5, wherein the second classification result comprises target class information of the input image, and wherein generating the target detection result of the input image based at least in part on the second classification result comprises:
generating the target detection result of the input image based on the target region information and the target class information.
10. The target detection method of claim 9, wherein generating the target detection result of the input image based on the target region information and the target class information comprises:
for each target class, in response to the target class information indicating that the feature map contains a target of that target class, retaining a portion of the target region information corresponding to that target class; and
in response to the target class information indicating that the feature map does not contain a target of that target class, discarding the portion of the target region information corresponding to that target class.
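Claim 10 keeps or discards first-stage region information according to the second classification result; a minimal sketch of that filtering step, with hypothetical names and a hypothetical score threshold, is shown below.

```python
def filter_regions(regions, target_class_info, score_threshold: float = 0.5):
    """Keep a region only if the second classifier confirms its class (illustrative).

    regions: list of (box, class_id) pairs from the first stage.
    target_class_info: per-class scores from the second classifier, indexable by class_id.
    """
    detections = []
    for box, class_id in regions:
        if target_class_info[class_id] > score_threshold:
            detections.append((box, class_id))   # class confirmed: retain the region info
        # otherwise the region information for that class is discarded
    return detections
```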
11. A target detection device, comprising:
a first classification module configured to: classify a feature map of an input image to generate a first classification result of the input image;
a class feature construction module configured to: construct class features of the input image based on the feature map and the first classification result;
a feature enhancement module configured to: enhance the class features to generate enhanced class features;
a second classification module configured to: classify the enhanced class features by using a second classifier to obtain a second classification result of the input image; and
a target detection generation module configured to: generate a target detection result for the input image based at least in part on the second classification result.
12. The target detection device of claim 11, wherein the class feature construction module comprises:
a target feature extraction module configured to: extract target features from the feature map based on the feature map and the first classification result;
a clustering module configured to: cluster the target features to obtain at least one target class; and
a target feature averaging module configured to: for each target class, average the target features of that target class to obtain the class feature corresponding to that target class.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-10.
15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-10.
CN202110468946.0A 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium Active CN113139542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110468946.0A CN113139542B (en) 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113139542A true CN113139542A (en) 2021-07-20
CN113139542B CN113139542B (en) 2023-08-11

Family

ID=76816370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110468946.0A Active CN113139542B (en) 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113139542B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169313A1 (en) * 2015-12-14 2017-06-15 Samsung Electronics Co., Ltd. Image processing apparatus and method based on deep learning and neural network learning
CN110008949A (en) * 2019-01-24 2019-07-12 华南理工大学 A kind of image object detection method, system, device and storage medium
CN110737801A (en) * 2019-10-14 2020-01-31 腾讯科技(深圳)有限公司 Content classification method and device, computer equipment and storage medium
CN111291819A (en) * 2020-02-19 2020-06-16 腾讯科技(深圳)有限公司 Image recognition method and device, electronic equipment and storage medium
CN111428807A (en) * 2020-04-03 2020-07-17 桂林电子科技大学 Image processing method and computer-readable storage medium
CN111709357A (en) * 2020-06-12 2020-09-25 北京百度网讯科技有限公司 Method and device for identifying target area, electronic equipment and road side equipment
CN111783878A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN112613415A (en) * 2020-12-25 2021-04-06 深圳数联天下智能科技有限公司 Face nose type recognition method and device, electronic equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZIYUAN LIU et al.: "Attention-Based Feature Pyramid Network for Object Detection", ICCPR, pages 117-121 *
LI YUEFENG et al.: "A Survey of Online Multi-Target Video Tracking Algorithms", Computer Technology and Automation, pages 73-82 *
WANG TINGTING; PAN XIANG: "Research on Object Detection Algorithms Based on Convolutional Neural Networks", Journal of Changchun Normal University, no. 06, pages 47-53 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117690161A (en) * 2023-12-12 2024-03-12 上海工程技术大学 Pedestrian detection method, device and medium based on image fusion
CN117690161B (en) * 2023-12-12 2024-06-04 上海工程技术大学 Pedestrian detection method, device and medium based on image fusion

Also Published As

Publication number Publication date
CN113139542B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN112857268B (en) Object area measuring method, device, electronic equipment and storage medium
CN115422389B (en) Method and device for processing text image and training method of neural network
EP3869404A2 (en) Vehicle loss assessment method executed by mobile terminal, device, mobile terminal and medium
CN113256583A (en) Image quality detection method and apparatus, computer device, and medium
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN114445667A (en) Image detection method and method for training image detection model
CN115438214A (en) Method for processing text image, neural network and training method thereof
CN115082740A (en) Target detection model training method, target detection method, device and electronic equipment
CN114495103B (en) Text recognition method and device, electronic equipment and medium
CN114723949A (en) Three-dimensional scene segmentation method and method for training segmentation model
CN113723305A (en) Image and video detection method, device, electronic equipment and medium
CN113139542B (en) Object detection method, device, equipment and computer readable storage medium
CN115797660A (en) Image detection method, image detection device, electronic equipment and storage medium
CN113596011B (en) Flow identification method and device, computing device and medium
CN114842476A (en) Watermark detection method and device and model training method and device
CN115578501A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114494797A (en) Method and apparatus for training image detection model
CN114429678A (en) Model training method and device, electronic device and medium
CN114140852A (en) Image detection method and device
CN112579587A (en) Data cleaning method and device, equipment and storage medium
CN114842474B (en) Character recognition method, device, electronic equipment and medium
CN116881485B (en) Method and device for generating image retrieval index, electronic equipment and medium
CN115512131B (en) Image detection method and training method of image detection model
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN115170536B (en) Image detection method, training method and device of model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant