CN113139542B - Object detection method, device, equipment and computer readable storage medium


Info

Publication number
CN113139542B
Authority
CN
China
Prior art keywords
target
category
input image
classification result
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110468946.0A
Other languages
Chinese (zh)
Other versions
CN113139542A (en)
Inventor
叶锦
谭啸
孙昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110468946.0A
Publication of CN113139542A
Application granted
Publication of CN113139542B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a target detection method, apparatus, device, and computer readable storage medium, relating to the field of artificial intelligence technology, and in particular to computer vision and deep learning technology. The implementation scheme is as follows: classifying a feature map of an input image by using a first classifier to generate a first classification result of the input image; constructing class features of the input image based on the feature map and the first classification result; enhancing the class features to generate enhanced class features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.

Description

Object detection method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and in particular, to a method, an apparatus, an electronic device, a computer readable storage medium, and a computer program product for object detection.
Background
Artificial intelligence is the discipline that studies how to make a computer mimic certain human mental processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning); it involves both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
In the field of intelligent transportation, it is desirable to detect different targets in traffic scene images. Existing target detection methods require a large amount of target detection annotation information; however, producing such annotations is a costly step.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a target detection method, apparatus, electronic device, computer-readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided a target detection method including: classifying a feature map of an input image by using a first classifier to generate a first classification result of the input image; constructing class features of the input image based on the feature map and the first classification result; enhancing the class features to generate enhanced class features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.
According to another aspect of the present disclosure, there is provided an object detection apparatus including: a first classification module configured to classify a feature map of an input image to generate a first classification result of the input image; a category feature construction module configured to construct class features of the input image based on the feature map and the first classification result; a feature enhancement module configured to enhance the class features to generate enhanced class features; a second classification module configured to classify the enhanced class features by using a second classifier to obtain a second classification result of the input image; and a target detection generation module configured to generate a target detection result of the input image based at least in part on the second classification result.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the object detection method as described in the present disclosure.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the object detection method as described in the present disclosure.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the object detection method as described in the present disclosure.
According to one or more embodiments of the present disclosure, object detection in an image can be achieved with only category-level annotation information.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a target detection method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a process of constructing a category feature of an input image in the method of FIG. 2, according to an embodiment of the present disclosure;
FIG. 4 illustrates a flow chart of a process of extracting target features from a feature map in the process of FIG. 3 in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a flowchart of a process of generating target region information for a feature map in the process of FIG. 4, according to an embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an object detection network according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of a target detection apparatus according to an embodiment of the disclosure;
fig. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
With the development of computer vision technology, target detection is widely applied in fields such as intelligent transportation, robot navigation, intelligent video surveillance, industrial inspection, and aerospace. In target detection, every object in an image is marked with a detection box, and the category of each object is identified.
In the prior art, training a neural network for target detection requires a large amount of target annotation information, i.e., the position and class of each object in each picture. However, the cost of producing such target annotations for pictures is high.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the object detection method to be performed, where the object detection method can detect objects in images without object-level annotation information. It will be appreciated that this is not limiting; in some embodiments, the client devices 101, 102, 103, 104, 105, and 106 may have sufficient storage and computing resources such that they are also capable of executing one or more services or software applications of the object detection method.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use client devices 101, 102, 103, 104, 105, and/or 106 for target detection. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various mobile operating systems such as Microsoft Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smartphones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, Wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in a variety of locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In some embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In an exemplary embodiment of the present disclosure, there is provided a target detection method including: classifying a feature map of an input image by using a first classifier to generate a first classification result of the input image; constructing class features of the input image based on the feature map and the first classification result; enhancing the class features to generate enhanced class features; classifying the enhanced class features by using a second classifier to obtain a second classification result of the input image; and generating a target detection result of the input image based at least in part on the second classification result.
Fig. 2 shows a flow chart of a target detection method 200 according to an embodiment of the disclosure.
In step S201, the feature map of the input image is classified by a first classifier to generate a first classification result of the input image.
According to some embodiments, the input image is processed with a feature extraction network to generate a feature map of the input image, wherein the feature extraction network may be, for example, a convolutional neural network.
According to some embodiments, the first classifier comprises a plurality of category classification modules, where each category classification module corresponds to one target category to be detected. For example, when three target categories (e.g., traffic lights, pedestrians, and trucks) need to be detected in the picture, the first classifier includes a category classification module corresponding to each of the three target categories (i.e., a category classification module for traffic lights, a category classification module for pedestrians, and a category classification module for trucks).
According to some embodiments, each category classification module includes at least one convolutional layer and a global average pooling layer, and the first classification result of the input image includes a class activation map corresponding to each target category.
According to some embodiments, generating the first classification result of the input image comprises: in each category classification module, applying global average pooling to the output of the last of the at least one convolutional layer to generate the corresponding class activation map.
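For concreteness, the following is a minimal PyTorch sketch of one such category classification module; the channel counts, layer depth, and number of categories are illustrative assumptions, not values taken from this disclosure.

```python
# A hedged sketch of a per-category classification module: convolutional
# layers followed by global average pooling, with the last conv output
# kept as the class activation map. All sizes are assumptions.
import torch
import torch.nn as nn

class CategoryClassificationModule(nn.Module):
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # single-channel activation map
        )

    def forward(self, feature_map: torch.Tensor):
        cam = self.conv(feature_map)       # (N, 1, H, W) class activation map
        score = cam.mean(dim=(2, 3))       # global average pooling -> (N, 1)
        return score, cam

# Usage: one module per target category; the scores can be trained with
# image-level category labels only.
modules = nn.ModuleList([CategoryClassificationModule() for _ in range(3)])
features = torch.randn(1, 256, 32, 32)     # feature map from the backbone
scores_and_cams = [m(features) for m in modules]
```

Because each score is just the spatial mean of its activation map, image-level supervision on the scores implicitly localizes the category in the map.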
In step S203, based on the feature map and the first classification result, a category feature of the input image is constructed.
According to some embodiments, for each target category in the input image, the features in the feature map corresponding to that target category are extracted based on the feature map and the classification result corresponding to that target category in the first classification result (e.g., the class activation map of the target category). For example, when two target categories, traffic lights and pedestrians, exist in the input image, all features corresponding to traffic lights and all features corresponding to pedestrians in the feature map are extracted respectively.
According to some embodiments, for each target category in the input image, a category feature corresponding to that target category is constructed based on the features in the feature map corresponding to that target category. For example, the category feature corresponding to traffic lights is constructed based on all features corresponding to traffic lights in the feature map, and the category feature corresponding to pedestrians is constructed based on all features corresponding to pedestrians in the feature map.
In step S205, the category characteristics are enhanced to generate enhanced category characteristics.
According to some embodiments, enhancing the category features to generate enhanced category features includes: enhancing the category features using one selected from the group consisting of a graph convolutional neural network, a non-local network, and an attention mechanism to generate the enhanced category features.
According to some embodiments, the enhanced category features include enhanced category features corresponding to each target category in the input image.
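As one illustration of the attention-mechanism option named above, the hedged PyTorch sketch below enhances the per-category features with single-head self-attention; the feature dimension, head count, and residual connection are assumptions, and a graph convolutional network or non-local network could be substituted.

```python
# A minimal sketch, assuming self-attention is the chosen enhancement:
# each category feature attends to the others so that co-occurring
# categories can reinforce each other's representations.
import torch
import torch.nn as nn

class ClassFeatureEnhancer(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=1,
                                          batch_first=True)

    def forward(self, class_feats: torch.Tensor) -> torch.Tensor:
        # class_feats: (batch, num_categories, dim)
        enhanced, _ = self.attn(class_feats, class_feats, class_feats)
        return class_feats + enhanced  # residual keeps the original signal

feats = torch.randn(1, 2, 256)  # e.g. traffic-light and pedestrian features
enhanced = ClassFeatureEnhancer()(feats)
```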
In step S207, the enhanced class features are classified by a second classifier to obtain a second classification result of the input image.
According to some embodiments, for each enhanced category feature, the second classifier determines a target category for that enhanced category feature, where the second classification result includes the category determination result for each enhanced category feature. According to some embodiments, each enhanced category feature corresponds to a particular target category, and thus the second classification result indicates the target categories contained in the input image.
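For illustration, a minimal sketch of the second classifier as a linear head over the enhanced category features; the feature dimension, number of categories, and softmax readout are assumptions.

```python
# A hedged sketch: one shared linear layer maps each enhanced category
# feature to per-category scores; the argmax is that feature's category.
import torch
import torch.nn as nn

second_classifier = nn.Linear(256, 3)    # 3 target categories assumed
enhanced_feats = torch.randn(2, 256)     # two enhanced category features
probs = second_classifier(enhanced_feats).softmax(dim=-1)
predicted = probs.argmax(dim=-1)         # target category per feature
```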
In step S209, a target detection result of the input image is generated based at least in part on the second classification result.
The target detection method provided by the exemplary embodiments of the present disclosure can achieve target detection in an image even when only category-level annotation information is available, thereby reducing the cost of target detection.
According to some embodiments, constructing the category features of the input image based on the feature map and the first classification result comprises: extracting target features from the feature map based on the feature map and the first classification result; clustering the target features to obtain at least one target category; and for each target category, averaging the target features of that target category to obtain the category feature corresponding to that target category.
Fig. 3 shows a flowchart of a process of constructing a category feature of an input image (step S203) in the method 200 of fig. 2 according to an embodiment of the present disclosure.
In step S301, a target feature is extracted from the feature map based on the feature map and the first classification result.
In step S303, the target features are clustered to obtain at least one target category.
According to some embodiments, the target features are clustered such that the plurality of target features of the input image are divided into subsets each corresponding to a target category. For example, when the input image contains targets corresponding to traffic lights and pedestrians, the target features corresponding to traffic lights are divided into a first subset, and the target features corresponding to pedestrians are divided into a second subset.
According to some embodiments, clustering the target features includes: calculating the number of target categories of the input image based on the first classification result; and clustering the target features based on the number of target categories of the input image (e.g., by K-means clustering, clustering based on Gaussian mixture models, or other algorithms requiring a known number of data categories).
According to some embodiments, the first classification result comprises at least one classification result corresponding respectively to at least one target category, and calculating the number of target categories of the input image based on the first classification result comprises: counting, as the number of target categories of the input image, the number of target categories satisfying a predetermined condition among the at least one target category, where the classification result corresponding to each target category satisfying the predetermined condition includes a value greater than a target threshold.
According to some embodiments, the first classification result includes a class activation map corresponding to each target category to be detected, and calculating the number of target categories of the input image includes: judging whether a value greater than a target threshold exists in the class activation map corresponding to each target category to be detected, where a value greater than the target threshold in the class activation map of a target category indicates that a target of that category exists in the input image, and the absence of such a value indicates that no target of that category exists in the input image; and counting the number of target categories whose class activation maps contain a value greater than the target threshold as the number of target categories of the input image.
For example, when traffic lights, pedestrians, and trucks need to be detected in the input image, the first classification result includes a first class activation map corresponding to traffic lights, a second class activation map corresponding to pedestrians, and a third class activation map corresponding to trucks. If the first class activation map corresponding to traffic lights and the second class activation map corresponding to pedestrians contain values greater than the target threshold, the number of target categories of the input image is 2.
According to other embodiments, the target features may be clustered without calculating the number of target categories of the input image (e.g., by mean-shift clustering, hierarchical clustering, or other algorithms that do not require a known number of data categories).
In step S305, for each target category, the target features of the target category are averaged to obtain the category feature corresponding to the target category.
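To make the count-cluster-average pipeline of steps S301 through S305 concrete, here is a minimal NumPy/scikit-learn sketch. The use of K-means, the threshold value, and all shapes are assumptions for illustration.

```python
# A hedged sketch: count the categories whose class activation map
# exceeds the threshold, cluster the target features into that many
# groups, and average each group into one category feature.
import numpy as np
from sklearn.cluster import KMeans

def build_category_features(target_feats, cams, target_threshold=0.5):
    # target_feats: (num_targets, dim) pooled features of candidate regions
    # cams: (num_classes, H, W) class activation maps from the first classifier
    num_categories = int(sum(cam.max() > target_threshold for cam in cams))
    labels = KMeans(n_clusters=num_categories, n_init=10).fit_predict(target_feats)
    # Average the target features within each cluster.
    return np.stack([target_feats[labels == k].mean(axis=0)
                     for k in range(num_categories)])

feats = np.random.randn(8, 256).astype(np.float32)  # 8 candidate targets
cams = np.random.rand(3, 32, 32)                    # 3 categories to detect
category_feats = build_category_features(feats, cams)
```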
According to some embodiments, extracting the target feature from the feature map based on the feature map and the first classification result comprises: generating target area information of the feature map based on the first classification result; and constructing a target feature of the input image based on the feature map and the target region information.
Fig. 4 shows a flowchart of a process of extracting target features from a feature map (step S301) in the process of fig. 3 according to an embodiment of the present disclosure.
In step S401, target region information of the feature map is generated based on the first classification result.
According to some embodiments, the target region information includes the position and category of each target region, where the position indicates the location of the target region in the feature map.
In step S403, a target feature of the input image is constructed based on the feature map and the target region information.
According to some embodiments, features in the feature map corresponding to locations of the target region are extracted, and target features of the input image are constructed based on the extracted features.
According to some embodiments, the first classification result is a class activation map of the input image, and generating the target region information of the feature map based on the first classification result comprises: extracting at least one connected region from the class activation map as at least one target region of the feature map by using a connected region algorithm; and generating target region information of the at least one target region of the feature map, where the target region information includes the position and category of the at least one target region.
Fig. 5 shows a flowchart of a process of generating target area information of a feature map (step S401) in the process of fig. 4 according to an embodiment of the present disclosure.
In step S501, at least one connected region is extracted from the class activation map as at least one target region of the feature map using a connected region algorithm.
According to some embodiments, extracting at least one connected region from the class activation map using a connected region algorithm comprises: performing binarization processing on each class activation map of the input image to obtain a binarized class activation map corresponding to that class activation map; and extracting at least one connected region from the binarized class activation map using a connected region algorithm.
According to some embodiments, binarizing the class activation map comprises: for each point in the class activation map, determining whether the pixel value of the point is greater than or equal to a binarization threshold, where if the pixel value of the point is greater than or equal to the binarization threshold, the corresponding pixel value is set to a high pixel value (e.g., 1) in the binarized class activation map, and if the pixel value of the point is less than the binarization threshold, the corresponding pixel value is set to a low pixel value (e.g., 0) in the binarized class activation map.
According to some embodiments, in the binarized class activation map, points having the same pixel value that are connected to each other constitute a connected region.
According to some embodiments, a Two-Pass algorithm or a Seed-Filling algorithm may be employed to extract the at least one connected region from the class activation map.
In step S503, target area information of at least one target area of the feature map is generated, wherein the target area information includes a position and a category of the at least one target area.
According to some embodiments, for each class activation map, the category of a target region extracted from that class activation map is the target category to which the class activation map corresponds. According to some embodiments, for each target region, the position of the target region is given by the coordinates of its upper-left and lower-right corners.
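A hedged sketch of steps S501 and S503 follows: it binarizes one class activation map and extracts connected regions with SciPy's labelling routine in place of a hand-written Two-Pass or Seed-Filling implementation; the threshold is an assumed value.

```python
# Binarize one class activation map, label its connected regions, and
# return (box, category) pairs where box = (x1, y1, x2, y2) holds the
# upper-left and lower-right corners of the region.
import numpy as np
from scipy import ndimage

def extract_target_regions(cam: np.ndarray, category: int, threshold: float = 0.5):
    binary = cam >= threshold                     # binarized class activation map
    labeled, num_regions = ndimage.label(binary)  # connected regions
    regions = []
    for region_id in range(1, num_regions + 1):
        ys, xs = np.nonzero(labeled == region_id)
        box = (xs.min(), ys.min(), xs.max(), ys.max())
        regions.append((box, category))
    return regions

cam = np.random.rand(32, 32)
regions = extract_target_regions(cam, category=0)
```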
According to some embodiments, constructing the target feature of the input image based on the feature map and the target region information comprises: based on the target region information, region-of-interest pooling is performed on the feature map to extract target features of the input image.
According to some embodiments, performing region-of-interest pooling on the feature map comprises: extracting the features corresponding to each target region from the feature map of the input image based on the target region information; and scaling the features corresponding to each target region to a predefined size.
According to some embodiments, scaling the features corresponding to each target region to a predefined size comprises: dividing the features corresponding to the target region into a plurality of parts of the same size according to the predefined size; and performing a max pooling operation on each part.
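The following sketch approximates the region-of-interest pooling described above with PyTorch's adaptive max pooling, which divides the cropped region into a fixed grid of parts and max-pools each part; the 7x7 output size is an assumption.

```python
# A minimal sketch of region-of-interest max pooling on an unbatched
# feature map; box coordinates are in feature-map space.
import torch
import torch.nn.functional as F

def roi_max_pool(feature_map, box, output_size=(7, 7)):
    # feature_map: (C, H, W); box: (x1, y1, x2, y2)
    x1, y1, x2, y2 = box
    crop = feature_map[:, y1:y2 + 1, x1:x2 + 1]   # features of the region
    # Adaptive max pooling splits the crop into output_size parts of
    # (roughly) equal size and takes the maximum of each part.
    return F.adaptive_max_pool2d(crop.unsqueeze(0), output_size).squeeze(0)

fm = torch.randn(256, 32, 32)
pooled = roi_max_pool(fm, (4, 8, 19, 27))         # -> (256, 7, 7)
```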
According to some embodiments, the second classification result comprises target category information of the input image, and generating the target detection result of the input image based at least in part on the second classification result comprises: generating the target detection result of the input image based on the target region information and the target category information.
According to some embodiments, the target category information indicates which categories of targets are included in the input image, and the target region information includes information of all target regions extracted from the first classification result; therefore, the target category information may be used to further refine the target region information and improve the accuracy of target detection.
According to some embodiments, generating the target detection result of the input image based on the target region information and the target category information includes: for each target category, retaining the portion of the target region information corresponding to that target category in response to the target category information indicating that the feature map contains a target of that category, and discarding the portion of the target region information corresponding to that target category in response to the target category information indicating that the feature map does not contain a target of that category.
For example, when the target region information includes target region information corresponding to traffic lights, pedestrians, and trucks, but the target category information indicates that the input image contains only traffic lights and pedestrians, the target region information corresponding to traffic lights and pedestrians is retained as the target detection result, and the target region information corresponding to trucks is discarded.
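As a short illustration of this filtering step, the sketch below keeps a region only when the second classification result reports its category as present; the data shapes are assumptions.

```python
# Keep a region only if the second classifier says its category is
# present in the image.
def filter_detections(regions, present_categories):
    # regions: list of (box, category); present_categories: set of ids
    return [(box, cat) for box, cat in regions if cat in present_categories]

regions = [((4, 8, 19, 27), 0),   # traffic light
           ((2, 2, 10, 30), 1),   # pedestrian
           ((0, 0, 31, 15), 2)]   # truck
detections = filter_detections(regions, {0, 1})   # truck region dropped
```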
Fig. 6 shows a block diagram of an object detection network 600 according to an embodiment of the present disclosure. As shown in fig. 6, the object detection network includes a feature extraction module 611, a first classifier 612, a target region extraction module 613, a category number statistics module 614, an ROI pooling module 615, a clustering module 616, an averaging module 617, an enhancement module 618, a second classifier 619, and a target information generation module 620.
First, the input image 601 is input to the feature extraction module 611 to extract a feature map of the input image;
next, classifying the feature map of the input image by using a first classifier 612 to generate a first classification result of the input image;
next, the target region extraction module 613 receives the first classification result from the first classifier 612 to generate target region information of the feature map, and the category number statistics module 614 receives the first classification result from the first classifier 612 to count the number of target categories in the input image;
next, the ROI pooling module 615 receives the feature map of the input image from the feature extraction module 611 and the target region information from the target region extraction module 613 to construct the target features of the input image;
next, the clustering module 616 receives the target features from the ROI pooling module 615 and the number of target categories from the category number statistics module 614, and clusters the target features;
next, the averaging module 617 receives the clustered target features from the clustering module 616 and averages the target features for each category to obtain category features corresponding to each category;
the enhancement module 618 then receives the category characteristics from the averaging module 617 to generate enhanced category characteristics;
next, a second classifier 619 receives the enhanced category features from the enhancement module 618 to generate a second classification result comprising target category information;
finally, the target information generation module 620 receives the target region information from the target region extraction module 613 and the second classification result from the second classifier 619, and generates the final target detection result based on the target region information and the target category information of the second classification result.
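To tie the figure together, the following condensed sketch wires the modules of Fig. 6 in the order just described. It reuses the helper functions from the earlier sketches (assumed to be in scope), assumes unbatched inputs (feature_map is (C, H, W) and cams is a list of (H, W) NumPy arrays), and is a structural illustration rather than a reference implementation.

```python
# A hedged wiring sketch of Fig. 6; reference numerals in the comments
# match the figure. backbone, first_classifier, enhancer, and
# second_classifier are caller-supplied callables.
import numpy as np
import torch

def detect(image, backbone, first_classifier, enhancer, second_classifier):
    feature_map = backbone(image)                                # module 611
    cams = first_classifier(feature_map)                         # module 612
    regions = [r for cat, cam in enumerate(cams)                 # module 613
               for r in extract_target_regions(cam, cat)]
    target_feats = np.stack([roi_max_pool(feature_map, box).flatten().numpy()
                             for box, _ in regions])             # module 615
    # Modules 614/616/617: count categories, cluster, and average.
    category_feats = build_category_features(target_feats, np.stack(cams))
    enhanced = enhancer(torch.from_numpy(category_feats))        # module 618
    present = second_classifier(enhanced)                        # module 619
    return filter_detections(regions, present)                   # module 620
```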
In an exemplary embodiment of the present disclosure, there is provided an object detection apparatus including: a first classification module configured to classify a feature map of an input image to generate a first classification result of the input image; a category feature construction module configured to construct class features of the input image based on the feature map and the first classification result; a feature enhancement module configured to enhance the class features to generate enhanced class features; a second classification module configured to classify the enhanced class features by using a second classifier to obtain a second classification result of the input image; and a target detection generation module configured to generate a target detection result of the input image based at least in part on the second classification result.
Fig. 7 shows a block diagram of a structure of an object detection apparatus 700 according to an embodiment of the present disclosure.
As shown in fig. 7, the object detection device 700 includes: a first classification module 701, a category feature construction module 702, a feature enhancement module 703, a second classification module 704, and a target detection generation module 705. The first classification module 701 is configured to classify a feature map of an input image to generate a first classification result of the input image; the category feature construction module 702 is configured to construct class features of the input image based on the feature map and the first classification result; the feature enhancement module 703 is configured to enhance the class features to generate enhanced class features; the second classification module 704 is configured to classify the enhanced class features by using a second classifier to obtain a second classification result of the input image; and the target detection generation module 705 is configured to generate a target detection result of the input image based at least in part on the second classification result.
According to some embodiments, the category feature construction module comprises: a target feature extraction module configured to extract target features from the feature map based on the feature map and the first classification result; a clustering module configured to cluster the target features to obtain at least one target category; and a target feature averaging module configured to, for each target category, average the target features of that target category to obtain the category feature corresponding to that target category.
It should be appreciated that the various modules of the apparatus 700 shown in fig. 7 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features, and advantages described above with respect to method 200 apply equally to apparatus 700 and the modules that it comprises. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module. The particular module performing the actions discussed herein includes the particular module itself performing the actions, or alternatively the particular module invoking or otherwise accessing another component or module that performs the actions (or performs the actions in conjunction with the particular module). Thus, a particular module that performs an action may include that particular module itself that performs the action and/or another module that the particular module invokes or otherwise accesses that performs the action.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various modules described above with respect to fig. 7 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, these modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the first classification module 701, the category feature construction module 702, the feature enhancement module 703, the second classification module 704, and the target detection generation module 705 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
In an exemplary embodiment of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods as described in the present disclosure.
In an exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as described in the present disclosure is provided.
In an exemplary embodiment of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when being executed by a processor, implements a method as described in the present disclosure.
Referring to fig. 8, a block diagram of an electronic device 800 that may be a server or a client of the present disclosure will now be described; it is an example of a hardware device that may be applied to aspects of the present disclosure. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in the device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks and optical disks. The communication unit 809 allows the device 800 to exchange information/data with other devices over computer networks, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth (TM) devices, 802.11 devices, Wi-Fi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but is defined only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in a different order than described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the disclosure.

Claims (11)

1. A target detection method comprising:
classifying a feature map of an input image by using a first classifier to generate a first classification result of the input image;
constructing a category feature of the input image based on the feature map and the first classification result, including:
extracting target features from the feature map based on the feature map and the first classification result, including: generating target region information of the feature map based on the first classification result; and constructing a target feature of the input image based on the feature map and the target region information;
clustering the target features to obtain at least one target category; and
for each target category, averaging the target characteristics of the target category to obtain the category characteristics corresponding to the target category;
enhancing the category features to generate enhanced category features;
classifying the enhanced class features by a second classifier to obtain a second classification result of the input image; and
based at least in part on the second classification result, a target detection result of the input image is generated.
2. The target detection method according to claim 1, wherein the first classification result includes a class activation map of the input image, and wherein the generating target region information of the feature map based on the first classification result includes:
extracting at least one connected region from the class activation map as at least one target region of the feature map by using a connected-region algorithm; and
generating target region information of the at least one target region of the feature map,
wherein the target region information includes a location and a category of the at least one target region.
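By way of a non-authoritative example, connected regions can be extracted from a thresholded class activation map with SciPy's connected-component labeling; the 0.5 threshold and the record layout below are assumptions.

```python
# A sketch of claim 2 using SciPy connected-component labeling. The 0.5
# threshold and the dictionary layout of the region records are assumptions.
import numpy as np
from scipy import ndimage

def regions_from_cam(cam, category, threshold=0.5):
    """cam: 2-D class activation map of one category, values in [0, 1]."""
    labeled, _ = ndimage.label(cam > threshold)      # label connected regions
    regions = []
    for sy, sx in ndimage.find_objects(labeled):     # bounding slices per region
        regions.append({"category": category,
                        "box": (sx.start, sy.start, sx.stop, sy.stop)})
    return regions

cam = np.zeros((8, 8))
cam[1:3, 1:3], cam[5:7, 4:7] = 0.9, 0.8
print(regions_from_cam(cam, "car"))                  # two target regions
```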
3. The target detection method according to claim 1, wherein the constructing the target features of the input image based on the feature map and the target region information includes:
performing region-of-interest pooling on the feature map based on the target region information to extract the target features of the input image.
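As one possible realization of this pooling step, torchvision's roi_align pools a fixed-size target feature per region; the stride, box coordinates, and output size below are assumptions.

```python
# One possible realization of the region-of-interest pooling of claim 3, using
# torchvision.ops.roi_align. Stride, boxes, and output size are assumptions.
import torch
from torchvision.ops import roi_align

feature_map = torch.randn(1, 256, 50, 50)            # [N, C, H, W] backbone output
# Boxes as (batch_index, x1, y1, x2, y2) in input-image coordinates.
boxes = torch.tensor([[0.0, 32.0, 32.0, 96.0, 96.0],
                      [0.0, 120.0, 40.0, 200.0, 160.0]])
# spatial_scale maps image coordinates onto the feature map (stride 8 assumed).
target_feats = roi_align(feature_map, boxes, output_size=(7, 7),
                         spatial_scale=1 / 8)
print(target_feats.shape)                            # torch.Size([2, 256, 7, 7])
```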
4. The target detection method according to any one of claims 1 to 3, wherein the clustering the target features includes:
calculating a number of target categories of the input image based on the first classification result; and
clustering the target features based on the number of target categories of the input image.
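A minimal sketch of this clustering step, assuming k-means from scikit-learn with k taken from the first classification result (see claim 5); the feature sizes are made up for illustration.

```python
# A sketch of claim 4 with k-means from scikit-learn; k is the number of target
# categories derived from the first classification result (see claim 5).
import numpy as np
from sklearn.cluster import KMeans

target_feats = np.random.rand(12, 256)       # 12 pooled target features
num_categories = 3                           # from the first classification result
labels = KMeans(n_clusters=num_categories, n_init=10).fit_predict(target_feats)
# Category feature = mean of the target features assigned to each cluster.
class_feats = np.stack([target_feats[labels == c].mean(axis=0)
                        for c in range(num_categories)])
print(class_feats.shape)                     # (3, 256)
```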
5. The target detection method according to claim 4, wherein the first classification result includes at least one classification result respectively corresponding to the at least one target category, and wherein the calculating the number of target categories of the input image based on the first classification result includes:
counting, as the number of target categories of the input image, the target categories among the at least one target category that satisfy a predetermined condition,
wherein the classification result corresponding to each target category satisfying the predetermined condition includes a value greater than a target threshold.
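One hedged reading of claim 5 is sketched below: each category is scored by the peak value of its activation map, and categories whose peak exceeds a target threshold are counted; the threshold and the max-pooling choice are assumptions.

```python
# A hedged reading of claim 5: score each category by the peak value of its
# activation map and count the categories whose peak exceeds a target
# threshold. The 0.5 threshold and max-pooling choice are assumptions.
import torch

cams = torch.rand(20, 50, 50)                   # one activation map per category
scores = cams.flatten(1).max(dim=1).values      # peak activation per category
num_categories = int((scores > 0.5).sum())      # number of target categories
print(num_categories)
```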
6. The target detection method according to any one of claims 1 to 3, wherein the enhancing the category features to generate enhanced category features comprises:
enhancing the category features using one selected from the group consisting of a graph convolutional neural network, a non-local network, and an attention mechanism to generate the enhanced category features.
7. The target detection method of any of claims 1-3, wherein the second classification result includes target category information of the input image, and wherein the generating the target detection result of the input image based at least in part on the second classification result comprises:
generating the target detection result of the input image based on the target region information and the target category information.
8. The target detection method according to claim 7, wherein the generating the target detection result of the input image based on the target region information and the target category information includes:
for each target category: in response to the target category information indicating that the feature map has a target of that target category, retaining the portion of the target region information corresponding to that target category; and
in response to the target category information indicating that the feature map has no target of that target category, discarding the portion of the target region information corresponding to that target category.
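A minimal sketch of this retain-or-discard logic follows, assuming the target region information is a list of per-region records and the second classification result is a per-category confidence map; the layout and threshold are assumptions.

```python
# A sketch of the retain-or-discard logic of claim 8. The per-region records
# and the 0.5 confidence threshold are assumptions about the data layout.
def filter_regions(regions, category_scores, threshold=0.5):
    """Keep a region only if the second classification confirms its category."""
    kept = []
    for region in regions:
        if category_scores.get(region["category"], 0.0) > threshold:
            kept.append(region)   # retain: the image has a target of this category
        # otherwise discard: the feature map has no target of this category
    return kept

regions = [{"category": "car", "box": (4, 4, 20, 20)},
           {"category": "cat", "box": (30, 8, 44, 28)}]
print(filter_regions(regions, {"car": 0.9, "cat": 0.2}))    # only the car remains
```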
9. A target detection apparatus comprising:
a first classification module configured to classify a feature map of an input image to generate a first classification result of the input image;
a category feature construction module configured to construct category features of the input image based on the feature map and the first classification result, the category feature construction module including:
a target feature extraction module configured to extract target features from the feature map based on the feature map and the first classification result by generating target region information of the feature map based on the first classification result and constructing the target features of the input image based on the feature map and the target region information;
a clustering module configured to cluster the target features to obtain at least one target category; and
a target feature averaging module configured to average, for each target category, the target features of that target category to obtain the category feature corresponding to that target category;
a feature enhancement module configured to enhance the category features to generate enhanced category features;
a second classification module configured to classify the enhanced category features by using a second classifier to obtain a second classification result of the input image; and
a target detection generation module configured to generate a target detection result of the input image based at least in part on the second classification result.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202110468946.0A 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium Active CN113139542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110468946.0A CN113139542B (en) 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110468946.0A CN113139542B (en) 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113139542A (en) 2021-07-20
CN113139542B (en) 2023-08-11

Family

ID=76816370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110468946.0A Active CN113139542B (en) 2021-04-28 2021-04-28 Object detection method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113139542B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102592076B1 (en) * 2015-12-14 2023-10-19 삼성전자주식회사 Appartus and method for Object detection based on Deep leaning, apparatus for Learning thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008949A (en) * 2019-01-24 2019-07-12 华南理工大学 A kind of image object detection method, system, device and storage medium
CN110737801A (en) * 2019-10-14 2020-01-31 腾讯科技(深圳)有限公司 Content classification method and device, computer equipment and storage medium
CN111291819A (en) * 2020-02-19 2020-06-16 腾讯科技(深圳)有限公司 Image recognition method and device, electronic equipment and storage medium
CN111428807A (en) * 2020-04-03 2020-07-17 桂林电子科技大学 Image processing method and computer-readable storage medium
CN111709357A (en) * 2020-06-12 2020-09-25 北京百度网讯科技有限公司 Method and device for identifying target area, electronic equipment and road side equipment
CN111783878A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN112613415A (en) * 2020-12-25 2021-04-06 深圳数联天下智能科技有限公司 Face nose type recognition method and device, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Yuefeng et al., "A Survey of Online Multi-Object Video Tracking Algorithms," Computer Technology and Automation (计算机技术与自动化), 2018, pp. 73-82. *

Also Published As

Publication number Publication date
CN113139542A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN112857268B (en) Object area measuring method, device, electronic equipment and storage medium
EP3869404A2 (en) Vehicle loss assessment method executed by mobile terminal, device, mobile terminal and medium
US20230052389A1 (en) Human-object interaction detection
CN115422389B (en) Method and device for processing text image and training method of neural network
US20230051232A1 (en) Human-object interaction detection
CN113256583A (en) Image quality detection method and apparatus, computer device, and medium
CN115082740B (en) Target detection model training method, target detection device and electronic equipment
CN114445667A (en) Image detection method and method for training image detection model
CN114495103B (en) Text recognition method and device, electronic equipment and medium
CN113139542B (en) Object detection method, device, equipment and computer readable storage medium
CN115797660A (en) Image detection method, image detection device, electronic equipment and storage medium
CN113868453B (en) Object recommendation method and device
CN113596011B (en) Flow identification method and device, computing device and medium
CN114842476A (en) Watermark detection method and device and model training method and device
CN114842474B (en) Character recognition method, device, electronic equipment and medium
CN115170536B (en) Image detection method, training method and device of model
CN114140851B (en) Image detection method and method for training image detection model
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN115512131B (en) Image detection method and training method of image detection model
CN114140852B (en) Image detection method and device
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN115620271B (en) Image processing and model training method and device
CN114677691B (en) Text recognition method, device, electronic equipment and storage medium
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment
CN114120420B (en) Image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant