CN115511779B - Image detection method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115511779B
CN115511779B (application CN202210861722.0A / CN202210861722A)
Authority
CN
China
Prior art keywords: feature, scale, obtaining, detection result, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210861722.0A
Other languages
Chinese (zh)
Other versions
CN115511779A (en)
Inventor
邹智康
叶晓青
孙昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210861722.0A
Publication of CN115511779A
Application granted
Publication of CN115511779B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image detection method, an image detection device, electronic equipment and a storage medium, relates to the technical field of artificial intelligence, in particular to the technical fields of image processing, computer vision, deep learning and the like, and especially relates to scenes such as 3D vision, virtual reality and the like. The implementation scheme is as follows: acquiring a target image, wherein the target image comprises a target object; obtaining a plurality of feature maps of the target image, wherein the feature maps correspond to a plurality of scales; acquiring multi-scale fusion features based on the multiple feature maps; and obtaining a detection result of the target object based on the multi-scale fusion characteristic, wherein the detection result indicates three-dimensional space information of the target object.

Description

Image detection method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of image processing, computer vision, and deep learning, and to scenes such as 3D vision and virtual reality, and more particularly to an image detection method, apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
Artificial intelligence is the discipline of making a computer mimic certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, and planning), and involves both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technologies, and the like.
3D detection based on monocular images uses deep learning to estimate three-dimensional spatial information from 2D images. How to improve the accuracy of the estimated three-dimensional spatial information is an ongoing concern.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides an image detection method, apparatus, electronic device, computer-readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided an image detection method including: acquiring a target image, wherein the target image comprises a target object; inputting the target image into a convolutional network comprising a plurality of convolutional layers to obtain an output of each of the plurality of convolutional layers; obtaining a plurality of feature maps of the target image corresponding to a plurality of scales based on the plurality of outputs of the plurality of convolutional layers; obtaining a multi-scale fusion feature based on the plurality of feature maps; and obtaining a detection result of the target object based on the multi-scale fusion feature, wherein the detection result indicates three-dimensional spatial information of the target object.
According to another aspect of the present disclosure, there is provided an image detection apparatus including: an image acquisition unit configured to acquire a target image including a target object; an image input unit configured to input the target image to a convolutional network including a plurality of convolutional layers to obtain an output of each of the plurality of convolutional layers; a feature map acquisition unit configured to obtain a plurality of feature maps of the target image corresponding to a plurality of scales based on the plurality of outputs of the plurality of convolutional layers; a fusion feature acquisition unit configured to obtain a multi-scale fusion feature based on the plurality of feature maps; and a detection result acquisition unit configured to obtain a detection result of the target object based on the multi-scale fusion feature, the detection result indicating three-dimensional spatial information of the target object.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method according to an embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements a method according to embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, in the process of performing 3D detection based on a 2D image, objects of different scales in the 2D image each receive a corresponding receptive field, so that the three-dimensional spatial information of the target object in the obtained detection result is accurate.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of an image detection method according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a process for obtaining a plurality of feature maps based on a plurality of outputs of a plurality of convolution layers in an image detection method according to an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a process of obtaining a weight corresponding to each of a plurality of channels in an image detection method according to an embodiment of the present disclosure;
FIG. 5 illustrates a flowchart of a process for obtaining multi-scale fusion features based on multiple feature maps in an image detection method according to an embodiment of the present disclosure;
FIG. 6 illustrates a flowchart of obtaining a detection result of a target object based on a multi-scale fusion feature in an image detection method according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of an image detection model in an image detection method according to an embodiment of the present disclosure;
fig. 8 shows a block diagram of a structure of an image detection apparatus according to an embodiment of the present disclosure; and
fig. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the image detection method.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may receive three-dimensional spatial information using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system intended to overcome the drawbacks of difficult management and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
According to an aspect of the present disclosure, there is provided an image detection method. As shown in fig. 2, the image detection method 200 includes:
step S210: acquiring a target image, wherein the target image comprises a target object;
step S220: inputting the target image into a convolutional network comprising a plurality of convolutional layers to obtain an output of each of the plurality of convolutional layers;
step S230: obtaining a plurality of feature maps of the target image corresponding to a plurality of scales based on a plurality of outputs of the plurality of convolution layers;
step S240: acquiring multi-scale fusion features based on the feature maps; and
step S250: obtaining a detection result of the target object based on the multi-scale fusion feature, wherein the detection result indicates three-dimensional spatial information of the target object.
In the process of performing 3D detection based on a 2D image, objects of different scales in the 2D image are each given a corresponding receptive field, so that the three-dimensional spatial information of the target object in the obtained detection result is accurate.
In the related art, when 3D detection is performed based on a 2D image, spatial properties of a 3D object, such as depth information, are predicted from key points of the 3D object projected onto the 2D image. Because this 3D detection process relies heavily on a single scale to estimate the 3D attributes of obstacles across the whole scene, while 3D objects in complex scenes often vary widely in scale, the distinct characteristics of each scale cannot be taken into account in a single-scale approach, which degrades the accuracy of 3D detection.
According to the embodiments of the present disclosure, a plurality of feature maps of the 2D target image corresponding to a plurality of scales are obtained, and a multi-scale fusion feature is obtained based on these feature maps. The multi-scale fusion feature combines the spatial information of the target image at different scales, so that spatial information at different scales can be taken into account when obtaining the detection result; the perception capability is preserved at each scale, the receptive field is enlarged, and the three-dimensional spatial information of the target object in the obtained detection result is accurate.
Meanwhile, the target image is input into the convolutional network comprising a plurality of convolutional layers, the output of each convolutional layer is obtained, and the plurality of feature maps are obtained based on these outputs; the method is simple and involves a small amount of data processing.
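To make this step concrete, the following is a minimal sketch of a convolutional backbone that returns the output of every convolution stage as one feature map per scale. It assumes a PyTorch setting; the layer widths, kernel sizes, and strides are illustrative assumptions rather than the disclosed network.

```python
import torch
import torch.nn as nn

class MultiScaleBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        # Each stage halves the spatial resolution, so each output is a
        # feature map at a different scale.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU()),
        ])

    def forward(self, image):
        outputs = []
        x = image
        for stage in self.stages:
            x = stage(x)        # output of this convolution layer
            outputs.append(x)   # keep every intermediate output
        return outputs          # one feature map per scale

# Usage: four feature maps at strides 2, 4, 8, and 16 of the input image.
feature_maps = MultiScaleBackbone()(torch.randn(1, 3, 256, 256))
```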
In some embodiments, the target image may be an image obtained by a monocular camera. Such as an image obtained by an onboard monocular camera.
In some embodiments, the target object may be any object in the target image for which 3D detection is desired. For example, the target object is a vehicle, a pedestrian, a signal lamp, or the like.
In some embodiments, the three-dimensional spatial information includes position information, three-dimensional spatial dimension information, or orientation angle information. For example, the three-dimensional space size information includes the length, width, height, and the like of the target object.
In some embodiments, the detection results also indicate respective classifications of the target object among the plurality of classifications.
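For illustration only, a detection result carrying the three-dimensional spatial information and classification mentioned above might be represented by a simple container such as the following; the field names and parameterization are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    center_xyz: tuple        # 3D position of the object center, e.g., in meters
    size_lwh: tuple          # length, width, and height of the 3D bounding box
    orientation_yaw: float   # orientation angle around the vertical axis
    category: str            # classification of the target object, e.g., "vehicle"

result = DetectionResult((1.2, 0.3, 8.5), (4.5, 1.8, 1.5), 0.12, "vehicle")
```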
In some embodiments, feature extraction is performed on the target image using a plurality of trained feature extraction networks to obtain a plurality of feature maps of the target image corresponding to a plurality of scales, where the trained feature extraction networks respectively correspond to the plurality of scales and each extracts a feature map at its corresponding scale.
In some embodiments, the convolutional network is obtained by training with a training image set comprising training images, the training image set further comprising a plurality of other training images obtained by scaling the size of the training images.
Using images of various sizes obtained by scaling the training images during training of the convolutional network further enlarges the receptive field of the convolutional network, so that the spatial information reflected at different scales by the feature maps obtained from the outputs of the convolutional layers is accurate.
In some embodiments, the plurality of outputs of the plurality of convolutional layers are taken as the plurality of feature maps.
In some embodiments, each of the plurality of outputs corresponds to a plurality of channels, as shown in fig. 3, the obtaining the plurality of feature maps based on the plurality of outputs of the plurality of convolutional layers comprises:
Step S310: for each of the plurality of outputs, obtaining a weight corresponding to the output for each of the plurality of channels;
step S320: for each of the plurality of outputs, obtaining an updated output based on a plurality of weights for the output corresponding to a plurality of channels; and
step S330: the plurality of feature maps are obtained based on a plurality of updated outputs corresponding to the plurality of outputs.
On feature maps of different scales, features on different channels are of different importance to the detection result. By obtaining, for each output, the weights corresponding to its channels and deriving the feature maps based on these weights, different attention weights are assigned to different feature channels of the output at each scale. In the process of obtaining the detection result, attention can therefore be focused on the channels at each scale that deserve it and that help improve the accuracy of the detection result, making the final detection result more accurate.
In some embodiments, as shown in fig. 4, obtaining weights corresponding to the output for each of the plurality of channels includes:
step S410: inputting the output into a global pooling network for obtaining a first feature, the global pooling network for aggregating information of spatial dimensions in the output; and
Step S420: based on the first feature, a weight of each channel of a plurality of channels corresponding to the output is obtained.
The global pooling network aggregates the information of the spatial dimensions in the output at each scale, so that the obtained weights are computed with respect to the spatial-dimension information. This raises the attention paid, at each scale, to the features in each channel of the feature map that help obtain spatial-dimension information, further ensuring that the final detection result is accurate.
For example, consider a feature Fi of dimension C×H×W, where C is the number of channels and H and W are the height and width, respectively. After Fi is input into the global pooling network, the information of the spatial dimensions is aggregated, yielding a feature of dimension C×1. Inputting this C×1 feature into the fully connected layers produces scale information of dimension C×1, which indicates the weight of each channel. Multiplying the scale information by Fi gives the feature map at that scale.
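A sketch of this per-channel weighting, assuming PyTorch, is shown below as a squeeze-and-excitation-style module: global average pooling aggregates the spatial dimensions, two fully connected layers produce one weight per channel, and the weights rescale the input. The reduction ratio and the sigmoid activation are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class ScaleResponse(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # aggregate H x W -> 1 x 1
        self.fc = nn.Sequential(                     # two fully connected layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weight in (0, 1)
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c))   # (B, C) channel weights
        return x * weights.view(b, c, 1, 1)          # re-weighted feature map
```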
In some embodiments, as shown in fig. 5, obtaining the multi-scale fusion feature based on the plurality of feature maps comprises:
step S510: adjusting the scale of each of the plurality of feature maps to a first scale, the first scale being not less than the largest scale of the plurality of scales; and
Step S520: and carrying out feature fusion on a plurality of adjusted feature graphs corresponding to the feature graphs to obtain the multi-scale fusion feature.
The multi-scale fusion feature is obtained by adjusting the feature maps of the multiple scales to a uniform scale and then fusing them.
In some embodiments, each feature map is adjusted to the first scale by inputting it into a deconvolution network.
In some embodiments, feature fusion of the plurality of adjusted features corresponding to the plurality of feature maps includes: the plurality of adjusted features are stacked in a channel direction.
Directly stacking the adjusted features along the channel direction makes obtaining the multi-scale fusion feature simple and keeps the amount of data processing small. In some embodiments, a weight corresponding to each of the feature maps is also obtained, and the adjusted features are stacked in the channel direction based on the weights corresponding to the feature maps to obtain the multi-scale fusion feature.
For example, each of the adjusted features is weighted by the weight of its corresponding feature map to obtain a weighted feature, and the weighted features corresponding to the adjusted features are stacked in the channel direction to obtain the multi-scale fusion feature.
In the process of obtaining the detection result based on the multi-scale fusion feature obtained in this way, more attention is paid to the adjusted features corresponding to the feature maps that have a larger influence on the detection result, which further improves the accuracy of the detection result.
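The scale-fusion step might be sketched as follows, assuming PyTorch: each feature map is brought to the resolution of the largest map with a deconvolution (or a 1×1 convolution for the map already at that scale), multiplied by a per-map weight, and the results are stacked along the channel dimension. The channel counts, upsampling factors, and use of learnable scalar weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScaleFusion(nn.Module):
    def __init__(self, in_channels=(32, 64, 128, 256), out_channels=64):
        super().__init__()
        # One upsampling branch per scale; the stride grows so every map
        # reaches the resolution of the largest (first) feature map.
        self.upsample = nn.ModuleList([
            nn.ConvTranspose2d(c, out_channels, kernel_size=2 ** i, stride=2 ** i)
            if i > 0 else nn.Conv2d(c, out_channels, kernel_size=1)
            for i, c in enumerate(in_channels)
        ])
        # A learnable scalar weight per feature map (one simple choice).
        self.map_weights = nn.Parameter(torch.ones(len(in_channels)))

    def forward(self, feature_maps):
        adjusted = [up(f) for up, f in zip(self.upsample, feature_maps)]
        weighted = [w * f for w, f in zip(self.map_weights, adjusted)]
        return torch.cat(weighted, dim=1)   # stack along the channel direction
```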
In some embodiments, as shown in fig. 6, obtaining the detection result of the target object based on the multi-scale fusion feature includes:
step S610: inputting the multi-scale fusion features into a feature extraction network to obtain second features; and
step S620: and obtaining the detection result based on the second characteristic.
The multi-scale fusion feature is input into the feature extraction network to obtain the second feature, and the detection result is obtained based on the second feature, thereby accomplishing the acquisition of the detection result.
In some embodiments, the scale of the second feature is the smallest scale of the plurality of scales, and the detection result is obtained by inputting the second feature to the prediction module.
In some embodiments, the prediction module obtains the detection result by obtaining regression values of the target object corresponding to the respective 3D bounding boxes based on the second features.
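A hypothetical prediction module consistent with this description could regress per-location 3D bounding-box parameters and class scores from the second feature, as in the sketch below; the exact box parameterization (center offset, depth, dimensions, yaw) is an assumption and not the disclosed regression target.

```python
import torch.nn as nn

class Prediction3DHead(nn.Module):
    def __init__(self, in_channels=256, num_classes=3):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU())
        self.box_regression = nn.Conv2d(128, 7, 1)    # x, y, depth, l, w, h, yaw
        self.classification = nn.Conv2d(128, num_classes, 1)

    def forward(self, second_feature):
        x = self.shared(second_feature)
        return self.box_regression(x), self.classification(x)
```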
In some embodiments, the above steps according to the present disclosure are implemented by an image detection model. Referring to fig. 7, a block diagram of an image detection model according to some embodiments of the present disclosure is shown. The image detection model includes a plurality of convolution layers 710, a plurality of scale response modules 720, and a scale fusion module 730. The convolution layers 710 sequentially perform feature extraction on the image F to obtain a plurality of first features Fi. Each scale response module 720 includes a pooling network 721 and two sequentially connected fully connected layers 722, which obtain a plurality of weights for the plurality of channels of each first feature Fi and output a feature map Si based on these weights. The scale fusion module 730 includes a deconvolution network 731 connected to each scale response module for adjusting the feature maps Si (including S1-S4) to the same scale; the scale fusion module 730 further includes a feature extraction network 732 to perform feature extraction on the multi-scale fusion feature obtained from the plurality of feature maps adjusted to the same scale.
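For orientation, the pieces could be composed as in the sketch below, reusing the hypothetical MultiScaleBackbone, ScaleResponse, ScaleFusion, and Prediction3DHead modules from the earlier sketches; this composition is an illustration under the same assumptions, not the disclosed model of fig. 7.

```python
import torch
import torch.nn as nn

class ImageDetectionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = MultiScaleBackbone()
        self.responses = nn.ModuleList(
            ScaleResponse(c) for c in (32, 64, 128, 256))
        self.fusion = ScaleFusion()
        self.head = Prediction3DHead(in_channels=4 * 64)

    def forward(self, image):
        features = self.backbone(image)                               # Fi per scale
        weighted = [m(f) for m, f in zip(self.responses, features)]   # Si per scale
        fused = self.fusion(weighted)                                 # multi-scale fusion feature
        return self.head(fused)                                       # 3D boxes + classes
```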
In some embodiments, the image detection model is obtained by training with a training image set that includes a training image, the training image set further including a plurality of other training images obtained by scaling the size of the training image. For example, a training image of size H×W is adjusted to a first image of size 2H×2W and a second image of size 1/2H×1/2W to obtain the plurality of other training images.
Using images of various sizes obtained by scaling during training of the image detection model further enlarges the receptive field of the model, making the detection result accurate.
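The multi-scale training augmentation described in the example above could be prepared as follows; the sketch assumes PyTorch tensors and bilinear resampling, which the disclosure does not specify, and the factors 2 and 1/2 follow the example in the text.

```python
import torch.nn.functional as F

def multiscale_versions(image):
    """image: tensor of shape (B, C, H, W); returns the original plus
    a 2x-enlarged and a 1/2-reduced copy for the training set."""
    large = F.interpolate(image, scale_factor=2.0, mode="bilinear", align_corners=False)
    small = F.interpolate(image, scale_factor=0.5, mode="bilinear", align_corners=False)
    return [image, large, small]
```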
According to another aspect of the present disclosure, there is also provided an image detection apparatus. As shown in fig. 8, the apparatus 800 includes: an image acquisition unit 810 configured to acquire a target image including a target object; an image input unit 820 configured to input the target image to a convolutional network including a plurality of convolutional layers to obtain an output of each of the plurality of convolutional layers; a feature map acquisition unit 830 configured to obtain a plurality of feature maps of the target image corresponding to a plurality of scales based on the plurality of outputs of the plurality of convolutional layers; a fusion feature acquisition unit 840 configured to obtain a multi-scale fusion feature based on the plurality of feature maps; and a detection result acquisition unit 850 configured to obtain a detection result of the target object based on the multi-scale fusion feature, the detection result indicating three-dimensional spatial information of the target object.
In some embodiments, each of the plurality of outputs corresponds to a plurality of channels, and the feature map acquisition unit comprises: a weight acquisition unit configured to obtain, for each of the plurality of outputs, a weight corresponding to each of the plurality of channels corresponding to the output; an updating unit configured to obtain, for each of the plurality of outputs, an updated output based on a plurality of weights of the output corresponding to a plurality of channels; and a first acquisition subunit configured to obtain the plurality of feature maps based on a plurality of updated outputs corresponding to the plurality of outputs.
In some embodiments, the convolutional network is obtained by training with a training image set comprising training images, the training image set further comprising a plurality of other training images obtained by scaling the size of the training images.
In some embodiments, the weight acquisition unit includes: a pooling unit configured to input the output into a global pooling network for aggregating information of spatial dimensions in the output to obtain a first feature; and a second acquisition subunit configured to obtain, based on the first feature, a weight of each of a plurality of channels corresponding to the output.
In some embodiments, the fusion feature acquisition unit includes: a scale adjustment unit configured to adjust a scale of each of the plurality of feature maps to a first scale, the first scale being not smaller than a maximum scale of the plurality of scales; and a fusion unit configured to perform feature fusion on a plurality of adjusted feature graphs corresponding to the plurality of feature graphs, so as to obtain the multi-scale fusion feature.
In some embodiments, the fusion unit includes a stacking unit configured to stack the plurality of adjusted features in a channel direction.
In some embodiments, the detection result acquisition unit includes: a feature extraction unit configured to input the multi-scale fusion feature to a feature extraction network to obtain a second feature; and a third acquisition subunit configured to obtain the detection result based on the second feature.
In some embodiments, the three-dimensional spatial information includes position information, three-dimensional spatial dimension information, or orientation angle information.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to fig. 9, a block diagram of an electronic device 900 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908, and a communication unit 909. The input unit 906 may be any type of device capable of inputting information to the electronic device 900; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 907 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 908 may include, but is not limited to, magnetic disks and optical disks. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into RAM 903 and executed by computing unit 901, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, computing unit 901 may be configured to perform method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in a different order than described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (8)

1. An image detection method, comprising:
acquiring a target image, wherein the target image comprises a target object;
inputting the target image to a convolutional network comprising a plurality of convolutional layers to obtain an output of each of the plurality of convolutional layers;
obtaining a plurality of feature maps of the target image corresponding to a plurality of scales based on a plurality of outputs of the plurality of convolution layers, wherein each of the plurality of outputs corresponds to a plurality of channels, the obtaining the plurality of feature maps of the target image corresponding to the plurality of scales based on the plurality of outputs of the plurality of convolution layers comprising:
for each of the plurality of outputs, obtaining a weight corresponding to the output for each of the plurality of channels, comprising:
inputting the output into a global pooling network to obtain a first feature, the global pooling network being configured to aggregate information of spatial dimensions on each of the plurality of channels in the output, the first feature comprising aggregated information of spatial dimensions of each of the plurality of channels; and
inputting the first feature into two fully-connected layers which are sequentially connected to obtain the weight of each channel in the plurality of channels corresponding to the output;
for each of the plurality of outputs, obtaining an updated output based on a plurality of weights for the output corresponding to a plurality of channels; and
obtaining the plurality of feature maps based on a plurality of updated outputs corresponding to the plurality of outputs;
based on the plurality of feature maps, obtaining a multi-scale fusion feature, comprising:
adjusting the scale of each of the plurality of feature maps to a first scale that is not less than a largest scale of the plurality of scales, the adjusting the scale of each of the plurality of feature maps to the first scale comprising:
inputting each of the plurality of feature maps into a deconvolution network to adjust each feature map to the first scale; and
performing feature fusion on a plurality of adjusted feature maps corresponding to the plurality of feature maps to obtain the multi-scale fusion feature, including:
obtaining a weight corresponding to each of the plurality of feature maps;
weighting the plurality of adjusted feature maps based on a plurality of weights corresponding to the plurality of feature maps to obtain a plurality of weighted features; and
stacking the plurality of weighted features corresponding to the plurality of adjusted feature maps in a channel direction to obtain the multi-scale fusion feature; and
obtaining a detection result of the target object based on the multi-scale fusion feature, wherein the detection result indicates three-dimensional space information of the target object, and the obtaining the detection result of the target object based on the multi-scale fusion feature comprises:
inputting the multi-scale fusion feature into a feature extraction network to obtain a second feature, wherein the scale of the second feature is the smallest scale of the plurality of scales; and
inputting the second feature into a prediction module to obtain the detection result, wherein the prediction module obtains the detection result by obtaining regression values of the target object corresponding to respective three-dimensional bounding boxes based on the second feature.
2. The method of claim 1, wherein the convolutional network is obtained by training with a training image set comprising training images, the training image set further comprising a plurality of other training images obtained by scaling the size of the training images.
3. The method of claim 1, wherein the three-dimensional spatial information comprises position information, three-dimensional spatial dimension information, or orientation angle information.
4. An image detection apparatus comprising:
an image acquisition unit configured to acquire a target image including a target object;
an image input unit configured to input the target image to a convolution network including a plurality of convolution layers to obtain an output of each of the plurality of convolution layers;
a feature map acquisition unit configured to obtain a plurality of feature maps of the target image corresponding to a plurality of scales based on a plurality of outputs of the plurality of convolution layers, wherein each of the plurality of outputs corresponds to a plurality of channels, the feature map acquisition unit comprising:
a weight acquisition unit configured to obtain, for each of the plurality of outputs, a weight corresponding to the output corresponding to each of the plurality of channels, the weight acquisition unit including:
a pooling unit configured to input the output into a global pooling network for aggregating information of spatial dimensions on each of the plurality of channels in the output to obtain a first feature, the first feature comprising aggregated information of spatial dimensions of each of the plurality of channels; and
a second obtaining subunit, configured to input the first feature into two fully-connected layers connected in sequence, so as to obtain a weight of each channel in the plurality of channels corresponding to the output;
an updating unit configured to obtain, for each of the plurality of outputs, an updated output based on a plurality of weights of the output corresponding to a plurality of channels; and
a first acquisition subunit configured to obtain the plurality of feature maps based on a plurality of updated outputs corresponding to the plurality of outputs;
a fusion feature acquisition unit configured to obtain a multi-scale fusion feature based on the plurality of feature maps, wherein the fusion feature acquisition unit includes:
a scale adjustment unit configured to adjust a scale of each of the plurality of feature maps to a first scale, the first scale being not smaller than a maximum scale of the plurality of scales, the adjusting the scale of each of the plurality of feature maps to the first scale including:
inputting each of the plurality of feature maps into a deconvolution network to adjust each feature map to the first scale; and
a fusion unit configured to perform feature fusion on a plurality of adjusted feature maps corresponding to the plurality of feature maps to obtain the multi-scale fusion feature, including:
obtaining a weight corresponding to each of the plurality of feature maps;
weighting the plurality of adjusted feature maps based on a plurality of weights corresponding to the plurality of feature maps to obtain a plurality of weighted features; and
stacking the plurality of weighted features corresponding to the plurality of adjusted feature maps in a channel direction to obtain the multi-scale fusion feature; and
a detection result acquisition unit configured to obtain a detection result of the target object, the detection result indicating three-dimensional spatial information of the target object, based on the multi-scale fusion feature, wherein the detection result acquisition unit includes:
a feature extraction unit configured to input the multi-scale fusion feature to a feature extraction network to obtain a second feature, wherein a scale of the second feature is a smallest scale of the plurality of scales; and
and a third obtaining subunit configured to input the second feature to a prediction module to obtain the detection result, wherein the prediction module obtains the detection result by obtaining a regression value of the target object corresponding to each three-dimensional bounding box based on the second feature.
5. The apparatus of claim 4, wherein the convolutional network is obtained by training with a training image set comprising training images, the training image set further comprising a plurality of other training images obtained by scaling the size of the training images.
6. The apparatus of claim 4, wherein the three-dimensional spatial information comprises position information, three-dimensional spatial dimension information, or orientation angle information.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
8. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-3.
CN202210861722.0A 2022-07-20 2022-07-20 Image detection method, device, electronic equipment and storage medium Active CN115511779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210861722.0A CN115511779B (en) 2022-07-20 2022-07-20 Image detection method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210861722.0A CN115511779B (en) 2022-07-20 2022-07-20 Image detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115511779A CN115511779A (en) 2022-12-23
CN115511779B (en) 2024-02-20

Family

ID=84502774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210861722.0A Active CN115511779B (en) 2022-07-20 2022-07-20 Image detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115511779B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818964A (en) * 2021-03-31 2021-05-18 中国民航大学 Unmanned aerial vehicle detection method based on FoveaBox anchor-free neural network
CN113936256A (en) * 2021-10-15 2022-01-14 北京百度网讯科技有限公司 Image target detection method, device, equipment and storage medium
CN114419519A (en) * 2022-03-25 2022-04-29 北京百度网讯科技有限公司 Target object detection method and device, electronic equipment and storage medium
CN114445667A (en) * 2022-01-28 2022-05-06 北京百度网讯科技有限公司 Image detection method and method for training image detection model
WO2022134996A1 (en) * 2020-12-25 2022-06-30 Zhejiang Dahua Technology Co., Ltd. Lane line detection method based on deep learning, and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105917354A (en) * 2014-10-09 2016-08-31 微软技术许可有限责任公司 Spatial pyramid pooling networks for image processing

Also Published As

Publication number Publication date
CN115511779A (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN115147558B (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method and device
CN114743196B (en) Text recognition method and device and neural network training method
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
CN114445667A (en) Image detection method and method for training image detection model
CN115082740B (en) Target detection model training method, target detection device and electronic equipment
CN114723949A (en) Three-dimensional scene segmentation method and method for training segmentation model
CN114550313A (en) Image processing method, neural network, and training method, device, and medium thereof
CN115797660A (en) Image detection method, image detection device, electronic equipment and storage medium
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN114494797A (en) Method and apparatus for training image detection model
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment
CN115797455B (en) Target detection method, device, electronic equipment and storage medium
CN114821233B (en) Training method, device, equipment and medium of target detection model
CN115100431B (en) Target detection method, neural network, training method, training device and training medium thereof
CN115170536B (en) Image detection method, training method and device of model
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN116824609B (en) Document format detection method and device and electronic equipment
CN114118379B (en) Neural network training method, image processing method, device, equipment and medium
CN115620271B (en) Image processing and model training method and device
CN114140851B (en) Image detection method and method for training image detection model
CN117218297A (en) Human body reconstruction parameter generation method, device, equipment and medium
CN117274575A (en) Training method of target detection model, target detection method, device and equipment
CN117274370A (en) Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium
CN114882331A (en) Image processing method, apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant