CN116129243A - Object detection method, device and equipment for radar image and storage medium - Google Patents

Object detection method, device and equipment for radar image and storage medium Download PDF

Info

Publication number
CN116129243A
CN116129243A (application CN202310135890.6A)
Authority
CN
China
Prior art keywords
deep learning
learning model
radar
target
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310135890.6A
Other languages
Chinese (zh)
Inventor
曾振达
彭发东
李鑫
叶杭
刘思源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202310135890.6A priority Critical patent/CN116129243A/en
Publication of CN116129243A publication Critical patent/CN116129243A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiments of the invention disclose an object detection method, apparatus, and device for radar images, and a storage medium. The method comprises the following steps: acquiring a radar sample image and determining initial detection contribution values of different depth information to the object to be detected in the radar sample image; inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the model according to the initial detection contribution values, and performing object detection on the radar sample image; adjusting the model parameters and the initial detection contribution values according to the loss function of the model's object detection, to obtain a target deep learning model and target detection contribution values; and performing object detection on a target radar image using the target deep learning model and the target detection contribution values. By using detection contribution values to introduce an attention mechanism over different depth information in the radar image, the method can improve the accuracy of object detection in radar images.

Description

Object detection method, device and equipment for radar image and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting an object in a radar image.
Background
In engineering applications, radar images provide reliable information for downstream tasks. Accurately detecting and identifying objects in radar images enables better engineering design, inspection, and maintenance.
Because the subsurface medium is non-uniform and contains impurities, useful information in a radar image tends to be hidden in a noisy background, which lowers the accuracy of object detection. In addition, radar images differ from conventional images; for example, they do not have the rotational semantic invariance of conventional images. Directly applying detection methods designed for conventional images to radar images therefore also yields low object detection accuracy.
Disclosure of Invention
The invention provides an object detection method, apparatus, and device for radar images, and a storage medium, so as to improve the accuracy of object detection in radar images.
According to an aspect of the present invention, there is provided an object detection method of a radar image, the method comprising:
acquiring a radar sample image, and determining initial detection contribution values of different depth information to an object to be detected in the radar sample image;
inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the deep learning model to be trained according to the initial detection contribution value, and performing object detection on the radar sample image;
adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to a loss function of object detection by the deep learning model to be trained, to obtain a target deep learning model and a target detection contribution value;
and performing object detection on a target radar image by using the target deep learning model and the target detection contribution value.
Optionally, determining initial detection contribution values of different depth information to the object to be detected in the radar sample image includes:
determining a target detection depth range of the radar sample image according to the depth range of the object to be detected in the radar sample image;
and determining, in the radar sample image, that the initial detection contribution value of depth information outside the target detection depth range to the object to be detected is zero.
Optionally, introducing an attention mechanism in the deep learning model to be trained according to the initial detection contribution value includes:
in the deep learning model to be trained, a first convolution result is obtained after the first convolution operation of the radar sample image;
and weighting the first convolution result according to the initial detection contribution value, and performing subsequent computation on the weighted result instead of the first convolution result, so as to introduce an attention mechanism into the deep learning model to be trained.
Optionally, adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to the loss function of object detection of the deep learning model to be trained, including:
and according to a loss function comprising classification loss, anchor frame loss and cross entropy loss in the object detection of the deep learning model to be trained, adjusting model parameters of the deep learning model to be trained and the initial detection contribution value.
Optionally, the deep learning model to be trained includes: a deep learning model that uses the Mask R-CNN model as its backbone network module.
Optionally, performing object detection on the target radar image by using the target deep learning model and the target detection contribution value includes:
inputting the target radar image into the target deep learning model, and obtaining a second convolution result after the first convolution operation on the target radar image;
and weighting the second convolution result according to the target detection contribution value, and performing subsequent computation on the weighted result instead of the second convolution result, to obtain an object detection result of the target radar image.
According to another aspect of the present invention, there is provided an object detection apparatus of a radar image, the apparatus including:
the initial detection contribution value determining module is used for acquiring a radar sample image and determining initial detection contribution values of different depth information to the object to be detected in the radar sample image;
the attention mechanism introducing module is used for inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the deep learning model to be trained according to the initial detection contribution value, and detecting an object of the radar sample image;
the parameter adjustment module is used for adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to the loss function detected by the object of the deep learning model to be trained to obtain a target deep learning model and a target detection contribution value;
and the object detection module is used for carrying out object detection on the target radar image by adopting the target deep learning model and the target detection contribution value.
Optionally, the initial detection contribution value determining module includes:
a target detection depth range determining unit, configured to determine a target detection depth range of the radar sample image according to a depth range of the object to be detected in the radar sample image;
and the initial detection contribution value determining unit is used for determining, in the radar sample image, that the initial detection contribution value of depth information outside the target detection depth range to the object to be detected is zero.
Optionally, the attention mechanism introducing module includes:
the first convolution result determining unit is used for obtaining a first convolution result after the first convolution operation of the radar sample image in the deep learning model to be trained;
and the attention mechanism introducing unit is used for carrying out weighting processing on the first convolution result according to the initial detection contribution value and carrying out subsequent calculation on the weighted processing result instead of the first convolution result so as to introduce an attention mechanism in the deep learning model to be trained.
Optionally, the parameter adjustment module includes:
and the parameter adjustment unit is used for adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to a loss function comprising a classification loss, an anchor box loss, and a cross-entropy loss for object detection by the deep learning model to be trained.
Optionally, the deep learning model to be trained includes: a deep learning model that uses the Mask R-CNN model as its backbone network module.
Optionally, the object detection module includes:
the second convolution result determining unit is used for inputting the target radar image into the target deep learning model, and obtaining a second convolution result after the first convolution operation of the target radar image;
and the object detection result determining unit is used for carrying out weighting processing on the second convolution result according to the target detection contribution value, and carrying out subsequent calculation on the weighting processing result instead of the second convolution result to obtain an object detection result of the target radar image.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object detection method of radar images according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to execute an object detection method for radar image according to any one of the embodiments of the present invention.
According to the technical solution of the embodiments of the invention, a radar sample image is acquired, and initial detection contribution values of different depth information to the object to be detected in the radar sample image are determined; the radar sample image is input into a deep learning model to be trained, an attention mechanism is introduced into the model according to the initial detection contribution values, and object detection is performed on the radar sample image; the model parameters and the initial detection contribution values are adjusted according to the loss function of the model's object detection, yielding a target deep learning model and target detection contribution values; and object detection is performed on a target radar image using the target deep learning model and the target detection contribution values. This solves the problem of object detection in radar images, and by using detection contribution values to introduce an attention mechanism over different depth information in the radar image, it can improve the accuracy of object detection in radar images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an object detection method of a radar image according to a first embodiment of the present invention;
fig. 2 is a flowchart of an object detection method of a radar image according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a deep learning model to be trained for introducing an attention mechanism according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a radar sample image according to a second embodiment of the present invention;
fig. 5 is a schematic structural view of an object detection device for radar image according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device implementing an object detection method of a radar image according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a method for detecting an object in a radar image according to an embodiment of the present invention, where the method may be performed by an object detection device in a radar image, and the object detection device in a radar image may be implemented in hardware and/or software, and the object detection device in a radar image may be configured in an electronic device, such as a computer. As shown in fig. 1, the method includes:
step 110, acquiring a radar sample image, and determining initial detection contribution values of different depth information to an object to be detected in the radar sample image.
In this embodiment, the radar sample image may be a cross-sectional imaging (B-scan) radar image. The abscissa of a B-scan radar image is typically the horizontal distance, and the ordinate is the travel time of the radar signal in the medium; the ordinate therefore reflects the depth of the detected object. Because the abscissa and ordinate of a B-scan radar image have different physical meanings, a B-scan radar image differs from a conventional image: it does not have rotational semantic invariance, and the signal at different ordinates (i.e., depths) has different degrees of importance.
For example, an n-dimensional vector may be used to represent the initial detection contribution values of different depth information in the radar sample image to the object to be detected, where n may be the number of rows in the radar sample image. The n initial detection contribution values sum to 1. They may be arbitrary values whose sum is 1, or they may all be set to the same value, i.e., 1/n.
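A minimal sketch of this initialisation, assuming a PyTorch-style implementation (which the embodiment does not specify); the row count is illustrative. Registering the vector as a parameter lets it be adjusted together with the model weights later.

```python
import torch

n_rows = 256  # assumed number of rows (depth samples) in the radar sample image

# Uniform initialisation: every depth contributes equally and the values sum to 1.
initial_contribution = torch.full((n_rows,), 1.0 / n_rows)

# Registering the vector as a learnable parameter so it can be adjusted together
# with the model weights during training (see step 130).
contribution = torch.nn.Parameter(initial_contribution.clone())
assert torch.isclose(contribution.sum(), torch.tensor(1.0))
```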
Step 120, inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the deep learning model to be trained according to the initial detection contribution value, and performing object detection on the radar sample image.
The attention mechanism is introduced into the deep learning model to be trained according to the initial detection contribution values, which strengthens the attention paid to different depth information in the radar sample image. Specifically, the attention mechanism is introduced by using the n-dimensional vector to weight the feature information at different depths in the radar sample image. Performing object detection on the radar sample image with the deep learning model to be trained, i.e., iteratively training it, yields the target deep learning model.
Step 130, adjusting the model parameters and the initial detection contribution values of the deep learning model to be trained according to the loss function of object detection by the deep learning model to be trained, to obtain a target deep learning model and target detection contribution values.
The loss function may include, but is not limited to, a classification loss. Training the deep learning model to be trained with the loss function may be performed as iterative training that minimizes the loss. When the loss reaches its minimum, the corresponding model parameters and detection contribution values can be taken as the model parameters of the target deep learning model and the target detection contribution values, respectively.
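A hedged, self-contained sketch of this joint optimisation (PyTorch assumed; the one-layer network, dummy data, and loss are stand-ins, not the embodiment's actual model): both the network weights and the contribution vector are updated by gradient descent, and the state with the smallest loss is kept as the target model and target detection contribution values.

```python
import torch
import torch.nn as nn

# Stand-ins for illustration only (not the patent's actual network or data).
model = nn.Conv2d(1, 8, kernel_size=3, padding=1)            # plays the "first convolution"
contribution = nn.Parameter(torch.full((64,), 1.0 / 64))      # per-depth contribution vector
images = torch.randn(4, 1, 64, 64)                            # dummy B-scan batch
targets = torch.randint(0, 2, (4, 8, 64, 64)).float()         # dummy per-pixel labels

def compute_loss(images, targets):
    feats = model(images)                                      # first convolution result
    weighted = feats * contribution.view(1, 1, -1, 1)          # depth-wise attention weighting
    return nn.functional.binary_cross_entropy_with_logits(weighted, targets)

optimizer = torch.optim.SGD(list(model.parameters()) + [contribution], lr=1e-2)

best_loss, best_state = float("inf"), None
for step in range(100):
    loss = compute_loss(images, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < best_loss:                                # keep the parameters that
        best_loss = loss.item()                                 # gave the smallest loss so far
        best_state = ({k: v.detach().clone() for k, v in model.state_dict().items()},
                      contribution.detach().clone())
# best_state then plays the role of the target deep learning model parameters
# and the target detection contribution values.
```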
Step 140, performing object detection on the target radar image by using the target deep learning model and the target detection contribution values.
To perform object detection on the target radar image, the target radar image may be input into the target deep learning model, and the different depth information of the target radar image may be weighted according to the target detection contribution values, i.e., the attention mechanism is introduced. The target deep learning model with the attention mechanism introduced then produces the object detection result for the target radar image.
According to the technical solution of this embodiment, a radar sample image is acquired, and initial detection contribution values of different depth information to the object to be detected in the radar sample image are determined; the radar sample image is input into a deep learning model to be trained, an attention mechanism is introduced into the model according to the initial detection contribution values, and object detection is performed on the radar sample image; the model parameters and the initial detection contribution values are adjusted according to the loss function of the model's object detection, yielding a target deep learning model and target detection contribution values; and object detection is performed on the target radar image using the target deep learning model and the target detection contribution values, thereby solving the problem of object detection in radar images.
Example two
Fig. 2 is a flowchart of a radar image object detection method according to a second embodiment of the present invention, where the technical solution is further refined, and the technical solution in this embodiment may be combined with each of the alternatives in one or more embodiments. As shown in fig. 2, the method includes:
step 210, acquiring a radar sample image, and determining initial detection contribution values of different depth information to the object to be detected in the radar sample image.
In an optional implementation manner of the embodiment of the present invention, determining initial detection contribution values of different depth information to an object to be detected in a radar sample image includes: determining a target detection depth range of the radar sample image according to the depth range of the object to be detected in the radar sample image; in the radar sample image, determining that the initial detection contribution value of depth information in a non-target detection depth range to an object to be detected is zero.
The uppermost part of a B-scan radar image represents the propagation of radar waves from the air to the ground surface, and this part generally contains no object to be detected. The object to be detected therefore exists only within a certain depth range of the B-scan radar image. To improve detection accuracy, objects may be detected only within the target detection depth range of the radar sample image. Specifically, the initial detection contribution values corresponding to the non-target detection depth range of the radar sample image can be set to zero, which improves object detection accuracy.
Furthermore, in a specific application, the non-target detection depth range of the radar sample image, in which no object to be detected exists, can be removed before the initial detection contribution values are determined, reducing the image detection workload, as sketched below.
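A minimal sketch of both options (PyTorch assumed; the row count and depth range are illustrative assumptions): rows outside the target detection depth range receive zero contribution, and, optionally, those rows are cropped away entirely.

```python
import torch

n_rows = 256                    # assumed rows (depth samples) in the radar sample image
depth_range = (40, 200)         # assumed target detection depth range, in row indices

# Zero contribution outside the target detection depth range; the in-range rows
# share the total weight of 1.
w = torch.zeros(n_rows)
w[depth_range[0]:depth_range[1]] = 1.0 / (depth_range[1] - depth_range[0])

# Optional: drop the non-target rows altogether to reduce the detection workload.
radar_image = torch.randn(1, 1, n_rows, 300)                   # dummy B-scan (N, C, depth, distance)
cropped = radar_image[:, :, depth_range[0]:depth_range[1], :]  # shape (1, 1, 160, 300)
```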
Step 220, inputting the radar sample image into a deep learning model to be trained, and obtaining a first convolution result after the first convolution operation of the radar sample image in the deep learning model to be trained.
The deep learning model to be trained includes a deep learning model that uses the Mask R-CNN model as its backbone network module. The convolution operation extracts image features from the radar sample image, and the attention mechanism can be introduced into the deep learning model more effectively on the basis of these image features.
In this embodiment, the attention mechanism is introduced after the first convolution operation, so that more of the original feature information of the radar image is retained; introducing the attention mechanism on the basis of this original feature information can improve object detection accuracy.
Step 230, weighting the first convolution result according to the initial detection contribution values, and performing subsequent computation on the weighted result instead of the first convolution result, so as to introduce an attention mechanism into the deep learning model to be trained and perform object detection on the radar sample image.
The first convolution result may be an n×m matrix A, and the initial detection contribution values may form an n-dimensional column vector w. Weighting the first convolution result by the initial detection contribution values then gives B = diag(w)·A, i.e., row i of A is scaled by w_i. Subsequent computation uses B in place of A, so that depth information of different importance in the radar image enters the deep learning model with different weights, improving target detection performance for the radar image.
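A minimal sketch of this weighting step (PyTorch assumed; the dimensions are illustrative): the row-wise scaling B = diag(w)·A is equivalent to broadcasting w over the columns of A.

```python
import torch

n, m = 6, 8
A = torch.randn(n, m)                 # first convolution result (one feature channel)
w = torch.full((n,), 1.0 / n)         # initial detection contribution values

B = torch.diag(w) @ A                 # row i of A scaled by w[i]
assert torch.allclose(B, w.unsqueeze(1) * A)   # same result via broadcasting
# B replaces A in all subsequent computation of the network.
```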
Fig. 3 is a schematic structural diagram of a deep learning model to be trained for introducing an attention mechanism according to a second embodiment of the present invention. As shown in fig. 3, the attention mechanism can be introduced in the Mask R-CNN model in this embodiment. In particular, the attention mechanism may be introduced after the first convolution operation in the Mask R-CNN model.
Step 240, adjusting the model parameters and the initial detection contribution values of the deep learning model to be trained according to a loss function comprising a classification loss, an anchor box loss, and a cross-entropy loss for object detection by the deep learning model to be trained, to obtain a target deep learning model and target detection contribution values.
In this embodiment, the loss function L may include three parts: the classification loss, the anchor box loss, and the cross-entropy loss. Fig. 4 is a schematic diagram of a radar sample image according to the second embodiment of the present invention. Specifically, the loss function is L = Lcls + Lbox + Lmask, where Lcls denotes the classification loss, Lbox denotes the anchor box (bounding-box) loss, and Lmask denotes the cross-entropy loss of the mask (template). The anchor box may be the smallest rectangle that can enclose the convex shape in fig. 4, and the mask may be the convex curve in fig. 4.
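As a hedged illustration only (the embodiment does not specify an implementation), torchvision's Mask R-CNN can serve as a stand-in: in training mode it returns a loss dictionary, and summing its classifier, box-regression, and mask terms mirrors L = Lcls + Lbox + Lmask. The dictionary keys and the dummy target follow torchvision's conventions (a recent version is assumed), and the RPN losses are omitted for brevity.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)   # background + one object class
model.train()

# One dummy image and target in torchvision's expected format.
images = [torch.randn(3, 256, 256)]
targets = [{
    "boxes": torch.tensor([[60.0, 80.0, 120.0, 140.0]]),      # smallest enclosing rectangle
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),     # mask (template) of the object
}]

loss_dict = model(images, targets)
loss = (loss_dict["loss_classifier"]      # Lcls
        + loss_dict["loss_box_reg"]       # Lbox
        + loss_dict["loss_mask"])         # Lmask
loss.backward()
```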
Step 250, inputting the target radar image into the target deep learning model, and obtaining a second convolution result after the first convolution operation of the target radar image.
Step 260, weighting the second convolution result according to the target detection contribution values, and performing subsequent computation on the weighted result instead of the second convolution result, to obtain the object detection result of the target radar image.
Because the target deep learning model and the target detection contribution values are obtained by training the deep learning model to be trained on radar sample images, performing object detection on the target radar image with them can improve object detection accuracy.
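A sketch of the inference path in steps 250-260 (PyTorch assumed; the layer, sizes, and learned vector are stand-ins): a forward hook on the first convolution reweights its output by the target detection contribution values before the remaining layers run, which is one simple way to realise "introduce the attention mechanism after the first convolution" without restructuring the backbone.

```python
import torch
import torch.nn as nn

first_conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)     # stand-in for the model's first convolution
target_contribution = torch.full((64,), 1.0 / 64)            # learned target detection contribution values (assumed)

def depth_attention_hook(module, inputs, output):
    # Scale each row (depth) of the convolution output -- the "second convolution
    # result" above -- by its contribution value; the returned tensor replaces
    # the original output for all subsequent layers.
    return output * target_contribution.view(1, 1, -1, 1)

first_conv.register_forward_hook(depth_attention_hook)

target_radar_image = torch.randn(1, 1, 64, 64)               # dummy target B-scan image
weighted_features = first_conv(target_radar_image)           # hook applies the weighting here
# weighted_features would then feed the remaining layers of the target deep
# learning model to produce the object detection result.
```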
According to the technical solution of this embodiment, a radar sample image is acquired, and initial detection contribution values of different depth information to the object to be detected in the radar sample image are determined; the radar sample image is input into a deep learning model to be trained, and a first convolution result is obtained after the first convolution operation on the radar sample image; the first convolution result is weighted according to the initial detection contribution values, and the weighted result replaces the first convolution result in subsequent computation, so as to introduce an attention mechanism into the deep learning model to be trained and perform object detection on the radar sample image; the model parameters and the initial detection contribution values are adjusted according to a loss function comprising a classification loss, an anchor box loss, and a cross-entropy loss, yielding a target deep learning model and target detection contribution values; the target radar image is input into the target deep learning model, and a second convolution result is obtained after the first convolution operation on the target radar image; and the second convolution result is weighted according to the target detection contribution values, with the weighted result replacing the second convolution result in subsequent computation, to obtain the object detection result of the target radar image. This solves the problem of object detection in radar images. By using detection contribution values to introduce an attention mechanism over different depth information in the radar image, the method can improve the accuracy of object detection in radar images; and introducing the attention mechanism after the first convolution operation retains more of the original image features, further improving object detection accuracy.
In the technical solutions of the embodiments of the present invention, the acquisition, storage, and use of the radar images involved comply with the relevant laws and regulations and do not violate public order and good morals.
Example III
Fig. 5 is a schematic structural diagram of an object detection device for radar image according to a third embodiment of the present invention. As shown in fig. 5, the apparatus includes: an initial detection contribution determination module 310, an attention mechanism introduction module 320, a parameter adjustment module 330, and an object detection module 340. Wherein:
an initial detection contribution value determining module 310, configured to acquire a radar sample image and determine initial detection contribution values of different depth information to the object to be detected in the radar sample image;
the attention mechanism introducing module 320 is configured to input a radar sample image to a deep learning model to be trained, introduce an attention mechanism into the deep learning model to be trained according to an initial detection contribution value, and perform object detection on the radar sample image;
the parameter adjustment module 330 is configured to adjust model parameters and initial detection contribution values of the deep learning model to be trained according to a loss function of object detection of the deep learning model to be trained, so as to obtain a target deep learning model and a target detection contribution value;
the object detection module 340 is configured to perform object detection on the target radar image using the target deep learning model and the target detection contribution value.
Optionally, the initial detection contribution value determining module 310 includes:
the target detection depth range determining unit is used for determining a target detection depth range of the radar sample image according to the depth range of the object to be detected in the radar sample image;
and the initial detection contribution value determining unit is used for determining, in the radar sample image, that the initial detection contribution value of depth information in the non-target detection depth range to the object to be detected is zero.
Optionally, the attention mechanism introducing module 320 includes:
the first convolution result determining unit is used for obtaining a first convolution result after the first convolution operation of the radar sample image in the deep learning model to be trained;
and the attention mechanism introducing unit is used for carrying out weighting processing on the first convolution result according to the initial detection contribution value and carrying out subsequent calculation on the weighted processing result instead of the first convolution result so as to introduce an attention mechanism into the deep learning model to be trained.
Optionally, the parameter adjustment module 330 includes:
and the parameter adjusting unit is used for adjusting the model parameters and the initial detection contribution values of the deep learning model to be trained according to a loss function comprising a classification loss, an anchor box loss, and a cross-entropy loss for object detection by the deep learning model to be trained.
Optionally, the deep learning model to be trained includes: a deep learning model that uses the Mask R-CNN model as its backbone network module.
Optionally, the object detection module 340 includes:
the second convolution result determining unit is used for inputting the target radar image into the target deep learning model and obtaining a second convolution result after the first convolution operation of the target radar image;
and the object detection result determining unit is used for carrying out weighting processing on the second convolution result according to the target detection contribution value, and carrying out subsequent calculation on the weighted processing result instead of the second convolution result to obtain an object detection result of the target radar image.
The object detection device for the radar image provided by the embodiment of the invention can execute the object detection method for the radar image provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the respective methods and processes described above, for example, an object detection method of a radar image.
In some embodiments, the object detection method of the radar image may be implemented as a computer program, which is tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above-described object detection method of a radar image may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the object detection method of the radar image in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of object detection of a radar image, the method comprising:
acquiring a radar sample image, and determining initial detection contribution values of different depth information to an object to be detected in the radar sample image;
inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the deep learning model to be trained according to the initial detection contribution value, and detecting an object of the radar sample image;
according to the loss function of the object detection of the deep learning model to be trained, adjusting model parameters of the deep learning model to be trained and the initial detection contribution value to obtain a target deep learning model and a target detection contribution value;
and carrying out object detection on the target radar image by adopting the target deep learning model and the target detection contribution value.
2. The method of claim 1, wherein determining initial detection contribution values of different depth information in the radar sample image for an object to be detected comprises:
determining a target detection depth range of the radar sample image according to the depth range of the object to be detected in the radar sample image;
and determining, in the radar sample image, that the initial detection contribution value of depth information outside the target detection depth range to the object to be detected is zero.
3. The method of claim 1, wherein introducing a mechanism of attention in the deep learning model to be trained based on the initial detection contribution value comprises:
in the deep learning model to be trained, a first convolution result is obtained after the first convolution operation of the radar sample image;
and carrying out weighting processing on the first convolution result according to the initial detection contribution value, and carrying out subsequent calculation on the weighted processing result instead of the first convolution result so as to introduce an attention mechanism into the deep learning model to be trained.
4. The method of claim 1, wherein adjusting model parameters of the deep learning model to be trained and the initial detection contribution value according to a loss function of object detection of the deep learning model to be trained comprises:
and adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to a loss function comprising a classification loss, an anchor box loss, and a cross-entropy loss for object detection by the deep learning model to be trained.
5. The method of claim 1, wherein the deep learning model to be trained comprises: a deep learning model that uses the Mask R-CNN model as its backbone network module.
6. The method of claim 3, wherein performing object detection on a target radar image by using the target deep learning model and the target detection contribution value comprises:
inputting the target radar image into the target deep learning model, and obtaining a second convolution result after the first convolution operation of the target radar image;
and carrying out weighting processing on the second convolution result according to the target detection contribution value, and carrying out subsequent calculation on the weighting processing result instead of the second convolution result to obtain an object detection result of the target radar image.
7. An object detection device for radar images, the device comprising:
the initial detection contribution value determining module is used for acquiring a radar sample image and determining initial detection contribution values of different depth information to the object to be detected in the radar sample image;
the attention mechanism introducing module is used for inputting the radar sample image into a deep learning model to be trained, introducing an attention mechanism into the deep learning model to be trained according to the initial detection contribution value, and detecting an object of the radar sample image;
the parameter adjustment module is used for adjusting the model parameters of the deep learning model to be trained and the initial detection contribution value according to the loss function detected by the object of the deep learning model to be trained to obtain a target deep learning model and a target detection contribution value;
and the object detection module is used for carrying out object detection on the target radar image by adopting the target deep learning model and the target detection contribution value.
8. The apparatus of claim 7, wherein the initial detection contribution determination module comprises:
a target detection depth range determining unit, configured to determine a target detection depth range of the radar sample image according to a depth range of the object to be detected in the radar sample image;
and the initial detection contribution value determining unit is used for determining, in the radar sample image, that the initial detection contribution value of depth information outside the target detection depth range to the object to be detected is zero.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object detection method of radar images according to any one of claims 1-6.
10. A computer readable storage medium storing computer instructions for causing a processor to execute the object detection method of radar image according to any one of claims 1-6.
CN202310135890.6A 2023-02-17 2023-02-17 Object detection method, device and equipment for radar image and storage medium Pending CN116129243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310135890.6A CN116129243A (en) 2023-02-17 2023-02-17 Object detection method, device and equipment for radar image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310135890.6A CN116129243A (en) 2023-02-17 2023-02-17 Object detection method, device and equipment for radar image and storage medium

Publications (1)

Publication Number Publication Date
CN116129243A true CN116129243A (en) 2023-05-16

Family

ID=86299072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310135890.6A Pending CN116129243A (en) 2023-02-17 2023-02-17 Object detection method, device and equipment for radar image and storage medium

Country Status (1)

Country Link
CN (1) CN116129243A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination