CN116152576A - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116152576A
Authority
CN
China
Prior art keywords
bounding box
preset
regressor
generalized
image
Prior art date
Legal status
Granted
Application number
CN202310416282.2A
Other languages
Chinese (zh)
Other versions
CN116152576B (en)
Inventor
明安龙
梁文腾
薛峰
康学净
马华东
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202310416282.2A
Publication of CN116152576A
Application granted
Publication of CN116152576B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N 20/00 — Machine learning
    • G06V 10/766 — Image or video recognition or understanding using pattern recognition or machine learning, using regression, e.g. by projecting features on hyperplanes
    • G06V 10/7715 — Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; mappings, e.g. subspace methods
    • G06V 10/776 — Validation; performance evaluation
    • G06V 2201/07 — Target detection
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, apparatus, device, and storage medium. The method acquires an image to be processed; extracts features from the image through a preset target detection model to obtain bounding boxes and bounding box features; performs detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects; and performs detection through a preset classifier and a preset bounding box displacement regressor according to the bounding boxes and bounding box features to obtain known objects, the preset bounding box displacement regressor being trained on the bounding box features and bounding box displacement vectors of image samples. On the premise that the ability to detect known objects is essentially unchanged, the method achieves effective detection of unknown objects and improves their detection precision; furthermore, false detection of non-objects is reduced through negative energy suppression, and unknown objects are accurately localized with an adaptive candidate box screening algorithm.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
Object detection is one of the most basic tasks in computer vision; it aims to predict the class and bounding box of each object in an input image. However, because real-world objects are numerous and labeling is expensive, object detection has traditionally been built on the closed-world assumption: the detector only needs to detect a limited number of objects from the classes it has learned. In recent years, the rapid development of autonomous driving and robotics has placed higher demands on object detection. A detector must find not only objects of predefined classes, i.e. known objects, but also objects it has never seen during training, i.e. unknown objects, so that an unmanned car or robot can cope with more challenging environments. Models designed under the closed-world assumption cannot meet these requirements, because during training the model learns any unknown object that appears in a training image as background; the detector therefore cannot recognize unknown objects at test time.
Currently, there are two main approaches to identifying unknown objects: open-set classification and detection, and open-world object detection. Open-set classification and detection designs an uncertainty measure over the feature differences between unknown and known objects to find unknown objects that the detector has misclassified as known. Open-world object detection aims to let the model detect both known and unknown objects, improving unknown-object detection by automatically labeling pseudo-unknown objects during training; the model can also incrementally learn annotations for new classes.
However, the detection precision for unknown objects in the prior art remains low.
Disclosure of Invention
The application provides an image processing method, apparatus, device, and storage medium, so as to solve the technical problem of low detection precision for unknown objects in the prior art.
In a first aspect, the present application provides an image processing method, including:
acquiring an image to be processed;
extracting features from the image to be processed through a preset target detection model to obtain bounding boxes and bounding box features;
performing detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects, wherein the preset generalized object confidence regressor is trained on the bounding box features and generalized object confidences of image samples;
and performing detection through a preset classifier and a preset bounding box displacement regressor according to the bounding boxes and bounding box features to obtain known objects, wherein the preset classifier is trained on the bounding box features and class probabilities of image samples, and the preset bounding box displacement regressor is trained on the bounding box features and bounding box displacement vectors of the image samples.
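A minimal sketch of the two detection branches described above — a classifier plus displacement regressor for known objects, and a generalized object confidence regressor for unknown objects. All field names and threshold values here are illustrative assumptions, not values specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    box: tuple            # (x1, y1, x2, y2) from the detection model
    class_probs: dict     # known-class name -> probability (classifier branch)
    objectness: float     # generalized object confidence (regressor branch)

def route_candidates(cands, known_thresh=0.5, unknown_thresh=0.7):
    """Route each candidate box either to the known-object output (when the
    classifier is confident) or to the unknown-object output (when only the
    generalized object confidence is high)."""
    known, unknown = [], []
    for c in cands:
        label, prob = max(c.class_probs.items(), key=lambda kv: kv[1])
        if prob >= known_thresh:
            known.append((label, c.box))
        elif c.objectness >= unknown_thresh:
            unknown.append(("unknown", c.box))
    return known, unknown
```

Candidates that neither branch accepts are treated as background, which mirrors how the method keeps known-object detection intact while still surfacing unknowns.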
The application provides an image processing method that can detect known and unknown objects in an image at the same time. For an image requiring object recognition, bounding boxes and bounding box features are first extracted. Bounding boxes of unknown objects are then detected through a pre-built generalized object confidence regressor, which learns generalized object features from known objects so that unknown objects are fully captured; known objects can be detected through the pre-built classifier and bounding box displacement regressor. Accurate detection of both unknown and known objects is thus achieved: on the premise that the ability to detect known objects is essentially unchanged, unknown objects are effectively detected and their detection precision is improved.
Optionally, the performing detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects includes: calculating the generalized object confidence of each bounding box through the preset generalized object confidence regressor according to the bounding boxes and bounding box features, and performing a first screening of the bounding boxes according to the generalized object confidences to obtain bounding boxes to be processed; and performing a second screening of the bounding boxes to be processed through an adaptive bounding box screening mechanism according to their generalized object confidences to obtain the bounding boxes of unknown objects.
Optionally, the performing a second screening of the bounding boxes to be processed through an adaptive bounding box screening mechanism according to their generalized object confidences to obtain bounding boxes of unknown objects includes: constructing the bounding boxes to be processed into a weighted undirected graph, wherein each node in the graph represents one bounding box to be processed and the weight of each edge is the degree of overlap between the two nodes it connects; iteratively decomposing the whole image to be processed into N subgraphs through a recursive normalized-cut algorithm until the normalized-cut cost of each subgraph is below a preset segmentation threshold, where N is any positive integer; and, in each subgraph, determining the bounding box to be processed with the highest generalized object confidence score as a bounding box of an unknown object.
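A small sketch of this adaptive screening step. For readability it finds the minimum normalized-cut bipartition by exhaustive search over the (small) candidate set rather than the spectral relaxation usually used in practice; the IoU edge weights, the threshold value, and the split-acceptance convention (keep splitting while a cheap cut exists) are illustrative assumptions:

```python
from itertools import combinations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def ncut_cost(W, part_a, part_b):
    """Normalized-cut cost of a bipartition, computed within the subgraph:
    cut(A,B)/assoc(A,A∪B) + cut(A,B)/assoc(B,A∪B)."""
    nodes = list(part_a) + list(part_b)
    cut = sum(W[i][j] for i in part_a for j in part_b)
    assoc = lambda part: sum(W[i][j] for i in part for j in nodes)
    return cut / max(assoc(part_a), 1e-12) + cut / max(assoc(part_b), 1e-12)

def best_split(W, idx):
    """Minimum-ncut bipartition of idx, by exhaustive search (fine for the
    small candidate sets left after the first confidence screening)."""
    best = (float("inf"), (), ())
    for r in range(1, len(idx)):
        for a in combinations(idx, r):
            b = tuple(i for i in idx if i not in a)
            cost = ncut_cost(W, a, b)
            if cost < best[0]:
                best = (cost, a, b)
    return best

def screen_boxes(boxes, scores, tau=1.1):
    """Recursively split the IoU graph while a cheap cut exists, then keep
    the box with the highest generalized object confidence in each group."""
    n = len(boxes)
    W = [[0.0 if i == j else iou(boxes[i], boxes[j]) for j in range(n)]
         for i in range(n)]
    groups, stack = [], [tuple(range(n))]
    while stack:
        idx = stack.pop()
        if len(idx) < 2:
            groups.append(idx)
            continue
        cost, a, b = best_split(W, idx)
        if cost < tau:        # cheap cut: clusters are separable, recurse
            stack += [a, b]
        else:                 # cohesive cluster: stop splitting
            groups.append(idx)
    return sorted(max(g, key=lambda i: scores[i]) for g in groups)
```

With two heavily overlapping boxes plus one nearly disjoint box, the recursion separates the disjoint box into its own subgraph and one representative survives per group, which is the adaptive analogue of picking a local confidence maximum.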
Optionally, before the detection through the preset generalized object confidence regressor according to the bounding boxes and bounding box features, the method further includes: acquiring image samples; inputting the bounding box features and generalized object confidences of the image samples to a two-stage target detector and training to obtain the preset generalized object confidence regressor; inputting the bounding box features and class probabilities of the image samples to the two-stage target detector and training to obtain the preset classifier; and inputting the bounding box features and bounding box displacement vectors of the image samples to the two-stage target detector and training to obtain the preset bounding box displacement regressor.
Optionally, after inputting the bounding box features and bounding box displacement vectors of the image samples to the two-stage target detector and training to obtain the preset bounding box displacement regressor, the method further includes: optimizing the preset classifier through negative energy suppression to obtain an optimized classifier; and/or optimizing the preset generalized object confidence regressor to obtain an optimized generalized object confidence regressor; and/or optimizing the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor. Correspondingly, the detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects includes: performing detection with the optimized generalized object confidence regressor according to the bounding boxes and bounding box features to obtain the bounding boxes of unknown objects; and the detection through a preset classifier and a preset bounding box displacement regressor according to the bounding boxes and bounding box features to obtain known objects includes: performing detection with the optimized classifier and the optimized bounding box displacement regressor according to the bounding boxes and bounding box features to obtain the known objects.
Optionally, the optimizing the preset classifier through negative energy suppression to obtain an optimized classifier includes: training the preset classifier with negative energy suppression, combining a cross-entropy loss function and an uncertainty-measure loss function based on virtual-sample synthesis, to obtain the optimized classifier;
the optimizing the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor includes: training the preset bounding box displacement regressor through a preset regression loss function to obtain the optimized bounding box displacement regressor;
the optimizing the preset generalized object confidence regressor to obtain an optimized generalized object confidence regressor includes: setting K instances in the image sample, where K is any positive integer; defining two indices for the image sample, an intersection-over-prediction ratio and an intersection-over-ground-truth ratio, both computed from the K instances and the bounding box samples in the image sample; classifying the bounding box samples in the image samples according to these two ratios, assigning bounding box samples containing the same object instance to the same group to obtain K groups of bounding box samples, and dividing the bounding box samples into a complete object set, a local object set, an out-of-boundary object set, and a non-object set; obtaining a first loss parameter from a first preset generalized object confidence score and the complete object set; obtaining a second loss parameter from a second preset generalized object confidence score and the local object set and/or the out-of-boundary object set; obtaining a third loss parameter from the complete object set through contrastive learning; and training the preset generalized object confidence regressor with the first, second, and third loss parameters to obtain the optimized generalized object confidence regressor.
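One way to read the sample-grouping step above. Assuming the two ratios are intersection-over-prediction |B∩G|/|B| and intersection-over-ground-truth |B∩G|/|G| (an interpretation of the translated terms), and with illustrative 0.5 thresholds, each bounding box sample falls into exactly one of the four sets:

```python
def inter_area(b, g):
    """Overlap area of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(b[2], g[2]) - max(b[0], g[0]))
    iy = max(0, min(b[3], g[3]) - max(b[1], g[1]))
    return ix * iy

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def categorize_sample(box, gt_boxes, hi=0.5):
    """Assign a bounding box sample to one of the four sets used when
    training the generalized object confidence regressor."""
    inter, gt = max(((inter_area(box, g), g) for g in gt_boxes),
                    key=lambda t: t[0])
    iop = inter / area(box)   # how much of the prediction lies on the object
    iog = inter / area(gt)    # how much of the object the prediction covers
    if iop >= hi and iog >= hi:
        return "complete"         # tight box around the whole instance
    if iop >= hi:
        return "local"            # box sits inside the object (partial view)
    if iog >= hi:
        return "out-of-boundary"  # box spills far beyond the object
    return "non-object"           # box barely touches any instance
```

Complete-object samples then drive the first (high-confidence) loss term, local and out-of-boundary samples the second, and the contrastive term is computed within the complete object set.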
Optionally, after the detection through the preset classifier and preset bounding box displacement regressor according to the bounding boxes and bounding box features, the method further includes: fusing the bounding boxes of unknown objects with the known objects to obtain an object set.
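The claim does not spell out the fusion procedure; one plausible sketch (an assumption, not the patent's specified method) merges the two result sets while dropping any unknown-object box that strongly overlaps a known detection:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(known, unknown, iou_thresh=0.5):
    """Merge known detections [(label, box), ...] with unknown-object boxes
    into one object set, suppressing unknown boxes already explained by a
    known detection (a cross-set variant of non-maximum suppression)."""
    kept_unknown = [("unknown", b) for b in unknown
                    if all(iou(b, kb) < iou_thresh for _, kb in known)]
    return known + kept_unknown
```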
In a second aspect, the present application provides an image processing apparatus including:
the acquisition module is used for acquiring the image to be processed;
the feature extraction module is used for extracting features of the image to be processed through a preset target detection model to obtain a bounding box and bounding box features;
the first object identification module, used to perform detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects, wherein the preset generalized object confidence regressor is trained on the bounding box features and generalized object confidences of image samples;
the second object identification module, used to perform detection through a preset classifier and a preset bounding box displacement regressor according to the bounding boxes and bounding box features to obtain known objects, wherein the preset classifier is trained on the bounding box features and class probabilities of image samples, and the preset bounding box displacement regressor is trained on the bounding box features and bounding box displacement vectors of the image samples.
Optionally, the first object identification module is specifically configured to: calculate the generalized object confidence of each bounding box through the preset generalized object confidence regressor according to the bounding boxes and bounding box features, and perform a first screening of the bounding boxes according to the generalized object confidences to obtain bounding boxes to be processed; and perform a second screening of the bounding boxes to be processed through an adaptive bounding box screening mechanism according to their generalized object confidences to obtain the bounding boxes of unknown objects.
Optionally, the first object identification module is further specifically configured to: construct the bounding boxes to be processed into a weighted undirected graph, wherein each node in the graph represents one bounding box to be processed and the weight of each edge is the degree of overlap between the two nodes it connects; iteratively decompose the whole image to be processed into N subgraphs through a recursive normalized-cut algorithm until the normalized-cut cost of each subgraph is below a preset segmentation threshold, where N is any positive integer; and, in each subgraph, determine the bounding box to be processed with the highest generalized object confidence score as a bounding box of an unknown object.
Optionally, before the first object identification module performs detection through a preset generalized object confidence regressor according to the bounding boxes and bounding box features to obtain bounding boxes of unknown objects, the apparatus further includes: a sample acquisition module, used to acquire image samples; a first training module, used to input the bounding box features and generalized object confidences of the image samples to the two-stage target detector and train the preset generalized object confidence regressor; a second training module, used to input the bounding box features and class probabilities of the image samples to the two-stage target detector and train the preset classifier; and a third training module, used to input the bounding box features and bounding box displacement vectors of the image samples to the two-stage target detector and train the preset bounding box displacement regressor.
Optionally, after the third training module inputs the bounding box features and bounding box displacement vectors of the image samples to the two-stage target detector and trains the preset bounding box displacement regressor, the apparatus further includes an optimization module, used to: optimize the preset classifier through negative energy suppression to obtain an optimized classifier; and/or optimize the preset generalized object confidence regressor to obtain an optimized generalized object confidence regressor; and/or optimize the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor.
Correspondingly, the first object identification module is specifically configured to perform detection with the optimized generalized object confidence regressor according to the bounding boxes and bounding box features to obtain the bounding boxes of unknown objects; and the second object identification module is specifically configured to perform detection with the optimized classifier and the optimized bounding box displacement regressor according to the bounding boxes and bounding box features to obtain the known objects.
Optionally, the optimization module is specifically configured to: train the preset classifier with negative energy suppression, combining a cross-entropy loss function and an uncertainty-measure loss function based on virtual-sample synthesis, to obtain the optimized classifier; and/or train the preset bounding box displacement regressor through a preset regression loss function to obtain the optimized bounding box displacement regressor; and/or set K instances in the image sample, where K is any positive integer; define an intersection-over-prediction ratio and an intersection-over-ground-truth ratio for the image sample, both computed from the K instances and the bounding box samples in the image sample; classify the bounding box samples according to the two ratios, assigning bounding box samples containing the same object instance to the same group to obtain K groups of bounding box samples, and divide them into a complete object set, a local object set, an out-of-boundary object set, and a non-object set; obtain a first loss parameter from a first preset generalized object confidence score and the complete object set; obtain a second loss parameter from a second preset generalized object confidence score and the local object set and/or the out-of-boundary object set; obtain a third loss parameter from the complete object set through contrastive learning; and train the preset generalized object confidence regressor with the first, second, and third loss parameters to obtain the optimized generalized object confidence regressor.
Optionally, after the second object identification module performs detection through the preset classifier and preset bounding box displacement regressor according to the bounding boxes and bounding box features, the apparatus further includes: a fusion module, used to fuse the bounding boxes of unknown objects with the known objects to obtain an object set.
In a third aspect, the present application provides an image processing apparatus comprising: at least one processor and memory; the memory stores computer-executable instructions; the at least one processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the image processing method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising a computer program which, when executed by a processor, implements the image processing method according to the first aspect and the various possible designs of the first aspect.
With the image processing method, apparatus, device, and storage medium provided by the application, for an image requiring object recognition the method first extracts bounding boxes and bounding box features from the image, then detects bounding boxes of unknown objects through a pre-built generalized object confidence regressor, which learns generalized object features from known objects so that unknown objects are fully captured; known objects can be detected through the pre-built classifier and bounding box displacement regressor. Accurate detection of both unknown and known objects is thus achieved: on the premise that the ability to detect known objects is essentially unchanged, unknown objects are effectively detected; furthermore, false detection of non-objects is reduced through negative energy suppression, unknown objects are accurately localized with an adaptive candidate box screening algorithm, and the detection precision of unknown objects is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained from them by a person skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 4 is a flowchart of another image processing method according to an embodiment of the present application;
FIG. 5 is a schematic image of a complete object, a local object, an out-of-boundary object, and a non-object according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, terms in the embodiments of the present application will be explained:
regional advice network (Region Proposal Network, RPN): and taking the characteristic diagram of the picture as input, outputting a series of target bounding boxes, wherein each bounding box has a target score. The method has the advantages of capability of detecting a certain class of irrelevant objects, low detection precision and quick and rough object finding.
Graph segmentation: for an undirected graph G = (V, E), the node set V is divided into a node set A and a node set B satisfying A ∪ B = V and A ∩ B = ∅, i.e. the graph is divided into two sub-graphs by dividing the set of nodes of the graph into mutually exclusive groups.
Normalized Cut: normalized cut is a graph segmentation method that simultaneously measures the overall dissimilarity between different subgraphs and the similarity within each subgraph, and can therefore segment the graph more accurately.
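The normalized cut cost used by this method can be sketched numerically. A minimal example, assuming a symmetric IoU-style weight matrix W (function and variable names are illustrative, not from the application):

```python
import numpy as np

def ncut_cost(W, in_a):
    """Normalized cut cost of splitting a graph into node sets A and B.

    W    : symmetric (n, n) matrix of edge weights.
    in_a : boolean array, True for nodes assigned to subgraph A.

    Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V),
    where cut(A, B) sums the weights crossing the partition and
    assoc(A, V) sums all weights touching A.
    """
    in_a = np.asarray(in_a, dtype=bool)
    in_b = ~in_a
    cut = W[np.ix_(in_a, in_b)].sum()
    assoc_a = W[in_a, :].sum()
    assoc_b = W[in_b, :].sum()
    if assoc_a == 0 or assoc_b == 0:
        return 0.0
    return cut / assoc_a + cut / assoc_b

# Two tight clusters joined by one weak edge: splitting between the clusters
# gives a much lower cost than splitting through them.
W = np.array([[0, 9, 1, 0],
              [9, 0, 0, 0],
              [1, 0, 0, 9],
              [0, 0, 9, 0]], dtype=float)
good = ncut_cost(W, [True, True, False, False])   # cut along the weak edge
bad = ncut_cost(W, [True, False, True, False])    # cut through both clusters
```

Here the "good" partition separates the two tight clusters and pays only the weak cross edge, which is why a normalized-cut criterion favors it.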
Uncertainty metric loss based on virtual sample synthesis: the mean and covariance of the features of known-class samples are computed to construct a multivariate Gaussian distribution model, and virtual samples are generated at the boundary of this distribution to serve as negative-sample features. A binary classification model using the energy value as an uncertainty measure then discriminates the uncertainty of virtual samples versus known samples, thereby constraining the known-class feature distribution and enabling prediction of unknown samples.
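The virtual-sample construction described above can be sketched as follows. This is a simplified illustration, assuming a class-conditional Gaussian fit and treating the lowest-likelihood samples drawn from it as boundary "virtual" negatives; it is not the exact procedure of the application or of VOS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Features of one known class (e.g. pooled bounding-box features).
feats = rng.normal(loc=2.0, scale=0.5, size=(500, 2))

# Fit a class-conditional multivariate Gaussian to the known-class features.
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(2)
cov_inv = np.linalg.inv(cov)

# Draw candidates from the fitted Gaussian and keep the ones with the
# lowest likelihood (largest squared Mahalanobis distance): these lie near
# the class boundary and serve as virtual negative-sample features.
cand = rng.multivariate_normal(mu, cov, size=1000)
d2 = np.einsum('ij,jk,ik->i', cand - mu, cov_inv, cand - mu)
virtual = cand[np.argsort(d2)[-20:]]  # 20 most atypical samples
```

A binary uncertainty head (e.g. on an energy score) would then be trained to separate `virtual` from `feats`.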
For object recognition in images, existing detection methods fall into two categories: open-set classification and detection, and open-world target detection. Open-set classification and detection find unknown objects misclassified as known objects by the detector by designing an uncertainty measure of the feature difference between unknown and known objects. However, to ensure accuracy on known objects, they suppress both unknown objects and non-objects during training, resulting in a low recall of unknown objects. Open-world object detection aims to let the model detect both known and unknown objects, improves its ability to detect unknown objects by automatically labeling pseudo-unknown objects during training, and lets the model learn new class annotations incrementally. However, many of the pseudo-unknown samples generated by the automatic labeling step do not actually represent true unknown objects, which limits the ability to transfer knowledge from known to unknown classes. Consequently, during inference, many non-objects are erroneously detected as unknown objects, and the detection precision on unknown objects is low. A method that achieves both high recall and high precision is therefore needed for unknown-object detection tasks; the detection precision of unknown objects in the prior art is low.
In order to solve the above problems, embodiments of the present application provide an image processing method, apparatus, device, and storage medium. For an image on which object recognition is to be performed, the method first extracts bounding boxes and bounding box features from the image, then detects bounding boxes of unknown objects through a pre-constructed generalized object confidence regressor. By learning generalized object features from known-class objects, the generalized object confidence regressor can fully capture unknown objects, while known objects are detected through a pre-established classifier and bounding box displacement regressor, thereby realizing accurate detection of both unknown and known objects.
According to the method and the device, the unknown object can be fully captured by using a generalized object confidence regressor based on a closed-set object detection model and learning generalized object characteristics from known class objects, false detection of the non-object is reduced through negative energy inhibition, and the unknown object is accurately positioned by using a self-adaptive candidate frame screening algorithm.
Optionally, the application can construct a generalized object confidence regressor for extracting all box regions in an image where objects may exist, and designs a quantity-adaptive object candidate screening mechanism to accurately locate unknown objects among those regions. Next, a training method is devised for the object detector to learn generalized object representations, allowing the model to learn generalized object-class knowledge on a dataset with known-class object labels. Finally, the objects detected by the known-class object detector are deleted from the detected generalized objects, and the remainder are the unknown objects. With this training method for generalized object representations, the generalized object confidence module can score the generalized object regions in the image, giving high scores to both known and unknown objects, while the quantity-adaptive object candidate screening mechanism screens out the regions most likely to be objects.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Optionally, fig. 1 is a schematic diagram of an image processing system architecture according to an embodiment of the present application. In fig. 1, the above architecture includes at least one of a data acquisition device 101, a processing device 102, and a display device 103.
It should be understood that the architecture illustrated in the embodiments of the present application does not constitute a specific limitation on the architecture of the image processing system. In other possible embodiments of the present application, the architecture may include more or fewer components than those illustrated, or some components may be combined, some components may be separated, or different component arrangements may be specifically determined according to the actual application scenario, and the present application is not limited herein. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
In a specific implementation, the data acquisition device 101 may include an input/output interface, or may include a communication interface, where the data acquisition device 101 may be connected to the processing device through the input/output interface or the communication interface.
The processing device 102 may extract bounding boxes and bounding box features in the image, detect bounding boxes of unknown objects through pre-built generalized object confidence regressors, learn generalized object features from known classes of objects, use the generalized object confidence regressors to adequately capture unknown objects, use pre-built classifier and bounding box displacement regressors to detect known objects, and further extract accurate bounding boxes of unknown objects using an adaptive object candidate screening mechanism.
The display device 103 may also be a touch display screen or a screen of a terminal device for receiving a user instruction while displaying the above content to enable interaction with a user.
It will be appreciated that the processing device described above may be implemented by a processor reading instructions in a memory and executing the instructions, or by a chip circuit.
In addition, the network architecture and the service scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and as a person of ordinary skill in the art can know, with evolution of the network architecture and appearance of a new service scenario, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The following describes the technical scheme of the present application in detail with reference to specific embodiments:
optionally, fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application. The execution body of the embodiment of the present application may be the processing device 102 in fig. 1, and the specific execution body may be determined according to an actual application scenario. As shown in fig. 2, the method comprises the steps of:
S201: And acquiring an image to be processed.
S202: and extracting features of the image to be processed through a preset target detection model to obtain bounding boxes and bounding box features.
Alternatively, the preset target detection model may be any detection model that extracts bounding boxes through images. For example, the trained two-stage object detector Faster-RCNN, the input and output of the model may be the image sample and bounding box features corresponding to the image sample, respectively.
Alternatively, a two-stage object detector Faster-RCNN that has already been trained is taken as the feature extractor and bounding box extractor, and bounding boxes where objects may exist, together with the feature vectors of those bounding boxes, are extracted.
S203: and detecting by a preset generalized object confidence coefficient regressor according to the bounding box and the feature of the bounding box to obtain the bounding box of the unknown object.
The preset generalized object confidence coefficient regressor is obtained through training of bounding box features and generalized object confidence coefficients of the image samples.
Optionally, according to the bounding box and the feature of the bounding box, detecting by a preset generalized object confidence regressor to obtain a bounding box of the unknown object, including:
calculating generalized object confidence coefficient of each bounding box through a preset generalized object confidence coefficient regressor according to the bounding boxes and the feature of the bounding box, and performing first screening treatment on the bounding boxes according to the generalized object confidence coefficient to obtain bounding boxes to be treated; and carrying out second screening treatment on the bounding box to be treated through a self-adaptive bounding box screening mechanism according to the generalized object confidence of the bounding box, so as to obtain the bounding box of the unknown object.
Optionally, performing a first screening process on the bounding box according to the confidence coefficient of the generalized object to obtain a bounding box to be processed, which specifically includes: and reserving the bounding box with the generalized object confidence coefficient larger than the first threshold value to obtain a bounding box to be processed.
In one possible implementation, the trained generalized object confidence regressor is used to predict the object confidence of these bounding boxes, and the bounding boxes whose confidence score is greater than a first threshold are kept; the M kept bounding boxes and their generalized object confidence scores are recorded. The first threshold may be determined according to practical situations, and the embodiment of the present application is not particularly limited. M is any positive number.
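The first screening process described above reduces to a threshold filter over the regressor's scores; a minimal sketch (function and variable names are illustrative):

```python
def first_screening(boxes, scores, tau1):
    """Keep bounding boxes whose generalized object confidence exceeds tau1.

    boxes  : list of (x1, y1, x2, y2) tuples
    scores : generalized object confidence per box
    tau1   : the first threshold
    Returns the kept boxes and their scores, preserving order.
    """
    kept = [(b, s) for b, s in zip(boxes, scores) if s > tau1]
    return [b for b, _ in kept], [s for _, s in kept]

boxes = [(0, 0, 10, 10), (5, 5, 20, 20), (30, 30, 40, 40)]
scores = [0.9, 0.2, 0.7]
kept_boxes, kept_scores = first_screening(boxes, scores, tau1=0.5)
# kept_boxes → [(0, 0, 10, 10), (30, 30, 40, 40)]
```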
Optionally, performing the second screening process on the bounding boxes to be processed through a self-adaptive bounding box screening mechanism according to the generalized object confidence of the bounding boxes, to obtain bounding boxes of unknown objects, includes: constructing the bounding boxes to be processed into a weighted undirected graph, wherein each node in the graph's node set represents a bounding box to be processed, and each edge in the graph's edge set is weighted by the degree of overlap between the two nodes it connects; iteratively decomposing the whole graph into N subgraphs through a recursive normalized cut algorithm until the normalized cut cost value of each subgraph is lower than a preset segmentation threshold, where N is any positive integer; and, in each subgraph, determining the bounding box to be processed with the highest generalized object confidence score as a bounding box of an unknown object.
In one possible implementation, bounding boxes where objects may be present are screened out by an adaptive bounding box screening mechanism. Specifically, the bounding boxes to be processed are constructed as a weighted undirected graph G = (V, E): each node in the set V represents a bounding box, and each edge in the set E is weighted by the degree of overlap (Intersection over Union, IoU) between the two nodes it connects. Next, the entire graph G is iteratively decomposed into several subgraphs with a recursive normalized cut algorithm, terminating when the normalized cut cost value of a subgraph falls below a preset segmentation threshold. Finally, in each subgraph, the bounding box with the highest generalized object confidence score is taken as a bounding box of a predicted unknown object. The preset segmentation threshold may be determined according to practical situations, and the embodiment of the present application is not specifically limited.
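To illustrate the per-subgraph selection, the sketch below groups boxes into subgraphs by IoU connectivity and keeps the highest-confidence box of each group. Note this is a deliberate simplification: connected components over an IoU-thresholded graph stand in for the recursive normalized cut of the application, and all names are illustrative:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def select_per_subgraph(boxes, scores, iou_edge=0.1):
    """Group boxes whose pairwise IoU exceeds iou_edge (transitively, via
    union-find), then keep the index of the box with the highest generalized
    object confidence in each group."""
    n = len(boxes)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if iou(boxes[i], boxes[j]) > iou_edge:
                parent[find(i)] = find(j)
    best = {}
    for i in range(n):
        r = find(i)
        if r not in best or scores[i] > scores[best[r]]:
            best[r] = i
    return sorted(best.values())

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.6, 0.9, 0.8]
# Boxes 0 and 1 overlap heavily → one subgraph; box 2 is its own subgraph.
picked = select_per_subgraph(boxes, scores)  # → [1, 2]
```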
According to the method and the device, the unknown object is fully captured by using the generalized object confidence coefficient regressor, the bounding boxes are screened according to the generalized object confidence coefficient, the bounding boxes to be processed are screened by the self-adaptive bounding box screening mechanism, the unknown object is accurately positioned by utilizing the self-adaptive candidate frame screening algorithm, and the accuracy of unknown object identification is further improved.
S204: and detecting through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the feature of the bounding box to obtain the known object.
The preset classifier is obtained through training of bounding box features and class probabilities of the image samples, and the preset bounding box displacement regressor is obtained through training of bounding box features and bounding box displacement vectors of the image samples.
In one possible implementation, the steps of extracting known objects follow Faster-RCNN: the trained bounding box displacement regressor and classifier are each used to predict on the extracted bounding boxes. Then, bounding boxes with extremely low scores are removed with non-maximum suppression, and the several bounding boxes with the highest scores are retained. From the result output by the classifier, bounding boxes whose energy value is smaller than a preset energy threshold, and which are therefore possibly unknown objects, are deleted. The resulting bounding boxes are the predicted known objects. The preset energy threshold may be determined according to practical situations, and the embodiment of the present application is not particularly limited.
Optionally, after performing detection processing by a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box characteristics, obtaining the known object, the method further includes: and carrying out fusion processing on the bounding box of the unknown object and the known object to obtain an object set.
Optionally, the known objects are fused with the set of unknown object bounding boxes. Specifically, if the overlap (Intersection over Union, IoU) between an unknown object bounding box and any one of the known object bounding boxes exceeds the overlap threshold, then that unknown object bounding box is deleted; otherwise it is retained.
It will be appreciated that the overlapping degree threshold may be determined according to practical situations, which is not particularly limited in the embodiments of the present application. For example, the overlap threshold may be 95%.
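The fusion rule above can be sketched directly (a minimal sketch; names illustrative):

```python
def fuse(known_boxes, unknown_boxes, iou_threshold=0.95):
    """Drop any unknown-object bounding box whose IoU with some known-object
    bounding box exceeds iou_threshold; keep the rest and return the union."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    kept_unknown = [u for u in unknown_boxes
                    if all(iou(u, k) <= iou_threshold for k in known_boxes)]
    return known_boxes + kept_unknown

known = [(0, 0, 10, 10)]
unknown = [(0, 0, 10, 10), (20, 20, 30, 30)]  # first duplicates a known box
merged = fuse(known, unknown)
# merged → [(0, 0, 10, 10), (20, 20, 30, 30)]
```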
Here, the embodiment of the application fuses the known object and the unknown object bounding box set, can effectively remove redundant data or misjudgment data, and realizes effective detection of the unknown object on the premise of ensuring that the detection capability of the known object is basically unchanged.
The embodiment of the application provides a picture processing method capable of simultaneously detecting a known object and an unknown object in an image, aiming at the image needing object identification, firstly extracting bounding boxes and bounding box characteristics in the image, detecting the bounding boxes of the unknown object through a pre-constructed generalized object confidence coefficient regressor, learning generalized object characteristics from the known class object, fully capturing the unknown object through the generalized object confidence coefficient regressor, and detecting the known object through a pre-established classifier and the bounding box displacement regressor, thereby realizing accurate detection of the unknown object and the known object, realizing effective detection of the unknown object on the premise of ensuring that the detection capability of the known object is basically unchanged, and improving the detection precision of the unknown object.
Optionally, a model for detecting an object may be pre-established in the embodiment of the present application to achieve the purpose of accurate identification, and accordingly, fig. 3 is a schematic flow chart of another image processing method provided in the embodiment of the present application, as shown in fig. 3, where the method includes:
S301: And acquiring an image to be processed.
S302: and extracting features of the image to be processed through a preset target detection model to obtain bounding boxes and bounding box features.
The implementation of step S301 to step S302 is similar to that of step S201 to step S202, and the embodiment of the present application is not particularly limited herein.
S303: an image sample is acquired.
The image sample may include a history image, and further include one or more of a bounding box, a bounding box feature, and a known object corresponding to the history image, and may also be a simulation image, and one or more of a bounding box, a bounding box feature, and a known object corresponding to the simulation image.
S304: inputting the bounding box features and the generalized object confidence of the image sample to a two-stage target detector, and training to obtain a preset generalized object confidence regressor.
S305: inputting the bounding box features and the class probability of the image sample to a two-stage target detector, and training to obtain a preset classifier.
S306: inputting bounding box features and bounding box displacement vectors of the image samples to a two-stage target detector, and training to obtain the preset bounding box displacement regressor.
Optionally, after inputting the bounding box features and the bounding box displacement vectors of the image samples to the two-stage target detector and training to obtain the preset bounding box displacement regressor, the method further includes:
optimizing the preset classifier through negative energy inhibition to obtain an optimized classifier; and/or, optimizing the preset generalized object confidence coefficient regressor to obtain an optimized generalized object confidence coefficient regressor; and/or, carrying out optimization treatment on the preset bounding box displacement regressor to obtain the optimized bounding box displacement regressor.
Correspondingly, according to the bounding box and the feature of the bounding box, detecting by a preset generalized object confidence coefficient regressor to obtain the bounding box of the unknown object, including: and detecting by using the optimized generalized object confidence coefficient regressor according to the bounding box and the feature of the bounding box to obtain the bounding box of the unknown object.
According to the bounding box and the feature of the bounding box, detecting through a preset classifier and a preset bounding box displacement regressor to obtain a known object, wherein the method comprises the following steps: and detecting by using the optimized classifier and the optimized bounding box displacement regressor according to the bounding box and the feature of the bounding box to obtain the known object.
After each model is established, each model can be optimized. By applying different losses to the collected samples, high confidence is assigned to bounding boxes surrounding unknown objects, and the feature response gap between non-objects and objects is widened so as to reduce false detection of non-objects, further improving the accuracy of object identification and of image processing.
Optionally, the optimizing process is performed on the preset classifier through negative energy suppression, so as to obtain an optimized classifier, which comprises the following steps: and training the preset classifier by negative energy inhibition and combining a cross entropy loss function and an uncertainty measurement loss function synthesized based on a virtual sample to obtain an optimized classifier.
In one possible implementation, the classifier is supervised, in addition to the cross entropy loss function supervision commonly used in the field of object detection, by designing an additional loss function, comprising the sub-steps of:
first, the bounding box set of the current image (image sample) is sorted according to the negative energy value of each bounding box.

Next, the k bounding boxes with the lowest negative energy values are selected from the set and used to suppress the characteristic responses of non-objects.

Then, a suppression loss L_supp is applied to constrain these k bounding boxes with the lowest negative energy scores.

The total energy loss comprises the above suppression loss L_supp and the uncertainty metric loss L_vos based on virtual sample synthesis:

L_energy = L_supp + L_vos

The uncertainty metric loss based on virtual sample synthesis follows the open-set object detection (Open-Set Detection) algorithm VOS (Virtual Outlier Synthesis). After training with this loss, the negative energy distribution of non-objects differs significantly from that of unknown objects, i.e. non-objects are suppressed. This simultaneously reduces the feature response of non-object bounding boxes, further widening the generalized object confidence (Generalized Object Confidence, GOC) gap between non-objects and objects.
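The sorting-and-selection step above can be sketched as follows, computing the negative energy score as a log-sum-exp over the classifier logits, in line with the energy definition referenced from VOS with uniform weights (helper names and logit values are illustrative):

```python
import math

def negative_energy(logits):
    """Negative energy score of one bounding box: logsumexp of its class
    logits. Boxes on confident known objects tend to score high; boxes on
    non-objects, with flat low logits, score low."""
    m = max(logits)
    return m + math.log(sum(math.exp(l - m) for l in logits))

def lowest_energy_boxes(all_logits, k):
    """Indices of the k boxes with the lowest negative energy scores, i.e.
    the boxes most likely to be non-objects, to which a suppression loss
    would be applied."""
    order = sorted(range(len(all_logits)),
                   key=lambda i: negative_energy(all_logits[i]))
    return order[:k]

logits = [[8.0, 0.1, 0.2],   # confident known object → high score
          [0.1, 0.0, 0.2],   # flat logits → low score
          [5.0, 4.5, 0.3]]
suppress = lowest_energy_boxes(logits, k=1)  # → [1]
```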
Optionally, performing optimization processing on the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor, including: and training the preset bounding box displacement regressor through a preset regression loss function to obtain the optimized bounding box displacement regressor.
The regression loss function commonly used in the field of object detection is adopted for supervision on the bounding box displacement regressor.
Optimizing the preset generalized object confidence coefficient regressor to obtain an optimized generalized object confidence coefficient regressor, comprising:
Setting an image sample to comprise K instances, where K is any positive integer; defining for the image sample two indices, the intersection-over-prediction ratio (IoP) and the intersection-over-correct ratio (IoC), which are calculated from the K instances and the bounding box samples in the image sample; classifying the bounding box samples in the image sample according to the IoP and IoC, assigning bounding box samples containing the same object instance to the same group to obtain K groups of bounding box samples, and dividing the K groups of bounding box samples into a complete object set, a local object set, an out-of-bounds object set and a non-object set; obtaining a first loss parameter according to a first preset generalized object confidence score and the complete object set; obtaining a second loss parameter according to a second preset generalized object confidence score and the local object set and/or the out-of-bounds object set; obtaining a third loss parameter through contrastive learning according to the complete object set; and training the preset generalized object confidence regressor according to the first, second and third loss parameters to obtain the optimized generalized object confidence regressor.
In one possible implementation, the generalized object confidence regressor is supervised with multiple designed groups of loss functions, comprising the following sub-steps:

the Faster-RCNN network which is trained in advance is trained in two stages according to the following loss function setting:

let the current image include K instances, denoted O = {o_1, ..., o_K}. Two indices are defined, the intersection-over-prediction ratio (Intersection Over The Predicted Bounding Box, IoP) and the intersection-over-correct ratio (Intersection Over The Correct Bounding Box, IoC):

IoP(b, o) = Area(b ∩ o) / Area(b), IoC(b, o) = Area(b ∩ o) / Area(o)
next, for each bounding box b in the bounding box set B, the instance with the largest IoU (Intersection over Union) with b is found from the instance set O. Bounding boxes containing the same object instance are assigned to the same group, yielding K groups of bounding boxes B_1, ..., B_K. Then, according to the IoP, IoC and IoU of each bounding box with respect to its matched instance, compared against a constant threshold, each group is divided into a complete object set, a local object set, an out-of-bounds object set and a non-object set. The constant threshold can be determined according to actual conditions.
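The IoP and IoC indices can be sketched for axis-aligned boxes; this is a reconstruction from the index names (intersection area over the predicted box area, and over the correct box area), with illustrative helper names:

```python
def _inter(a, b):
    """Intersection area of two (x1, y1, x2, y2) boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def _area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def iop(pred, gt):
    """Intersection over the predicted bounding box."""
    return _inter(pred, gt) / _area(pred)

def ioc(pred, gt):
    """Intersection over the correct (ground-truth) bounding box."""
    return _inter(pred, gt) / _area(gt)

gt = (0, 0, 10, 10)
inside = (2, 2, 6, 6)         # lies fully on the object, covers part of it
oversized = (-5, -5, 15, 15)  # contains the whole object plus background
print(iop(inside, gt), ioc(inside, gt))        # 1.0 0.16
print(iop(oversized, gt), ioc(oversized, gt))  # 0.25 1.0
```

The two indices thus pull apart "local" boxes (high IoP, low IoC) from "out-of-bounds" boxes (low IoP, high IoC).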
A first loss parameter is designed to push the generalized object confidence score of a complete-object bounding box toward 1.

Then, a second loss parameter is designed to suppress the generalized object confidence score of a local object or an out-of-bounds object toward a constant c, where 0 < c < 1.

Then, contrastive learning is employed to improve the model's ability to capture bounding boxes containing more complete objects, yielding a third loss parameter; its temperature is a small constant, set to 0.01.

Finally, the total generalized object confidence loss is calculated as the sum of the three parts above:

L_GOC = L_1 + L_2 + L_3

Through the above sampling and training, the generalized object confidence scores of both unknown and known objects are pushed to very high scores.
Here, the embodiment of the application supervises the bounding box displacement regressor with the regression loss function commonly used in the object detection field, supervises the generalized object confidence regressor with the designed multiple groups of loss functions, and supervises the classifier with the commonly used cross entropy loss function together with additionally designed loss functions, so that an appropriate loss is applied to each detection model, further improving the detection precision and the accuracy of image processing.
S307: and detecting by a preset generalized object confidence coefficient regressor according to the bounding box and the feature of the bounding box to obtain the bounding box of the unknown object.
The preset generalized object confidence coefficient regressor is obtained through training of bounding box features and generalized object confidence coefficients of the image samples.
S308: and detecting through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the feature of the bounding box to obtain the known object.
The preset classifier is obtained through training of bounding box features and class probabilities of the image samples, and the preset bounding box displacement regressor is obtained through training of bounding box features and bounding box displacement vectors of the image samples.
The implementation of step S307 to step S308 is similar to the implementation of step S203 to step S204, and the embodiment of the present application is not particularly limited herein.
Here, the embodiment of the application learns generalized object characteristics from known class objects, so that a model learns generalized object class knowledge on a data set marked by the known class objects, a preset generalized object confidence coefficient regressor is established, the unknown objects are fully captured by using the generalized object confidence coefficient regressor, a preset classifier and a preset bounding box displacement regressor are established by combining the known class objects, the known objects are extracted, the characteristics of the objects are fully combined to establish each model for object identification, and the accuracy of image identification is further improved.
Illustratively, fig. 4 is a flow chart of still another image processing method according to an embodiment of the present application. As shown in fig. 4, a solid line represents the training and testing process, a dashed line represents a training loss function, a rounded rectangle represents an operation process, and a square-cornered rectangle represents a specific obtained result.
As shown in fig. 4, the method for detecting the object of the known and unknown class according to the embodiment of the application comprises the following steps:
step one: a trained two-stage target detector Faster-RCNN is used as a feature extractor and a bounding box extractor to extract bounding boxes and feature vectors of the bounding boxes where objects may exist. Let Faster-RCNN extract bounding box from current image as
Figure SMS_69
And bounding box->
Figure SMS_70
Is expressed as +.>
Figure SMS_71
Wherein->
Figure SMS_72
For the dimension of bounding box feature +.>
Figure SMS_73
Is an index of the bounding box.
Step two: prediction head structures are designed on the trained Faster-RCNN, and the feature of each bounding box is used as a sample to train the three prediction heads of the object detector, namely the generalized object confidence regressor, the bounding box displacement regressor and the classifier.
Comprises the following substeps:
Step 2.1: the structures of the bounding box displacement regressor and the classifier are consistent with those of Faster-RCNN, each being a single linear transformation. Their inputs are the bounding box features f_i; their outputs are, respectively, the displacement vector of each bounding box and the class probabilities of the bounding box.
Step 2.2: design a generalized object confidence regressor, expressed as g(·). The module is a single linear transformation; its input is the feature f_i of a bounding box, and its output is a scalar s_i = g(f_i), the generalized object confidence of the bounding box.
Step 2.3: let the input of the classifier be the feature f_i of a bounding box and its output the probabilities of C categories. Referring to the open-set object detection algorithm VOS, the negative energy score of bounding box b_i is calculated as the negative logarithm of the weighted sum of the classifier outputs in exponential space:

E(b_i) = -log Σ_{c=1}^{C} w_c · exp(z_c(f_i))

where z_c(f_i) is the logit output of the classification head for class c, and the per-class weights w_c are introduced to mitigate class imbalance.
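The negative energy computation described in this step can be sketched as follows; a minimal Python sketch, assuming plain per-class logits and optional per-class weights (the application's exact weighting scheme is not specified here):

```python
import math

def negative_energy(logits, weights=None):
    """VOS-style negative energy score of a bounding box:
    E(b) = -log(sum_c w_c * exp(z_c)).

    logits  -- classifier logit z_c for each known class
    weights -- per-class weights w_c meant to mitigate class imbalance
               (uniform if omitted; the actual weighting is an assumption)
    """
    if weights is None:
        weights = [1.0] * len(logits)
    # log-sum-exp computed stably by factoring out the max logit
    m = max(logits)
    s = sum(w * math.exp(z - m) for w, z in zip(weights, logits))
    return -(m + math.log(s))
```

Confident known-class logits push the score strongly negative, while weak, flat logits leave it higher; the gap is what the later suppression step exploits.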
Step three: by applying different losses to the collected samples, the purpose of assigning high confidence to the bounding box bounding the unknown object and the purpose of expanding the characteristic response gap between the non-object and the object to reduce false detection of the non-object are respectively achieved.
Comprises the following substeps:
step 3.1: and monitoring the regression loss function of the bounding box displacement regressor commonly used in the field of object detection.
Step 3.2: monitoring a plurality of groups of loss functions designed by a generalized object confidence regressor, comprising the following substeps:
Step 3.2.1: the Faster-RCNN network that has been trained in advance undergoes second-stage training according to the following loss function settings.
Step 3.2.2: let the current image include
Figure SMS_86
An example is->
Figure SMS_87
Two indices are defined, ioP and IoC:
Figure SMS_88
next, for the following
Figure SMS_92
Each bounding box->
Figure SMS_96
From->
Figure SMS_100
Find and +.>
Figure SMS_91
Is->
Figure SMS_95
(Intersection over Union, cross-over ratio) the largest example. Assigning bounding boxes containing the same object instance to the same group, get +.>
Figure SMS_99
Group bounding box:>
Figure SMS_103
. Then, according to the following formula +.>
Figure SMS_89
、/>
Figure SMS_93
And->
Figure SMS_97
Will->
Figure SMS_101
Is divided into a complete object set +.>
Figure SMS_90
Local object set->
Figure SMS_94
Out-of-bounds object set->
Figure SMS_98
And non-object set->
Figure SMS_102
Figure SMS_104
Figure SMS_105
Figure SMS_106
Figure SMS_107
Wherein the method comprises the steps of
Figure SMS_108
Is a constant threshold.
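The sample partition of step 3.2.2 can be sketched as follows. The IoP/IoC readings (intersection area over prediction-box area, and over instance area, respectively) and the threshold rules are assumptions reconstructed from the set descriptions, since the original formulas did not survive extraction:

```python
def area(box):
    """Axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return area((x1, y1, x2, y2)) if x2 > x1 and y2 > y1 else 0.0

def partition(pred, inst, tau=0.5):
    """Classify a prediction box against its matched instance box.

    IoP = |b ∩ o| / |b|  (how much of the prediction lies on the object)
    IoC = |b ∩ o| / |o|  (how much of the object the prediction covers)
    Returns 'complete', 'local', 'out_of_bounds' or 'non_object'.
    The threshold rules below are an assumed reading of the patent's sets.
    """
    inter = intersection(pred, inst)
    iop = inter / area(pred) if area(pred) > 0 else 0.0
    ioc = inter / area(inst) if area(inst) > 0 else 0.0
    if iop >= tau and ioc >= tau:
        return "complete"
    if iop >= tau:
        return "local"          # box sits on the object but covers only part of it
    if ioc >= tau:
        return "out_of_bounds"  # box spills well beyond the object
    return "non_object"
```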
Step 3.2.3: design penalty, generalized object confidence score for bounding box of complete object tends to be 1:
Figure SMS_109
Then, design a second loss that presses the generalized object confidence score of a local object or an out-of-bounds object toward a constant m, where 0 < m < 1. Contrastive learning is then employed to enhance the ability of the model to capture bounding boxes containing more complete objects: within each group, a bounding box that bounds the object more completely is encouraged to receive a higher generalized object confidence score than a less complete one; a small constant ε in the contrastive loss is set to 0.01.

Finally, the total generalized object confidence loss is calculated as the sum of the above three parts: the complete-object loss, the local/out-of-bounds loss and the contrastive loss.

Through the above sampling and training, the generalized object confidence scores of both unknown and known objects are pushed to very high scores.
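A toy sketch of the three-part generalized object confidence loss. The exact functional forms are not recoverable from the source, so simple squared-error terms and a margin-based contrastive term are assumed:

```python
def goc_loss(scores_full, scores_partial, pairs, m=0.3, eps=0.01):
    """Three-part generalized object confidence (GOC) loss sketch.

    scores_full    -- GOC scores of boxes in the complete-object set (pushed to 1)
    scores_partial -- GOC scores of local / out-of-bounds boxes (pushed to m)
    pairs          -- (s_more, s_less) score pairs, where the first box bounds
                      the object more completely than the second
    m, eps         -- constants; the squared-error and margin forms here
                      are assumptions, not the application's exact losses
    """
    l_full = sum((s - 1.0) ** 2 for s in scores_full)
    l_part = sum((s - m) ** 2 for s in scores_partial)
    # contrastive term: the more complete box should score higher by margin eps
    l_con = sum(max(0.0, eps + s_less - s_more) for s_more, s_less in pairs)
    return l_full + l_part + l_con
```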
Illustratively, fig. 5 is a schematic diagram of images of a complete object, a local object, an out-of-bounds object and a non-object according to an embodiment of the present application, where the instance box is drawn as a solid line and the prediction box as a light dotted line.
Step 3.3: the method comprises the following substeps of:
Step 3.3.1: first, the bounding box set B of the current image is sorted according to the negative energy score E(b).

Step 3.3.2: next, the k bounding boxes with the lowest negative energy scores are selected from the set B; these are used to suppress the characteristic responses of non-objects.

Step 3.3.3: then, a suppression loss L_supp is applied to constrain the k bounding boxes with the lowest negative energy scores.

The total energy loss comprises the above L_supp and an uncertainty metric loss L_unc based on virtual sample synthesis:

L_energy = L_supp + L_unc

where the uncertainty metric loss L_unc based on virtual sample synthesis follows the open-set detection algorithm VOS. After training with the loss L_energy, the negative energy distribution of non-objects differs significantly from that of unknown objects, i.e., non-objects are suppressed. The method simultaneously reduces the characteristic response of non-object bounding boxes and further widens the GOC gap between non-objects and objects.
Step four: objects in images of natural scenes are detected by using the trained model.
Comprises the following substeps:
step 4.1: extracting bounding boxes and features of the bounding boxes from the image through a backbone network of the trained Faster-RCNN structure.
Step 4.2: detecting an unknown object by a generalized object confidence regressor, comprising the sub-steps of:
Step 4.2.1: predict the object confidence of the bounding boxes with the trained generalized object confidence regressor; bounding boxes with a confidence score greater than a threshold δ are kept, denoted B', with generalized object confidence scores {s_i}.

Step 4.2.2: screen out the bounding boxes in which objects may exist through an adaptive bounding box screening mechanism. Specifically, the bounding boxes in B' are constructed into a weighted undirected graph G = (V, E), where each node in V represents a bounding box and each edge in E is weighted by the degree of overlap between nodes. Next, with a recursive normalized cut algorithm, the whole graph G is iteratively decomposed into several subgraphs until the normalized cut cost value of a subgraph falls below a threshold θ, at which point the decomposition terminates. Finally, the bounding box with the highest generalized object confidence score in each subgraph is taken as a predicted unknown-object bounding box.
Step 4.3: detecting a known object by a classifier and a bounding box displacement regressor, comprising the sub-steps of:
Step 4.3.1: following the procedure by which Faster-RCNN extracts known objects, the extracted bounding boxes are predicted with the bounding box displacement regressor and the classifier, respectively. Then, bounding boxes with extremely low scores are removed with non-maximum suppression, and the several bounding boxes with the highest scores are retained.
Step 4.3.2: from the result output by the classifier, the energy value is smaller than
Figure SMS_140
Possibly a bounding box deletion of an unknown object. The resulting bounding box is the predicted known object.
Step 4.3.3: the known object is fused with the set of unknown object bounding boxes. Specifically, if IoU between an unknown object bounding box and any one of the known object bounding boxes exceeds 95%, then it is deleted, otherwise it is retained.
Through the generalized object confidence module and the corresponding sampling and training strategies, the present application trains a generalized object representation from a limited set of object classes, further reduces false detection of non-objects through negative energy suppression, and captures unknown objects during inference with the adaptive object candidate screening mechanism. Compared with the prior art, the invention can detect unknown objects accurately and thoroughly while the detection capability for known objects remains essentially unchanged.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus according to the embodiment of the present application includes: an acquisition module 601, a feature extraction module 602, a first object identification module 603, and a second object identification module 604. The image processing apparatus here may be the processing apparatus described above, the processor itself, or a chip or an integrated circuit that realizes the functions of the processor. Here, the division of the acquisition module 601, the feature extraction module 602, the first object identification module 603, and the second object identification module 604 is only a logical division, and both may be integrated or independent physically.
The acquisition module is used for acquiring the image to be processed;
the feature extraction module is used for extracting features of the image to be processed through a preset target detection model to obtain bounding boxes and bounding box features;
the first object recognition module is used for carrying out detection processing through a preset generalized object confidence coefficient regressor according to the bounding box and the feature of the bounding box to obtain a bounding box of an unknown object, wherein the preset generalized object confidence coefficient regressor is obtained through training of the feature of the bounding box and the generalized object confidence coefficient of an image sample;
The second object identification module is used for carrying out detection processing through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box characteristics to obtain a known object, wherein the preset classifier is obtained through training of the bounding box characteristics and the category probability of the image sample, and the preset bounding box displacement regressor is obtained through training of the bounding box characteristics and the bounding box displacement vector of the image sample.
Optionally, the first object identification module is specifically configured to: calculating generalized object confidence coefficient of each bounding box through a preset generalized object confidence coefficient regressor according to the bounding boxes and the feature of the bounding box, and performing first screening treatment on the bounding boxes according to the generalized object confidence coefficient to obtain bounding boxes to be treated; and carrying out second screening treatment on the bounding box to be treated through a self-adaptive bounding box screening mechanism according to the generalized object confidence of the bounding box, so as to obtain the bounding box of the unknown object.
Optionally, the first object identification module is further specifically configured to: construct the bounding box to be processed into a weighted undirected graph, wherein each node in the weighted undirected graph represents a bounding box to be processed, and each edge in the weighted undirected graph is weighted by the degree of overlap between nodes; iteratively decompose the whole weighted undirected graph into N subgraphs through a recursive normalized cut algorithm until the normalized cut cost value of a subgraph is lower than a preset segmentation threshold, where N is any positive integer; and, in each subgraph, determine the bounding box to be processed with the highest generalized object confidence score as a bounding box of the unknown object.
Optionally, before the first object identification module is configured to perform detection processing through the preset generalized object confidence coefficient regressor according to the bounding box and the bounding box features to obtain the bounding box of the unknown object, the apparatus further includes: a sample acquisition module for acquiring an image sample; a first training module for inputting the bounding box features and the generalized object confidence of the image sample to a two-stage target detector and training to obtain the preset generalized object confidence coefficient regressor; a second training module for inputting the bounding box features and the class probability of the image sample to the two-stage target detector and training to obtain the preset classifier; and a third training module for inputting the bounding box features and the bounding box displacement vectors of the image sample to the two-stage target detector and training to obtain the preset bounding box displacement regressor.
Optionally, after the third training module inputs the bounding box feature and the bounding box displacement vector of the image sample to the two-stage target detector and trains to obtain the preset bounding box displacement regressor, the apparatus further includes: an optimization module for: optimizing the preset classifier through negative energy inhibition to obtain an optimized classifier; and/or, optimizing the preset generalized object confidence coefficient regressor to obtain an optimized generalized object confidence coefficient regressor; and/or, carrying out optimization treatment on the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor; accordingly, the first object identification module is specifically configured to: detecting by using an optimized generalized object confidence coefficient regressor according to the bounding box and the feature of the bounding box to obtain a bounding box of an unknown object;
The second object identification module is specifically configured to: and detecting by using the optimized classifier and the optimized bounding box displacement regressor according to the bounding box and the feature of the bounding box to obtain the known object.
Optionally, the optimization module is specifically configured to: train the preset classifier by negative energy suppression, combining a cross entropy loss function and an uncertainty metric loss function synthesized on the basis of virtual samples, to obtain the optimized classifier; and/or train the preset bounding box displacement regressor through a preset regression loss function to obtain the optimized bounding box displacement regressor; and/or set the image sample to comprise K instances, where K is any positive integer; define two indices, an alternating prediction ratio and an alternating value ratio, for the image sample, the two indices being calculated from the K instances and the bounding box samples in the image sample; classify the bounding box samples in the image sample according to the alternating prediction ratio and the alternating value ratio, assign bounding box samples containing the same object instance to the same group to obtain K groups of bounding box samples, and divide the K groups of bounding box samples into a complete object set, a local object set, an out-of-limit object set and a non-object set; obtain a first loss parameter according to a first preset generalized object confidence score and the complete object set; obtain a second loss parameter according to a second preset generalized object confidence score and the local object set and/or according to the second preset generalized object confidence score and the out-of-limit object set; obtain a third loss parameter through contrastive learning according to the complete object set; and train the preset generalized object confidence coefficient regressor according to the first loss parameter, the second loss parameter and the third loss parameter to obtain the optimized generalized object confidence coefficient regressor.
Optionally, after the second object identification module performs detection processing through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box characteristics, the apparatus further includes: and the fusion module is used for carrying out fusion processing on the bounding box of the unknown object and the known object to obtain an object set.
Referring to fig. 7, there is shown a schematic diagram of a structure of an image processing apparatus 700 suitable for use in implementing an embodiment of the present disclosure. The image processing apparatus 700 may be a terminal apparatus or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The image processing apparatus shown in fig. 7 is only one example, and should not bring any limitation to the functions and the use ranges of the embodiments of the present disclosure.
As shown in fig. 7, the image processing apparatus 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a random access Memory (Random Access Memory, RAM) 703. In the RAM 703, various programs and data required for the operation of the image processing apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the image processing apparatus 700 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 7 shows an image processing apparatus 700 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the image processing apparatus; or may exist alone without being incorporated into the image processing apparatus.
The computer-readable medium carries one or more programs which, when executed by the image processing apparatus, cause the image processing apparatus to execute the method shown in the above embodiment.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The image processing device of the embodiment of the present application may be used to execute the technical solutions of the embodiments of the methods of the present application, and its implementation principle and technical effects are similar, and are not repeated here.
The embodiment of the application also provides a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and the computer executable instructions are used for realizing the image processing method of any one of the above when being executed by a processor.
Embodiments of the present application also provide a computer program product, including a computer program, which when executed by a processor is configured to implement the image processing method of any one of the above.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
Acquiring an image to be processed;
extracting features of the image to be processed through a preset target detection model to obtain bounding boxes and bounding box features;
detecting according to the bounding box and the feature of the bounding box through a preset generalized object confidence coefficient regressor to obtain a bounding box of an unknown object, wherein the preset generalized object confidence coefficient regressor is obtained through training of the feature of the bounding box and the generalized object confidence coefficient of an image sample;
and detecting through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box characteristics to obtain a known object, wherein the preset classifier is obtained through training of bounding box characteristics and category probabilities of image samples, and the preset bounding box displacement regressor is obtained through training of bounding box characteristics and bounding box displacement vectors of the image samples.
2. The method according to claim 1, wherein the detecting, according to the bounding box and the bounding box features, by a preset generalized object confidence regressor, obtains a bounding box of an unknown object, including:
calculating generalized object confidence coefficient of each bounding box through a preset generalized object confidence coefficient regressor according to the bounding boxes and the feature of the bounding box, and performing first screening treatment on the bounding boxes according to the generalized object confidence coefficient to obtain bounding boxes to be treated;
And carrying out second screening treatment on the bounding box to be treated through a self-adaptive bounding box screening mechanism according to the generalized object confidence of the bounding box to obtain a bounding box of the unknown object.
3. The method according to claim 2, wherein the performing, according to the generalized object confidence of the bounding box, a second screening process on the bounding box to be processed by using an adaptive bounding box screening mechanism to obtain a bounding box of an unknown object includes:
constructing the bounding box to be processed into a weighted undirected graph, wherein each node in the weighted undirected graph represents one bounding box to be processed, and each edge in the weighted undirected graph is weighted by the degree of overlap between the nodes;
iteratively decomposing the whole weighted undirected graph into N subgraphs through a recursive normalized cut algorithm until the normalized cut cost value of a subgraph is lower than a preset segmentation threshold, wherein N is any positive integer;
and in each subgraph, determining the bounding box to be processed with the highest confidence score of the generalized object as the bounding box of the unknown object.
4. The method according to claim 3, further comprising, before the performing detection processing through a preset generalized object confidence regressor according to the bounding box and the bounding box features to obtain a bounding box of an unknown object:
acquiring an image sample;
inputting the bounding box features and the generalized object confidence of the image sample to a two-stage object detector, and training to obtain the preset generalized object confidence regressor;
inputting the bounding box features and the class probabilities of the image sample to the two-stage object detector, and training to obtain the preset classifier;
and inputting the bounding box features and the bounding box displacement vectors of the image sample to the two-stage object detector, and training to obtain the preset bounding box displacement regressor.
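The three training targets of claim 4 suggest three parallel heads over one shared RoI feature. A numpy forward-pass sketch; the feature dimension, class count, and plain linear heads are illustrative assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, NUM_CLASSES = 256, 20   # illustrative sizes, not from the patent

# Three heads sharing the same RoI feature, matching claim 4's three
# training targets: class probabilities, a scalar generalized object
# confidence, and a 4-d bounding box displacement vector.
W_cls = rng.normal(0, 0.01, (FEAT_DIM, NUM_CLASSES))
W_goc = rng.normal(0, 0.01, (FEAT_DIM, 1))
W_reg = rng.normal(0, 0.01, (FEAT_DIM, 4))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def heads(feat):
    """Forward pass of the three parallel heads on one RoI feature."""
    cls = softmax(feat @ W_cls)                  # class probabilities
    goc = 1.0 / (1.0 + np.exp(-(feat @ W_goc)))  # generalized object confidence
    reg = feat @ W_reg                           # box displacement vector
    return cls, goc, reg
```

Each head would then be trained against its own target, which is why the claim lists three separate training steps over the same detector features.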
5. The method of claim 4, further comprising, after the inputting bounding box features and bounding box displacement vectors of the image samples to a two-stage object detector, training to obtain the preset bounding box displacement regressor:
optimizing the preset classifier through negative energy suppression to obtain an optimized classifier;
and/or,
performing optimization processing on the preset generalized object confidence regressor to obtain an optimized generalized object confidence regressor;
and/or,
optimizing the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor;
correspondingly, the performing detection processing through a preset generalized object confidence regressor according to the bounding box and the bounding box features to obtain a bounding box of an unknown object comprises:
performing detection processing through the optimized generalized object confidence regressor according to the bounding box and the bounding box features to obtain the bounding box of the unknown object;
and the performing detection processing through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box features to obtain a known object comprises:
performing detection processing through the optimized classifier and the optimized bounding box displacement regressor according to the bounding box and the bounding box features to obtain the known object.
6. The method of claim 5, wherein the optimizing the preset classifier through negative energy suppression to obtain an optimized classifier comprises:
training the preset classifier through negative energy suppression in combination with a cross entropy loss function and an uncertainty measurement loss function based on synthesized virtual samples, to obtain the optimized classifier;
the optimizing the preset bounding box displacement regressor to obtain an optimized bounding box displacement regressor comprises:
training the preset bounding box displacement regressor through a preset regression loss function to obtain the optimized bounding box displacement regressor;
the optimizing the preset generalized object confidence regressor to obtain an optimized generalized object confidence regressor comprises:
setting the image sample to comprise K instances, wherein K is any positive integer;
defining two indexes, a cross-prediction ratio and a cross-value ratio, for the image sample, wherein the cross-prediction ratio and the cross-value ratio are calculated from the K instances and the bounding box samples in the image sample;
classifying the bounding box samples in the image sample according to the cross-prediction ratio and the cross-value ratio, assigning bounding box samples containing the same object instance to the same group to obtain K groups of bounding box samples, and dividing the K groups of bounding box samples into a complete object set, a local object set, an out-of-limit object set and a non-object set;
obtaining a first loss parameter according to a first preset generalized object confidence score and the complete object set;
obtaining a second loss parameter according to a second preset generalized object confidence score and the local object set and/or according to the second preset generalized object confidence score and the out-of-limit object set;
obtaining a third loss parameter through contrastive learning according to the complete object set;
and training the preset generalized object confidence regressor according to the first loss parameter, the second loss parameter and the third loss parameter to obtain the optimized generalized object confidence regressor.
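The claims do not define "negative energy suppression". A common energy-based reading scores classification logits by their free energy and penalises known samples and synthesized virtual samples on the wrong side of two margins; the sketch below follows that assumed formulation, with illustrative margin values:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Free-energy score E(x) = -T * logsumexp(logits / T).  In energy-based
    out-of-distribution detection, known objects tend to receive low energy
    and unknown/background proposals high energy.  Whether this matches the
    patent's exact 'negative energy suppression' is an assumption."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)   # numerically stable logsumexp
    return -T * (m + np.log(np.exp(z - m).sum(axis=-1, keepdims=True))).squeeze(-1)

def suppression_loss(logits_known, logits_virtual,
                     margin_in=-25.0, margin_out=-5.0):
    """Hinge-style uncertainty loss: push known-sample energies below
    `margin_in` and synthesized virtual-sample energies above `margin_out`
    (both margins are illustrative hyperparameters)."""
    e_in = energy_score(logits_known)
    e_out = energy_score(logits_virtual)
    return (np.maximum(0.0, e_in - margin_in) ** 2).mean() + \
           (np.maximum(0.0, margin_out - e_out) ** 2).mean()
```

A confident known sample (one large logit) yields a very negative energy and so contributes no loss, while a flat, uncertain logit vector sits near zero energy, on the virtual-sample side of the margins.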
7. The method according to any one of claims 1 to 6, further comprising, after the performing detection processing through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box features to obtain a known object:
and carrying out fusion processing on the bounding box of the unknown object and the known object to obtain an object set.
8. An image processing apparatus, comprising:
the acquisition module is used for acquiring the image to be processed;
the feature extraction module is used for extracting features of the image to be processed through a preset target detection model to obtain a bounding box and bounding box features;
the first object identification module is used for performing detection processing through a preset generalized object confidence regressor according to the bounding box and the bounding box features to obtain a bounding box of an unknown object, wherein the preset generalized object confidence regressor is obtained through training on the bounding box features and the generalized object confidence of an image sample;
the second object identification module is used for performing detection processing through a preset classifier and a preset bounding box displacement regressor according to the bounding box and the bounding box features to obtain a known object, wherein the preset classifier is obtained through training on the bounding box features and the class probabilities of the image sample, and the preset bounding box displacement regressor is obtained through training on the bounding box features and the bounding box displacement vectors of the image sample.
9. An image processing apparatus, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which computer-executable instructions are stored, which when executed by a processor are adapted to carry out the image processing method according to any one of claims 1 to 7.
CN202310416282.2A 2023-04-19 2023-04-19 Image processing method, device, equipment and storage medium Active CN116152576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310416282.2A CN116152576B (en) 2023-04-19 2023-04-19 Image processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN116152576A (en) 2023-05-23
CN116152576B (en) 2023-08-01

Family

ID=86362129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310416282.2A Active CN116152576B (en) 2023-04-19 2023-04-19 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116152576B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257120A (en) * 2018-01-09 2018-07-06 东北大学 A kind of extraction method of the three-dimensional liver bounding box based on CT images
WO2018224634A1 (en) * 2017-06-08 2018-12-13 Renault S.A.S Method and system for identifying at least one moving object
CN109101897A (en) * 2018-07-20 2018-12-28 中国科学院自动化研究所 Object detection method, system and the relevant device of underwater robot
CN111144366A (en) * 2019-12-31 2020-05-12 中国电子科技集团公司信息科学研究院 Strange face clustering method based on joint face quality assessment
CN112446296A (en) * 2020-10-30 2021-03-05 杭州易现先进科技有限公司 Gesture recognition method and device, electronic device and storage medium
CN112906502A (en) * 2021-01-29 2021-06-04 北京百度网讯科技有限公司 Training method, device and equipment of target detection model and storage medium
KR102301635B1 (en) * 2021-02-04 2021-09-13 주식회사 에이모 Method of inferring bounding box using artificial intelligence model and computer apparatus of inferring bounding box
CN114241260A (en) * 2021-12-14 2022-03-25 四川大学 Open set target detection and identification method based on deep neural network
CN115376101A (en) * 2022-08-25 2022-11-22 天津大学 Incremental learning method and system for automatic driving environment perception


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LU Yifan; ZHANG Songhai: "Object Detection in Optical Remote Sensing Images Based on Convolutional Neural Networks", China Sciencepaper, no. 14 *
ZENG Hua; PAN Yiling; YANG Zexi; WANG Bin: "Open-Set Classification Method for Smart Retail Scenes Using Feature-Space Normalized Main-Class Distance", Journal of Computer-Aided Design & Computer Graphics, no. 05 *
LI Shilin; LI Hongjun: "An Improved Randomized Incremental Algorithm for the Minimum Enclosing Ball", Journal of Graphics, no. 02 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315375A (en) * 2023-11-20 2023-12-29 腾讯科技(深圳)有限公司 Virtual part classification method, device, electronic equipment and readable storage medium
CN117315375B (en) * 2023-11-20 2024-03-01 腾讯科技(深圳)有限公司 Virtual part classification method, device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN116152576B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN111476284B (en) Image recognition model training and image recognition method and device and electronic equipment
KR102513089B1 (en) A method and an apparatus for deep learning networks training using soft-labelling
CN110472675B (en) Image classification method, image classification device, storage medium and electronic equipment
CN112052186B (en) Target detection method, device, equipment and storage medium
CN109189767B (en) Data processing method and device, electronic equipment and storage medium
CA3066029A1 (en) Image feature acquisition
CN111079638A (en) Target detection model training method, device and medium based on convolutional neural network
CN116152576B (en) Image processing method, device, equipment and storage medium
Arya et al. Object detection using deep learning: a review
CN115713715A (en) Human behavior recognition method and system based on deep learning
CN116958873A (en) Pedestrian tracking method, device, electronic equipment and readable storage medium
CN110728229A (en) Image processing method, device, equipment and storage medium
CN113837255B (en) Method, apparatus and medium for predicting cell-based antibody karyotype class
CN115393755A (en) Visual target tracking method, device, equipment and storage medium
CN115700790A (en) Method, apparatus and storage medium for object attribute classification model training
CN115131291A (en) Object counting model training method, device, equipment and storage medium
CN116777814A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN113177479A (en) Image classification method and device, electronic equipment and storage medium
CN110059180B (en) Article author identity recognition and evaluation model training method and device and storage medium
CN111414895A (en) Face recognition method and device and storage equipment
CN113792569A (en) Object identification method and device, electronic equipment and readable medium
CN112990145B (en) Group-sparse-based age estimation method and electronic equipment
CN110472728B (en) Target information determining method, target information determining device, medium and electronic equipment
CN111860573B (en) Model training method, image category detection method and device and electronic equipment
CN117475291B (en) Picture information identification method, apparatus, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant